The following changes since commit 77f3804ab7ed94b471a14acb260e5aeacf26193f:

  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging (2021-02-02 16:47:51 +0000)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 026362226f1ff6a1168524a326bbd6347ad40e85:

  docs: fix Parallels Image "dirty bitmap" section (2021-02-03 16:48:21 +0000)

----------------------------------------------------------------
Pull request

The pull request includes Multi-Process QEMU, GitLab repo URL updates, and even
a block layer patch to fix the Parallels Image format specification!

----------------------------------------------------------------

Denis V. Lunev (1):
      docs: fix Parallels Image "dirty bitmap" section

Elena Ufimtseva (8):
      multi-process: add configure and usage information
      io: add qio_channel_writev_full_all helper
      io: add qio_channel_readv_full_all_eof & qio_channel_readv_full_all
        helpers
      multi-process: define MPQemuMsg format and transmission functions
      multi-process: introduce proxy object
      multi-process: add proxy communication functions
      multi-process: Forward PCI config space acceses to the remote process
      multi-process: perform device reset in the remote process

Jagannathan Raman (11):
      memory: alloc RAM from file at offset
      multi-process: Add config option for multi-process QEMU
      multi-process: setup PCI host bridge for remote device
      multi-process: setup a machine object for remote device process
      multi-process: Initialize message handler in remote device
      multi-process: Associate fd of a PCIDevice with its object
      multi-process: setup memory manager for remote device
      multi-process: PCI BAR read/write handling for proxy & remote
        endpoints
      multi-process: Synchronize remote memory
      multi-process: create IOHUB object to handle irq
      multi-process: Retrieve PCI info from remote process

John G Johnson (1):
      multi-process: add the concept description to
        docs/devel/qemu-multiprocess

Stefan Hajnoczi (6):
      .github: point Repo Lockdown bot to GitLab repo
      gitmodules: use GitLab repos instead of qemu.org
      gitlab-ci: remove redundant GitLab repo URL command
      docs: update README to use GitLab repo URLs
      pc-bios: update mirror URLs to GitLab
      get_maintainer: update repo URL to GitLab

 MAINTAINERS                               |  24 +
 README.rst                                |   4 +-
 docs/devel/index.rst                      |   1 +
 docs/devel/multi-process.rst              | 966 ++++++++++++++++++++++
 docs/system/index.rst                     |   1 +
 docs/system/multi-process.rst             |  64 ++
 docs/interop/parallels.txt                |   2 +-
 configure                                 |  10 +
 meson.build                               |   5 +-
 hw/remote/trace.h                         |   1 +
 include/exec/memory.h                     |   2 +
 include/exec/ram_addr.h                   |   2 +-
 include/hw/pci-host/remote.h              |  30 +
 include/hw/pci/pci_ids.h                  |   3 +
 include/hw/remote/iohub.h                 |  42 +
 include/hw/remote/machine.h               |  38 +
 include/hw/remote/memory.h                |  19 +
 include/hw/remote/mpqemu-link.h           |  99 +++
 include/hw/remote/proxy-memory-listener.h |  28 +
 include/hw/remote/proxy.h                 |  48 ++
 include/io/channel.h                      |  78 ++
 include/qemu/mmap-alloc.h                 |   4 +-
 include/sysemu/iothread.h                 |   6 +
 backends/hostmem-memfd.c                  |   2 +-
 hw/misc/ivshmem.c                         |   3 +-
 hw/pci-host/remote.c                      |  75 ++
 hw/remote/iohub.c                         | 119 +++
 hw/remote/machine.c                       |  80 ++
 hw/remote/memory.c                        |  65 ++
 hw/remote/message.c                       | 230 ++++++
 hw/remote/mpqemu-link.c                   | 267 ++++++
 hw/remote/proxy-memory-listener.c         | 227 +++++
 hw/remote/proxy.c                         | 379 +++++++++
 hw/remote/remote-obj.c                    | 203 +++++
 io/channel.c                              | 116 ++-
 iothread.c                                |   6 +
 softmmu/memory.c                          |   3 +-
 softmmu/physmem.c                         |  11 +-
 util/mmap-alloc.c                         |   7 +-
 util/oslib-posix.c                        |   2 +-
 .github/lockdown.yml                      |   8 +-
 .gitlab-ci.yml                            |   1 -
 .gitmodules                               |  44 +-
 Kconfig.host                              |   4 +
 hw/Kconfig                                |   1 +
 hw/meson.build                            |   1 +
 hw/pci-host/Kconfig                       |   3 +
 hw/pci-host/meson.build                   |   1 +
 hw/remote/Kconfig                         |   4 +
 hw/remote/meson.build                     |  13 +
 hw/remote/trace-events                    |   4 +
 pc-bios/README                            |   4 +-
 scripts/get_maintainer.pl                 |   2 +-
 53 files changed, 3294 insertions(+), 68 deletions(-)
 create mode 100644 docs/devel/multi-process.rst
 create mode 100644 docs/system/multi-process.rst
 create mode 100644 hw/remote/trace.h
 create mode 100644 include/hw/pci-host/remote.h
 create mode 100644 include/hw/remote/iohub.h
 create mode 100644 include/hw/remote/machine.h
 create mode 100644 include/hw/remote/memory.h
 create mode 100644 include/hw/remote/mpqemu-link.h
 create mode 100644 include/hw/remote/proxy-memory-listener.h
 create mode 100644 include/hw/remote/proxy.h
 create mode 100644 hw/pci-host/remote.c
 create mode 100644 hw/remote/iohub.c
 create mode 100644 hw/remote/machine.c
 create mode 100644 hw/remote/memory.c
 create mode 100644 hw/remote/message.c
 create mode 100644 hw/remote/mpqemu-link.c
 create mode 100644 hw/remote/proxy-memory-listener.c
 create mode 100644 hw/remote/proxy.c
 create mode 100644 hw/remote/remote-obj.c
 create mode 100644 hw/remote/Kconfig
 create mode 100644 hw/remote/meson.build
 create mode 100644 hw/remote/trace-events

--
2.29.2
Use the GitLab repo URL as the main repo location in order to reduce
load on qemu.org.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20210111115017.156802-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 .github/lockdown.yml | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/.github/lockdown.yml b/.github/lockdown.yml
index XXXXXXX..XXXXXXX 100644
--- a/.github/lockdown.yml
+++ b/.github/lockdown.yml
@@ -XXX,XX +XXX,XX @@ issues:
   comment: |
     Thank you for your interest in the QEMU project.

-    This repository is a read-only mirror of the project's master
-    repostories hosted on https://git.qemu.org/git/qemu.git.
+    This repository is a read-only mirror of the project's repostories hosted
+    at https://gitlab.com/qemu-project/qemu.git.

     The project does not process issues filed on GitHub.

     The project issues are tracked on Launchpad:
@@ -XXX,XX +XXX,XX @@ pulls:
   comment: |
     Thank you for your interest in the QEMU project.

-    This repository is a read-only mirror of the project's master
-    repostories hosted on https://git.qemu.org/git/qemu.git.
+    This repository is a read-only mirror of the project's repostories hosted
+    on https://gitlab.com/qemu-project/qemu.git.

     The project does not process merge requests filed on GitHub.

     QEMU welcomes contributions of code (either fixing bugs or adding new
--
2.29.2
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 .gitmodules | 44 ++++++++++++++++++++++----------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/.gitmodules b/.gitmodules
index XXXXXXX..XXXXXXX 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -XXX,XX +XXX,XX @@
 [submodule "roms/seabios"]
     path = roms/seabios
-    url = https://git.qemu.org/git/seabios.git/
+    url = https://gitlab.com/qemu-project/seabios.git/
 [submodule "roms/SLOF"]
     path = roms/SLOF
-    url = https://git.qemu.org/git/SLOF.git
+    url = https://gitlab.com/qemu-project/SLOF.git
 [submodule "roms/ipxe"]
     path = roms/ipxe
-    url = https://git.qemu.org/git/ipxe.git
+    url = https://gitlab.com/qemu-project/ipxe.git
 [submodule "roms/openbios"]
     path = roms/openbios
-    url = https://git.qemu.org/git/openbios.git
+    url = https://gitlab.com/qemu-project/openbios.git
 [submodule "roms/qemu-palcode"]
     path = roms/qemu-palcode
-    url = https://git.qemu.org/git/qemu-palcode.git
+    url = https://gitlab.com/qemu-project/qemu-palcode.git
 [submodule "roms/sgabios"]
     path = roms/sgabios
-    url = https://git.qemu.org/git/sgabios.git
+    url = https://gitlab.com/qemu-project/sgabios.git
 [submodule "dtc"]
     path = dtc
-    url = https://git.qemu.org/git/dtc.git
+    url = https://gitlab.com/qemu-project/dtc.git
 [submodule "roms/u-boot"]
     path = roms/u-boot
-    url = https://git.qemu.org/git/u-boot.git
+    url = https://gitlab.com/qemu-project/u-boot.git
 [submodule "roms/skiboot"]
     path = roms/skiboot
-    url = https://git.qemu.org/git/skiboot.git
+    url = https://gitlab.com/qemu-project/skiboot.git
 [submodule "roms/QemuMacDrivers"]
     path = roms/QemuMacDrivers
-    url = https://git.qemu.org/git/QemuMacDrivers.git
+    url = https://gitlab.com/qemu-project/QemuMacDrivers.git
 [submodule "ui/keycodemapdb"]
     path = ui/keycodemapdb
-    url = https://git.qemu.org/git/keycodemapdb.git
+    url = https://gitlab.com/qemu-project/keycodemapdb.git
 [submodule "capstone"]
     path = capstone
-    url = https://git.qemu.org/git/capstone.git
+    url = https://gitlab.com/qemu-project/capstone.git
 [submodule "roms/seabios-hppa"]
     path = roms/seabios-hppa
-    url = https://git.qemu.org/git/seabios-hppa.git
+    url = https://gitlab.com/qemu-project/seabios-hppa.git
 [submodule "roms/u-boot-sam460ex"]
     path = roms/u-boot-sam460ex
-    url = https://git.qemu.org/git/u-boot-sam460ex.git
+    url = https://gitlab.com/qemu-project/u-boot-sam460ex.git
 [submodule "tests/fp/berkeley-testfloat-3"]
     path = tests/fp/berkeley-testfloat-3
-    url = https://git.qemu.org/git/berkeley-testfloat-3.git
+    url = https://gitlab.com/qemu-project/berkeley-testfloat-3.git
 [submodule "tests/fp/berkeley-softfloat-3"]
     path = tests/fp/berkeley-softfloat-3
-    url = https://git.qemu.org/git/berkeley-softfloat-3.git
+    url = https://gitlab.com/qemu-project/berkeley-softfloat-3.git
 [submodule "roms/edk2"]
     path = roms/edk2
-    url = https://git.qemu.org/git/edk2.git
+    url = https://gitlab.com/qemu-project/edk2.git
 [submodule "slirp"]
     path = slirp
-    url = https://git.qemu.org/git/libslirp.git
+    url = https://gitlab.com/qemu-project/libslirp.git
 [submodule "roms/opensbi"]
     path = roms/opensbi
-    url =     https://git.qemu.org/git/opensbi.git
+    url =     https://gitlab.com/qemu-project/opensbi.git
 [submodule "roms/qboot"]
     path = roms/qboot
-    url = https://git.qemu.org/git/qboot.git
+    url = https://gitlab.com/qemu-project/qboot.git
 [submodule "meson"]
     path = meson
-    url = https://git.qemu.org/git/meson.git
+    url = https://gitlab.com/qemu-project/meson.git
 [submodule "roms/vbootrom"]
     path = roms/vbootrom
-    url = https://git.qemu.org/git/vbootrom.git
+    url = https://gitlab.com/qemu-project/vbootrom.git
--
2.29.2
It is no longer necessary to point .gitmodules at GitLab repos when
running in GitLab CI since they are now used all the time.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 .gitlab-ci.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index XXXXXXX..XXXXXXX 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -XXX,XX +XXX,XX @@ include:
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   before_script:
     - JOBS=$(expr $(nproc) + 1)
-    - sed -i s,git.qemu.org/git,gitlab.com/qemu-project, .gitmodules
   script:
     - mkdir build
     - cd build
--
2.29.2
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 README.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.rst b/README.rst
index XXXXXXX..XXXXXXX 100644
--- a/README.rst
+++ b/README.rst
@@ -XXX,XX +XXX,XX @@ The QEMU source code is maintained under the GIT version control system.

 .. code-block:: shell

-   git clone https://git.qemu.org/git/qemu.git
+   git clone https://gitlab.com/qemu-project/qemu.git

 When submitting patches, one common approach is to use 'git
 format-patch' and/or 'git send-email' to format & send the mail to the
@@ -XXX,XX +XXX,XX @@ The QEMU website is also maintained under source control.

 .. code-block:: shell

-   git clone https://git.qemu.org/git/qemu-web.git
+   git clone https://gitlab.com/qemu-project/qemu-web.git

 * `<https://www.qemu.org/2017/02/04/the-new-qemu-website-is-up/>`_

--
2.29.2
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 pc-bios/README | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pc-bios/README b/pc-bios/README
index XXXXXXX..XXXXXXX 100644
--- a/pc-bios/README
+++ b/pc-bios/README
@@ -XXX,XX +XXX,XX @@
   legacy x86 software to communicate with an attached serial console as
   if a video card were attached. The master sources reside in a subversion
   repository at http://sgabios.googlecode.com/svn/trunk. A git mirror is
-  available at https://git.qemu.org/git/sgabios.git.
+  available at https://gitlab.com/qemu-project/sgabios.git.

- The PXE roms come from the iPXE project. Built with BANNER_TIME 0.
  Sources available at http://ipxe.org. Vendor:Device ID -> ROM mapping:
@@ -XXX,XX +XXX,XX @@

- The u-boot binary for e500 comes from the upstream denx u-boot project where
  it was compiled using the qemu-ppce500 target.
- A git mirror is available at: https://git.qemu.org/git/u-boot.git
+ A git mirror is available at: https://gitlab.com/qemu-project/u-boot.git
  The hash used to compile the current version is: 2072e72

- Skiboot (https://github.com/open-power/skiboot/) is an OPAL
--
2.29.2
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 scripts/get_maintainer.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl
index XXXXXXX..XXXXXXX 100755
--- a/scripts/get_maintainer.pl
+++ b/scripts/get_maintainer.pl
@@ -XXX,XX +XXX,XX @@ sub vcs_exists {
     warn("$P: No supported VCS found. Add --nogit to options?\n");
     warn("Using a git repository produces better results.\n");
     warn("Try latest git repository using:\n");
-    warn("git clone https://git.qemu.org/git/qemu.git\n");
+    warn("git clone https://gitlab.com/qemu-project/qemu.git\n");
     $printed_novcs = 1;
 }
 return 0;
--
2.29.2
From: John G Johnson <john.g.johnson@oracle.com>

Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 02a68adef99f5df6a380bf8fd7b90948777e411c.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS                  |   7 +
 docs/devel/index.rst         |   1 +
 docs/devel/multi-process.rst | 966 +++++++++++++++++++++++++++++++++++
 3 files changed, 974 insertions(+)
 create mode 100644 docs/devel/multi-process.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ S: Maintained
 F: hw/semihosting/
 F: include/hw/semihosting/

+Multi-process QEMU
+M: Elena Ufimtseva <elena.ufimtseva@oracle.com>
+M: Jagannathan Raman <jag.raman@oracle.com>
+M: John G Johnson <john.g.johnson@oracle.com>
+S: Maintained
+F: docs/devel/multi-process.rst
+
 Build and test automation
 -------------------------
 Build and test automation
diff --git a/docs/devel/index.rst b/docs/devel/index.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/index.rst
+++ b/docs/devel/index.rst
@@ -XXX,XX +XXX,XX @@ Contents:
    clocks
    qom
    block-coroutine-wrapper
+   multi-process
diff --git a/docs/devel/multi-process.rst b/docs/devel/multi-process.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/devel/multi-process.rst
@@ -XXX,XX +XXX,XX @@
+This is the design document for multi-process QEMU. It does not
+necessarily reflect the status of the current implementation, which
+may lack features or be considerably different from what is described
+in this document. This document is still useful as a description of
+the goals and general direction of this feature.
+
+Please refer to the following wiki for latest details:
+https://wiki.qemu.org/Features/MultiProcessQEMU
+
+Multi-process QEMU
+===================
+
+QEMU is often used as the hypervisor for virtual machines running in the
+Oracle cloud. Since one of the advantages of cloud computing is the
+ability to run many VMs from different tenants in the same cloud
+infrastructure, a guest that compromised its hypervisor could
+potentially use the hypervisor's access privileges to access data it is
+not authorized for.
+
+QEMU can be susceptible to security attacks because it is a large,
+monolithic program that provides many features to the VMs it services.
+Many of these features can be configured out of QEMU, but even a reduced
+configuration QEMU has a large amount of code a guest can potentially
+attack. Separating QEMU reduces the attack surface by helping to limit
+each component in the system to accessing only the resources that it
+needs to perform its job.
+
+QEMU services
+-------------
+
+QEMU can be broadly described as providing three main services. One is a
+VM control point, where VMs can be created, migrated, re-configured, and
+destroyed. A second is to emulate the CPU instructions within the VM,
+often accelerated by HW virtualization features such as Intel's VT
+extensions. Finally, it provides IO services to the VM by emulating HW
+IO devices, such as disk and network devices.
+
+A multi-process QEMU
+~~~~~~~~~~~~~~~~~~~~
+
+A multi-process QEMU involves separating QEMU services into separate
+host processes. Each of these processes can be given only the privileges
+it needs to provide its service, e.g., a disk service could be given
+access only to the disk images it provides, and not be allowed to
+access other files, or any network devices. An attacker who compromised
+this service would not be able to use this exploit to access files or
+devices beyond what the disk service was given access to.
+
+A QEMU control process would remain, but in multi-process mode, will
+have no direct interfaces to the VM. During VM execution, it would still
+provide the user interface to hot-plug devices or live migrate the VM.
+
+A first step in creating a multi-process QEMU is to separate IO services
+from the main QEMU program, which would continue to provide CPU
+emulation. i.e., the control process would also be the CPU emulation
+process. In a later phase, CPU emulation could be separated from the
+control process.
+
+Separating IO services
+----------------------
+
+Separating IO services into individual host processes is a good place to
+begin for a couple of reasons. One is that the sheer number of IO devices
+QEMU can emulate provides a large surface of interfaces which could
+potentially be exploited, and, indeed, have been a source of exploits in
+the past. Another is that the modular nature of QEMU device emulation
+code provides interface points where the QEMU functions that perform
+device emulation can be separated from the QEMU functions that manage
+the emulation of guest CPU instructions. The devices emulated in the
+separate process are referred to as remote devices.
+
+QEMU device emulation
+~~~~~~~~~~~~~~~~~~~~~
+
+QEMU uses an object oriented SW architecture for device emulation code.
+Configured objects are all compiled into the QEMU binary, then objects
+are instantiated by name when used by the guest VM. For example, the
+code to emulate a device named "foo" is always present in QEMU, but its
+instantiation code is only run when the device is included in the target
+VM. (e.g., via the QEMU command line as *-device foo*)
+
+The object model is hierarchical, so device emulation code names its
+parent object (such as "pci-device" for a PCI device) and QEMU will
+instantiate a parent object before calling the device's instantiation
+code.
+
+Current separation models
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to separate the device emulation code from the CPU emulation
+code, the device object code must run in a different process. There are
+a couple of existing QEMU features that can run emulation code
+separately from the main QEMU process. These are examined below.
+
+vhost user model
+^^^^^^^^^^^^^^^^
+
+Virtio guest device drivers can be connected to vhost user applications
+in order to perform their IO operations. This model uses special virtio
+device drivers in the guest and vhost user device objects in QEMU, but
+once the QEMU vhost user code has configured the vhost user application,
+mission-mode IO is performed by the application. The vhost user
+application is a daemon process that can be contacted via a known UNIX
+domain socket.
+
+vhost socket
+''''''''''''
+
+As mentioned above, one of the tasks of the vhost device object within
+QEMU is to contact the vhost application and send it configuration
+information about this device instance. As part of the configuration
+process, the application can also be sent other file descriptors over
+the socket, which then can be used by the vhost user application in
+various ways, some of which are described below.
+
+vhost MMIO store acceleration
+'''''''''''''''''''''''''''''
+
+VMs are often run using HW virtualization features via the KVM kernel
+driver. This driver allows QEMU to accelerate the emulation of guest CPU
+instructions by running the guest in a virtual HW mode. When the guest
+executes instructions that cannot be executed by virtual HW mode,
+execution returns to the KVM driver so it can inform QEMU to emulate the
+instructions in SW.
+
+One of the events that can cause a return to QEMU is when a guest device
+driver accesses an IO location. QEMU then dispatches the memory
+operation to the corresponding QEMU device object. In the case of a
+vhost user device, the memory operation would need to be sent over a
+socket to the vhost application. This path is accelerated by the QEMU
+virtio code by setting up an eventfd file descriptor, so that the vhost
+application can receive MMIO store notifications directly from the KVM
+driver, instead of needing them to be sent to the QEMU process first.
+
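For illustration, a minimal sketch of this wiring, using QEMU's
``event_notifier_init()`` and ``memory_region_add_eventfd()`` APIs; the
region, doorbell offset, and access width here are placeholder
assumptions, not a particular device's layout:

::

    /* Let KVM complete guest stores to a doorbell register by
     * signalling an eventfd instead of exiting to the QEMU process. */
    #include "qemu/osdep.h"
    #include "qemu/event_notifier.h"
    #include "exec/memory.h"

    static EventNotifier doorbell;

    static void setup_doorbell_ioeventfd(MemoryRegion *mmio)
    {
        event_notifier_init(&doorbell, 0);   /* allocates the eventfd */
        memory_region_add_eventfd(mmio,
                                  0x40,      /* doorbell offset (example) */
                                  2,         /* access width in bytes */
                                  false,     /* trigger on any value */
                                  0,
                                  &doorbell);
        /* event_notifier_get_fd(&doorbell) can now be sent over the
         * vhost-user socket for the backend to poll. */
    }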
+vhost interrupt acceleration
+''''''''''''''''''''''''''''
+
+Another optimization used by the vhost application is the ability to
+directly inject interrupts into the VM via the KVM driver, again,
+bypassing the need to send the interrupt back to the QEMU process first.
+The QEMU virtio setup code configures the KVM driver with an eventfd
+that triggers the device interrupt in the guest when the eventfd is
+written. This irqfd file descriptor is then passed to the vhost user
+application program.
+
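A hedged sketch of the irqfd side, using the existing
``kvm_irqchip_add_irqfd_notifier_gsi()`` helper; obtaining *virq* (the
interrupt route number) is outside the scope of this fragment:

::

    /* Register an eventfd as an irqfd so that a write by the vhost
     * backend injects the interrupt without a round trip to QEMU. */
    #include "qemu/osdep.h"
    #include "qemu/event_notifier.h"
    #include "sysemu/kvm.h"

    static EventNotifier irqfd;

    static void setup_irqfd(int virq)
    {
        event_notifier_init(&irqfd, 0);
        kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, &irqfd, NULL, virq);
        /* The fd behind 'irqfd' is then sent to the vhost backend,
         * which raises the guest interrupt with a plain write() to it. */
    }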
+vhost access to guest memory
+''''''''''''''''''''''''''''
+
+The vhost application is also allowed to directly access guest memory,
+instead of needing to send the data as messages to QEMU. This is also
+done with file descriptors sent to the vhost user application by QEMU.
+These descriptors can be passed to ``mmap()`` by the vhost application
+to map the guest address space into the vhost application.
+
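A minimal sketch of that mapping step; the ``fd``, size, and offset are
assumed to come from the memory-table message:

::

    #include <sys/mman.h>

    static void *map_guest_region(int fd, size_t size, off_t offset)
    {
        /* Map one guest RAM region shared by QEMU over the socket. */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, offset);
        return p == MAP_FAILED ? NULL : p;
    }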
+IOMMUs introduce another level of complexity, since the address given to
+the guest virtio device to DMA to or from is not a guest physical
+address. This case is handled by having vhost code within QEMU register
+as a listener for IOMMU mapping changes. The vhost application maintains
+a cache of IOMMU translations: sending translation requests back to
+QEMU on cache misses, and in turn receiving flush requests from QEMU
+when mappings are purged.
+
+applicability to device separation
+''''''''''''''''''''''''''''''''''
+
+Much of the vhost model can be re-used by separated device emulation. In
+particular, the ideas of using a socket between QEMU and the device
+emulation application, using a file descriptor to inject interrupts into
+the VM via KVM, and allowing the application to ``mmap()`` the guest
+should be reused.
+
+There are, however, some notable differences between how a vhost
+application works and the needs of separated device emulation. The most
+basic is that vhost uses custom virtio device drivers which always
+trigger IO with MMIO stores. A separated device emulation model must
+work with existing IO device models and guest device drivers. MMIO loads
+break vhost store acceleration since they are synchronous - guest
+progress cannot continue until the load has been emulated. By contrast,
+stores are asynchronous; the guest can continue after the store event
+has been sent to the vhost application.
+
+Another difference is that in the vhost user model, a single daemon can
+support multiple QEMU instances. This is contrary to the security regime
+desired, in which the emulation application should only be allowed to
+access the files or devices the VM it's running on behalf of can access.
+
+qemu-io model
+^^^^^^^^^^^^^
+
+Qemu-io is a test harness used to test changes to the QEMU block backend
+object code. (e.g., the code that implements disk images for disk driver
+emulation) Qemu-io is not a device emulation application per se, but it
+does compile the QEMU block objects into a separate binary from the main
+QEMU one. This could be useful for disk device emulation, since its
+emulation applications will need to include the QEMU block objects.
+
+New separation model based on proxy objects
+-------------------------------------------
+
+A different model based on proxy objects in the QEMU program
+communicating with remote emulation programs could provide separation
+while minimizing the changes needed to the device emulation code. The
+rest of this section is a discussion of how a proxy object model would
+work.
+
+Remote emulation processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The remote emulation process will run the QEMU object hierarchy without
+modification. The device emulation objects will also be based on the
+QEMU code, because for anything but the simplest device, it would not be
+tractable to re-implement both the object model and the many device
+backends that QEMU has.
+
+The processes will communicate with the QEMU process over UNIX domain
+sockets. The processes can be executed either as standalone processes,
+or be executed by QEMU. In both cases, the host backends the emulation
+processes will provide are specified on their command lines, as they
+would be for QEMU. For example:
+
+::
+
+ disk-proc -blockdev driver=file,node-name=file0,filename=disk-file0 \
+ -blockdev driver=qcow2,node-name=drive0,file=file0
+
+would indicate process *disk-proc* uses a qcow2 emulated disk named
+*file0* as its backend.
+
+Emulation processes may emulate more than one guest controller. A common
+configuration might be to put all controllers of the same device class
+(e.g., disk, network, etc.) in a single process, so that all backends of
+the same type can be managed by a single QMP monitor.
+
+communication with QEMU
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The first argument to the remote emulation process will be a Unix domain
+socket that connects with the Proxy object. This is a required argument.
+
+::
+
+ disk-proc <socket number> <backend list>
+
+remote process QMP monitor
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Remote emulation processes can be monitored via QMP, similar to QEMU
+itself. The QMP monitor socket is specified the same as for a QEMU
+process:
+
+::
+
+ disk-proc -qmp unix:/tmp/disk-mon,server
+
+can be monitored over the UNIX socket path */tmp/disk-mon*.
+
+QEMU command line
+~~~~~~~~~~~~~~~~~
+
+Each remote device emulated in a remote process on the host is
+represented as a *-device* of type *pci-proxy-dev*. A socket
+sub-option to this option specifies the Unix socket that connects
+to the remote process. An *id* sub-option is required, and it should
+be the same id as used in the remote process.
+
+::
+
+ qemu-system-x86_64 ... -device pci-proxy-dev,id=lsi0,socket=3
+
+can be used to add a device emulated in a remote process.
+
+
+QEMU management of remote processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+QEMU is not aware of the type of the remote PCI device. It is
+a pass-through device as far as QEMU is concerned.
+
+communication with emulation process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+primary channel
+'''''''''''''''
+
+The primary channel (referred to as com in the code) is used to bootstrap
+the remote process. It is also used to pass on device-agnostic commands
+like reset.
+
+per-device channels
+'''''''''''''''''''
+
+Each remote device communicates with QEMU using a dedicated communication
+channel. The proxy object sets up this channel using the primary
+channel during its initialization.
+
+QEMU device proxy objects
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+QEMU has an object model based on sub-classes inherited from the
+"object" super-class. The sub-classes that are of interest here are the
+"device" and "bus" sub-classes whose child sub-classes make up the
+device tree of a QEMU emulated system.
+
+The proxy object model will use device proxy objects to replace the
+device emulation code within the QEMU process. These objects will live
+in the same place in the object and bus hierarchies as the objects they
+replace. i.e., the proxy object for an LSI SCSI controller will be a
+sub-class of the "pci-device" class, and will have the same PCI bus
+parent and the same SCSI bus child objects as the LSI controller object
+it replaces.
+
+It is worth noting that the same proxy object is used to mediate with
+all types of remote PCI devices.
+
+object initialization
+^^^^^^^^^^^^^^^^^^^^^
+
+The Proxy device objects are initialized in the exact same manner in
+which any other QEMU device would be initialized.
+
+In addition, the Proxy objects perform the following two tasks:
+
+- Parse the "socket" sub-option and connect to the remote process
+  using this channel
+- Use the "id" sub-option to connect to the emulated device on the
+  separate process
+
+class\_init
+'''''''''''
+
+The ``class_init()`` method of a proxy object will, in general, behave
+similarly to the object it replaces, including setting any static
+properties and methods needed by the proxy.
+
+instance\_init / realize
+''''''''''''''''''''''''
+
+The ``instance_init()`` and ``realize()`` functions would only need to
+perform tasks related to being a proxy, such as registering its own
+MMIO handlers, or creating a child bus that other proxy devices can be
+attached to later.
+
+Other tasks will be device-specific. For example, PCI device objects
+will initialize the PCI config space in order to make a valid PCI device
+tree within the QEMU process.
+
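As an illustration, a hypothetical skeleton of such a proxy type,
following standard QOM conventions; the struct layout and the realize
body are assumptions, not the series' exact code:

::

    #include "qemu/osdep.h"
    #include "hw/pci/pci.h"

    typedef struct PCIProxyDev {
        PCIDevice parent_dev;
        char *socket_path;    /* from the "socket" sub-option */
    } PCIProxyDev;

    #define TYPE_PCI_PROXY_DEV "pci-proxy-dev"

    static void pci_proxy_dev_realize(PCIDevice *dev, Error **errp)
    {
        /* connect to the remote process over the configured socket,
         * then mirror the real device's BARs and config space */
    }

    static void pci_proxy_dev_class_init(ObjectClass *klass, void *data)
    {
        PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
        k->realize = pci_proxy_dev_realize;
    }

    static const TypeInfo pci_proxy_dev_type_info = {
        .name          = TYPE_PCI_PROXY_DEV,
        .parent        = TYPE_PCI_DEVICE,
        .instance_size = sizeof(PCIProxyDev),
        .class_init    = pci_proxy_dev_class_init,
    };

    static void pci_proxy_dev_register_types(void)
    {
        type_register_static(&pci_proxy_dev_type_info);
    }
    type_init(pci_proxy_dev_register_types)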
+address space registration
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Most devices are driven by guest device driver accesses to IO addresses
+or ports. The QEMU device emulation code uses QEMU's memory region
+function calls (such as ``memory_region_init_io()``) to add callback
+functions that QEMU will invoke when the guest accesses the device's
+areas of the IO address space. When a guest driver does access the
+device, the VM will exit HW virtualization mode and return to QEMU,
+which will then look up and execute the corresponding callback function.
+
+A proxy object would need to mirror the memory region calls the actual
+device emulator would perform in its initialization code, but with its
+own callbacks. When invoked by QEMU as a result of a guest IO operation,
+they will forward the operation to the device emulation process.
+
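A sketch of such mirrored callbacks; ``forward_mmio()`` and the
``MMIO_LOAD``/``MMIO_STORE`` tags stand in for the message send/reply
logic and are assumptions:

::

    static uint64_t proxy_mmio_read(void *opaque, hwaddr addr,
                                    unsigned size)
    {
        /* forward the load and wait for the remote reply */
        return forward_mmio(opaque, MMIO_LOAD, addr, size, 0);
    }

    static void proxy_mmio_write(void *opaque, hwaddr addr,
                                 uint64_t val, unsigned size)
    {
        forward_mmio(opaque, MMIO_STORE, addr, size, val);
    }

    static const MemoryRegionOps proxy_mmio_ops = {
        .read       = proxy_mmio_read,
        .write      = proxy_mmio_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    /* in realize(): mirror the real device's region setup */
    memory_region_init_io(&dev->mmio, OBJECT(dev), &proxy_mmio_ops,
                          dev, "proxy-mmio", bar_size);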
+PCI config space
+^^^^^^^^^^^^^^^^
+
+PCI devices also have a configuration space that can be accessed by the
+guest driver. Guest accesses to this space are not handled by the device
+emulation object, but by its PCI parent object. Much of this space is
+read-only, but certain registers (especially BAR and MSI-related ones)
+need to be propagated to the emulation process.
+
+PCI parent proxy
+''''''''''''''''
+
+One way to propagate guest PCI config accesses is to create a
+"pci-device-proxy" class that can serve as the parent of a PCI device
+proxy object. This class's parent would be "pci-device" and it would
+override the PCI parent's ``config_read()`` and ``config_write()``
+methods with ones that forward these operations to the emulation
+program.
+
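A hypothetical sketch of those overrides; the ``forward_config_*()``
helpers are assumed messaging stubs:

::

    static uint32_t proxy_config_read(PCIDevice *d, uint32_t addr, int len)
    {
        return forward_config_read(d, addr, len);
    }

    static void proxy_config_write(PCIDevice *d, uint32_t addr,
                                   uint32_t val, int len)
    {
        /* keep QEMU's view of BAR registers coherent, then forward */
        pci_default_write_config(d, addr, val, len);
        forward_config_write(d, addr, val, len);
    }

    static void pci_device_proxy_class_init(ObjectClass *klass, void *data)
    {
        PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
        k->config_read  = proxy_config_read;
        k->config_write = proxy_config_write;
    }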
+interrupt receipt
+^^^^^^^^^^^^^^^^^
+
+A proxy for a device that generates interrupts will need to create a
+socket to receive interrupt indications from the emulation process. An
+incoming interrupt indication would then be sent up to its bus parent to
+be injected into the guest. For example, a PCI device object may use
+``pci_set_irq()``.
+
+live migration
+^^^^^^^^^^^^^^
+
+The proxy will register to save and restore any *vmstate* it needs over
+a live migration event. The device proxy does not need to manage the
+remote device's *vmstate*; that will be handled by the remote process
+proxy (see below).
+
+QEMU remote device operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Generic device operations, such as DMA, will be performed by the remote
+process proxy by sending messages to the remote process.
+
+DMA operations
+^^^^^^^^^^^^^^
+
+DMA operations would be handled much like vhost applications do. One of
+the initial messages sent to the emulation process is a guest memory
+table. Each entry in this table consists of a file descriptor and size
+that the emulation process can ``mmap()`` to directly access guest
+memory, similar to ``vhost_user_set_mem_table()``. Note guest memory
+must be backed by file descriptors, such as when QEMU is given the
+*-mem-path* command line option.
+
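A sketch of what one memory-table entry might look like; the field
names and layout are illustrative, not the series' wire format:

::

    typedef struct GuestMemRegion {
        uint64_t gpa;      /* guest physical base address */
        uint64_t size;     /* length of the region */
        uint64_t offset;   /* offset of the region within the fd */
        /* the file descriptor itself travels as SCM_RIGHTS ancillary
         * data on the UNIX domain socket, not inside this struct */
    } GuestMemRegion;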
+IOMMU operations
+^^^^^^^^^^^^^^^^
+
+When the emulated system includes an IOMMU, the remote process proxy in
+QEMU will need to create a socket for IOMMU requests from the emulation
+process. It will handle those requests with an
+``address_space_get_iotlb_entry()`` call. In order to handle IOMMU
+unmaps, the remote process proxy will also register as a listener on the
+device's DMA address space. When an IOMMU memory region is created
+within the DMA address space, an IOMMU notifier for unmaps will be added
+to the memory region that will forward unmaps to the emulation process
+over the IOMMU socket.
+
+device hot-plug via QMP
+^^^^^^^^^^^^^^^^^^^^^^^
+
+A QMP "device\_add" command can add a device emulated by a remote
+process. It will also have a "rid" option to the command, just as the
+*-device* command line option does. The remote process may either be one
+started at QEMU startup, or be one added by the "add-process" QMP
+command described above. In either case, the remote process proxy will
+forward the new device's JSON description to the corresponding emulation
+process.
+
+live migration
+^^^^^^^^^^^^^^
+
+The remote process proxy will also register for live migration
+notifications with ``vmstate_register()``. When called to save state,
+the proxy will send the remote process a secondary socket file
+descriptor to save the remote process's device *vmstate* over. The
+incoming byte stream length and data will be saved as the proxy's
+*vmstate*. When the proxy is resumed on its new host, this *vmstate*
+will be extracted, and a secondary socket file descriptor will be sent
+to the new remote process through which it receives the *vmstate* in
+order to restore the devices there.
+
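For illustration, a sketch of how the proxy might describe that opaque
byte stream as *vmstate*; the struct and field names are assumptions:

::

    static const VMStateDescription vmstate_pci_proxy_dev = {
        .name = "pci-proxy-dev",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_PCI_DEVICE(parent_dev, PCIProxyDev),
            /* length of the remote device's state stream ... */
            VMSTATE_UINT32(remote_state_len, PCIProxyDev),
            /* ... followed by the stream itself as an opaque buffer */
            VMSTATE_VBUFFER_UINT32(remote_state, PCIProxyDev, 0, NULL,
                                   remote_state_len),
            VMSTATE_END_OF_LIST()
        }
    };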
+device emulation in remote process
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The parts of QEMU that the emulation program will need include the
+object model; the memory emulation objects; the device emulation objects
+of the targeted device, and any dependent devices; and, the device's
+backends. It will also need code to set up the machine environment,
+handle requests from the QEMU process, and route machine-level requests
+(such as interrupts or IOMMU mappings) back to the QEMU process.
+
+initialization
+^^^^^^^^^^^^^^
+
+The process initialization sequence will follow the same sequence as
+QEMU's. It will first initialize the backend objects, then the
+device emulation objects. The JSON descriptions sent by the QEMU process
+will drive which objects need to be created.
+
+- address spaces
+
+Before the device objects are created, the initial address spaces and
+memory regions must be configured with ``memory_map_init()``. This
+creates a RAM memory region object (*system\_memory*) and an IO memory
+region object (*system\_io*).
+
+- RAM
+
+RAM memory region creation will follow how ``pc_memory_init()`` creates
+them, but must use ``memory_region_init_ram_from_fd()`` instead of
+``memory_region_allocate_system_memory()``. The file descriptors needed
+will be supplied by the guest memory table from above. Those RAM regions
+would then be added to the *system\_memory* memory region with
+``memory_region_add_subregion()`` (a sketch follows this list).
+
+- PCI
+
+IO initialization will be driven by the JSON descriptions sent from the
+QEMU process. For a PCI device, a PCI bus will need to be created with
+``pci_root_bus_new()``, and a PCI memory region will need to be created
+and added to the *system\_memory* memory region with
+``memory_region_add_subregion_overlap()``. The overlap version is
+required for architectures where PCI memory overlaps with RAM memory.
+
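A condensed sketch of the bring-up steps above; ``fd``, ``fd_offset``,
``gpa``, ``region_size`` and ``parent_dev`` are placeholders, and the
``memory_region_init_ram_from_fd()`` parameter list follows this
series' "memory: alloc RAM from file at offset" patch (it differs in
other QEMU versions):

::

    MemoryRegion *ram = g_new0(MemoryRegion, 1);
    memory_region_init_ram_from_fd(ram, NULL, "remote-ram", region_size,
                                   true, fd, fd_offset, &error_fatal);
    memory_region_add_subregion(get_system_memory(), gpa, ram);

    MemoryRegion *pci_mem = g_new0(MemoryRegion, 1);
    memory_region_init(pci_mem, NULL, "pci", UINT64_MAX);
    memory_region_add_subregion_overlap(get_system_memory(), 0,
                                        pci_mem, -1);

    PCIBus *bus = pci_root_bus_new(parent_dev, "pcie.0", pci_mem,
                                   get_system_io(), 0, TYPE_PCIE_BUS);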
+MMIO handling
+^^^^^^^^^^^^^
+
+The device emulation objects will use ``memory_region_init_io()`` to
+install their MMIO handlers, and ``pci_register_bar()`` to associate
+those handlers with a PCI BAR, as they do within QEMU currently.
+
+In order to use ``address_space_rw()`` in the emulation process to
+handle MMIO requests from QEMU, the PCI physical addresses must be the
+same in the QEMU process and the device emulation process. In order to
+accomplish that, guest BAR programming must also be forwarded from QEMU
+to the emulation process.
+
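As a reminder of that (unchanged) pattern, a minimal sketch with
placeholder sizes and device state:

::

    /* in the device's realize() inside the emulation process */
    memory_region_init_io(&s->bar0, OBJECT(dev), &dev_mmio_ops, s,
                          "dev-bar0", 0x1000);
    pci_register_bar(PCI_DEVICE(dev), 0,
                     PCI_BASE_ADDRESS_SPACE_MEMORY, &s->bar0);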
+interrupt injection
+^^^^^^^^^^^^^^^^^^^
+
+When device emulation wants to inject an interrupt into the VM, the
+request climbs the device's bus object hierarchy until the point where a
+bus object knows how to signal the interrupt to the guest. The details
+depend on the type of interrupt being raised.
+
+- PCI pin interrupts
+
+On x86 systems, there is an emulated IOAPIC object attached to the root
+PCI bus object, and the root PCI object forwards interrupt requests to
+it. The IOAPIC object, in turn, calls the KVM driver to inject the
+corresponding interrupt into the VM. The simplest way to handle this in
+an emulation process would be to set up the root PCI bus driver (via
+``pci_bus_irqs()``) to send an interrupt request back to the QEMU
+process, and have the device proxy object reflect it up the PCI tree
+there.
+
+- PCI MSI/X interrupts
+
+PCI MSI/X interrupts are implemented in HW as DMA writes to a
+CPU-specific PCI address. In QEMU on x86, a KVM APIC object receives
+these DMA writes, then calls into the KVM driver to inject the interrupt
+into the VM. A simple emulation process implementation would be to send
+the MSI DMA address from QEMU as a message at initialization, then
+install an address space handler at that address which forwards the MSI
+message back to QEMU.
+
+DMA operations
+^^^^^^^^^^^^^^
+
+When an emulation object wants to DMA into or out of guest memory, it
+first must use dma\_memory\_map() to convert the DMA address to a local
+virtual address. The emulation process memory region objects set up above
+will be used to translate the DMA address to a local virtual address the
+device emulation code can access.
+
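A short sketch of that translation step; note that newer QEMU versions
add a ``MemTxAttrs`` argument to ``dma_memory_map()``, so treat the
exact signature as an assumption of this era:

::

    dma_addr_t len = xfer_len;
    void *buf = dma_memory_map(&address_space_memory, dma_addr, &len,
                               DMA_DIRECTION_FROM_DEVICE);
    if (buf) {
        /* ... device writes its data into buf ... */
        dma_memory_unmap(&address_space_memory, buf, len,
                         DMA_DIRECTION_FROM_DEVICE, len);
    }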
+IOMMU
+^^^^^
+
+When an IOMMU is in use in QEMU, DMA translation uses IOMMU memory
+regions to translate the DMA address to a guest physical address before
+that physical address can be translated to a local virtual address. The
+emulation process will need similar functionality.
+
+- IOTLB cache
+
+The emulation process will maintain a cache of recent IOMMU translations
+(the IOTLB). When the translate() callback of an IOMMU memory region is
+invoked, the IOTLB cache will be searched for an entry that will map the
+DMA address to a guest PA. On a cache miss, a message will be sent back
+to QEMU requesting the corresponding translation entry, which will both
+be used to return a guest address and be added to the cache.
+
+- IOTLB purge
+
+The IOMMU emulation will also need to act on unmap requests from QEMU.
+These happen when the guest IOMMU driver purges an entry from the
+guest's translation table.
+
+live migration
+^^^^^^^^^^^^^^
+
+When a remote process receives a live migration indication from QEMU, it
+will set up a channel using the received file descriptor with
+``qio_channel_socket_new_fd()``. This channel will be used to create a
+*QEMUfile* that can be passed to ``qemu_save_device_state()`` to send
+the process's device state back to QEMU. This method will be reversed on
+restore - the channel will be passed to ``qemu_loadvm_state()`` to
+restore the device state.
+
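A sketch of the save side under those assumptions;
``qemu_fopen_channel_output()`` is the QEMUFile helper of this era
(later trees rename it):

::

    QIOChannelSocket *sioc = qio_channel_socket_new_fd(fd, &error_fatal);
    QEMUFile *f = qemu_fopen_channel_output(QIO_CHANNEL(sioc));
    qemu_save_device_state(f);
    qemu_fclose(f);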
626
+Accelerating device emulation
627
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
628
+
629
+The messages that are required to be sent between QEMU and the emulation
630
+process can add considerable latency to IO operations. The optimizations
631
+described below attempt to ameliorate this effect by allowing the
632
+emulation process to communicate directly with the kernel KVM driver.
633
+The KVM file descriptors created would be passed to the emulation process
634
+via initialization messages, much like the guest memory table is done.
635
+#### MMIO acceleration
636
+
637
+Vhost user applications can receive guest virtio driver stores directly
638
+from KVM. The issue with the eventfd mechanism used by vhost user is
639
+that it does not pass any data with the event indication, so it cannot
640
+handle guest loads or guest stores that carry store data. This concept
641
+could, however, be expanded to cover more cases.
642
+
643
+The expanded idea would require a new type of KVM device:
644
+*KVM\_DEV\_TYPE\_USER*. This device has two file descriptors: a master
645
+descriptor that QEMU can use for configuration, and a slave descriptor
646
+that the emulation process can use to receive MMIO notifications. QEMU
647
+would create both descriptors using the KVM driver, and pass the slave
648
+descriptor to the emulation process via an initialization message.
649
+
650
+data structures
651
+^^^^^^^^^^^^^^^
652
+
653
+- guest physical range
654
+
655
+The guest physical range structure describes the address range that a
656
+device will respond to. It includes the base and length of the range, as
657
+well as which bus the range resides on (e.g., on an x86machine, it can
658
+specify whether the range refers to memory or IO addresses).
659
+
660
+A device can have multiple physical address ranges it responds to (e.g.,
661
+a PCI device can have multiple BARs), so the structure will also include
662
+an enumerated identifier to specify which of the device's ranges is
663
+being referred to.
664
+
665
++--------+----------------------------+
666
+| Name | Description |
667
++========+============================+
668
+| addr | range base address |
669
++--------+----------------------------+
670
+| len | range length |
671
++--------+----------------------------+
672
+| bus | addr type (memory or IO) |
673
++--------+----------------------------+
674
+| id | range ID (e.g., PCI BAR) |
675
++--------+----------------------------+
676
+
677
+- MMIO request structure
678
+
679
+This structure describes an MMIO operation. It includes which guest
680
+physical range the MMIO was within, the offset within that range, the
681
+MMIO type (e.g., load or store), and its length and data. It also
682
+includes a sequence number that can be used to reply to the MMIO, and
683
+the CPU that issued the MMIO.
684
+
685
++----------+------------------------+
686
+| Name | Description |
687
++==========+========================+
688
+| rid | range MMIO is within |
689
++----------+------------------------+
690
+| offset | offset within *rid* |
691
++----------+------------------------+
692
+| type | e.g., load or store |
693
++----------+------------------------+
694
+| len | MMIO length |
695
++----------+------------------------+
696
+| data | store data |
697
++----------+------------------------+
698
+| seq | sequence ID |
699
++----------+------------------------+
700
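+
+Taken together, the two structures might look as follows in C; the field
+types below are illustrative, not a defined ABI::
+
+    /* hypothetical guest physical range structure */
+    struct kvm_user_pa_range {
+        __u64 addr;   /* range base address */
+        __u64 len;    /* range length */
+        __u32 bus;    /* address type (memory or IO) */
+        __u32 id;     /* range ID (e.g., PCI BAR number) */
+    };
+
+    /* hypothetical MMIO request structure */
+    struct kvm_user_mmio_req {
+        __u32 rid;    /* range the MMIO is within */
+        __u32 type;   /* load or store */
+        __u64 offset; /* offset within the range */
+        __u32 len;    /* access length in bytes */
+        __u64 data;   /* store data, or space for the load reply */
+        __u64 seq;    /* sequence ID used to match replies */
+    };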
+
701
+- MMIO request queues
702
+
703
+MMIO request queues are FIFO arrays of MMIO request structures. There
704
+are two queues: the pending queue is for MMIOs that haven't been read by the
705
+emulation program, and the sent queue is for MMIOs that haven't been
706
+acknowledged. The main use of the second queue is to validate MMIO
707
+replies from the emulation program.
708
+
709
+- scoreboard
710
+
711
+Each CPU in the VM is emulated in QEMU by a separate thread, so multiple
712
+MMIOs may be waiting to be consumed by an emulation program and multiple
713
+threads may be waiting for MMIO replies. The scoreboard would contain a
714
+wait queue and sequence number for the per-CPU threads, allowing them to
715
+be individually woken when the MMIO reply is received from the emulation
716
+program. It also tracks the number of posted MMIO stores to the device
717
+that haven't been replied to, in order to satisfy the PCI constraint
718
+that a load to a device will not complete until all previous stores to
719
+that device have been completed.
720
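+
+One possible shape for a per-CPU scoreboard entry in the KVM driver
+(illustrative only)::
+
+    struct kvm_user_scoreboard {
+        wait_queue_head_t wq; /* where the vCPU thread sleeps for a reply */
+        u64 seq;              /* sequence number of the outstanding MMIO */
+        unsigned int posted;  /* posted stores not yet replied to */
+    };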
+
721
+- device shadow memory
722
+
723
+Some MMIO loads do not have device side-effects. These MMIOs can be
724
+completed without sending an MMIO request to the emulation program if the
725
+emulation program shares a shadow copy of the device's memory image
726
+with the KVM driver.
727
+
728
+The emulation program will ask the KVM driver to allocate memory for the
729
+shadow image, and will then use ``mmap()`` to directly access it. The
730
+emulation program can control KVM access to the shadow image by sending
731
+KVM an access map telling it which areas of the image have no
732
+side-effects (and can be completed immediately), and which require an
733
+MMIO request to the emulation program. The access map can also inform
734
+the KVM driver which access sizes are allowed to the image.
735
+
736
+master descriptor
737
+^^^^^^^^^^^^^^^^^
738
+
739
+The master descriptor is used by QEMU to configure the new KVM device.
740
+The descriptor would be returned by the KVM driver when QEMU issues a
741
+*KVM\_CREATE\_DEVICE* ``ioctl()`` with a *KVM\_DEV\_TYPE\_USER* type.
742
+
743
+KVM\_DEV\_TYPE\_USER device ops
744
+'''''''''''''''''''''''''''''''
745
+
746
+The *KVM\_DEV\_TYPE\_USER* operations vector will be registered by a
747
+``kvm_register_device_ops()`` call when the KVM system is initialized by
748
+``kvm_init()``. These device ops are called by the KVM driver when QEMU
749
+executes certain ``ioctl()`` operations on its KVM file descriptor. They
750
+include:
751
+
752
+- create
753
+
754
+This routine is called when QEMU issues a *KVM\_CREATE\_DEVICE*
755
+``ioctl()`` on its per-VM file descriptor. It will allocate and
756
+initialize a KVM user device specific data structure, and assign the
757
+*kvm\_device* private field to it.
758
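+
+A kernel-side sketch of the create op; ``struct kvm_user_device`` is a
+hypothetical type for this proposal::
+
+    static int kvm_user_device_create(struct kvm_device *dev, u32 type)
+    {
+        struct kvm_user_device *udev;
+
+        udev = kzalloc(sizeof(*udev), GFP_KERNEL);
+        if (!udev)
+            return -ENOMEM;
+
+        /* hang the user device state off the generic kvm_device */
+        dev->private = udev;
+        return 0;
+    }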
+
759
+- ioctl
760
+
761
+This routine is invoked when QEMU issues an ``ioctl()`` on the master
762
+descriptor. The ``ioctl()`` commands supported are defined by the KVM
763
+device type. *KVM\_DEV\_TYPE\_USER* ones will need several commands:
764
+
765
+*KVM\_DEV\_USER\_SLAVE\_FD* creates the slave file descriptor that will
766
+be passed to the device emulation program. Only one slave can be created
767
+by each master descriptor. The file operations supported by this
768
+descriptor are described below.
769
+
770
+The *KVM\_DEV\_USER\_PA\_RANGE* command configures a guest physical
771
+address range that the slave descriptor will receive MMIO notifications
772
+for. The range is specified by a guest physical range structure
773
+argument. For buses that assign addresses to devices dynamically, this
774
+command can be executed while the guest is running, as is the case
775
+when a guest changes a device's PCI BAR registers.
776
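+
+For example, the proxy might reprogram a BAR's range on the master
+descriptor as in this sketch (the command and the bus constant are
+proposed names)::
+
+    struct kvm_user_pa_range range = {
+        .addr = new_bar_addr,
+        .len  = bar_size,
+        .bus  = KVM_USER_BUS_MEMORY,
+        .id   = bar_number,
+    };
+
+    ioctl(master_fd, KVM_DEV_USER_PA_RANGE, &range);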
+
777
+*KVM\_DEV\_USER\_PA\_RANGE* will use ``kvm_io_bus_register_dev()`` to
778
+register *kvm\_io\_device\_ops* callbacks to be invoked when the guest
779
+performs an MMIO operation within the range. When a range is changed,
780
+``kvm_io_bus_unregister_dev()`` is used to remove the previous
781
+instantiation.
782
+
783
+*KVM\_DEV\_USER\_TIMEOUT* will configure a timeout value that specifies
784
+how long KVM will wait for the emulation process to respond to an MMIO
785
+indication.
786
+
787
+- destroy
788
+
789
+This routine is called when the VM instance is destroyed. It will need
790
+to destroy the slave descriptor and free any memory allocated by the
791
+driver, as well as the *kvm\_device* structure itself.
792
+
793
+slave descriptor
794
+^^^^^^^^^^^^^^^^
795
+
796
+The slave descriptor will have its own file operations vector, which
797
+responds to system calls on the descriptor performed by the device
798
+emulation program.
799
+
800
+- read
801
+
802
+A read returns any pending MMIO requests from the KVM driver as MMIO
803
+request structures. Multiple structures can be returned if there are
804
+multiple MMIO operations pending. The MMIO requests are moved from the
805
+pending queue to the sent queue, and if there are threads waiting for
806
+space in the pending queue to add new MMIO operations, they will be
807
+woken here.
808
+
809
+- write
810
+
811
+A write also consists of a set of MMIO requests. They are compared to
812
+the MMIO requests in the sent queue. Matches are removed from the sent
813
+queue, and any threads waiting for the reply are woken. If a store is
814
+removed, then the number of posted stores in the per-CPU scoreboard is
815
+decremented. When the number is zero, and a non-side-effect load was
816
+waiting for posted stores to complete, the load is continued.
817
+
818
+- ioctl
819
+
820
+There are several ioctl()s that can be performed on the slave
821
+descriptor.
822
+
823
+A *KVM\_DEV\_USER\_SHADOW\_SIZE* ``ioctl()`` causes the KVM driver to
824
+allocate memory for the shadow image. This memory can later be
825
+``mmap()``\ ed by the emulation process to share the emulation's view of
826
+device memory with the KVM driver.
827
+
828
+A *KVM\_DEV\_USER\_SHADOW\_CTRL* ``ioctl()`` controls access to the
829
+shadow image. It will send the KVM driver a shadow control map, which
830
+specifies which areas of the image can complete guest loads without
831
+sending the load request to the emulation program. It will also specify
832
+the size of load operations that are allowed.
833
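+
+A sketch of how the emulation program might set this up; the two
+commands are proposed names::
+
+    /* ask the KVM driver to allocate the shadow image */
+    ioctl(slave_fd, KVM_DEV_USER_SHADOW_SIZE, &shadow_size);
+
+    /* map the shadow so device emulation can update it directly */
+    shadow = mmap(NULL, shadow_size, PROT_READ | PROT_WRITE,
+                  MAP_SHARED, slave_fd, 0);
+
+    /* tell KVM which areas and access sizes can be satisfied locally */
+    ioctl(slave_fd, KVM_DEV_USER_SHADOW_CTRL, &ctrl_map);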
+
834
+- poll
835
+
836
+An emulation program will use the ``poll()`` call with a *POLLIN* flag
837
+to determine if there are MMIO requests waiting to be read. It will
838
+return if the pending MMIO request queue is not empty.
839
+
840
+- mmap
841
+
842
+This call allows the emulation program to directly access the shadow
843
+image allocated by the KVM driver. As device emulation updates device
844
+memory, changes with no side-effects will be reflected in the shadow,
845
+and the KVM driver can satisfy guest loads from the shadow image without
846
+needing to wait for the emulation program.
847
+
848
+kvm\_io\_device ops
849
+^^^^^^^^^^^^^^^^^^^
850
+
851
+Each KVM per-CPU thread can handle MMIO operations on behalf of the guest
852
+VM. KVM will use the MMIO's guest physical address to search for a
853
+matching *kvm\_io\_device* to see if the MMIO can be handled by the KVM
854
+driver instead of exiting back to QEMU. If a match is found, the
855
+corresponding callback will be invoked.
856
+
857
+- read
858
+
859
+This callback is invoked when the guest performs a load to the device.
860
+Loads with side-effects must be handled synchronously, with the KVM
861
+driver putting the QEMU thread to sleep waiting for the emulation
862
+process reply before re-starting the guest. Loads that do not have
863
+side-effects may be optimized by satisfying them from the shadow image,
864
+if there are no outstanding stores to the device by this CPU. PCI memory
865
+ordering demands that a load cannot complete before all older stores to
866
+the same device have been completed.
867
+
868
+- write
869
+
870
+Stores can be handled asynchronously unless the pending MMIO request
871
+queue is full. In this case, the QEMU thread must sleep waiting for
872
+space in the queue. Stores will increment the number of posted stores in
873
+the per-CPU scoreboard, in order to implement the PCI ordering
874
+constraint above.
875
+
876
+interrupt acceleration
877
+^^^^^^^^^^^^^^^^^^^^^^
878
+
879
+This performance optimization would work much like a vhost user
880
+application does, where the QEMU process sets up *eventfds* that cause
881
+the device's corresponding interrupt to be triggered by the KVM driver.
882
+These irq file descriptors are sent to the emulation process at
883
+initialization, and are used when the emulation code raises a device
884
+interrupt.
885
+
886
+intx acceleration
887
+'''''''''''''''''
888
+
889
+Traditional PCI pin interrupts are level based, so, in addition to an
890
+irq file descriptor, a re-sampling file descriptor needs to be sent to
891
+the emulation program. This second file descriptor allows multiple
892
+devices sharing an irq to be notified when the interrupt has been
893
+acknowledged by the guest, so they can re-trigger the interrupt if their
894
+device has not de-asserted its interrupt.
895
+
896
+intx irq descriptor
897
+"""""""""""""""""""
898
+
899
+The irq descriptors are created by the proxy object using
900
+``event_notifier_init()`` to create the irq and re-sampling
901
+*eventfds*, and ``kvm_vm_ioctl(KVM_IRQFD)`` to bind them to an interrupt.
902
+The interrupt route can be found with
903
+``pci_device_route_intx_to_irq()``.
904
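+
+A sketch of the proxy-side setup, assuming the route returned by
+``pci_device_route_intx_to_irq()`` is a GSI::
+
+    EventNotifier intr, resample;
+
+    event_notifier_init(&intr, 0);
+    event_notifier_init(&resample, 0);
+
+    struct kvm_irqfd irqfd = {
+        .fd         = event_notifier_get_fd(&intr),
+        .resamplefd = event_notifier_get_fd(&resample),
+        .flags      = KVM_IRQFD_FLAG_RESAMPLE,
+        .gsi        = pci_device_route_intx_to_irq(pci_dev, pin).irq,
+    };
+
+    kvm_vm_ioctl(kvm_state, KVM_IRQFD, &irqfd);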
+
905
+intx routing changes
906
+""""""""""""""""""""
907
+
908
+Intx routing can be changed when the guest programs the APIC the device
909
+pin is connected to. The proxy object in QEMU will use
910
+``pci_device_set_intx_routing_notifier()`` to be informed of any guest
911
+changes to the route. This handler will broadly follow the VFIO
912
+interrupt logic to change the route: de-assigning the existing irq
913
+descriptor from its route, then assigning it the new route (see
913
+``vfio_intx_update()``).
915
+
916
+MSI/X acceleration
917
+''''''''''''''''''
918
+
919
+MSI/X interrupts are sent as DMA transactions to the host. The interrupt
920
+data contains a vector that is programmed by the guest. A device may have
921
+multiple MSI interrupts associated with it, so multiple irq descriptors
922
+may need to be sent to the emulation program.
923
+
924
+MSI/X irq descriptor
925
+""""""""""""""""""""
926
+
927
+This case will also follow the VFIO example. For each MSI/X interrupt,
928
+an *eventfd* is created, a virtual interrupt is allocated by
929
+``kvm_irqchip_add_msi_route()``, and the virtual interrupt is bound to
930
+the eventfd with ``kvm_irqchip_add_irqfd_notifier()``.
931
+
932
+MSI/X config space changes
933
+""""""""""""""""""""""""""
934
+
935
+The guest may dynamically update several MSI-related tables in the
936
+device's PCI config space. These include per-MSI interrupt enables and
937
+vector data. Additionally, MSI/X tables exist in device memory space, not
938
+config space. Much like the BAR case above, the proxy object must look
939
+at guest config space programming to keep the MSI interrupt state
940
+consistent between QEMU and the emulation program.
941
+
942
+--------------
943
+
944
+Disaggregated CPU emulation
945
+---------------------------
946
+
947
+After IO services have been disaggregated, a second phase would be to
948
+separate a process to handle CPU instruction emulation from the main
949
+QEMU control function. There are no object separation points for this
950
+code, so the first task would be to create one.
951
+
952
+Host access controls
953
+--------------------
954
+
955
+Separating QEMU relies on the host OS's access restriction mechanisms to
956
+enforce that the differing processes can only access the objects they
957
+are entitled to. There are a couple of types of mechanisms usually provided
958
+by general purpose OSs.
959
+
960
+Discretionary access control
961
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
962
+
963
+Discretionary access control allows each user to control who can access
964
+their files. In Linux, this type of control is usually too coarse for
965
+QEMU separation, since it only provides three separate access controls:
966
+one for the same user ID, the second for user IDs with the same group
967
+ID, and the third for all other user IDs. Each device instance would
968
+need a separate user ID to provide access control, which is likely to be
969
+unwieldy for dynamically created VMs.
970
+
971
+Mandatory access control
972
+~~~~~~~~~~~~~~~~~~~~~~~~
973
+
974
+Mandatory access control allows the OS to add an additional set of
975
+controls on top of discretionary access. It also
976
+adds other attributes to processes and files such as types, roles, and
977
+categories, and can establish rules for how processes and files can
978
+interact.
979
+
980
+Type enforcement
981
+^^^^^^^^^^^^^^^^
982
+
983
+Type enforcement assigns a *type* attribute to processes and files, and
984
+allows rules to be written on what operations a process with a given
985
+type can perform on a file with a given type. QEMU separation could take
986
+advantage of type enforcement by running the emulation processes with
987
+different types, both from the main QEMU process, and from the emulation
988
+processes of different classes of devices.
989
+
990
+For example, guest disk images and disk emulation processes could have
991
+types separate from the main QEMU process and non-disk emulation
992
+processes, and the type rules could prevent processes other than disk
993
+emulation ones from accessing guest disk images. Similarly, network
994
+emulation processes can have a type separate from the main QEMU process
995
+and non-network emulation process, and only that type can access the
996
+host tun/tap device used to provide guest networking.
997
+
998
+Category enforcement
999
+^^^^^^^^^^^^^^^^^^^^
1000
+
1001
+Category enforcement assigns a set of numbers within a given range to
1002
+the process or file. The process is granted access to the file if the
1003
+process's set is a superset of the file's set. This enforcement can be
1004
+used to separate multiple instances of devices in the same class.
1005
+
1006
+For example, if there are multiple disk devices provided to a guest,
1007
+each device emulation process could be provisioned with a separate
1008
+category. The different device emulation processes would not be able to
1009
+access each other's backing disk images.
1010
+
1011
+Alternatively, categories could be used in lieu of the type enforcement
1012
+scheme described above. In this scenario, different categories would be
1013
+used to prevent device emulation processes in different classes from
1014
+accessing resources assigned to other classes.
1015
--
1016
2.29.2
1017
diff view generated by jsdifflib
New patch
1
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
1
2
3
Adds documentation explaining the command-line arguments needed
4
to use multi-process.
5
6
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
7
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
8
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Message-id: 49f757a84e5dd6fae14b22544897d1124c5fdbad.1611938319.git.jag.raman@oracle.com
11
12
[Move orphan docs/multi-process.rst document into docs/system/ and add
13
it to index.rst to prevent Sphinx "document isn't included in any
14
toctree" error.
15
--Stefan]
16
17
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
18
---
19
MAINTAINERS | 1 +
20
docs/system/index.rst | 1 +
21
docs/system/multi-process.rst | 64 +++++++++++++++++++++++++++++++++++
22
3 files changed, 66 insertions(+)
23
create mode 100644 docs/system/multi-process.rst
24
25
diff --git a/MAINTAINERS b/MAINTAINERS
26
index XXXXXXX..XXXXXXX 100644
27
--- a/MAINTAINERS
28
+++ b/MAINTAINERS
29
@@ -XXX,XX +XXX,XX @@ M: Jagannathan Raman <jag.raman@oracle.com>
30
M: John G Johnson <john.g.johnson@oracle.com>
31
S: Maintained
32
F: docs/devel/multi-process.rst
33
+F: docs/system/multi-process.rst
34
35
Build and test automation
36
-------------------------
37
diff --git a/docs/system/index.rst b/docs/system/index.rst
38
index XXXXXXX..XXXXXXX 100644
39
--- a/docs/system/index.rst
40
+++ b/docs/system/index.rst
41
@@ -XXX,XX +XXX,XX @@ Contents:
42
pr-manager
43
targets
44
security
45
+ multi-process
46
deprecated
47
removed-features
48
build-platforms
49
diff --git a/docs/system/multi-process.rst b/docs/system/multi-process.rst
50
new file mode 100644
51
index XXXXXXX..XXXXXXX
52
--- /dev/null
53
+++ b/docs/system/multi-process.rst
54
@@ -XXX,XX +XXX,XX @@
55
+Multi-process QEMU
56
+==================
57
+
58
+This document describes how to configure and use multi-process QEMU.
59
+For the design document refer to docs/devel/multi-process.rst.
60
+
61
+1) Configuration
62
+----------------
63
+
64
+Multi-process is enabled by default for targets that enable KVM.
65
+
66
+
67
+2) Usage
68
+--------
69
+
70
+Multi-process QEMU requires an orchestrator to launch.
71
+
72
+Following is a description of the command line used to launch mpqemu.
73
+
74
+* Orchestrator:
75
+
76
+ - The Orchestrator creates a unix socketpair (see the sketch at the
+ end of this section)
77
+
78
+ - It launches the remote process and passes one of the
79
+ sockets to it via command-line.
80
+
81
+ - It then launches QEMU and specifies the other socket as an option
82
+ to the Proxy device object
83
+
84
+* Remote Process:
85
+
86
+ - QEMU can enter remote process mode by using the "remote" machine
87
+ option.
88
+
89
+ - The orchestrator creates a "remote-object" with details about
90
+ the device and the file descriptor for the device
91
+
92
+ - The remaining options are no different from how one launches QEMU with
93
+ devices.
94
+
95
+ - Example command-line for the remote process is as follows:
96
+
97
+ /usr/bin/qemu-system-x86_64 \
98
+ -machine x-remote \
99
+ -device lsi53c895a,id=lsi0 \
100
+ -drive id=drive_image2,file=/build/ol7-nvme-test-1.qcow2 \
101
+ -device scsi-hd,id=drive2,drive=drive_image2,bus=lsi0.0,scsi-id=0 \
102
+ -object x-remote-object,id=robj1,devid=lsi0,fd=4
103
+
104
+* QEMU:
105
+
106
+ - Since parts of the RAM are shared between QEMU & remote process, a
107
+ memory-backend-memfd is required to facilitate this, as follows:
108
+
109
+ -object memory-backend-memfd,id=mem,size=2G
110
+
111
+ - An "x-pci-proxy-dev" device is created for each of the PCI devices emulated
112
+ in the remote process. A "socket" sub-option specifies the other end of the
113
+ unix channel created by the orchestrator. The "id" sub-option must be specified
114
+ and should be the same as the "id" specified for the remote PCI device.
115
+
116
+ - Example command-line for QEMU is as follows:
117
+
118
+ -device x-pci-proxy-dev,id=lsi0,socket=3
119
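+
+ - The orchestrator steps above could be implemented along these lines;
+ this is a minimal C sketch, with the actual device arguments elided::
+
+    int fds[2];
+
+    /* one end for the remote process, the other for QEMU's proxy */
+    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
+
+    if (fork() == 0) {
+        /* remote process: fds[0] stays open across exec (fd=...) */
+        execlp("qemu-system-x86_64", "qemu-system-x86_64",
+               "-machine", "x-remote", /* ... */ (char *)NULL);
+    }
+    if (fork() == 0) {
+        /* QEMU: fds[1] is named via the proxy's socket=... option */
+        execlp("qemu-system-x86_64", "qemu-system-x86_64",
+               /* ... -device x-pci-proxy-dev,... */ (char *)NULL);
+    }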
--
120
2.29.2
121
diff view generated by jsdifflib
1
clang's C11 atomic_fetch_*() functions only take a C11 atomic type
1
From: Jagannathan Raman <jag.raman@oracle.com>
2
pointer argument. QEMU uses direct types (int, etc) and this causes a
2
3
compiler error when QEMU code calls these functions in a source file
3
Allow RAM MemoryRegion to be created from an offset in a file, instead
4
that also included <stdatomic.h> via a system header file:
4
of allocating at offset of 0 by default. This is needed to synchronize
5
5
RAM between QEMU & remote process.
6
$ CC=clang CXX=clang++ ./configure ... && make
6
7
../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
7
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
8
8
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
9
Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
9
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
10
used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
11
Message-id: 609996697ad8617e3b01df38accc5c208c24d74e.1611938319.git.jag.raman@oracle.com
12
searched GitHub for existing "qatomic_" users but there seem to be none.
13
14
This patch was generated using:
15
16
$ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
17
sort -u >/tmp/changed_identifiers
18
$ for identifier in $(</tmp/changed_identifiers); do
19
sed -i "s%\<$identifier\>%q$identifier%g" \
20
$(git grep -I -l "\<$identifier\>")
21
done
22
23
I manually fixed line-wrap issues and misaligned rST tables.
24
25
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
26
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
27
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
28
Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
29
---
13
---
30
include/qemu/atomic.h | 248 +++++++++---------
14
include/exec/memory.h | 2 ++
31
docs/devel/lockcnt.txt | 8 +-
15
include/exec/ram_addr.h | 2 +-
32
docs/devel/rcu.txt | 34 +--
16
include/qemu/mmap-alloc.h | 4 +++-
33
accel/tcg/atomic_template.h | 20 +-
17
backends/hostmem-memfd.c | 2 +-
34
include/block/aio-wait.h | 4 +-
18
hw/misc/ivshmem.c | 3 ++-
35
include/block/aio.h | 8 +-
19
softmmu/memory.c | 3 ++-
36
include/exec/cpu_ldst.h | 2 +-
20
softmmu/physmem.c | 11 +++++++----
37
include/exec/exec-all.h | 6 +-
21
util/mmap-alloc.c | 7 ++++---
38
include/exec/log.h | 6 +-
22
util/oslib-posix.c | 2 +-
39
include/exec/memory.h | 2 +-
23
9 files changed, 23 insertions(+), 13 deletions(-)
40
include/exec/ram_addr.h | 26 +-
24
41
include/exec/ramlist.h | 2 +-
42
include/exec/tb-lookup.h | 4 +-
43
include/hw/core/cpu.h | 2 +-
44
include/qemu/atomic128.h | 6 +-
45
include/qemu/bitops.h | 2 +-
46
include/qemu/coroutine.h | 2 +-
47
include/qemu/log.h | 6 +-
48
include/qemu/queue.h | 7 +-
49
include/qemu/rcu.h | 10 +-
50
include/qemu/rcu_queue.h | 100 +++----
51
include/qemu/seqlock.h | 8 +-
52
include/qemu/stats64.h | 28 +-
53
include/qemu/thread.h | 24 +-
54
.../infiniband/hw/vmw_pvrdma/pvrdma_ring.h | 14 +-
55
linux-user/qemu.h | 2 +-
56
tcg/i386/tcg-target.h | 2 +-
57
tcg/s390/tcg-target.h | 2 +-
58
tcg/tci/tcg-target.h | 2 +-
59
accel/kvm/kvm-all.c | 12 +-
60
accel/tcg/cpu-exec.c | 15 +-
61
accel/tcg/cputlb.c | 24 +-
62
accel/tcg/tcg-all.c | 2 +-
63
accel/tcg/translate-all.c | 55 ++--
64
audio/jackaudio.c | 18 +-
65
block.c | 4 +-
66
block/block-backend.c | 15 +-
67
block/io.c | 48 ++--
68
block/nfs.c | 2 +-
69
block/sheepdog.c | 2 +-
70
block/throttle-groups.c | 12 +-
71
block/throttle.c | 4 +-
72
blockdev.c | 2 +-
73
blockjob.c | 2 +-
74
contrib/libvhost-user/libvhost-user.c | 2 +-
75
cpus-common.c | 26 +-
76
dump/dump.c | 8 +-
77
exec.c | 49 ++--
78
hw/core/cpu.c | 6 +-
79
hw/display/qxl.c | 4 +-
80
hw/hyperv/hyperv.c | 10 +-
81
hw/hyperv/vmbus.c | 2 +-
82
hw/i386/xen/xen-hvm.c | 2 +-
83
hw/intc/rx_icu.c | 12 +-
84
hw/intc/sifive_plic.c | 4 +-
85
hw/misc/edu.c | 16 +-
86
hw/net/virtio-net.c | 10 +-
87
hw/rdma/rdma_backend.c | 18 +-
88
hw/rdma/rdma_rm.c | 2 +-
89
hw/rdma/vmw/pvrdma_dev_ring.c | 4 +-
90
hw/s390x/s390-pci-bus.c | 2 +-
91
hw/s390x/virtio-ccw.c | 2 +-
92
hw/virtio/vhost.c | 2 +-
93
hw/virtio/virtio-mmio.c | 6 +-
94
hw/virtio/virtio-pci.c | 6 +-
95
hw/virtio/virtio.c | 16 +-
96
hw/xtensa/pic_cpu.c | 4 +-
97
iothread.c | 6 +-
98
linux-user/hppa/cpu_loop.c | 11 +-
99
linux-user/signal.c | 8 +-
100
migration/colo-failover.c | 4 +-
101
migration/migration.c | 8 +-
102
migration/multifd.c | 18 +-
103
migration/postcopy-ram.c | 34 +--
104
migration/rdma.c | 34 +--
105
monitor/hmp.c | 6 +-
106
monitor/misc.c | 2 +-
107
monitor/monitor.c | 6 +-
108
qemu-nbd.c | 2 +-
109
qga/commands.c | 12 +-
110
qom/object.c | 20 +-
111
scsi/qemu-pr-helper.c | 4 +-
112
softmmu/cpu-throttle.c | 10 +-
113
softmmu/cpus.c | 42 +--
114
softmmu/memory.c | 6 +-
115
softmmu/vl.c | 2 +-
116
target/arm/mte_helper.c | 6 +-
117
target/hppa/op_helper.c | 2 +-
118
target/i386/mem_helper.c | 2 +-
119
target/i386/whpx-all.c | 6 +-
120
target/riscv/cpu_helper.c | 2 +-
121
target/s390x/mem_helper.c | 4 +-
122
target/xtensa/exc_helper.c | 4 +-
123
target/xtensa/op_helper.c | 2 +-
124
tcg/tcg.c | 58 ++--
125
tcg/tci.c | 2 +-
126
tests/atomic64-bench.c | 14 +-
127
tests/atomic_add-bench.c | 14 +-
128
tests/iothread.c | 2 +-
129
tests/qht-bench.c | 12 +-
130
tests/rcutorture.c | 24 +-
131
tests/test-aio-multithread.c | 52 ++--
132
tests/test-logging.c | 4 +-
133
tests/test-rcu-list.c | 38 +--
134
tests/test-thread-pool.c | 10 +-
135
util/aio-posix.c | 14 +-
136
util/aio-wait.c | 2 +-
137
util/aio-win32.c | 5 +-
138
util/async.c | 28 +-
139
util/atomic64.c | 10 +-
140
util/bitmap.c | 14 +-
141
util/cacheinfo.c | 2 +-
142
util/fdmon-epoll.c | 4 +-
143
util/fdmon-io_uring.c | 12 +-
144
util/lockcnt.c | 52 ++--
145
util/log.c | 10 +-
146
util/qemu-coroutine-lock.c | 18 +-
147
util/qemu-coroutine-sleep.c | 4 +-
148
util/qemu-coroutine.c | 6 +-
149
util/qemu-sockets.c | 4 +-
150
util/qemu-thread-posix.c | 12 +-
151
util/qemu-thread-win32.c | 12 +-
152
util/qemu-timer.c | 12 +-
153
util/qht.c | 57 ++--
154
util/qsp.c | 50 ++--
155
util/rcu.c | 36 +--
156
util/stats64.c | 34 +--
157
docs/devel/atomics.rst | 134 +++++-----
158
scripts/kernel-doc | 2 +-
159
tcg/aarch64/tcg-target.c.inc | 2 +-
160
tcg/mips/tcg-target.c.inc | 2 +-
161
tcg/ppc/tcg-target.c.inc | 6 +-
162
tcg/sparc/tcg-target.c.inc | 5 +-
163
133 files changed, 1041 insertions(+), 1018 deletions(-)
164
165
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
166
index XXXXXXX..XXXXXXX 100644
167
--- a/include/qemu/atomic.h
168
+++ b/include/qemu/atomic.h
169
@@ -XXX,XX +XXX,XX @@
170
* no effect on the generated code but not using the atomic primitives
171
* will get flagged by sanitizers as a violation.
172
*/
173
-#define atomic_read__nocheck(ptr) \
174
+#define qatomic_read__nocheck(ptr) \
175
__atomic_load_n(ptr, __ATOMIC_RELAXED)
176
177
-#define atomic_read(ptr) \
178
- ({ \
179
+#define qatomic_read(ptr) \
180
+ ({ \
181
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
182
- atomic_read__nocheck(ptr); \
183
+ qatomic_read__nocheck(ptr); \
184
})
185
186
-#define atomic_set__nocheck(ptr, i) \
187
+#define qatomic_set__nocheck(ptr, i) \
188
__atomic_store_n(ptr, i, __ATOMIC_RELAXED)
189
190
-#define atomic_set(ptr, i) do { \
191
+#define qatomic_set(ptr, i) do { \
192
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
193
- atomic_set__nocheck(ptr, i); \
194
+ qatomic_set__nocheck(ptr, i); \
195
} while(0)
196
197
/* See above: most compilers currently treat consume and acquire the
198
- * same, but this slows down atomic_rcu_read unnecessarily.
199
+ * same, but this slows down qatomic_rcu_read unnecessarily.
200
*/
201
#ifdef __SANITIZE_THREAD__
202
-#define atomic_rcu_read__nocheck(ptr, valptr) \
203
+#define qatomic_rcu_read__nocheck(ptr, valptr) \
204
__atomic_load(ptr, valptr, __ATOMIC_CONSUME);
205
#else
206
-#define atomic_rcu_read__nocheck(ptr, valptr) \
207
- __atomic_load(ptr, valptr, __ATOMIC_RELAXED); \
208
+#define qatomic_rcu_read__nocheck(ptr, valptr) \
209
+ __atomic_load(ptr, valptr, __ATOMIC_RELAXED); \
210
smp_read_barrier_depends();
211
#endif
212
213
-#define atomic_rcu_read(ptr) \
214
- ({ \
215
+#define qatomic_rcu_read(ptr) \
216
+ ({ \
217
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
218
- typeof_strip_qual(*ptr) _val; \
219
- atomic_rcu_read__nocheck(ptr, &_val); \
220
- _val; \
221
+ typeof_strip_qual(*ptr) _val; \
222
+ qatomic_rcu_read__nocheck(ptr, &_val); \
223
+ _val; \
224
})
225
226
-#define atomic_rcu_set(ptr, i) do { \
227
+#define qatomic_rcu_set(ptr, i) do { \
228
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
229
- __atomic_store_n(ptr, i, __ATOMIC_RELEASE); \
230
+ __atomic_store_n(ptr, i, __ATOMIC_RELEASE); \
231
} while(0)
232
233
-#define atomic_load_acquire(ptr) \
234
+#define qatomic_load_acquire(ptr) \
235
({ \
236
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
237
typeof_strip_qual(*ptr) _val; \
238
@@ -XXX,XX +XXX,XX @@
239
_val; \
240
})
241
242
-#define atomic_store_release(ptr, i) do { \
243
+#define qatomic_store_release(ptr, i) do { \
244
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
245
__atomic_store_n(ptr, i, __ATOMIC_RELEASE); \
246
} while(0)
247
@@ -XXX,XX +XXX,XX @@
248
249
/* All the remaining operations are fully sequentially consistent */
250
251
-#define atomic_xchg__nocheck(ptr, i) ({ \
252
+#define qatomic_xchg__nocheck(ptr, i) ({ \
253
__atomic_exchange_n(ptr, (i), __ATOMIC_SEQ_CST); \
254
})
255
256
-#define atomic_xchg(ptr, i) ({ \
257
+#define qatomic_xchg(ptr, i) ({ \
258
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
259
- atomic_xchg__nocheck(ptr, i); \
260
+ qatomic_xchg__nocheck(ptr, i); \
261
})
262
263
/* Returns the eventual value, failed or not */
264
-#define atomic_cmpxchg__nocheck(ptr, old, new) ({ \
265
+#define qatomic_cmpxchg__nocheck(ptr, old, new) ({ \
266
typeof_strip_qual(*ptr) _old = (old); \
267
(void)__atomic_compare_exchange_n(ptr, &_old, new, false, \
268
__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
269
_old; \
270
})
271
272
-#define atomic_cmpxchg(ptr, old, new) ({ \
273
+#define qatomic_cmpxchg(ptr, old, new) ({ \
274
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
275
- atomic_cmpxchg__nocheck(ptr, old, new); \
276
+ qatomic_cmpxchg__nocheck(ptr, old, new); \
277
})
278
279
/* Provide shorter names for GCC atomic builtins, return old value */
280
-#define atomic_fetch_inc(ptr) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)
281
-#define atomic_fetch_dec(ptr) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST)
282
+#define qatomic_fetch_inc(ptr) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)
283
+#define qatomic_fetch_dec(ptr) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST)
284
285
-#ifndef atomic_fetch_add
286
-#define atomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
287
-#define atomic_fetch_sub(ptr, n) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST)
288
-#define atomic_fetch_and(ptr, n) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST)
289
-#define atomic_fetch_or(ptr, n) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST)
290
-#define atomic_fetch_xor(ptr, n) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST)
291
-#endif
292
+#define qatomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
293
+#define qatomic_fetch_sub(ptr, n) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST)
294
+#define qatomic_fetch_and(ptr, n) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST)
295
+#define qatomic_fetch_or(ptr, n) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST)
296
+#define qatomic_fetch_xor(ptr, n) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST)
297
298
-#define atomic_inc_fetch(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_SEQ_CST)
299
-#define atomic_dec_fetch(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_SEQ_CST)
300
-#define atomic_add_fetch(ptr, n) __atomic_add_fetch(ptr, n, __ATOMIC_SEQ_CST)
301
-#define atomic_sub_fetch(ptr, n) __atomic_sub_fetch(ptr, n, __ATOMIC_SEQ_CST)
302
-#define atomic_and_fetch(ptr, n) __atomic_and_fetch(ptr, n, __ATOMIC_SEQ_CST)
303
-#define atomic_or_fetch(ptr, n) __atomic_or_fetch(ptr, n, __ATOMIC_SEQ_CST)
304
-#define atomic_xor_fetch(ptr, n) __atomic_xor_fetch(ptr, n, __ATOMIC_SEQ_CST)
305
+#define qatomic_inc_fetch(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_SEQ_CST)
306
+#define qatomic_dec_fetch(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_SEQ_CST)
307
+#define qatomic_add_fetch(ptr, n) __atomic_add_fetch(ptr, n, __ATOMIC_SEQ_CST)
308
+#define qatomic_sub_fetch(ptr, n) __atomic_sub_fetch(ptr, n, __ATOMIC_SEQ_CST)
309
+#define qatomic_and_fetch(ptr, n) __atomic_and_fetch(ptr, n, __ATOMIC_SEQ_CST)
310
+#define qatomic_or_fetch(ptr, n) __atomic_or_fetch(ptr, n, __ATOMIC_SEQ_CST)
311
+#define qatomic_xor_fetch(ptr, n) __atomic_xor_fetch(ptr, n, __ATOMIC_SEQ_CST)
312
313
/* And even shorter names that return void. */
314
-#define atomic_inc(ptr) ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST))
315
-#define atomic_dec(ptr) ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST))
316
-#define atomic_add(ptr, n) ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
317
-#define atomic_sub(ptr, n) ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST))
318
-#define atomic_and(ptr, n) ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST))
319
-#define atomic_or(ptr, n) ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST))
320
-#define atomic_xor(ptr, n) ((void) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST))
321
+#define qatomic_inc(ptr) \
322
+ ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST))
323
+#define qatomic_dec(ptr) \
324
+ ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST))
325
+#define qatomic_add(ptr, n) \
326
+ ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
327
+#define qatomic_sub(ptr, n) \
328
+ ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST))
329
+#define qatomic_and(ptr, n) \
330
+ ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST))
331
+#define qatomic_or(ptr, n) \
332
+ ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST))
333
+#define qatomic_xor(ptr, n) \
334
+ ((void) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST))
335
336
#else /* __ATOMIC_RELAXED */
337
338
@@ -XXX,XX +XXX,XX @@
339
* but it is a full barrier at the hardware level. Add a compiler barrier
340
* to make it a full barrier also at the compiler level.
341
*/
342
-#define atomic_xchg(ptr, i) (barrier(), __sync_lock_test_and_set(ptr, i))
343
+#define qatomic_xchg(ptr, i) (barrier(), __sync_lock_test_and_set(ptr, i))
344
345
#elif defined(_ARCH_PPC)
346
347
@@ -XXX,XX +XXX,XX @@
348
/* These will only be atomic if the processor does the fetch or store
349
* in a single issue memory operation
350
*/
351
-#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile*) (p))
352
-#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile*) (p)) = (i))
353
+#define qatomic_read__nocheck(p) (*(__typeof__(*(p)) volatile*) (p))
354
+#define qatomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile*) (p)) = (i))
355
356
-#define atomic_read(ptr) atomic_read__nocheck(ptr)
357
-#define atomic_set(ptr, i) atomic_set__nocheck(ptr,i)
358
+#define qatomic_read(ptr) qatomic_read__nocheck(ptr)
359
+#define qatomic_set(ptr, i) qatomic_set__nocheck(ptr,i)
360
361
/**
362
- * atomic_rcu_read - reads a RCU-protected pointer to a local variable
363
+ * qatomic_rcu_read - reads a RCU-protected pointer to a local variable
364
* into a RCU read-side critical section. The pointer can later be safely
365
* dereferenced within the critical section.
366
*
367
@@ -XXX,XX +XXX,XX @@
368
* Inserts memory barriers on architectures that require them (currently only
369
* Alpha) and documents which pointers are protected by RCU.
370
*
371
- * atomic_rcu_read also includes a compiler barrier to ensure that
372
+ * qatomic_rcu_read also includes a compiler barrier to ensure that
373
* value-speculative optimizations (e.g. VSS: Value Speculation
374
* Scheduling) does not perform the data read before the pointer read
375
* by speculating the value of the pointer.
376
*
377
- * Should match atomic_rcu_set(), atomic_xchg(), atomic_cmpxchg().
378
+ * Should match qatomic_rcu_set(), qatomic_xchg(), qatomic_cmpxchg().
379
*/
380
-#define atomic_rcu_read(ptr) ({ \
381
- typeof(*ptr) _val = atomic_read(ptr); \
382
+#define qatomic_rcu_read(ptr) ({ \
383
+ typeof(*ptr) _val = qatomic_read(ptr); \
384
smp_read_barrier_depends(); \
385
_val; \
386
})
387
388
/**
389
- * atomic_rcu_set - assigns (publicizes) a pointer to a new data structure
390
+ * qatomic_rcu_set - assigns (publicizes) a pointer to a new data structure
391
* meant to be read by RCU read-side critical sections.
392
*
393
* Documents which pointers will be dereferenced by RCU read-side critical
394
@@ -XXX,XX +XXX,XX @@
395
* them. It also makes sure the compiler does not reorder code initializing the
396
* data structure before its publication.
397
*
398
- * Should match atomic_rcu_read().
399
+ * Should match qatomic_rcu_read().
400
*/
401
-#define atomic_rcu_set(ptr, i) do { \
402
+#define qatomic_rcu_set(ptr, i) do { \
403
smp_wmb(); \
404
- atomic_set(ptr, i); \
405
+ qatomic_set(ptr, i); \
406
} while (0)
407
408
-#define atomic_load_acquire(ptr) ({ \
409
- typeof(*ptr) _val = atomic_read(ptr); \
410
+#define qatomic_load_acquire(ptr) ({ \
411
+ typeof(*ptr) _val = qatomic_read(ptr); \
412
smp_mb_acquire(); \
413
_val; \
414
})
415
416
-#define atomic_store_release(ptr, i) do { \
417
+#define qatomic_store_release(ptr, i) do { \
418
smp_mb_release(); \
419
- atomic_set(ptr, i); \
420
+ qatomic_set(ptr, i); \
421
} while (0)
422
423
-#ifndef atomic_xchg
424
+#ifndef qatomic_xchg
425
#if defined(__clang__)
426
-#define atomic_xchg(ptr, i) __sync_swap(ptr, i)
427
+#define qatomic_xchg(ptr, i) __sync_swap(ptr, i)
428
#else
429
/* __sync_lock_test_and_set() is documented to be an acquire barrier only. */
430
-#define atomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i))
431
+#define qatomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i))
432
#endif
433
#endif
434
-#define atomic_xchg__nocheck atomic_xchg
435
+#define qatomic_xchg__nocheck qatomic_xchg
436
437
/* Provide shorter names for GCC atomic builtins. */
438
-#define atomic_fetch_inc(ptr) __sync_fetch_and_add(ptr, 1)
439
-#define atomic_fetch_dec(ptr) __sync_fetch_and_add(ptr, -1)
440
+#define qatomic_fetch_inc(ptr) __sync_fetch_and_add(ptr, 1)
441
+#define qatomic_fetch_dec(ptr) __sync_fetch_and_add(ptr, -1)
442
443
-#ifndef atomic_fetch_add
444
-#define atomic_fetch_add(ptr, n) __sync_fetch_and_add(ptr, n)
445
-#define atomic_fetch_sub(ptr, n) __sync_fetch_and_sub(ptr, n)
446
-#define atomic_fetch_and(ptr, n) __sync_fetch_and_and(ptr, n)
447
-#define atomic_fetch_or(ptr, n) __sync_fetch_and_or(ptr, n)
448
-#define atomic_fetch_xor(ptr, n) __sync_fetch_and_xor(ptr, n)
449
-#endif
450
+#define qatomic_fetch_add(ptr, n) __sync_fetch_and_add(ptr, n)
451
+#define qatomic_fetch_sub(ptr, n) __sync_fetch_and_sub(ptr, n)
452
+#define qatomic_fetch_and(ptr, n) __sync_fetch_and_and(ptr, n)
453
+#define qatomic_fetch_or(ptr, n) __sync_fetch_and_or(ptr, n)
454
+#define qatomic_fetch_xor(ptr, n) __sync_fetch_and_xor(ptr, n)
455
456
-#define atomic_inc_fetch(ptr) __sync_add_and_fetch(ptr, 1)
457
-#define atomic_dec_fetch(ptr) __sync_add_and_fetch(ptr, -1)
458
-#define atomic_add_fetch(ptr, n) __sync_add_and_fetch(ptr, n)
459
-#define atomic_sub_fetch(ptr, n) __sync_sub_and_fetch(ptr, n)
460
-#define atomic_and_fetch(ptr, n) __sync_and_and_fetch(ptr, n)
461
-#define atomic_or_fetch(ptr, n) __sync_or_and_fetch(ptr, n)
462
-#define atomic_xor_fetch(ptr, n) __sync_xor_and_fetch(ptr, n)
463
+#define qatomic_inc_fetch(ptr) __sync_add_and_fetch(ptr, 1)
464
+#define qatomic_dec_fetch(ptr) __sync_add_and_fetch(ptr, -1)
465
+#define qatomic_add_fetch(ptr, n) __sync_add_and_fetch(ptr, n)
466
+#define qatomic_sub_fetch(ptr, n) __sync_sub_and_fetch(ptr, n)
467
+#define qatomic_and_fetch(ptr, n) __sync_and_and_fetch(ptr, n)
468
+#define qatomic_or_fetch(ptr, n) __sync_or_and_fetch(ptr, n)
469
+#define qatomic_xor_fetch(ptr, n) __sync_xor_and_fetch(ptr, n)
470
471
-#define atomic_cmpxchg(ptr, old, new) __sync_val_compare_and_swap(ptr, old, new)
472
-#define atomic_cmpxchg__nocheck(ptr, old, new) atomic_cmpxchg(ptr, old, new)
473
+#define qatomic_cmpxchg(ptr, old, new) \
474
+ __sync_val_compare_and_swap(ptr, old, new)
475
+#define qatomic_cmpxchg__nocheck(ptr, old, new) qatomic_cmpxchg(ptr, old, new)
476
477
/* And even shorter names that return void. */
478
-#define atomic_inc(ptr) ((void) __sync_fetch_and_add(ptr, 1))
479
-#define atomic_dec(ptr) ((void) __sync_fetch_and_add(ptr, -1))
480
-#define atomic_add(ptr, n) ((void) __sync_fetch_and_add(ptr, n))
481
-#define atomic_sub(ptr, n) ((void) __sync_fetch_and_sub(ptr, n))
482
-#define atomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n))
483
-#define atomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n))
484
-#define atomic_xor(ptr, n) ((void) __sync_fetch_and_xor(ptr, n))
485
+#define qatomic_inc(ptr) ((void) __sync_fetch_and_add(ptr, 1))
486
+#define qatomic_dec(ptr) ((void) __sync_fetch_and_add(ptr, -1))
487
+#define qatomic_add(ptr, n) ((void) __sync_fetch_and_add(ptr, n))
488
+#define qatomic_sub(ptr, n) ((void) __sync_fetch_and_sub(ptr, n))
489
+#define qatomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n))
490
+#define qatomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n))
491
+#define qatomic_xor(ptr, n) ((void) __sync_fetch_and_xor(ptr, n))
492
493
#endif /* __ATOMIC_RELAXED */
494
495
@@ -XXX,XX +XXX,XX @@
496
/* This is more efficient than a store plus a fence. */
497
#if !defined(__SANITIZE_THREAD__)
498
#if defined(__i386__) || defined(__x86_64__) || defined(__s390x__)
499
-#define atomic_mb_set(ptr, i) ((void)atomic_xchg(ptr, i))
500
+#define qatomic_mb_set(ptr, i) ((void)qatomic_xchg(ptr, i))
501
#endif
502
#endif
503
504
-/* atomic_mb_read/set semantics map Java volatile variables. They are
505
+/* qatomic_mb_read/set semantics map Java volatile variables. They are
506
* less expensive on some platforms (notably POWER) than fully
507
* sequentially consistent operations.
508
*
509
@@ -XXX,XX +XXX,XX @@
510
* use. See docs/devel/atomics.txt for more discussion.
511
*/
512
513
-#ifndef atomic_mb_read
514
-#define atomic_mb_read(ptr) \
515
- atomic_load_acquire(ptr)
516
+#ifndef qatomic_mb_read
517
+#define qatomic_mb_read(ptr) \
518
+ qatomic_load_acquire(ptr)
519
#endif
520
521
-#ifndef atomic_mb_set
522
-#define atomic_mb_set(ptr, i) do { \
523
- atomic_store_release(ptr, i); \
524
+#ifndef qatomic_mb_set
525
+#define qatomic_mb_set(ptr, i) do { \
526
+ qatomic_store_release(ptr, i); \
527
smp_mb(); \
528
} while(0)
529
#endif
530
531
-#define atomic_fetch_inc_nonzero(ptr) ({ \
532
- typeof_strip_qual(*ptr) _oldn = atomic_read(ptr); \
533
- while (_oldn && atomic_cmpxchg(ptr, _oldn, _oldn + 1) != _oldn) { \
534
- _oldn = atomic_read(ptr); \
535
+#define qatomic_fetch_inc_nonzero(ptr) ({ \
536
+ typeof_strip_qual(*ptr) _oldn = qatomic_read(ptr); \
537
+ while (_oldn && qatomic_cmpxchg(ptr, _oldn, _oldn + 1) != _oldn) { \
538
+ _oldn = qatomic_read(ptr); \
539
} \
540
_oldn; \
541
})
542
543
/* Abstractions to access atomically (i.e. "once") i64/u64 variables */
544
#ifdef CONFIG_ATOMIC64
545
-static inline int64_t atomic_read_i64(const int64_t *ptr)
546
+static inline int64_t qatomic_read_i64(const int64_t *ptr)
547
{
548
/* use __nocheck because sizeof(void *) might be < sizeof(u64) */
549
- return atomic_read__nocheck(ptr);
550
+ return qatomic_read__nocheck(ptr);
551
}
552
553
-static inline uint64_t atomic_read_u64(const uint64_t *ptr)
554
+static inline uint64_t qatomic_read_u64(const uint64_t *ptr)
555
{
556
- return atomic_read__nocheck(ptr);
557
+ return qatomic_read__nocheck(ptr);
558
}
559
560
-static inline void atomic_set_i64(int64_t *ptr, int64_t val)
561
+static inline void qatomic_set_i64(int64_t *ptr, int64_t val)
562
{
563
- atomic_set__nocheck(ptr, val);
564
+ qatomic_set__nocheck(ptr, val);
565
}
566
567
-static inline void atomic_set_u64(uint64_t *ptr, uint64_t val)
568
+static inline void qatomic_set_u64(uint64_t *ptr, uint64_t val)
569
{
570
- atomic_set__nocheck(ptr, val);
571
+ qatomic_set__nocheck(ptr, val);
572
}
573
574
-static inline void atomic64_init(void)
575
+static inline void qatomic64_init(void)
576
{
577
}
578
#else /* !CONFIG_ATOMIC64 */
579
-int64_t atomic_read_i64(const int64_t *ptr);
580
-uint64_t atomic_read_u64(const uint64_t *ptr);
581
-void atomic_set_i64(int64_t *ptr, int64_t val);
582
-void atomic_set_u64(uint64_t *ptr, uint64_t val);
583
-void atomic64_init(void);
584
+int64_t qatomic_read_i64(const int64_t *ptr);
585
+uint64_t qatomic_read_u64(const uint64_t *ptr);
586
+void qatomic_set_i64(int64_t *ptr, int64_t val);
587
+void qatomic_set_u64(uint64_t *ptr, uint64_t val);
588
+void qatomic64_init(void);
589
#endif /* !CONFIG_ATOMIC64 */
590
591
#endif /* QEMU_ATOMIC_H */
592
diff --git a/docs/devel/lockcnt.txt b/docs/devel/lockcnt.txt
593
index XXXXXXX..XXXXXXX 100644
594
--- a/docs/devel/lockcnt.txt
595
+++ b/docs/devel/lockcnt.txt
596
@@ -XXX,XX +XXX,XX @@ not just frees, though there could be cases where this is not necessary.
597
598
Reads, instead, can be done without taking the mutex, as long as the
599
readers and writers use the same macros that are used for RCU, for
600
-example atomic_rcu_read, atomic_rcu_set, QLIST_FOREACH_RCU, etc. This is
601
+example qatomic_rcu_read, qatomic_rcu_set, QLIST_FOREACH_RCU, etc. This is
602
because the reads are done outside a lock and a set or QLIST_INSERT_HEAD
603
can happen concurrently with the read. The RCU API ensures that the
604
processor and the compiler see all required memory barriers.
605
@@ -XXX,XX +XXX,XX @@ qemu_lockcnt_lock and qemu_lockcnt_unlock:
606
if (!xyz) {
607
new_xyz = g_new(XYZ, 1);
608
...
609
- atomic_rcu_set(&xyz, new_xyz);
610
+ qatomic_rcu_set(&xyz, new_xyz);
611
}
612
qemu_lockcnt_unlock(&xyz_lockcnt);
613
614
@@ -XXX,XX +XXX,XX @@ qemu_lockcnt_dec:
615
616
qemu_lockcnt_inc(&xyz_lockcnt);
617
if (xyz) {
618
- XYZ *p = atomic_rcu_read(&xyz);
619
+ XYZ *p = qatomic_rcu_read(&xyz);
620
...
621
/* Accesses can now be done through "p". */
622
}
623
@@ -XXX,XX +XXX,XX @@ the decrement, the locking and the check on count as follows:
624
625
qemu_lockcnt_inc(&xyz_lockcnt);
626
if (xyz) {
627
- XYZ *p = atomic_rcu_read(&xyz);
628
+ XYZ *p = qatomic_rcu_read(&xyz);
629
...
630
/* Accesses can now be done through "p". */
631
}
632
diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt
633
index XXXXXXX..XXXXXXX 100644
634
--- a/docs/devel/rcu.txt
635
+++ b/docs/devel/rcu.txt
636
@@ -XXX,XX +XXX,XX @@ The core RCU API is small:
637
638
g_free_rcu(&foo, rcu);
639
640
- typeof(*p) atomic_rcu_read(p);
641
+ typeof(*p) qatomic_rcu_read(p);
642
643
- atomic_rcu_read() is similar to atomic_load_acquire(), but it makes
644
+ qatomic_rcu_read() is similar to qatomic_load_acquire(), but it makes
645
some assumptions on the code that calls it. This allows a more
646
optimized implementation.
647
648
- atomic_rcu_read assumes that whenever a single RCU critical
649
+ qatomic_rcu_read assumes that whenever a single RCU critical
650
section reads multiple shared data, these reads are either
651
data-dependent or need no ordering. This is almost always the
652
case when using RCU, because read-side critical sections typically
653
@@ -XXX,XX +XXX,XX @@ The core RCU API is small:
654
every update) until reaching a data structure of interest,
655
and then read from there.
656
657
- RCU read-side critical sections must use atomic_rcu_read() to
658
+ RCU read-side critical sections must use qatomic_rcu_read() to
659
read data, unless concurrent writes are prevented by another
660
synchronization mechanism.
661
662
@@ -XXX,XX +XXX,XX @@ The core RCU API is small:
663
data structure in a single direction, opposite to the direction
664
in which the updater initializes it.
665
666
- void atomic_rcu_set(p, typeof(*p) v);
667
+ void qatomic_rcu_set(p, typeof(*p) v);
668
669
- atomic_rcu_set() is similar to atomic_store_release(), though it also
670
+ qatomic_rcu_set() is similar to qatomic_store_release(), though it also
671
makes assumptions on the code that calls it in order to allow a more
672
optimized implementation.
673
674
- In particular, atomic_rcu_set() suffices for synchronization
675
+ In particular, qatomic_rcu_set() suffices for synchronization
676
with readers, if the updater never mutates a field within a
677
data item that is already accessible to readers. This is the
678
case when initializing a new copy of the RCU-protected data
679
structure; just ensure that initialization of *p is carried out
680
- before atomic_rcu_set() makes the data item visible to readers.
681
+ before qatomic_rcu_set() makes the data item visible to readers.
682
If this rule is observed, writes will happen in the opposite
683
order as reads in the RCU read-side critical sections (or if
684
there is just one update), and there will be no need for other
685
@@ -XXX,XX +XXX,XX @@ DIFFERENCES WITH LINUX
686
programming; not allowing this would prevent upgrading an RCU read-side
687
critical section to become an updater.
688
689
-- atomic_rcu_read and atomic_rcu_set replace rcu_dereference and
690
+- qatomic_rcu_read and qatomic_rcu_set replace rcu_dereference and
691
rcu_assign_pointer. They take a _pointer_ to the variable being accessed.
692
693
- call_rcu is a macro that has an extra argument (the name of the first
694
@@ -XXX,XX +XXX,XX @@ may be used as a restricted reference-counting mechanism. For example,
695
consider the following code fragment:
696
697
rcu_read_lock();
698
- p = atomic_rcu_read(&foo);
699
+ p = qatomic_rcu_read(&foo);
700
/* do something with p. */
701
rcu_read_unlock();
702
703
@@ -XXX,XX +XXX,XX @@ The write side looks simply like this (with appropriate locking):
704
705
qemu_mutex_lock(&foo_mutex);
706
old = foo;
707
- atomic_rcu_set(&foo, new);
708
+ qatomic_rcu_set(&foo, new);
709
qemu_mutex_unlock(&foo_mutex);
710
synchronize_rcu();
711
free(old);
712
@@ -XXX,XX +XXX,XX @@ If the processing cannot be done purely within the critical section, it
713
is possible to combine this idiom with a "real" reference count:
714
715
rcu_read_lock();
716
- p = atomic_rcu_read(&foo);
717
+ p = qatomic_rcu_read(&foo);
718
foo_ref(p);
719
rcu_read_unlock();
720
/* do something with p. */
721
@@ -XXX,XX +XXX,XX @@ The write side can be like this:
722
723
qemu_mutex_lock(&foo_mutex);
724
old = foo;
725
- atomic_rcu_set(&foo, new);
726
+ qatomic_rcu_set(&foo, new);
727
qemu_mutex_unlock(&foo_mutex);
728
synchronize_rcu();
729
foo_unref(old);
730
@@ -XXX,XX +XXX,XX @@ or with call_rcu:
731
732
qemu_mutex_lock(&foo_mutex);
733
old = foo;
734
- atomic_rcu_set(&foo, new);
735
+ qatomic_rcu_set(&foo, new);
736
qemu_mutex_unlock(&foo_mutex);
737
call_rcu(foo_unref, old, rcu);
738
739
@@ -XXX,XX +XXX,XX @@ last reference may be dropped on the read side. Hence you can
740
use call_rcu() instead:
741
742
foo_unref(struct foo *p) {
743
- if (atomic_fetch_dec(&p->refcount) == 1) {
744
+ if (qatomic_fetch_dec(&p->refcount) == 1) {
745
call_rcu(foo_destroy, p, rcu);
746
}
747
}
748
@@ -XXX,XX +XXX,XX @@ Instead, we store the size of the array with the array itself:
749
750
read side:
751
rcu_read_lock();
752
- struct arr *array = atomic_rcu_read(&global_array);
753
+ struct arr *array = qatomic_rcu_read(&global_array);
754
x = i < array->size ? array->data[i] : -1;
755
rcu_read_unlock();
756
return x;
757
@@ -XXX,XX +XXX,XX @@ Instead, we store the size of the array with the array itself:
758
759
/* Removal phase. */
760
old_array = global_array;
761
- atomic_rcu_set(&new_array->data, new_array);
762
+ qatomic_rcu_set(&new_array->data, new_array);
763
synchronize_rcu();
764
765
/* Reclamation phase. */
766
diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
767
index XXXXXXX..XXXXXXX 100644
768
--- a/accel/tcg/atomic_template.h
769
+++ b/accel/tcg/atomic_template.h
770
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
771
#if DATA_SIZE == 16
772
ret = atomic16_cmpxchg(haddr, cmpv, newv);
773
#else
774
- ret = atomic_cmpxchg__nocheck(haddr, cmpv, newv);
775
+ ret = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
776
#endif
777
ATOMIC_MMU_CLEANUP;
778
atomic_trace_rmw_post(env, addr, info);
779
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
780
ATOMIC_MMU_IDX);
781
782
atomic_trace_rmw_pre(env, addr, info);
783
- ret = atomic_xchg__nocheck(haddr, val);
784
+ ret = qatomic_xchg__nocheck(haddr, val);
785
ATOMIC_MMU_CLEANUP;
786
atomic_trace_rmw_post(env, addr, info);
787
return ret;
788
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
789
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false, \
790
ATOMIC_MMU_IDX); \
791
atomic_trace_rmw_pre(env, addr, info); \
792
- ret = atomic_##X(haddr, val); \
793
+ ret = qatomic_##X(haddr, val); \
794
ATOMIC_MMU_CLEANUP; \
795
atomic_trace_rmw_post(env, addr, info); \
796
return ret; \
797
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
798
ATOMIC_MMU_IDX); \
799
atomic_trace_rmw_pre(env, addr, info); \
800
smp_mb(); \
801
- cmp = atomic_read__nocheck(haddr); \
802
+ cmp = qatomic_read__nocheck(haddr); \
803
do { \
804
old = cmp; new = FN(old, val); \
805
- cmp = atomic_cmpxchg__nocheck(haddr, old, new); \
+ cmp = qatomic_cmpxchg__nocheck(haddr, old, new); \
} while (cmp != old); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, info); \
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, BSWAP(cmpv), BSWAP(newv));
#else
- ret = atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
+ ret = qatomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
#endif
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr, info);
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ATOMIC_MMU_IDX);

atomic_trace_rmw_pre(env, addr, info);
- ret = atomic_xchg__nocheck(haddr, BSWAP(val));
+ ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr, info);
return BSWAP(ret);
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, \
false, ATOMIC_MMU_IDX); \
atomic_trace_rmw_pre(env, addr, info); \
- ret = atomic_##X(haddr, BSWAP(val)); \
+ ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, info); \
return BSWAP(ret); \
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
false, ATOMIC_MMU_IDX); \
atomic_trace_rmw_pre(env, addr, info); \
smp_mb(); \
- ldn = atomic_read__nocheck(haddr); \
+ ldn = qatomic_read__nocheck(haddr); \
do { \
ldo = ldn; old = BSWAP(ldo); new = FN(old, val); \
- ldn = atomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \
+ ldn = qatomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \
} while (ldo != ldn); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, info); \
diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -XXX,XX +XXX,XX @@ extern AioWait global_aio_wait;
AioWait *wait_ = &global_aio_wait; \
AioContext *ctx_ = (ctx); \
/* Increment wait_->num_waiters before evaluating cond. */ \
- atomic_inc(&wait_->num_waiters); \
+ qatomic_inc(&wait_->num_waiters); \
if (ctx_ && in_aio_context_home_thread(ctx_)) { \
while ((cond)) { \
aio_poll(ctx_, true); \
@@ -XXX,XX +XXX,XX @@ extern AioWait global_aio_wait;
waited_ = true; \
} \
} \
- atomic_dec(&wait_->num_waiters); \
+ qatomic_dec(&wait_->num_waiters); \
waited_; })

/**
diff --git a/include/block/aio.h b/include/block/aio.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -XXX,XX +XXX,XX @@ int64_t aio_compute_timeout(AioContext *ctx);
*/
static inline void aio_disable_external(AioContext *ctx)
{
- atomic_inc(&ctx->external_disable_cnt);
+ qatomic_inc(&ctx->external_disable_cnt);
}

/**
@@ -XXX,XX +XXX,XX @@ static inline void aio_enable_external(AioContext *ctx)
{
int old;

- old = atomic_fetch_dec(&ctx->external_disable_cnt);
+ old = qatomic_fetch_dec(&ctx->external_disable_cnt);
assert(old > 0);
if (old == 1) {
/* Kick event loop so it re-arms file descriptors */
@@ -XXX,XX +XXX,XX @@ static inline void aio_enable_external(AioContext *ctx)
*/
static inline bool aio_external_disabled(AioContext *ctx)
{
- return atomic_read(&ctx->external_disable_cnt);
+ return qatomic_read(&ctx->external_disable_cnt);
}

/**
@@ -XXX,XX +XXX,XX @@ static inline bool aio_external_disabled(AioContext *ctx)
*/
static inline bool aio_node_check(AioContext *ctx, bool is_external)
{
- return !is_external || !atomic_read(&ctx->external_disable_cnt);
+ return !is_external || !qatomic_read(&ctx->external_disable_cnt);
}

/**
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
#if TCG_OVERSIZED_GUEST
return entry->addr_write;
#else
- return atomic_read(&entry->addr_write);
+ return qatomic_read(&entry->addr_write);
#endif
}

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void QEMU_NORETURN cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc);
*/
static inline bool cpu_loop_exit_requested(CPUState *cpu)
{
- return (int32_t)atomic_read(&cpu_neg(cpu)->icount_decr.u32) < 0;
+ return (int32_t)qatomic_read(&cpu_neg(cpu)->icount_decr.u32) < 0;
}

#if !defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ struct TranslationBlock {

extern bool parallel_cpus;

-/* Hide the atomic_read to make code a little easier on the eyes */
+/* Hide the qatomic_read to make code a little easier on the eyes */
static inline uint32_t tb_cflags(const TranslationBlock *tb)
{
- return atomic_read(&tb->cflags);
+ return qatomic_read(&tb->cflags);
}

/* current cflags for hashing/comparison */
diff --git a/include/exec/log.h b/include/exec/log.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/log.h
+++ b/include/exec/log.h
@@ -XXX,XX +XXX,XX @@ static inline void log_cpu_state(CPUState *cpu, int flags)

if (qemu_log_enabled()) {
rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
cpu_dump_state(cpu, logfile->fd, flags);
}
@@ -XXX,XX +XXX,XX @@ static inline void log_target_disas(CPUState *cpu, target_ulong start,
{
QemuLogFile *logfile;
rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
target_disas(logfile->fd, cpu, start, len);
}
@@ -XXX,XX +XXX,XX @@ static inline void log_disas(void *code, unsigned long size, const char *note)
{
QemuLogFile *logfile;
rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
disas(logfile->fd, code, size, note);
}
diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ void memory_region_init_ram_from_file(MemoryRegion *mr,
* @size: size of the region.
* @share: %true if memory must be mmaped with the MAP_SHARED flag
* @fd: the fd to mmap.
+ * @offset: offset within the file referenced by fd
* @errp: pointer to Error*, to store an error if it happens.
*
* Note that this function does not do anything to cause the data in the
@@ -XXX,XX +XXX,XX @@ void memory_region_init_ram_from_fd(MemoryRegion *mr,
uint64_t size,
bool share,
int fd,
+ ram_addr_t offset,
Error **errp);
#endif

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -XXX,XX +XXX,XX @@ RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
Error **errp);
RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
uint32_t ram_flags, int fd,
- Error **errp);
+ off_t offset, Error **errp);

RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
MemoryRegion *mr, Error **errp);
diff --git a/include/qemu/mmap-alloc.h b/include/qemu/mmap-alloc.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/mmap-alloc.h
+++ b/include/qemu/mmap-alloc.h
@@ -XXX,XX +XXX,XX @@ size_t qemu_mempath_getpagesize(const char *mem_path);
* otherwise, the alignment in use will be determined by QEMU.
* @shared: map has RAM_SHARED flag.
* @is_pmem: map has RAM_PMEM flag.
+ * @map_offset: map starts at offset of map_offset from the start of fd
*
* Return:
* On success, return a pointer to the mapped area.
@@ -XXX,XX +XXX,XX @@ void *qemu_ram_mmap(int fd,
size_t size,
size_t align,
bool shared,
- bool is_pmem);
+ bool is_pmem,
+ off_t map_offset);

void qemu_ram_munmap(int fd, void *ptr, size_t size);

diff --git a/backends/hostmem-memfd.c b/backends/hostmem-memfd.c
index XXXXXXX..XXXXXXX 100644
--- a/backends/hostmem-memfd.c
+++ b/backends/hostmem-memfd.c
@@ -XXX,XX +XXX,XX @@ memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
name = host_memory_backend_get_name(backend);
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
name, backend->size,
- backend->share, fd, errp);
+ backend->share, fd, 0, errp);
g_free(name);
}

diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -XXX,XX +XXX,XX @@ static void process_msg_shmem(IVShmemState *s, int fd, Error **errp)

/* mmap the region and map into the BAR2 */
memory_region_init_ram_from_fd(&s->server_bar2, OBJECT(s),
- "ivshmem.bar2", size, true, fd, &local_err);
+ "ivshmem.bar2", size, true, fd, 0,
+ &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
diff --git a/softmmu/memory.c b/softmmu/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_init_ram_from_fd(MemoryRegion *mr,
uint64_t size,
bool share,
int fd,
+ ram_addr_t offset,
Error **errp)
{
Error *err = NULL;
@@ -XXX,XX +XXX,XX @@ void memory_region_init_ram_from_fd(MemoryRegion *mr,
mr->destructor = memory_region_destructor_ram;
mr->ram_block = qemu_ram_alloc_from_fd(size, mr,
share ? RAM_SHARED : 0,
- fd, &err);
+ fd, offset, &err);
if (err) {
mr->size = int128_zero();
object_unparent(OBJECT(mr));
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -XXX,XX +XXX,XX @@ static void *file_ram_alloc(RAMBlock *block,
ram_addr_t memory,
int fd,
bool truncate,
+ off_t offset,
Error **errp)
{
void *area;
@@ -XXX,XX +XXX,XX @@ static void *file_ram_alloc(RAMBlock *block,
}

area = qemu_ram_mmap(fd, memory, block->mr->align,
- block->flags & RAM_SHARED, block->flags & RAM_PMEM);
+ block->flags & RAM_SHARED, block->flags & RAM_PMEM,
+ offset);
if (area == MAP_FAILED) {
error_setg_errno(errp, errno,
"unable to map backing store for guest RAM");
@@ -XXX,XX +XXX,XX @@ static void ram_block_add(RAMBlock *new_block, Error **errp, bool shared)
#ifdef CONFIG_POSIX
RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
uint32_t ram_flags, int fd,
- Error **errp)
+ off_t offset, Error **errp)
{
RAMBlock *new_block;
Error *local_err = NULL;
@@ -XXX,XX +XXX,XX @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
new_block->used_length = size;
new_block->max_length = size;
new_block->flags = ram_flags;
- new_block->host = file_ram_alloc(new_block, size, fd, !file_size, errp);
+ new_block->host = file_ram_alloc(new_block, size, fd, !file_size, offset,
+ errp);
if (!new_block->host) {
g_free(new_block);
return NULL;
@@ -XXX,XX +XXX,XX @@ RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
return NULL;
}

- block = qemu_ram_alloc_from_fd(size, mr, ram_flags, fd, errp);
+ block = qemu_ram_alloc_from_fd(size, mr, ram_flags, fd, 0, errp);
if (!block) {
if (created) {
unlink(mem_path);
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index XXXXXXX..XXXXXXX 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -XXX,XX +XXX,XX @@ void *qemu_ram_mmap(int fd,
size_t size,
size_t align,
bool shared,
- bool is_pmem)
+ bool is_pmem,
+ off_t map_offset)
{
int flags;
int map_sync_flags = 0;
@@ -XXX,XX +XXX,XX @@ void *qemu_ram_mmap(int fd,
offset = QEMU_ALIGN_UP((uintptr_t)guardptr, align) - (uintptr_t)guardptr;

ptr = mmap(guardptr + offset, size, PROT_READ | PROT_WRITE,
- flags | map_sync_flags, fd, 0);
+ flags | map_sync_flags, fd, map_offset);

if (ptr == MAP_FAILED && map_sync_flags) {
if (errno == ENOTSUP) {
@@ -XXX,XX +XXX,XX @@ void *qemu_ram_mmap(int fd,
* we will remove these flags to handle compatibility.
*/
ptr = mmap(guardptr + offset, size, PROT_READ | PROT_WRITE,
- flags, fd, 0);
+ flags, fd, map_offset);
}

diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -XXX,XX +XXX,XX @@ void *qemu_memalign(size_t alignment, size_t size)
void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment, bool shared)
{
size_t align = QEMU_VMALLOC_ALIGN;
- void *ptr = qemu_ram_mmap(-1, size, align, shared, false);
+ void *ptr = qemu_ram_mmap(-1, size, align, shared, false, 0);

if (ptr == MAP_FAILED) {
diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct FlatView {

static inline FlatView *address_space_to_flatview(AddressSpace *as)
{
- return atomic_rcu_read(&as->current_map);
+ return qatomic_rcu_read(&as->current_map);
}

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
page = start >> TARGET_PAGE_BITS;

WITH_RCU_READ_LOCK_GUARD() {
- blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+ blocks = qatomic_rcu_read(&ram_list.dirty_memory[client]);

idx = page / DIRTY_MEMORY_BLOCK_SIZE;
offset = page % DIRTY_MEMORY_BLOCK_SIZE;
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_physical_memory_all_dirty(ram_addr_t start,

RCU_READ_LOCK_GUARD();

- blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+ blocks = qatomic_rcu_read(&ram_list.dirty_memory[client]);

idx = page / DIRTY_MEMORY_BLOCK_SIZE;
offset = page % DIRTY_MEMORY_BLOCK_SIZE;
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_set_dirty_flag(ram_addr_t addr,

RCU_READ_LOCK_GUARD();

- blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+ blocks = qatomic_rcu_read(&ram_list.dirty_memory[client]);

set_bit_atomic(offset, blocks->blocks[idx]);
}
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,

WITH_RCU_READ_LOCK_GUARD() {
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
- blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i]);
+ blocks[i] = qatomic_rcu_read(&ram_list.dirty_memory[i]);
}

idx = page / DIRTY_MEMORY_BLOCK_SIZE;
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,

WITH_RCU_READ_LOCK_GUARD() {
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
- blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
+ blocks[i] =
+ qatomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
}

for (k = 0; k < nr; k++) {
if (bitmap[k]) {
unsigned long temp = leul_to_cpu(bitmap[k]);

- atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
+ qatomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);

if (global_dirty_log) {
- atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
- temp);
+ qatomic_or(
+ &blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
+ temp);
}

if (tcg_enabled()) {
- atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset],
- temp);
+ qatomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset],
+ temp);
}
}

@@ -XXX,XX +XXX,XX @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
DIRTY_MEMORY_BLOCK_SIZE);
unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);

- src = atomic_rcu_read(
+ src = qatomic_rcu_read(
&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks;

for (k = page; k < page + nr; k++) {
if (src[idx][offset]) {
- unsigned long bits = atomic_xchg(&src[idx][offset], 0);
+ unsigned long bits = qatomic_xchg(&src[idx][offset], 0);
unsigned long new_dirty;
new_dirty = ~dest[k];
dest[k] |= bits;
diff --git a/include/exec/ramlist.h b/include/exec/ramlist.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/ramlist.h
+++ b/include/exec/ramlist.h
@@ -XXX,XX +XXX,XX @@ typedef struct RAMBlockNotifier RAMBlockNotifier;
* rcu_read_lock();
*
* DirtyMemoryBlocks *blocks =
- * atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
+ * qatomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
*
* ram_addr_t idx = (addr >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
* unsigned long *block = blocks.blocks[idx];
diff --git a/include/exec/tb-lookup.h b/include/exec/tb-lookup.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/tb-lookup.h
+++ b/include/exec/tb-lookup.h
@@ -XXX,XX +XXX,XX @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, target_ulong *cs_base,

cpu_get_tb_cpu_state(env, pc, cs_base, flags);
hash = tb_jmp_cache_hash_func(*pc);
- tb = atomic_rcu_read(&cpu->tb_jmp_cache[hash]);
+ tb = qatomic_rcu_read(&cpu->tb_jmp_cache[hash]);

cf_mask &= ~CF_CLUSTER_MASK;
cf_mask |= cpu->cluster_index << CF_CLUSTER_SHIFT;
@@ -XXX,XX +XXX,XX @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, target_ulong *cs_base,
if (tb == NULL) {
return NULL;
}
- atomic_set(&cpu->tb_jmp_cache[hash], tb);
+ qatomic_set(&cpu->tb_jmp_cache[hash], tb);
return tb;
}

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
unsigned int i;

for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
- atomic_set(&cpu->tb_jmp_cache[i], NULL);
+ qatomic_set(&cpu->tb_jmp_cache[i], NULL);
}

diff --git a/include/qemu/atomic128.h b/include/qemu/atomic128.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/atomic128.h
+++ b/include/qemu/atomic128.h
@@ -XXX,XX +XXX,XX @@
#if defined(CONFIG_ATOMIC128)
static inline Int128 atomic16_cmpxchg(Int128 *ptr, Int128 cmp, Int128 new)
{
- return atomic_cmpxchg__nocheck(ptr, cmp, new);
+ return qatomic_cmpxchg__nocheck(ptr, cmp, new);
}
# define HAVE_CMPXCHG128 1
#elif defined(CONFIG_CMPXCHG128)
@@ -XXX,XX +XXX,XX @@ Int128 QEMU_ERROR("unsupported atomic")
#if defined(CONFIG_ATOMIC128)
static inline Int128 atomic16_read(Int128 *ptr)
{
- return atomic_read__nocheck(ptr);
+ return qatomic_read__nocheck(ptr);
}

static inline void atomic16_set(Int128 *ptr, Int128 val)
{
- atomic_set__nocheck(ptr, val);
+ qatomic_set__nocheck(ptr, val);
}

# define HAVE_ATOMIC128 1
diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/bitops.h
+++ b/include/qemu/bitops.h
@@ -XXX,XX +XXX,XX @@ static inline void set_bit_atomic(long nr, unsigned long *addr)
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);

- atomic_or(p, mask);
+ qatomic_or(p, mask);
}

/**
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -XXX,XX +XXX,XX @@ static inline coroutine_fn void qemu_co_mutex_assert_locked(CoMutex *mutex)
* because the condition will be false no matter whether we read NULL or
* the pointer for any other coroutine.
*/
- assert(atomic_read(&mutex->locked) &&
+ assert(qatomic_read(&mutex->locked) &&
mutex->holder == qemu_coroutine_self());
}

diff --git a/include/qemu/log.h b/include/qemu/log.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/log.h
+++ b/include/qemu/log.h
@@ -XXX,XX +XXX,XX @@ static inline bool qemu_log_separate(void)
bool res = false;

rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile && logfile->fd != stderr) {
res = true;
}
@@ -XXX,XX +XXX,XX @@ static inline FILE *qemu_log_lock(void)
{
QemuLogFile *logfile;
rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
qemu_flockfile(logfile->fd);
return logfile->fd;
@@ -XXX,XX +XXX,XX @@ qemu_log_vprintf(const char *fmt, va_list va)
QemuLogFile *logfile;

rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
vfprintf(logfile->fd, fmt, va);
}
diff --git a/include/qemu/queue.h b/include/qemu/queue.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/queue.h
+++ b/include/qemu/queue.h
@@ -XXX,XX +XXX,XX @@ struct { \
typeof(elm) save_sle_next; \
do { \
save_sle_next = (elm)->field.sle_next = (head)->slh_first; \
- } while (atomic_cmpxchg(&(head)->slh_first, save_sle_next, (elm)) != \
+ } while (qatomic_cmpxchg(&(head)->slh_first, save_sle_next, (elm)) !=\
save_sle_next); \
} while (/*CONSTCOND*/0)

#define QSLIST_MOVE_ATOMIC(dest, src) do { \
- (dest)->slh_first = atomic_xchg(&(src)->slh_first, NULL); \
+ (dest)->slh_first = qatomic_xchg(&(src)->slh_first, NULL); \
} while (/*CONSTCOND*/0)

#define QSLIST_REMOVE_HEAD(head, field) do { \
@@ -XXX,XX +XXX,XX @@ struct { \
/*
* Simple queue access methods.
*/
-#define QSIMPLEQ_EMPTY_ATOMIC(head) (atomic_read(&((head)->sqh_first)) == NULL)
+#define QSIMPLEQ_EMPTY_ATOMIC(head) \
+ (qatomic_read(&((head)->sqh_first)) == NULL)
#define QSIMPLEQ_EMPTY(head) ((head)->sqh_first == NULL)
#define QSIMPLEQ_FIRST(head) ((head)->sqh_first)
#define QSIMPLEQ_NEXT(elm, field) ((elm)->field.sqe_next)
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -XXX,XX +XXX,XX @@ static inline void rcu_read_lock(void)
return;
}

- ctr = atomic_read(&rcu_gp_ctr);
- atomic_set(&p_rcu_reader->ctr, ctr);
+ ctr = qatomic_read(&rcu_gp_ctr);
+ qatomic_set(&p_rcu_reader->ctr, ctr);

/* Write p_rcu_reader->ctr before reading RCU-protected pointers. */
smp_mb_placeholder();
@@ -XXX,XX +XXX,XX @@ static inline void rcu_read_unlock(void)
* smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr
* are sequentially consistent.
*/
- atomic_store_release(&p_rcu_reader->ctr, 0);
+ qatomic_store_release(&p_rcu_reader->ctr, 0);

/* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting. */
smp_mb_placeholder();
- if (unlikely(atomic_read(&p_rcu_reader->waiting))) {
- atomic_set(&p_rcu_reader->waiting, false);
+ if (unlikely(qatomic_read(&p_rcu_reader->waiting))) {
+ qatomic_set(&p_rcu_reader->waiting, false);
qemu_event_set(&rcu_gp_event);
}
}
diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/rcu_queue.h
+++ b/include/qemu/rcu_queue.h
@@ -XXX,XX +XXX,XX @@ extern "C" {
/*
* List access methods.
*/
-#define QLIST_EMPTY_RCU(head) (atomic_read(&(head)->lh_first) == NULL)
-#define QLIST_FIRST_RCU(head) (atomic_rcu_read(&(head)->lh_first))
-#define QLIST_NEXT_RCU(elm, field) (atomic_rcu_read(&(elm)->field.le_next))
+#define QLIST_EMPTY_RCU(head) (qatomic_read(&(head)->lh_first) == NULL)
+#define QLIST_FIRST_RCU(head) (qatomic_rcu_read(&(head)->lh_first))
+#define QLIST_NEXT_RCU(elm, field) (qatomic_rcu_read(&(elm)->field.le_next))

/*
* List functions.
@@ -XXX,XX +XXX,XX @@ extern "C" {

/*
- * The difference between atomic_read/set and atomic_rcu_read/set
+ * The difference between qatomic_read/set and qatomic_rcu_read/set
* is in the including of a read/write memory barrier to the volatile
* access. atomic_rcu_* macros include the memory barrier, the
* plain atomic macros do not. Therefore, it should be correct to
@@ -XXX,XX +XXX,XX @@ extern "C" {
#define QLIST_INSERT_AFTER_RCU(listelm, elm, field) do { \
(elm)->field.le_next = (listelm)->field.le_next; \
(elm)->field.le_prev = &(listelm)->field.le_next; \
- atomic_rcu_set(&(listelm)->field.le_next, (elm)); \
+ qatomic_rcu_set(&(listelm)->field.le_next, (elm)); \
if ((elm)->field.le_next != NULL) { \
(elm)->field.le_next->field.le_prev = \
&(elm)->field.le_next; \
@@ -XXX,XX +XXX,XX @@ extern "C" {
#define QLIST_INSERT_BEFORE_RCU(listelm, elm, field) do { \
(elm)->field.le_prev = (listelm)->field.le_prev; \
(elm)->field.le_next = (listelm); \
- atomic_rcu_set((listelm)->field.le_prev, (elm)); \
+ qatomic_rcu_set((listelm)->field.le_prev, (elm)); \
(listelm)->field.le_prev = &(elm)->field.le_next; \
} while (/*CONSTCOND*/0)

@@ -XXX,XX +XXX,XX @@ extern "C" {
#define QLIST_INSERT_HEAD_RCU(head, elm, field) do { \
(elm)->field.le_prev = &(head)->lh_first; \
(elm)->field.le_next = (head)->lh_first; \
- atomic_rcu_set((&(head)->lh_first), (elm)); \
+ qatomic_rcu_set((&(head)->lh_first), (elm)); \
if ((elm)->field.le_next != NULL) { \
(elm)->field.le_next->field.le_prev = \
&(elm)->field.le_next; \
@@ -XXX,XX +XXX,XX @@ extern "C" {
(elm)->field.le_next->field.le_prev = \
(elm)->field.le_prev; \
} \
- atomic_set((elm)->field.le_prev, (elm)->field.le_next); \
+ qatomic_set((elm)->field.le_prev, (elm)->field.le_next); \
} while (/*CONSTCOND*/0)

/* List traversal must occur within an RCU critical section. */
#define QLIST_FOREACH_RCU(var, head, field) \
- for ((var) = atomic_rcu_read(&(head)->lh_first); \
+ for ((var) = qatomic_rcu_read(&(head)->lh_first); \
(var); \
- (var) = atomic_rcu_read(&(var)->field.le_next))
+ (var) = qatomic_rcu_read(&(var)->field.le_next))

/* List traversal must occur within an RCU critical section. */
#define QLIST_FOREACH_SAFE_RCU(var, head, field, next_var) \
- for ((var) = (atomic_rcu_read(&(head)->lh_first)); \
+ for ((var) = (qatomic_rcu_read(&(head)->lh_first)); \
(var) && \
- ((next_var) = atomic_rcu_read(&(var)->field.le_next), 1); \
+ ((next_var) = qatomic_rcu_read(&(var)->field.le_next), 1); \
(var) = (next_var))

/*
@@ -XXX,XX +XXX,XX @@ extern "C" {
*/

/* Simple queue access methods */
-#define QSIMPLEQ_EMPTY_RCU(head) (atomic_read(&(head)->sqh_first) == NULL)
-#define QSIMPLEQ_FIRST_RCU(head) atomic_rcu_read(&(head)->sqh_first)
-#define QSIMPLEQ_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sqe_next)
+#define QSIMPLEQ_EMPTY_RCU(head) \
+ (qatomic_read(&(head)->sqh_first) == NULL)
+#define QSIMPLEQ_FIRST_RCU(head) qatomic_rcu_read(&(head)->sqh_first)
+#define QSIMPLEQ_NEXT_RCU(elm, field) qatomic_rcu_read(&(elm)->field.sqe_next)

/* Simple queue functions */
#define QSIMPLEQ_INSERT_HEAD_RCU(head, elm, field) do { \
@@ -XXX,XX +XXX,XX @@ extern "C" {
if ((elm)->field.sqe_next == NULL) { \
(head)->sqh_last = &(elm)->field.sqe_next; \
} \
- atomic_rcu_set(&(head)->sqh_first, (elm)); \
+ qatomic_rcu_set(&(head)->sqh_first, (elm)); \
} while (/*CONSTCOND*/0)

#define QSIMPLEQ_INSERT_TAIL_RCU(head, elm, field) do { \
(elm)->field.sqe_next = NULL; \
- atomic_rcu_set((head)->sqh_last, (elm)); \
+ qatomic_rcu_set((head)->sqh_last, (elm)); \
(head)->sqh_last = &(elm)->field.sqe_next; \
} while (/*CONSTCOND*/0)

@@ -XXX,XX +XXX,XX @@ extern "C" {
if ((elm)->field.sqe_next == NULL) { \
(head)->sqh_last = &(elm)->field.sqe_next; \
} \
- atomic_rcu_set(&(listelm)->field.sqe_next, (elm)); \
+ qatomic_rcu_set(&(listelm)->field.sqe_next, (elm)); \
} while (/*CONSTCOND*/0)

#define QSIMPLEQ_REMOVE_HEAD_RCU(head, field) do { \
- atomic_set(&(head)->sqh_first, (head)->sqh_first->field.sqe_next); \
+ qatomic_set(&(head)->sqh_first, (head)->sqh_first->field.sqe_next);\
if ((head)->sqh_first == NULL) { \
(head)->sqh_last = &(head)->sqh_first; \
} \
@@ -XXX,XX +XXX,XX @@ extern "C" {
while (curr->field.sqe_next != (elm)) { \
curr = curr->field.sqe_next; \
} \
- atomic_set(&curr->field.sqe_next, \
+ qatomic_set(&curr->field.sqe_next, \
curr->field.sqe_next->field.sqe_next); \
if (curr->field.sqe_next == NULL) { \
(head)->sqh_last = &(curr)->field.sqe_next; \
@@ -XXX,XX +XXX,XX @@ extern "C" {
} while (/*CONSTCOND*/0)

#define QSIMPLEQ_FOREACH_RCU(var, head, field) \
- for ((var) = atomic_rcu_read(&(head)->sqh_first); \
+ for ((var) = qatomic_rcu_read(&(head)->sqh_first); \
(var); \
- (var) = atomic_rcu_read(&(var)->field.sqe_next))
+ (var) = qatomic_rcu_read(&(var)->field.sqe_next))

#define QSIMPLEQ_FOREACH_SAFE_RCU(var, head, field, next) \
- for ((var) = atomic_rcu_read(&(head)->sqh_first); \
- (var) && ((next) = atomic_rcu_read(&(var)->field.sqe_next), 1); \
+ for ((var) = qatomic_rcu_read(&(head)->sqh_first); \
+ (var) && ((next) = qatomic_rcu_read(&(var)->field.sqe_next), 1);\
(var) = (next))

/*
@@ -XXX,XX +XXX,XX @@ extern "C" {
*/

/* Tail queue access methods */
-#define QTAILQ_EMPTY_RCU(head) (atomic_read(&(head)->tqh_first) == NULL)
-#define QTAILQ_FIRST_RCU(head) atomic_rcu_read(&(head)->tqh_first)
-#define QTAILQ_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.tqe_next)
+#define QTAILQ_EMPTY_RCU(head) (qatomic_read(&(head)->tqh_first) == NULL)
+#define QTAILQ_FIRST_RCU(head) qatomic_rcu_read(&(head)->tqh_first)
+#define QTAILQ_NEXT_RCU(elm, field) qatomic_rcu_read(&(elm)->field.tqe_next)

/* Tail queue functions */
#define QTAILQ_INSERT_HEAD_RCU(head, elm, field) do { \
@@ -XXX,XX +XXX,XX @@ extern "C" {
} else { \
(head)->tqh_circ.tql_prev = &(elm)->field.tqe_circ; \
} \
- atomic_rcu_set(&(head)->tqh_first, (elm)); \
+ qatomic_rcu_set(&(head)->tqh_first, (elm)); \
(elm)->field.tqe_circ.tql_prev = &(head)->tqh_circ; \
} while (/*CONSTCOND*/0)

#define QTAILQ_INSERT_TAIL_RCU(head, elm, field) do { \
(elm)->field.tqe_next = NULL; \
(elm)->field.tqe_circ.tql_prev = (head)->tqh_circ.tql_prev; \
- atomic_rcu_set(&(head)->tqh_circ.tql_prev->tql_next, (elm)); \
+ qatomic_rcu_set(&(head)->tqh_circ.tql_prev->tql_next, (elm)); \
(head)->tqh_circ.tql_prev = &(elm)->field.tqe_circ; \
} while (/*CONSTCOND*/0)

@@ -XXX,XX +XXX,XX @@ extern "C" {
} else { \
(head)->tqh_circ.tql_prev = &(elm)->field.tqe_circ; \
} \
- atomic_rcu_set(&(listelm)->field.tqe_next, (elm)); \
+ qatomic_rcu_set(&(listelm)->field.tqe_next, (elm)); \
(elm)->field.tqe_circ.tql_prev = &(listelm)->field.tqe_circ; \
} while (/*CONSTCOND*/0)

#define QTAILQ_INSERT_BEFORE_RCU(listelm, elm, field) do { \
(elm)->field.tqe_circ.tql_prev = (listelm)->field.tqe_circ.tql_prev; \
(elm)->field.tqe_next = (listelm); \
- atomic_rcu_set(&(listelm)->field.tqe_circ.tql_prev->tql_next, (elm)); \
+ qatomic_rcu_set(&(listelm)->field.tqe_circ.tql_prev->tql_next, (elm));\
(listelm)->field.tqe_circ.tql_prev = &(elm)->field.tqe_circ; \
} while (/*CONSTCOND*/0)

@@ -XXX,XX +XXX,XX @@ extern "C" {
} else { \
(head)->tqh_circ.tql_prev = (elm)->field.tqe_circ.tql_prev; \
} \
- atomic_set(&(elm)->field.tqe_circ.tql_prev->tql_next, (elm)->field.tqe_next); \
+ qatomic_set(&(elm)->field.tqe_circ.tql_prev->tql_next, \
+ (elm)->field.tqe_next); \
(elm)->field.tqe_circ.tql_prev = NULL; \
} while (/*CONSTCOND*/0)

#define QTAILQ_FOREACH_RCU(var, head, field) \
- for ((var) = atomic_rcu_read(&(head)->tqh_first); \
+ for ((var) = qatomic_rcu_read(&(head)->tqh_first); \
(var); \
- (var) = atomic_rcu_read(&(var)->field.tqe_next))
+ (var) = qatomic_rcu_read(&(var)->field.tqe_next))

#define QTAILQ_FOREACH_SAFE_RCU(var, head, field, next) \
- for ((var) = atomic_rcu_read(&(head)->tqh_first); \
- (var) && ((next) = atomic_rcu_read(&(var)->field.tqe_next), 1); \
+ for ((var) = qatomic_rcu_read(&(head)->tqh_first); \
+ (var) && ((next) = qatomic_rcu_read(&(var)->field.tqe_next), 1);\
(var) = (next))

/*
@@ -XXX,XX +XXX,XX @@ extern "C" {
*/

/* Singly-linked list access methods */
-#define QSLIST_EMPTY_RCU(head) (atomic_read(&(head)->slh_first) == NULL)
-#define QSLIST_FIRST_RCU(head) atomic_rcu_read(&(head)->slh_first)
-#define QSLIST_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sle_next)
+#define QSLIST_EMPTY_RCU(head) (qatomic_read(&(head)->slh_first) == NULL)
+#define QSLIST_FIRST_RCU(head) qatomic_rcu_read(&(head)->slh_first)
+#define QSLIST_NEXT_RCU(elm, field) qatomic_rcu_read(&(elm)->field.sle_next)

/* Singly-linked list functions */
#define QSLIST_INSERT_HEAD_RCU(head, elm, field) do { \
(elm)->field.sle_next = (head)->slh_first; \
- atomic_rcu_set(&(head)->slh_first, (elm)); \
+ qatomic_rcu_set(&(head)->slh_first, (elm)); \
} while (/*CONSTCOND*/0)

#define QSLIST_INSERT_AFTER_RCU(head, listelm, elm, field) do { \
(elm)->field.sle_next = (listelm)->field.sle_next; \
- atomic_rcu_set(&(listelm)->field.sle_next, (elm)); \
+ qatomic_rcu_set(&(listelm)->field.sle_next, (elm)); \
} while (/*CONSTCOND*/0)

#define QSLIST_REMOVE_HEAD_RCU(head, field) do { \
- atomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next); \
+ qatomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next);\
} while (/*CONSTCOND*/0)

#define QSLIST_REMOVE_RCU(head, elm, type, field) do { \
@@ -XXX,XX +XXX,XX @@ extern "C" {
while (curr->field.sle_next != (elm)) { \
curr = curr->field.sle_next; \
} \
- atomic_set(&curr->field.sle_next, \
+ qatomic_set(&curr->field.sle_next, \
curr->field.sle_next->field.sle_next); \
} \
} while (/*CONSTCOND*/0)

#define QSLIST_FOREACH_RCU(var, head, field) \
- for ((var) = atomic_rcu_read(&(head)->slh_first); \
- (var); \
- (var) = atomic_rcu_read(&(var)->field.sle_next))
+ for ((var) = qatomic_rcu_read(&(head)->slh_first); \
+ (var); \
+ (var) = qatomic_rcu_read(&(var)->field.sle_next))

-#define QSLIST_FOREACH_SAFE_RCU(var, head, field, next) \
- for ((var) = atomic_rcu_read(&(head)->slh_first); \
- (var) && ((next) = atomic_rcu_read(&(var)->field.sle_next), 1); \
+#define QSLIST_FOREACH_SAFE_RCU(var, head, field, next) \
+ for ((var) = qatomic_rcu_read(&(head)->slh_first); \
+ (var) && ((next) = qatomic_rcu_read(&(var)->field.sle_next), 1); \
(var) = (next))

#ifdef __cplusplus
diff --git a/include/qemu/seqlock.h b/include/qemu/seqlock.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/seqlock.h
+++ b/include/qemu/seqlock.h
@@ -XXX,XX +XXX,XX @@ static inline void seqlock_init(QemuSeqLock *sl)
/* Lock out other writers and update the count. */
static inline void seqlock_write_begin(QemuSeqLock *sl)
{
- atomic_set(&sl->sequence, sl->sequence + 1);
+ qatomic_set(&sl->sequence, sl->sequence + 1);

/* Write sequence before updating other fields. */
smp_wmb();
@@ -XXX,XX +XXX,XX @@ static inline void seqlock_write_end(QemuSeqLock *sl)
/* Write other fields before finalizing sequence. */
smp_wmb();

- atomic_set(&sl->sequence, sl->sequence + 1);
+ qatomic_set(&sl->sequence, sl->sequence + 1);
}

/* Lock out other writers and update the count. */
@@ -XXX,XX +XXX,XX @@ static inline void seqlock_write_unlock_impl(QemuSeqLock *sl, QemuLockable *lock
static inline unsigned seqlock_read_begin(const QemuSeqLock *sl)
{
/* Always fail if a write is in progress. */
- unsigned ret = atomic_read(&sl->sequence);
+ unsigned ret = qatomic_read(&sl->sequence);

/* Read sequence before reading other fields. */
smp_rmb();
@@ -XXX,XX +XXX,XX @@ static inline int seqlock_read_retry(const QemuSeqLock *sl, unsigned start)
{
/* Read other fields before reading final sequence. */
smp_rmb();
- return unlikely(atomic_read(&sl->sequence) != start);
+ return unlikely(qatomic_read(&sl->sequence) != start);
}

#endif
diff --git a/include/qemu/stats64.h b/include/qemu/stats64.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/stats64.h
+++ b/include/qemu/stats64.h
@@ -XXX,XX +XXX,XX @@ static inline void stat64_init(Stat64 *s, uint64_t value)

static inline uint64_t stat64_get(const Stat64 *s)
{
- return atomic_read__nocheck(&s->value);
+ return qatomic_read__nocheck(&s->value);
}

static inline void stat64_add(Stat64 *s, uint64_t value)
{
- atomic_add(&s->value, value);
+ qatomic_add(&s->value, value);
}

static inline void stat64_min(Stat64 *s, uint64_t value)
{
- uint64_t orig = atomic_read__nocheck(&s->value);
+ uint64_t orig = qatomic_read__nocheck(&s->value);
while (orig > value) {
- orig = atomic_cmpxchg__nocheck(&s->value, orig, value);
+ orig = qatomic_cmpxchg__nocheck(&s->value, orig, value);
}
}

static inline void stat64_max(Stat64 *s, uint64_t value)
{
- uint64_t orig = atomic_read__nocheck(&s->value);
+ uint64_t orig = qatomic_read__nocheck(&s->value);
while (orig < value) {
- orig = atomic_cmpxchg__nocheck(&s->value, orig, value);
+ orig = qatomic_cmpxchg__nocheck(&s->value, orig, value);
}
}
#else
@@ -XXX,XX +XXX,XX @@ static inline void stat64_add(Stat64 *s, uint64_t value)
low = (uint32_t) value;
if (!low) {
if (high) {
- atomic_add(&s->high, high);
+ qatomic_add(&s->high, high);
}
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_add(Stat64 *s, uint64_t value)
* the high 32 bits, so it can race just fine with stat64_add32_carry
* and even stat64_get!
*/
- old = atomic_cmpxchg(&s->low, orig, result);
+ old = qatomic_cmpxchg(&s->low, orig, result);
if (orig == old) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_min(Stat64 *s, uint64_t value)
high = value >> 32;
low = (uint32_t) value;
do {
- orig_high = atomic_read(&s->high);
+ orig_high = qatomic_read(&s->high);
if (orig_high < high) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_min(Stat64 *s, uint64_t value)
* the write barrier in stat64_min_slow.
*/
smp_rmb();
- orig_low = atomic_read(&s->low);
+ orig_low = qatomic_read(&s->low);
if (orig_low <= low) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_min(Stat64 *s, uint64_t value)
* we may miss being lucky.
*/
smp_rmb();
- orig_high = atomic_read(&s->high);
+ orig_high = qatomic_read(&s->high);
if (orig_high < high) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_max(Stat64 *s, uint64_t value)
high = value >> 32;
low = (uint32_t) value;
do {
- orig_high = atomic_read(&s->high);
+ orig_high = qatomic_read(&s->high);
if (orig_high > high) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_max(Stat64 *s, uint64_t value)
* the write barrier in stat64_max_slow.
*/
smp_rmb();
- orig_low = atomic_read(&s->low);
+ orig_low = qatomic_read(&s->low);
if (orig_low >= low) {
return;
}
@@ -XXX,XX +XXX,XX @@ static inline void stat64_max(Stat64 *s, uint64_t value)
* we may miss being lucky.
*/
smp_rmb();
- orig_high = atomic_read(&s->high);
+ orig_high = qatomic_read(&s->high);
if (orig_high > high) {
return;
}
diff --git a/include/qemu/thread.h b/include/qemu/thread.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/thread.h
+++ b/include/qemu/thread.h
@@ -XXX,XX +XXX,XX @@ extern QemuCondTimedWaitFunc qemu_cond_timedwait_func;
qemu_cond_timedwait_impl(c, m, ms, __FILE__, __LINE__)
#else
#define qemu_mutex_lock(m) ({ \
- QemuMutexLockFunc _f = atomic_read(&qemu_mutex_lock_func); \
+ QemuMutexLockFunc _f = qatomic_read(&qemu_mutex_lock_func); \
_f(m, __FILE__, __LINE__); \
})

-#define qemu_mutex_trylock(m) ({ \
- QemuMutexTrylockFunc _f = atomic_read(&qemu_mutex_trylock_func); \
- _f(m, __FILE__, __LINE__); \
+#define qemu_mutex_trylock(m) ({ \
+ QemuMutexTrylockFunc _f = qatomic_read(&qemu_mutex_trylock_func); \
+ _f(m, __FILE__, __LINE__); \
})

-#define qemu_rec_mutex_lock(m) ({ \
- QemuRecMutexLockFunc _f = atomic_read(&qemu_rec_mutex_lock_func); \
- _f(m, __FILE__, __LINE__); \
+#define qemu_rec_mutex_lock(m) ({ \
+ QemuRecMutexLockFunc _f = qatomic_read(&qemu_rec_mutex_lock_func);\
+ _f(m, __FILE__, __LINE__); \
})

#define qemu_rec_mutex_trylock(m) ({ \
QemuRecMutexTrylockFunc _f; \
- _f = atomic_read(&qemu_rec_mutex_trylock_func); \
+ _f = qatomic_read(&qemu_rec_mutex_trylock_func); \
_f(m, __FILE__, __LINE__); \
})

#define qemu_cond_wait(c, m) ({ \
- QemuCondWaitFunc _f = atomic_read(&qemu_cond_wait_func); \
+ QemuCondWaitFunc _f = qatomic_read(&qemu_cond_wait_func); \
_f(c, m, __FILE__, __LINE__); \
})

#define qemu_cond_timedwait(c, m, ms) ({ \
- QemuCondTimedWaitFunc _f = atomic_read(&qemu_cond_timedwait_func); \
+ QemuCondTimedWaitFunc _f = qatomic_read(&qemu_cond_timedwait_func);\
_f(c, m, ms, __FILE__, __LINE__); \
})
#endif
@@ -XXX,XX +XXX,XX @@ static inline void qemu_spin_lock(QemuSpin *spin)
__tsan_mutex_pre_lock(spin, 0);
#endif
while (unlikely(__sync_lock_test_and_set(&spin->value, true))) {
- while (atomic_read(&spin->value)) {
+ while (qatomic_read(&spin->value)) {
cpu_relax();
}
}
@@ -XXX,XX +XXX,XX @@ static inline bool qemu_spin_trylock(QemuSpin *spin)

static inline bool qemu_spin_locked(QemuSpin *spin)
{
- return atomic_read(&spin->value);
+ return qatomic_read(&spin->value);
}

static inline void qemu_spin_unlock(QemuSpin *spin)
diff --git a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
index XXXXXXX..XXXXXXX 100644
--- a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
+++ b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
@@ -XXX,XX +XXX,XX @@ static inline int pvrdma_idx_valid(uint32_t idx, uint32_t max_elems)

static inline int32_t pvrdma_idx(int *var, uint32_t max_elems)
{
-    const unsigned int idx = atomic_read(var);
+    const unsigned int idx = qatomic_read(var);

    if (pvrdma_idx_valid(idx, max_elems))
        return idx & (max_elems - 1);
@@ -XXX,XX +XXX,XX @@ static inline int32_t pvrdma_idx(int *var, uint32_t max_elems)

static inline void pvrdma_idx_ring_inc(int *var, uint32_t max_elems)
{
-    uint32_t idx = atomic_read(var) + 1;    /* Increment. */
+    uint32_t idx = qatomic_read(var) + 1;    /* Increment. */

    idx &= (max_elems << 1) - 1;        /* Modulo size, flip gen. */
-    atomic_set(var, idx);
+    qatomic_set(var, idx);
}

static inline int32_t pvrdma_idx_ring_has_space(const struct pvrdma_ring *r,
                     uint32_t max_elems, uint32_t *out_tail)
{
-    const uint32_t tail = atomic_read(&r->prod_tail);
-    const uint32_t head = atomic_read(&r->cons_head);
+    const uint32_t tail = qatomic_read(&r->prod_tail);
+    const uint32_t head = qatomic_read(&r->cons_head);

    if (pvrdma_idx_valid(tail, max_elems) &&
     pvrdma_idx_valid(head, max_elems)) {
@@ -XXX,XX +XXX,XX @@ static inline int32_t pvrdma_idx_ring_has_space(const struct pvrdma_ring *r,
static inline int32_t pvrdma_idx_ring_has_data(const struct pvrdma_ring *r,
                     uint32_t max_elems, uint32_t *out_head)
{
-    const uint32_t tail = atomic_read(&r->prod_tail);
-    const uint32_t head = atomic_read(&r->cons_head);
+    const uint32_t tail = qatomic_read(&r->prod_tail);
+    const uint32_t head = qatomic_read(&r->cons_head);

    if (pvrdma_idx_valid(tail, max_elems) &&
     pvrdma_idx_valid(head, max_elems)) {
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ typedef struct TaskState {
/* Nonzero if process_pending_signals() needs to do something (either
* handle a pending signal or unblock signals).
* This flag is written from a signal handler so should be accessed via
- * the atomic_read() and atomic_set() functions. (It is not accessed
+ * the qatomic_read() and qatomic_set() functions. (It is not accessed
* from multiple threads.)
*/
int signal_pending;
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -XXX,XX +XXX,XX @@ static inline void tb_target_set_jmp_target(uintptr_t tc_ptr,
uintptr_t jmp_addr, uintptr_t addr)
{
/* patch the branch destination */
- atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
+ qatomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
/* no need to flush icache explicitly */
}

diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/s390/tcg-target.h
+++ b/tcg/s390/tcg-target.h
@@ -XXX,XX +XXX,XX @@ static inline void tb_target_set_jmp_target(uintptr_t tc_ptr,
{
/* patch the branch destination */
intptr_t disp = addr - (jmp_addr - 2);
- atomic_set((int32_t *)jmp_addr, disp / 2);
+ qatomic_set((int32_t *)jmp_addr, disp / 2);
/* no need to flush icache explicitly */
}

diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/tcg-target.h
+++ b/tcg/tci/tcg-target.h
@@ -XXX,XX +XXX,XX @@ static inline void tb_target_set_jmp_target(uintptr_t tc_ptr,
uintptr_t jmp_addr, uintptr_t addr)
{
/* patch the branch destination */
- atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
+ qatomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
/* no need to flush icache explicitly */
}

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -XXX,XX +XXX,XX @@ static __thread bool have_sigbus_pending;

static void kvm_cpu_kick(CPUState *cpu)
{
- atomic_set(&cpu->kvm_run->immediate_exit, 1);
+ qatomic_set(&cpu->kvm_run->immediate_exit, 1);
}

static void kvm_cpu_kick_self(void)
@@ -XXX,XX +XXX,XX @@ static void kvm_eat_signals(CPUState *cpu)
int r;

if (kvm_immediate_exit) {
- atomic_set(&cpu->kvm_run->immediate_exit, 0);
+ qatomic_set(&cpu->kvm_run->immediate_exit, 0);
/* Write kvm_run->immediate_exit before the cpu->exit_request
* write in kvm_cpu_exec.
*/
@@ -XXX,XX +XXX,XX @@ int kvm_cpu_exec(CPUState *cpu)
DPRINTF("kvm_cpu_exec()\n");

if (kvm_arch_process_async_events(cpu)) {
- atomic_set(&cpu->exit_request, 0);
+ qatomic_set(&cpu->exit_request, 0);
return EXCP_HLT;
}

@@ -XXX,XX +XXX,XX @@ int kvm_cpu_exec(CPUState *cpu)
}

kvm_arch_pre_run(cpu, run);
- if (atomic_read(&cpu->exit_request)) {
+ if (qatomic_read(&cpu->exit_request)) {
DPRINTF("interrupt exit requested\n");
/*
* KVM requires us to reenter the kernel after IO exits to complete
@@ -XXX,XX +XXX,XX @@ int kvm_cpu_exec(CPUState *cpu)
vm_stop(RUN_STATE_INTERNAL_ERROR);
}

- atomic_set(&cpu->exit_request, 0);
+ qatomic_set(&cpu->exit_request, 0);
return ret;
}

@@ -XXX,XX +XXX,XX @@ int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
have_sigbus_pending = true;
pending_sigbus_addr = addr;
pending_sigbus_code = code;
- atomic_set(&cpu->exit_request, 1);
+ qatomic_set(&cpu->exit_request, 1);
return 0;
#else
return 1;
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
goto out_unlock_next;
}
/* Atomically claim the jump destination slot only if it was NULL */
- old = atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)tb_next);
+ old = qatomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL,
+ (uintptr_t)tb_next);
if (old) {
goto out_unlock_next;
}
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_find(CPUState *cpu,
tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
mmap_unlock();
/* We add the TB in the virtual pc hash table for the fast lookup */
- atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
+ qatomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
}
#ifndef CONFIG_USER_ONLY
/* We don't take care of direct jumps when address mapping changes in
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
- atomic_mb_set(&cpu_neg(cpu)->icount_decr.u16.high, 0);
+ qatomic_mb_set(&cpu_neg(cpu)->icount_decr.u16.high, 0);

- if (unlikely(atomic_read(&cpu->interrupt_request))) {
+ if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}

/* Finally, check if we need to exit to the main loop. */
- if (unlikely(atomic_read(&cpu->exit_request))
+ if (unlikely(qatomic_read(&cpu->exit_request))
|| (use_icount
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0)) {
- atomic_set(&cpu->exit_request, 0);
+ qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
}
@@ -XXX,XX +XXX,XX @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}

*last_tb = NULL;
- insns_left = atomic_read(&cpu_neg(cpu)->icount_decr.u32);
+ insns_left = qatomic_read(&cpu_neg(cpu)->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;

- full += atomic_read(&env_tlb(env)->c.full_flush_count);
- part += atomic_read(&env_tlb(env)->c.part_flush_count);
- elide += atomic_read(&env_tlb(env)->c.elide_flush_count);
+ full += qatomic_read(&env_tlb(env)->c.full_flush_count);
+ part += qatomic_read(&env_tlb(env)->c.part_flush_count);
+ elide += qatomic_read(&env_tlb(env)->c.elide_flush_count);
}
*pfull = full;
*ppart = part;
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
cpu_tb_jmp_cache_clear(cpu);

if (to_clean == ALL_MMUIDX_BITS) {
- atomic_set(&env_tlb(env)->c.full_flush_count,
+ qatomic_set(&env_tlb(env)->c.full_flush_count,
env_tlb(env)->c.full_flush_count + 1);
} else {
- atomic_set(&env_tlb(env)->c.part_flush_count,
+ qatomic_set(&env_tlb(env)->c.part_flush_count,
env_tlb(env)->c.part_flush_count + ctpop16(to_clean));
if (to_clean != asked) {
- atomic_set(&env_tlb(env)->c.elide_flush_count,
+ qatomic_set(&env_tlb(env)->c.elide_flush_count,
env_tlb(env)->c.elide_flush_count +
ctpop16(asked & ~to_clean));
}
@@ -XXX,XX +XXX,XX @@ void tlb_unprotect_code(ram_addr_t ram_addr)
* generated code.
*
* Other vCPUs might be reading their TLBs during guest execution, so we update
- * te->addr_write with atomic_set. We don't need to worry about this for
+ * te->addr_write with qatomic_set. We don't need to worry about this for
* oversized guests as MTTCG is disabled for them.
*
* Called with tlb_c.lock held.
@@ -XXX,XX +XXX,XX @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry,
#if TCG_OVERSIZED_GUEST
tlb_entry->addr_write |= TLB_NOTDIRTY;
#else
- atomic_set(&tlb_entry->addr_write,
+ qatomic_set(&tlb_entry->addr_write,
tlb_entry->addr_write | TLB_NOTDIRTY);
#endif
}
@@ -XXX,XX +XXX,XX @@ static inline target_ulong tlb_read_ofs(CPUTLBEntry *entry, size_t ofs)
#if TCG_OVERSIZED_GUEST
return *(target_ulong *)((uintptr_t)entry + ofs);
#else
- /* ofs might correspond to .addr_write, so use atomic_read */
- return atomic_read((target_ulong *)((uintptr_t)entry + ofs));
+ /* ofs might correspond to .addr_write, so use qatomic_read */
+ return qatomic_read((target_ulong *)((uintptr_t)entry + ofs));
#endif
}

@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx];
target_ulong cmp;

- /* elt_ofs might correspond to .addr_write, so use atomic_read */
+ /* elt_ofs might correspond to .addr_write, so use qatomic_read */
#if TCG_OVERSIZED_GUEST
cmp = *(target_ulong *)((uintptr_t)vtlb + elt_ofs);
#else
- cmp = atomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs));
+ cmp = qatomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs));
#endif

if (cmp == page) {
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tcg-all.c
+++ b/accel/tcg/tcg-all.c
@@ -XXX,XX +XXX,XX @@ static void tcg_handle_interrupt(CPUState *cpu, int mask)
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
- atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
+ qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
if (use_icount &&
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
restore_state_to_opc(env, tb, data);

#ifdef CONFIG_PROFILER
- atomic_set(&prof->restore_time,
+ qatomic_set(&prof->restore_time,
prof->restore_time + profile_getclock() - ti);
- atomic_set(&prof->restore_count, prof->restore_count + 1);
+ qatomic_set(&prof->restore_count, prof->restore_count + 1);
#endif
return 0;
}
@@ -XXX,XX +XXX,XX @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)

/* Level 2..N-1. */
for (i = v_l2_levels; i > 0; i--) {
- void **p = atomic_rcu_read(lp);
+ void **p = qatomic_rcu_read(lp);

if (p == NULL) {
void *existing;
@@ -XXX,XX +XXX,XX @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
return NULL;
}
p = g_new0(void *, V_L2_SIZE);
- existing = atomic_cmpxchg(lp, NULL, p);
+ existing = qatomic_cmpxchg(lp, NULL, p);
if (unlikely(existing)) {
g_free(p);
p = existing;
@@ -XXX,XX +XXX,XX @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
}

- pd = atomic_rcu_read(lp);
+ pd = qatomic_rcu_read(lp);
if (pd == NULL) {
void *existing;

@@ -XXX,XX +XXX,XX @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
}
}
#endif
- existing = atomic_cmpxchg(lp, NULL, pd);
+ existing = qatomic_cmpxchg(lp, NULL, pd);
if (unlikely(existing)) {
#ifndef CONFIG_USER_ONLY
{
@@ -XXX,XX +XXX,XX @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
tcg_region_reset_all();
/* XXX: flush processor icache at this point if cache flush is
expensive */
- atomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1);
+ qatomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1);

done:
mmap_unlock();
@@ -XXX,XX +XXX,XX @@ done:
void tb_flush(CPUState *cpu)
{
if (tcg_enabled()) {
- unsigned tb_flush_count = atomic_mb_read(&tb_ctx.tb_flush_count);
+ unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count);

if (cpu_in_exclusive_context(cpu)) {
do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
@@ -XXX,XX +XXX,XX @@ static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
int n;

/* mark the LSB of jmp_dest[] so that no further jumps can be inserted */
- ptr = atomic_or_fetch(&orig->jmp_dest[n_orig], 1);
+ ptr = qatomic_or_fetch(&orig->jmp_dest[n_orig], 1);
dest = (TranslationBlock *)(ptr & ~1);
if (dest == NULL) {
return;
@@ -XXX,XX +XXX,XX @@ static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
* While acquiring the lock, the jump might have been removed if the
* destination TB was invalidated; check again.
*/
- ptr_locked = atomic_read(&orig->jmp_dest[n_orig]);
+ ptr_locked = qatomic_read(&orig->jmp_dest[n_orig]);
if (ptr_locked != ptr) {
qemu_spin_unlock(&dest->jmp_lock);
/*
@@ -XXX,XX +XXX,XX @@ static inline void tb_jmp_unlink(TranslationBlock *dest)

TB_FOR_EACH_JMP(dest, tb, n) {
tb_reset_jump(tb, n);
- atomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1);
+ qatomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1);
/* No need to clear the list entry; setting the dest ptr is enough */
}
dest->jmp_list_head = (uintptr_t)NULL;
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

/* make sure no further incoming jumps will be chained to this TB */
qemu_spin_lock(&tb->jmp_lock);
- atomic_set(&tb->cflags, tb->cflags | CF_INVALID);
+ qatomic_set(&tb->cflags, tb->cflags | CF_INVALID);
qemu_spin_unlock(&tb->jmp_lock);

/* remove the TB from the hash list */
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
/* remove the TB from the hash list */
h = tb_jmp_cache_hash_func(tb->pc);
CPU_FOREACH(cpu) {
- if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
- atomic_set(&cpu->tb_jmp_cache[h], NULL);
+ if (qatomic_read(&cpu->tb_jmp_cache[h]) == tb) {
+ qatomic_set(&cpu->tb_jmp_cache[h], NULL);
}
}

@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
/* suppress any remaining jumps to this TB */
tb_jmp_unlink(tb);

- atomic_set(&tcg_ctx->tb_phys_invalidate_count,
+ qatomic_set(&tcg_ctx->tb_phys_invalidate_count,
tcg_ctx->tb_phys_invalidate_count + 1);
}

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,

#ifdef CONFIG_PROFILER
/* includes aborted translations because of exceptions */
- atomic_set(&prof->tb_count1, prof->tb_count1 + 1);
+ qatomic_set(&prof->tb_count1, prof->tb_count1 + 1);
ti = profile_getclock();
#endif

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
}

#ifdef CONFIG_PROFILER
- atomic_set(&prof->tb_count, prof->tb_count + 1);
- atomic_set(&prof->interm_time, prof->interm_time + profile_getclock() - ti);
+ qatomic_set(&prof->tb_count, prof->tb_count + 1);
+ qatomic_set(&prof->interm_time,
+ prof->interm_time + profile_getclock() - ti);
ti = profile_getclock();
#endif

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tb->tc.size = gen_code_size;

#ifdef CONFIG_PROFILER
- atomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti);
- atomic_set(&prof->code_in_len, prof->code_in_len + tb->size);
- atomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size);
- atomic_set(&prof->search_out_len, prof->search_out_len + search_size);
+ qatomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti);
+ qatomic_set(&prof->code_in_len, prof->code_in_len + tb->size);
+ qatomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size);
+ qatomic_set(&prof->search_out_len, prof->search_out_len + search_size);
#endif

#ifdef DEBUG_DISAS
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
}
#endif

- atomic_set(&tcg_ctx->code_gen_ptr, (void *)
+ qatomic_set(&tcg_ctx->code_gen_ptr, (void *)
ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
CODE_GEN_ALIGN));

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
uintptr_t orig_aligned = (uintptr_t)gen_code_buf;

orig_aligned -= ROUND_UP(sizeof(*tb), qemu_icache_linesize);
- atomic_set(&tcg_ctx->code_gen_ptr, (void *)orig_aligned);
+ qatomic_set(&tcg_ctx->code_gen_ptr, (void *)orig_aligned);
tb_destroy(tb);
return existing_tb;
}
@@ -XXX,XX +XXX,XX @@ static void tb_jmp_cache_clear_page(CPUState *cpu, target_ulong page_addr)
unsigned int i, i0 = tb_jmp_cache_hash_page(page_addr);

for (i = 0; i < TB_JMP_PAGE_SIZE; i++) {
- atomic_set(&cpu->tb_jmp_cache[i0 + i], NULL);
+ qatomic_set(&cpu->tb_jmp_cache[i0 + i], NULL);
}
}

@@ -XXX,XX +XXX,XX @@ void dump_exec_info(void)

qemu_printf("\nStatistics:\n");
qemu_printf("TB flush count %u\n",
- atomic_read(&tb_ctx.tb_flush_count));
+ qatomic_read(&tb_ctx.tb_flush_count));
qemu_printf("TB invalidate count %zu\n",
tcg_tb_phys_invalidate_count());

@@ -XXX,XX +XXX,XX @@ void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
- atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
+ qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}

/*
diff --git a/audio/jackaudio.c b/audio/jackaudio.c
index XXXXXXX..XXXXXXX 100644
--- a/audio/jackaudio.c
+++ b/audio/jackaudio.c
@@ -XXX,XX +XXX,XX @@ static void qjack_buffer_create(QJackBuffer *buffer, int channels, int frames)
static void qjack_buffer_clear(QJackBuffer *buffer)
{
assert(buffer->data);
- atomic_store_release(&buffer->used, 0);
+ qatomic_store_release(&buffer->used, 0);
buffer->rptr = 0;
buffer->wptr = 0;
}
@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_write(QJackBuffer *buffer, float *data, int size)
assert(buffer->data);
const int samples = size / sizeof(float);
int frames = samples / buffer->channels;
- const int avail = buffer->frames - atomic_load_acquire(&buffer->used);
+ const int avail = buffer->frames - qatomic_load_acquire(&buffer->used);

if (frames > avail) {
frames = avail;
@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_write(QJackBuffer *buffer, float *data, int size)

buffer->wptr = wptr;

- atomic_add(&buffer->used, frames);
+ qatomic_add(&buffer->used, frames);
return frames * buffer->channels * sizeof(float);
};

@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_write(QJackBuffer *buffer, float *data, int size)
static int qjack_buffer_write_l(QJackBuffer *buffer, float **dest, int frames)
{
assert(buffer->data);
- const int avail = buffer->frames - atomic_load_acquire(&buffer->used);
+ const int avail = buffer->frames - qatomic_load_acquire(&buffer->used);
int wptr = buffer->wptr;

if (frames > avail) {
@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_write_l(QJackBuffer *buffer, float **dest, int frames)
}
buffer->wptr = wptr;

- atomic_add(&buffer->used, frames);
+ qatomic_add(&buffer->used, frames);
return frames;
}

@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_read(QJackBuffer *buffer, float *dest, int size)
assert(buffer->data);
const int samples = size / sizeof(float);
int frames = samples / buffer->channels;
- const int avail = atomic_load_acquire(&buffer->used);
+ const int avail = qatomic_load_acquire(&buffer->used);

if (frames > avail) {
frames = avail;
@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_read(QJackBuffer *buffer, float *dest, int size)

buffer->rptr = rptr;

- atomic_sub(&buffer->used, frames);
+ qatomic_sub(&buffer->used, frames);
return frames * buffer->channels * sizeof(float);
}

@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_read_l(QJackBuffer *buffer, float **dest, int frames)
{
assert(buffer->data);
int copy = frames;
- const int used = atomic_load_acquire(&buffer->used);
+ const int used = qatomic_load_acquire(&buffer->used);
int rptr = buffer->rptr;

if (copy > used) {
@@ -XXX,XX +XXX,XX @@ static int qjack_buffer_read_l(QJackBuffer *buffer, float **dest, int frames)
}
buffer->rptr = rptr;

- atomic_sub(&buffer->used, copy);
+ qatomic_sub(&buffer->used, copy);
return copy;
}

diff --git a/block.c b/block.c
index XXXXXXX..XXXXXXX 100644
--- a/block.c
+++ b/block.c
@@ -XXX,XX +XXX,XX @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
}

/* bdrv_new() and bdrv_close() make it so */
- assert(atomic_read(&bs->copy_on_read) == 0);
+ assert(qatomic_read(&bs->copy_on_read) == 0);

if (bs->open_flags & BDRV_O_COPY_ON_READ) {
if (!bs->read_only) {
@@ -XXX,XX +XXX,XX @@ static void bdrv_close(BlockDriverState *bs)
bs->file = NULL;
g_free(bs->opaque);
bs->opaque = NULL;
- atomic_set(&bs->copy_on_read, 0);
+ qatomic_set(&bs->copy_on_read, 0);
bs->backing_file[0] = '\0';
bs->backing_format[0] = '\0';
bs->total_sectors = 0;
diff --git a/block/block-backend.c b/block/block-backend.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -XXX,XX +XXX,XX @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)

void blk_inc_in_flight(BlockBackend *blk)
{
- atomic_inc(&blk->in_flight);
+ qatomic_inc(&blk->in_flight);
}

void blk_dec_in_flight(BlockBackend *blk)
{
- atomic_dec(&blk->in_flight);
+ qatomic_dec(&blk->in_flight);
aio_wait_kick();
}

@@ -XXX,XX +XXX,XX @@ void blk_drain(BlockBackend *blk)

/* We may have -ENOMEDIUM completions in flight */
AIO_WAIT_WHILE(blk_get_aio_context(blk),
- atomic_mb_read(&blk->in_flight) > 0);
+ qatomic_mb_read(&blk->in_flight) > 0);

if (bs) {
bdrv_drained_end(bs);
@@ -XXX,XX +XXX,XX @@ void blk_drain_all(void)
aio_context_acquire(ctx);

/* We may have -ENOMEDIUM completions in flight */
- AIO_WAIT_WHILE(ctx, atomic_mb_read(&blk->in_flight) > 0);
+ AIO_WAIT_WHILE(ctx, qatomic_mb_read(&blk->in_flight) > 0);

aio_context_release(ctx);
}
@@ -XXX,XX +XXX,XX @@ void blk_io_limits_update_group(BlockBackend *blk, const char *group)
static void blk_root_drained_begin(BdrvChild *child)
{
BlockBackend *blk = child->opaque;
+ ThrottleGroupMember *tgm = &blk->public.throttle_group_member;

if (++blk->quiesce_counter == 1) {
if (blk->dev_ops && blk->dev_ops->drained_begin) {
@@ -XXX,XX +XXX,XX @@ static void blk_root_drained_begin(BdrvChild *child)
/* Note that blk->root may not be accessible here yet if we are just
* attaching to a BlockDriverState that is drained. Use child instead. */

- if (atomic_fetch_inc(&blk->public.throttle_group_member.io_limits_disabled) == 0) {
- throttle_group_restart_tgm(&blk->public.throttle_group_member);
+ if (qatomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
+ throttle_group_restart_tgm(tgm);
}
}

@@ -XXX,XX +XXX,XX @@ static void blk_root_drained_end(BdrvChild *child, int *drained_end_counter)
assert(blk->quiesce_counter);

assert(blk->public.throttle_group_member.io_limits_disabled);
- atomic_dec(&blk->public.throttle_group_member.io_limits_disabled);
+ qatomic_dec(&blk->public.throttle_group_member.io_limits_disabled);

if (--blk->quiesce_counter == 0) {
if (blk->dev_ops && blk->dev_ops->drained_end) {
diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ void bdrv_parent_drained_end_single(BdrvChild *c)
{
int drained_end_counter = 0;
bdrv_parent_drained_end_single_no_poll(c, &drained_end_counter);
- BDRV_POLL_WHILE(c->bs, atomic_read(&drained_end_counter) > 0);
+ BDRV_POLL_WHILE(c->bs, qatomic_read(&drained_end_counter) > 0);
}

static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore,
@@ -XXX,XX +XXX,XX @@ void bdrv_refresh_limits(BlockDriverState *bs, Error **errp)
*/
void bdrv_enable_copy_on_read(BlockDriverState *bs)
{
- atomic_inc(&bs->copy_on_read);
+ qatomic_inc(&bs->copy_on_read);
}

void bdrv_disable_copy_on_read(BlockDriverState *bs)
{
- int old = atomic_fetch_dec(&bs->copy_on_read);
+ int old = qatomic_fetch_dec(&bs->copy_on_read);
assert(old >= 1);
}

@@ -XXX,XX +XXX,XX @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
}

/* Set data->done and decrement drained_end_counter before bdrv_wakeup() */
- atomic_mb_set(&data->done, true);
+ qatomic_mb_set(&data->done, true);
if (!data->begin) {
- atomic_dec(data->drained_end_counter);
+ qatomic_dec(data->drained_end_counter);
}
bdrv_dec_in_flight(bs);

@@ -XXX,XX +XXX,XX @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin,
};

if (!begin) {
- atomic_inc(drained_end_counter);
+ qatomic_inc(drained_end_counter);
}

/* Make sure the driver callback completes during the polling phase for
@@ -XXX,XX +XXX,XX @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
return true;
}

- if (atomic_read(&bs->in_flight)) {
+ if (qatomic_read(&bs->in_flight)) {
return true;
}

@@ -XXX,XX +XXX,XX @@ void bdrv_do_drained_begin_quiesce(BlockDriverState *bs,
assert(!qemu_in_coroutine());

/* Stop things in parent-to-child order */
- if (atomic_fetch_inc(&bs->quiesce_counter) == 0) {
+ if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
aio_disable_external(bdrv_get_aio_context(bs));
}

@@ -XXX,XX +XXX,XX @@ static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
bdrv_parent_drained_end(bs, parent, ignore_bds_parents,
drained_end_counter);

- old_quiesce_counter = atomic_fetch_dec(&bs->quiesce_counter);
+ old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
if (old_quiesce_counter == 1) {
aio_enable_external(bdrv_get_aio_context(bs));
}
@@ -XXX,XX +XXX,XX @@ void bdrv_drained_end(BlockDriverState *bs)
{
int drained_end_counter = 0;
bdrv_do_drained_end(bs, false, NULL, false, &drained_end_counter);
- BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0);
+ BDRV_POLL_WHILE(bs, qatomic_read(&drained_end_counter) > 0);
}

void bdrv_drained_end_no_poll(BlockDriverState *bs, int *drained_end_counter)
@@ -XXX,XX +XXX,XX @@ void bdrv_subtree_drained_end(BlockDriverState *bs)
{
int drained_end_counter = 0;
bdrv_do_drained_end(bs, true, NULL, false, &drained_end_counter);
- BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0);
+ BDRV_POLL_WHILE(bs, qatomic_read(&drained_end_counter) > 0);
}

void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_parent)
@@ -XXX,XX +XXX,XX @@ void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent)
&drained_end_counter);
}

- BDRV_POLL_WHILE(child->bs, atomic_read(&drained_end_counter) > 0);
+ BDRV_POLL_WHILE(child->bs, qatomic_read(&drained_end_counter) > 0);
}

/*
@@ -XXX,XX +XXX,XX @@ static void bdrv_drain_assert_idle(BlockDriverState *bs)
{
BdrvChild *child, *next;

- assert(atomic_read(&bs->in_flight) == 0);
+ assert(qatomic_read(&bs->in_flight) == 0);
QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
bdrv_drain_assert_idle(child->bs);
}
@@ -XXX,XX +XXX,XX @@ void bdrv_drain_all_end(void)
}

assert(qemu_get_current_aio_context() == qemu_get_aio_context());
- AIO_WAIT_WHILE(NULL, atomic_read(&drained_end_counter) > 0);
+ AIO_WAIT_WHILE(NULL, qatomic_read(&drained_end_counter) > 0);

assert(bdrv_drain_all_count > 0);
bdrv_drain_all_count--;
@@ -XXX,XX +XXX,XX @@ void bdrv_drain_all(void)
static void tracked_request_end(BdrvTrackedRequest *req)
{
if (req->serialising) {
- atomic_dec(&req->bs->serialising_in_flight);
+ qatomic_dec(&req->bs->serialising_in_flight);
}

qemu_co_mutex_lock(&req->bs->reqs_lock);
@@ -XXX,XX +XXX,XX @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)

qemu_co_mutex_lock(&bs->reqs_lock);
if (!req->serialising) {
- atomic_inc(&req->bs->serialising_in_flight);
+ qatomic_inc(&req->bs->serialising_in_flight);
req->serialising = true;
}

@@ -XXX,XX +XXX,XX @@ static int bdrv_get_cluster_size(BlockDriverState *bs)

void bdrv_inc_in_flight(BlockDriverState *bs)
{
- atomic_inc(&bs->in_flight);
+ qatomic_inc(&bs->in_flight);
}

void bdrv_wakeup(BlockDriverState *bs)
@@ -XXX,XX +XXX,XX @@ void bdrv_wakeup(BlockDriverState *bs)

void bdrv_dec_in_flight(BlockDriverState *bs)
{
- atomic_dec(&bs->in_flight);
+ qatomic_dec(&bs->in_flight);
bdrv_wakeup(bs);
}

@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self
BlockDriverState *bs = self->bs;
bool waited = false;

- if (!atomic_read(&bs->serialising_in_flight)) {
+ if (!qatomic_read(&bs->serialising_in_flight)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,
bdrv_inc_in_flight(bs);

/* Don't do copy-on-read if we read data before write operation */
- if (atomic_read(&bs->copy_on_read)) {
+ if (qatomic_read(&bs->copy_on_read)) {
flags |= BDRV_REQ_COPY_ON_READ;
}

@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_finish(BdrvChild *child, int64_t offset, uint64_t bytes,
int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
BlockDriverState *bs = child->bs;

- atomic_inc(&bs->write_gen);
+ qatomic_inc(&bs->write_gen);

/*
* Discard cannot extend the image, but in error handling cases, such as
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
}

qemu_co_mutex_lock(&bs->reqs_lock);
- current_gen = atomic_read(&bs->write_gen);
+ current_gen = qatomic_read(&bs->write_gen);

/* Wait until any previous flushes are completed */
while (bs->active_flush_req) {
@@ -XXX,XX +XXX,XX @@ void bdrv_io_plug(BlockDriverState *bs)
bdrv_io_plug(child->bs);
}

- if (atomic_fetch_inc(&bs->io_plugged) == 0) {
+ if (qatomic_fetch_inc(&bs->io_plugged) == 0) {
BlockDriver *drv = bs->drv;
if (drv && drv->bdrv_io_plug) {
drv->bdrv_io_plug(bs);
@@ -XXX,XX +XXX,XX @@ void bdrv_io_unplug(BlockDriverState *bs)
BdrvChild *child;

assert(bs->io_plugged);
- if (atomic_fetch_dec(&bs->io_plugged) == 1) {
+ if (qatomic_fetch_dec(&bs->io_plugged) == 1) {
BlockDriver *drv = bs->drv;
if (drv && drv->bdrv_io_unplug) {
drv->bdrv_io_unplug(bs);
diff --git a/block/nfs.c b/block/nfs.c
index XXXXXXX..XXXXXXX 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -XXX,XX +XXX,XX @@ nfs_get_allocated_file_size_cb(int ret, struct nfs_context *nfs, void *data,
}

/* Set task->complete before reading bs->wakeup. */
- atomic_mb_set(&task->complete, 1);
+ qatomic_mb_set(&task->complete, 1);
bdrv_wakeup(task->bs);
}

diff --git a/block/sheepdog.c b/block/sheepdog.c
index XXXXXXX..XXXXXXX 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -XXX,XX +XXX,XX @@ out:
srco->co = NULL;
srco->ret = ret;
/* Set srco->finished before reading bs->wakeup. */
- atomic_mb_set(&srco->finished, true);
+ qatomic_mb_set(&srco->finished, true);
if (srco->bs) {
bdrv_wakeup(srco->bs);
}
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index XXXXXXX..XXXXXXX 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -XXX,XX +XXX,XX @@ static ThrottleGroupMember *next_throttle_token(ThrottleGroupMember *tgm,
* immediately if it has pending requests. Otherwise we could be
* forcing it to wait for other member's throttled requests. */
if (tgm_has_pending_reqs(tgm, is_write) &&
- atomic_read(&tgm->io_limits_disabled)) {
+ qatomic_read(&tgm->io_limits_disabled)) {
return tgm;
}

@@ -XXX,XX +XXX,XX @@ static bool throttle_group_schedule_timer(ThrottleGroupMember *tgm,
ThrottleTimers *tt = &tgm->throttle_timers;
bool must_wait;

- if (atomic_read(&tgm->io_limits_disabled)) {
+ if (qatomic_read(&tgm->io_limits_disabled)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static void coroutine_fn throttle_group_restart_queue_entry(void *opaque)

g_free(data);

- atomic_dec(&tgm->restart_pending);
+ qatomic_dec(&tgm->restart_pending);
aio_wait_kick();
}

@@ -XXX,XX +XXX,XX @@ static void throttle_group_restart_queue(ThrottleGroupMember *tgm, bool is_write
* be no timer pending on this tgm at this point */
assert(!timer_pending(tgm->throttle_timers.timers[is_write]));

- atomic_inc(&tgm->restart_pending);
+ qatomic_inc(&tgm->restart_pending);

co = qemu_coroutine_create(throttle_group_restart_queue_entry, rd);
aio_co_enter(tgm->aio_context, co);
@@ -XXX,XX +XXX,XX @@ void throttle_group_register_tgm(ThrottleGroupMember *tgm,

tgm->throttle_state = ts;
tgm->aio_context = ctx;
- atomic_set(&tgm->restart_pending, 0);
+ qatomic_set(&tgm->restart_pending, 0);

qemu_mutex_lock(&tg->lock);
/* If the ThrottleGroup is new set this ThrottleGroupMember as the token */
@@ -XXX,XX +XXX,XX @@ void throttle_group_unregister_tgm(ThrottleGroupMember *tgm)
}

/* Wait for throttle_group_restart_queue_entry() coroutines to finish */
- AIO_WAIT_WHILE(tgm->aio_context, atomic_read(&tgm->restart_pending) > 0);
+ AIO_WAIT_WHILE(tgm->aio_context, qatomic_read(&tgm->restart_pending) > 0);

qemu_mutex_lock(&tg->lock);
for (i = 0; i < 2; i++) {
diff --git a/block/throttle.c b/block/throttle.c
index XXXXXXX..XXXXXXX 100644
--- a/block/throttle.c
+++ b/block/throttle.c
@@ -XXX,XX +XXX,XX @@ static void throttle_reopen_abort(BDRVReopenState *reopen_state)
static void coroutine_fn throttle_co_drain_begin(BlockDriverState *bs)
{
ThrottleGroupMember *tgm = bs->opaque;
- if (atomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
+ if (qatomic_fetch_inc(&tgm->io_limits_disabled) == 0) {
throttle_group_restart_tgm(tgm);
}
}
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn throttle_co_drain_end(BlockDriverState *bs)
{
ThrottleGroupMember *tgm = bs->opaque;
assert(tgm->io_limits_disabled);
- atomic_dec(&tgm->io_limits_disabled);
+ qatomic_dec(&tgm->io_limits_disabled);
}

static const char *const throttle_strong_runtime_opts[] = {
diff --git a/blockdev.c b/blockdev.c
index XXXXXXX..XXXXXXX 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -XXX,XX +XXX,XX @@ static void external_snapshot_commit(BlkActionState *common)
/* We don't need (or want) to use the transactional
* bdrv_reopen_multiple() across all the entries at once, because we
* don't want to abort all of them if one of them fails the reopen */
- if (!atomic_read(&state->old_bs->copy_on_read)) {
+ if (!qatomic_read(&state->old_bs->copy_on_read)) {
bdrv_reopen_set_read_only(state->old_bs, true, NULL);
}

diff --git a/blockjob.c b/blockjob.c
index XXXXXXX..XXXXXXX 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -XXX,XX +XXX,XX @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
info = g_new0(BlockJobInfo, 1);
info->type = g_strdup(job_type_str(&job->job));
info->device = g_strdup(job->job.id);
- info->busy = atomic_read(&job->job.busy);
+ info->busy = qatomic_read(&job->job.busy);
info->paused = job->job.pause_count > 0;
info->offset = job->job.progress.current;
info->len = job->job.progress.total;
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -XXX,XX +XXX,XX @@ static void
vu_log_page(uint8_t *log_table, uint64_t page)
{
DPRINT("Logged dirty guest page: %"PRId64"\n", page);
- atomic_or(&log_table[page / 8], 1 << (page % 8));
+ qatomic_or(&log_table[page / 8], 1 << (page % 8));
}

static void
diff --git a/cpus-common.c b/cpus-common.c
index XXXXXXX..XXXXXXX 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -XXX,XX +XXX,XX @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
wi.exclusive = false;

queue_work_on_cpu(cpu, &wi);
- while (!atomic_mb_read(&wi.done)) {
+ while (!qatomic_mb_read(&wi.done)) {
CPUState *self_cpu = current_cpu;

qemu_cond_wait(&qemu_work_cond, mutex);
@@ -XXX,XX +XXX,XX @@ void start_exclusive(void)
exclusive_idle();

/* Make all other cpus stop executing. */
- atomic_set(&pending_cpus, 1);
+ qatomic_set(&pending_cpus, 1);

/* Write pending_cpus before reading other_cpu->running. */
smp_mb();
running_cpus = 0;
CPU_FOREACH(other_cpu) {
- if (atomic_read(&other_cpu->running)) {
+ if (qatomic_read(&other_cpu->running)) {
other_cpu->has_waiter = true;
running_cpus++;
qemu_cpu_kick(other_cpu);
}
}

- atomic_set(&pending_cpus, running_cpus + 1);
+ qatomic_set(&pending_cpus, running_cpus + 1);
while (pending_cpus > 1) {
qemu_cond_wait(&exclusive_cond, &qemu_cpu_list_lock);
}
@@ -XXX,XX +XXX,XX @@ void end_exclusive(void)
current_cpu->in_exclusive_context = false;

qemu_mutex_lock(&qemu_cpu_list_lock);
- atomic_set(&pending_cpus, 0);
+ qatomic_set(&pending_cpus, 0);
qemu_cond_broadcast(&exclusive_resume);
qemu_mutex_unlock(&qemu_cpu_list_lock);
}
@@ -XXX,XX +XXX,XX @@ void end_exclusive(void)
/* Wait for exclusive ops to finish, and begin cpu execution. */
void cpu_exec_start(CPUState *cpu)
{
- atomic_set(&cpu->running, true);
+ qatomic_set(&cpu->running, true);

/* Write cpu->running before reading pending_cpus. */
smp_mb();
@@ -XXX,XX +XXX,XX @@ void cpu_exec_start(CPUState *cpu)
* 3. pending_cpus == 0. Then start_exclusive is definitely going to
* see cpu->running == true, and it will kick the CPU.
*/
- if (unlikely(atomic_read(&pending_cpus))) {
+ if (unlikely(qatomic_read(&pending_cpus))) {
QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (!cpu->has_waiter) {
/* Not counted in pending_cpus, let the exclusive item
* run. Since we have the lock, just set cpu->running to true
* while holding it; no need to check pending_cpus again.
*/
- atomic_set(&cpu->running, false);
+ qatomic_set(&cpu->running, false);
exclusive_idle();
/* Now pending_cpus is zero. */
- atomic_set(&cpu->running, true);
+ qatomic_set(&cpu->running, true);
} else {
/* Counted in pending_cpus, go ahead and release the
* waiter at cpu_exec_end.
@@ -XXX,XX +XXX,XX @@ void cpu_exec_start(CPUState *cpu)
/* Mark cpu as not executing, and release pending exclusive ops. */
void cpu_exec_end(CPUState *cpu)
{
- atomic_set(&cpu->running, false);
+ qatomic_set(&cpu->running, false);

/* Write cpu->running before reading pending_cpus. */
smp_mb();
@@ -XXX,XX +XXX,XX @@ void cpu_exec_end(CPUState *cpu)
* see cpu->running == false, and it can ignore this CPU until the
* next cpu_exec_start.
*/
- if (unlikely(atomic_read(&pending_cpus))) {
+ if (unlikely(qatomic_read(&pending_cpus))) {
QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (cpu->has_waiter) {
cpu->has_waiter = false;
- atomic_set(&pending_cpus, pending_cpus - 1);
+ qatomic_set(&pending_cpus, pending_cpus - 1);
if (pending_cpus == 1) {
qemu_cond_signal(&exclusive_cond);
}
@@ -XXX,XX +XXX,XX @@ void process_queued_cpu_work(CPUState *cpu)
if (wi->free) {
g_free(wi);
} else {
- atomic_mb_set(&wi->done, true);
+ qatomic_mb_set(&wi->done, true);
}
}
qemu_mutex_unlock(&cpu->work_mutex);
diff --git a/dump/dump.c b/dump/dump.c
index XXXXXXX..XXXXXXX 100644
--- a/dump/dump.c
+++ b/dump/dump.c
@@ -XXX,XX +XXX,XX @@ static void dump_state_prepare(DumpState *s)
bool dump_in_progress(void)
{
DumpState *state = &dump_state_global;
- return (atomic_read(&state->status) == DUMP_STATUS_ACTIVE);
+ return (qatomic_read(&state->status) == DUMP_STATUS_ACTIVE);
}

/* calculate total size of memory to be dumped (taking filter into
@@ -XXX,XX +XXX,XX @@ static void dump_process(DumpState *s, Error **errp)

/* make sure status is written after written_size updates */
smp_wmb();
- atomic_set(&s->status,
+ qatomic_set(&s->status,
(local_err ? DUMP_STATUS_FAILED : DUMP_STATUS_COMPLETED));

/* send DUMP_COMPLETED message (unconditionally) */
@@ -XXX,XX +XXX,XX @@ DumpQueryResult *qmp_query_dump(Error **errp)
{
DumpQueryResult *result = g_new(DumpQueryResult, 1);
DumpState *state = &dump_state_global;
- result->status = atomic_read(&state->status);
+ result->status = qatomic_read(&state->status);
/* make sure we are reading status and written_size in order */
smp_rmb();
result->completed = state->written_size;
@@ -XXX,XX +XXX,XX @@ void qmp_dump_guest_memory(bool paging, const char *file,
begin, length, &local_err);
if (local_err) {
error_propagate(errp, local_err);
- atomic_set(&s->status, DUMP_STATUS_FAILED);
+ qatomic_set(&s->status, DUMP_STATUS_FAILED);
return;
}

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
hwaddr addr,
bool resolve_subpage)
{
- MemoryRegionSection *section = atomic_read(&d->mru_section);
+ MemoryRegionSection *section = qatomic_read(&d->mru_section);
subpage_t *subpage;

if (!section || section == &d->map.sections[PHYS_SECTION_UNASSIGNED] ||
!section_covers_addr(section, addr)) {
section = phys_page_find(d, addr);
- atomic_set(&d->mru_section, section);
+ qatomic_set(&d->mru_section, section);
}
if (resolve_subpage && section->mr->subpage) {
subpage = container_of(section->mr, subpage_t, iomem);
@@ -XXX,XX +XXX,XX @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
IOMMUMemoryRegionClass *imrc;
IOMMUTLBEntry iotlb;
int iommu_idx;
- AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
+ AddressSpaceDispatch *d =
+ qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);

for (;;) {
section = address_space_translate_internal(d, addr, &addr, plen, false);
@@ -XXX,XX +XXX,XX @@ static RAMBlock *qemu_get_ram_block(ram_addr_t addr)
{
RAMBlock *block;

- block = atomic_rcu_read(&ram_list.mru_block);
+ block = qatomic_rcu_read(&ram_list.mru_block);
if (block && addr - block->offset < block->max_length) {
return block;
}
@@ -XXX,XX +XXX,XX @@ found:
* call_rcu(reclaim_ramblock, xxx);
* rcu_read_unlock()
*
- * atomic_rcu_set is not needed here. The block was already published
+ * qatomic_rcu_set is not needed here. The block was already published
* when it was placed into the list. Here we're just making an extra
* copy of the pointer.
*/
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
page = start_page;

WITH_RCU_READ_LOCK_GUARD() {
- blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+ blocks = qatomic_rcu_read(&ram_list.dirty_memory[client]);
ramblock = qemu_get_ram_block(start);
/* Range sanity check on the ramblock */
assert(start >= ramblock->offset &&
@@ -XXX,XX +XXX,XX @@ DirtyBitmapSnapshot *cpu_physical_memory_snapshot_and_clear_dirty
dest = 0;

WITH_RCU_READ_LOCK_GUARD() {
- blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+ blocks = qatomic_rcu_read(&ram_list.dirty_memory[client]);

while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
@@ -XXX,XX +XXX,XX @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
DirtyMemoryBlocks *new_blocks;
int j;

- old_blocks = atomic_rcu_read(&ram_list.dirty_memory[i]);
+ old_blocks = qatomic_rcu_read(&ram_list.dirty_memory[i]);
new_blocks = g_malloc(sizeof(*new_blocks) +
sizeof(new_blocks->blocks[0]) * new_num_blocks);

@@ -XXX,XX +XXX,XX @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
new_blocks->blocks[j] = bitmap_new(DIRTY_MEMORY_BLOCK_SIZE);
}

- atomic_rcu_set(&ram_list.dirty_memory[i], new_blocks);
+ qatomic_rcu_set(&ram_list.dirty_memory[i], new_blocks);

if (old_blocks) {
g_free_rcu(old_blocks, rcu);
@@ -XXX,XX +XXX,XX @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
}

RCU_READ_LOCK_GUARD();
- block = atomic_rcu_read(&ram_list.mru_block);
+ block = qatomic_rcu_read(&ram_list.mru_block);
if (block && block->host && host - block->host < block->max_length) {
goto found;
}
@@ -XXX,XX +XXX,XX @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
{
int asidx = cpu_asidx_from_attrs(cpu, attrs);
CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
- AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
+ AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
MemoryRegionSection *sections = d->map.sections;

return &sections[index & ~TARGET_PAGE_MASK];
@@ -XXX,XX +XXX,XX @@ static void tcg_commit(MemoryListener *listener)
* may have split the RCU critical section.
*/
d = address_space_to_dispatch(cpuas->as);
- atomic_rcu_set(&cpuas->memory_dispatch, d);
+ qatomic_rcu_set(&cpuas->memory_dispatch, d);
tlb_flush(cpuas->cpu);
}

@@ -XXX,XX +XXX,XX @@ void cpu_register_map_client(QEMUBH *bh)
qemu_mutex_lock(&map_client_list_lock);
client->bh = bh;
QLIST_INSERT_HEAD(&map_client_list, client, link);
- if (!atomic_read(&bounce.in_use)) {
+ if (!qatomic_read(&bounce.in_use)) {
cpu_notify_map_clients_locked();
}
qemu_mutex_unlock(&map_client_list_lock);
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);

if (!memory_access_is_direct(mr, is_write)) {
- if (atomic_xchg(&bounce.in_use, true)) {
+ if (qatomic_xchg(&bounce.in_use, true)) {
*plen = 0;
return NULL;
}
@@ -XXX,XX +XXX,XX @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
qemu_vfree(bounce.buffer);
bounce.buffer = NULL;
memory_region_unref(bounce.mr);
- atomic_mb_set(&bounce.in_use, false);
+ qatomic_mb_set(&bounce.in_use, false);
cpu_notify_map_clients();
}

@@ -XXX,XX +XXX,XX @@ int ram_block_discard_disable(bool state)
int old;

if (!state) {
- atomic_dec(&ram_block_discard_disabled);
+ qatomic_dec(&ram_block_discard_disabled);
return 0;
}

do {
- old = atomic_read(&ram_block_discard_disabled);
+ old = qatomic_read(&ram_block_discard_disabled);
if (old < 0) {
return -EBUSY;
}
- } while (atomic_cmpxchg(&ram_block_discard_disabled, old, old + 1) != old);
+ } while (qatomic_cmpxchg(&ram_block_discard_disabled,
+ old, old + 1) != old);
return 0;
}

@@ -XXX,XX +XXX,XX @@ int ram_block_discard_require(bool state)
int old;

if (!state) {
- atomic_inc(&ram_block_discard_disabled);
+ qatomic_inc(&ram_block_discard_disabled);
return 0;
}

do {
- old = atomic_read(&ram_block_discard_disabled);
+ old = qatomic_read(&ram_block_discard_disabled);
if (old > 0) {
return -EBUSY;
}
- } while (atomic_cmpxchg(&ram_block_discard_disabled, old, old - 1) != old);
+ } while (qatomic_cmpxchg(&ram_block_discard_disabled,
+ old, old - 1) != old);
return 0;
}

bool ram_block_discard_is_disabled(void)
{
- return atomic_read(&ram_block_discard_disabled) > 0;
+ return qatomic_read(&ram_block_discard_disabled) > 0;
}

bool ram_block_discard_is_required(void)
{
- return atomic_read(&ram_block_discard_disabled) < 0;
+ return qatomic_read(&ram_block_discard_disabled) < 0;
}

#endif
diff --git a/hw/core/cpu.c b/hw/core/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/cpu.c
+++ b/hw/core/cpu.c
@@ -XXX,XX +XXX,XX @@ void cpu_reset_interrupt(CPUState *cpu, int mask)

void cpu_exit(CPUState *cpu)
{
- atomic_set(&cpu->exit_request, 1);
+ qatomic_set(&cpu->exit_request, 1);
/* Ensure cpu_exec will see the exit request after TCG has exited. */
smp_wmb();
- atomic_set(&cpu->icount_decr_ptr->u16.high, -1);
+ qatomic_set(&cpu->icount_decr_ptr->u16.high, -1);
}

int cpu_write_elf32_qemunote(WriteCoreDumpFunction f, CPUState *cpu,
@@ -XXX,XX +XXX,XX @@ static void cpu_common_reset(DeviceState *dev)
cpu->halted = cpu->start_powered_off;
cpu->mem_io_pc = 0;
cpu->icount_extra = 0;
- atomic_set(&cpu->icount_decr_ptr->u32, 0);
+ qatomic_set(&cpu->icount_decr_ptr->u32, 0);
cpu->can_do_io = 1;
cpu->exception_index = -1;
cpu->crash_occurred = false;
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -XXX,XX +XXX,XX @@ static void qxl_send_events(PCIQXLDevice *d, uint32_t events)
/*
* Older versions of Spice forgot to define the QXLRam struct
* with the '__aligned__(4)' attribute. clang 7 and newer will
- * thus warn that atomic_fetch_or(&d->ram->int_pending, ...)
+ * thus warn that qatomic_fetch_or(&d->ram->int_pending, ...)
* might be a misaligned atomic access, and will generate an
* out-of-line call for it, which results in a link error since
* we don't currently link against libatomic.
@@ -XXX,XX +XXX,XX @@ static void qxl_send_events(PCIQXLDevice *d, uint32_t events)
#define ALIGNED_UINT32_PTR(P) ((uint32_t *)P)
#endif

- old_pending = atomic_fetch_or(ALIGNED_UINT32_PTR(&d->ram->int_pending),
+ old_pending = qatomic_fetch_or(ALIGNED_UINT32_PTR(&d->ram->int_pending),
le_events);
if ((old_pending & le_events) == le_events) {
return;
diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/hyperv/hyperv.c
+++ b/hw/hyperv/hyperv.c
@@ -XXX,XX +XXX,XX @@ static void sint_msg_bh(void *opaque)
HvSintRoute *sint_route = opaque;
HvSintStagedMessage *staged_msg = sint_route->staged_msg;

- if (atomic_read(&staged_msg->state) != HV_STAGED_MSG_POSTED) {
+ if (qatomic_read(&staged_msg->state) != HV_STAGED_MSG_POSTED) {
/* status nor ready yet (spurious ack from guest?), ignore */
return;
}
@@ -XXX,XX +XXX,XX @@ static void sint_msg_bh(void *opaque)
staged_msg->status = 0;

/* staged message processing finished, ready to start over */
- atomic_set(&staged_msg->state, HV_STAGED_MSG_FREE);
+ qatomic_set(&staged_msg->state, HV_STAGED_MSG_FREE);
/* drop the reference taken in hyperv_post_msg */
hyperv_sint_route_unref(sint_route);
}
@@ -XXX,XX +XXX,XX @@ static void cpu_post_msg(CPUState *cs, run_on_cpu_data data)
memory_region_set_dirty(&synic->msg_page_mr, 0, sizeof(*synic->msg_page));

posted:
- atomic_set(&staged_msg->state, HV_STAGED_MSG_POSTED);
+ qatomic_set(&staged_msg->state, HV_STAGED_MSG_POSTED);
/*
* Notify the msg originator of the progress made; if the slot was busy we
* set msg_pending flag in it so it will be the guest who will do EOM and
@@ -XXX,XX +XXX,XX @@ int hyperv_post_msg(HvSintRoute *sint_route, struct hyperv_message *src_msg)
assert(staged_msg);

/* grab the staging area */
- if (atomic_cmpxchg(&staged_msg->state, HV_STAGED_MSG_FREE,
+ if (qatomic_cmpxchg(&staged_msg->state, HV_STAGED_MSG_FREE,
HV_STAGED_MSG_BUSY) != HV_STAGED_MSG_FREE) {
return -EAGAIN;
}

@@ -XXX,XX +XXX,XX @@ int hyperv_set_event_flag(HvSintRoute *sint_route, unsigned eventno)
set_mask = BIT_MASK(eventno);
flags = synic->event_page->slot[sint_route->sint].flags;

- if ((atomic_fetch_or(&flags[set_idx], set_mask) & set_mask) != set_mask) {
+ if ((qatomic_fetch_or(&flags[set_idx], set_mask) & set_mask) != set_mask) {
memory_region_set_dirty(&synic->event_page_mr, 0,
sizeof(*synic->event_page));
ret = hyperv_sint_route_set_sint(sint_route);
3246
index XXXXXXX..XXXXXXX 100644
3247
--- a/hw/hyperv/vmbus.c
3248
+++ b/hw/hyperv/vmbus.c
3249
@@ -XXX,XX +XXX,XX @@ static int vmbus_channel_notify_guest(VMBusChannel *chan)
3250
3251
idx = BIT_WORD(chan->id);
3252
mask = BIT_MASK(chan->id);
3253
- if ((atomic_fetch_or(&int_map[idx], mask) & mask) != mask) {
3254
+ if ((qatomic_fetch_or(&int_map[idx], mask) & mask) != mask) {
3255
res = hyperv_sint_route_set_sint(chan->notify_route);
3256
dirty = len;
3257
}
3258
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
3259
index XXXXXXX..XXXXXXX 100644
3260
--- a/hw/i386/xen/xen-hvm.c
3261
+++ b/hw/i386/xen/xen-hvm.c
3262
@@ -XXX,XX +XXX,XX @@ static int handle_buffered_iopage(XenIOState *state)
3263
assert(req.dir == IOREQ_WRITE);
3264
assert(!req.data_is_ptr);
3265
3266
- atomic_add(&buf_page->read_pointer, qw + 1);
3267
+ qatomic_add(&buf_page->read_pointer, qw + 1);
3268
}
3269
3270
return req.count;
3271
diff --git a/hw/intc/rx_icu.c b/hw/intc/rx_icu.c
3272
index XXXXXXX..XXXXXXX 100644
3273
--- a/hw/intc/rx_icu.c
3274
+++ b/hw/intc/rx_icu.c
3275
@@ -XXX,XX +XXX,XX @@ static void rxicu_request(RXICUState *icu, int n_IRQ)
3276
int enable;
3277
3278
enable = icu->ier[n_IRQ / 8] & (1 << (n_IRQ & 7));
3279
- if (n_IRQ > 0 && enable != 0 && atomic_read(&icu->req_irq) < 0) {
3280
- atomic_set(&icu->req_irq, n_IRQ);
3281
+ if (n_IRQ > 0 && enable != 0 && qatomic_read(&icu->req_irq) < 0) {
3282
+ qatomic_set(&icu->req_irq, n_IRQ);
3283
set_irq(icu, n_IRQ, rxicu_level(icu, n_IRQ));
3284
}
3285
}
3286
@@ -XXX,XX +XXX,XX @@ static void rxicu_set_irq(void *opaque, int n_IRQ, int level)
3287
}
3288
if (issue == 0 && src->sense == TRG_LEVEL) {
3289
icu->ir[n_IRQ] = 0;
3290
- if (atomic_read(&icu->req_irq) == n_IRQ) {
3291
+ if (qatomic_read(&icu->req_irq) == n_IRQ) {
3292
/* clear request */
3293
set_irq(icu, n_IRQ, 0);
3294
- atomic_set(&icu->req_irq, -1);
3295
+ qatomic_set(&icu->req_irq, -1);
3296
}
3297
return;
3298
}
3299
@@ -XXX,XX +XXX,XX @@ static void rxicu_ack_irq(void *opaque, int no, int level)
3300
int n_IRQ;
3301
int max_pri;
3302
3303
- n_IRQ = atomic_read(&icu->req_irq);
3304
+ n_IRQ = qatomic_read(&icu->req_irq);
3305
if (n_IRQ < 0) {
3306
return;
3307
}
3308
- atomic_set(&icu->req_irq, -1);
3309
+ qatomic_set(&icu->req_irq, -1);
3310
if (icu->src[n_IRQ].sense != TRG_LEVEL) {
3311
icu->ir[n_IRQ] = 0;
3312
}
3313
diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
3314
index XXXXXXX..XXXXXXX 100644
3315
--- a/hw/intc/sifive_plic.c
3316
+++ b/hw/intc/sifive_plic.c
3317
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_print_state(SiFivePLICState *plic)
3318
3319
static uint32_t atomic_set_masked(uint32_t *a, uint32_t mask, uint32_t value)
3320
{
3321
- uint32_t old, new, cmp = atomic_read(a);
3322
+ uint32_t old, new, cmp = qatomic_read(a);
3323
3324
do {
3325
old = cmp;
3326
new = (old & ~mask) | (value & mask);
3327
- cmp = atomic_cmpxchg(a, old, new);
3328
+ cmp = qatomic_cmpxchg(a, old, new);
3329
} while (old != cmp);
3330
3331
return old;
3332
diff --git a/hw/misc/edu.c b/hw/misc/edu.c
3333
index XXXXXXX..XXXXXXX 100644
3334
--- a/hw/misc/edu.c
3335
+++ b/hw/misc/edu.c
3336
@@ -XXX,XX +XXX,XX @@ static uint64_t edu_mmio_read(void *opaque, hwaddr addr, unsigned size)
3337
qemu_mutex_unlock(&edu->thr_mutex);
3338
break;
3339
case 0x20:
3340
- val = atomic_read(&edu->status);
3341
+ val = qatomic_read(&edu->status);
3342
break;
3343
case 0x24:
3344
val = edu->irq_status;
3345
@@ -XXX,XX +XXX,XX @@ static void edu_mmio_write(void *opaque, hwaddr addr, uint64_t val,
3346
edu->addr4 = ~val;
3347
break;
3348
case 0x08:
3349
- if (atomic_read(&edu->status) & EDU_STATUS_COMPUTING) {
3350
+ if (qatomic_read(&edu->status) & EDU_STATUS_COMPUTING) {
3351
break;
3352
}
3353
/* EDU_STATUS_COMPUTING cannot go 0->1 concurrently, because it is only
3354
@@ -XXX,XX +XXX,XX @@ static void edu_mmio_write(void *opaque, hwaddr addr, uint64_t val,
3355
*/
3356
qemu_mutex_lock(&edu->thr_mutex);
3357
edu->fact = val;
3358
- atomic_or(&edu->status, EDU_STATUS_COMPUTING);
3359
+ qatomic_or(&edu->status, EDU_STATUS_COMPUTING);
3360
qemu_cond_signal(&edu->thr_cond);
3361
qemu_mutex_unlock(&edu->thr_mutex);
3362
break;
3363
case 0x20:
3364
if (val & EDU_STATUS_IRQFACT) {
3365
- atomic_or(&edu->status, EDU_STATUS_IRQFACT);
3366
+ qatomic_or(&edu->status, EDU_STATUS_IRQFACT);
3367
} else {
3368
- atomic_and(&edu->status, ~EDU_STATUS_IRQFACT);
3369
+ qatomic_and(&edu->status, ~EDU_STATUS_IRQFACT);
3370
}
3371
break;
3372
case 0x60:
3373
@@ -XXX,XX +XXX,XX @@ static void *edu_fact_thread(void *opaque)
3374
uint32_t val, ret = 1;
3375
3376
qemu_mutex_lock(&edu->thr_mutex);
3377
- while ((atomic_read(&edu->status) & EDU_STATUS_COMPUTING) == 0 &&
3378
+ while ((qatomic_read(&edu->status) & EDU_STATUS_COMPUTING) == 0 &&
3379
!edu->stopping) {
3380
qemu_cond_wait(&edu->thr_cond, &edu->thr_mutex);
3381
}
3382
@@ -XXX,XX +XXX,XX @@ static void *edu_fact_thread(void *opaque)
3383
qemu_mutex_lock(&edu->thr_mutex);
3384
edu->fact = ret;
3385
qemu_mutex_unlock(&edu->thr_mutex);
3386
- atomic_and(&edu->status, ~EDU_STATUS_COMPUTING);
3387
+ qatomic_and(&edu->status, ~EDU_STATUS_COMPUTING);
3388
3389
- if (atomic_read(&edu->status) & EDU_STATUS_IRQFACT) {
3390
+ if (qatomic_read(&edu->status) & EDU_STATUS_IRQFACT) {
3391
qemu_mutex_lock_iothread();
3392
edu_raise_irq(edu, FACT_IRQ);
3393
qemu_mutex_unlock_iothread();
3394
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
3395
index XXXXXXX..XXXXXXX 100644
3396
--- a/hw/net/virtio-net.c
3397
+++ b/hw/net/virtio-net.c
3398
@@ -XXX,XX +XXX,XX @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
3399
3400
if (virtio_has_feature(features, VIRTIO_NET_F_STANDBY)) {
3401
qapi_event_send_failover_negotiated(n->netclient_name);
3402
- atomic_set(&n->primary_should_be_hidden, false);
3403
+ qatomic_set(&n->primary_should_be_hidden, false);
3404
failover_add_primary(n, &err);
3405
if (err) {
3406
n->primary_dev = virtio_connect_failover_devices(n, n->qdev, &err);
3407
@@ -XXX,XX +XXX,XX @@ static void virtio_net_handle_migration_primary(VirtIONet *n,
3408
bool should_be_hidden;
3409
Error *err = NULL;
3410
3411
- should_be_hidden = atomic_read(&n->primary_should_be_hidden);
3412
+ should_be_hidden = qatomic_read(&n->primary_should_be_hidden);
3413
3414
if (!n->primary_dev) {
3415
n->primary_dev = virtio_connect_failover_devices(n, n->qdev, &err);
3416
@@ -XXX,XX +XXX,XX @@ static void virtio_net_handle_migration_primary(VirtIONet *n,
3417
qdev_get_vmsd(n->primary_dev),
3418
n->primary_dev);
3419
qapi_event_send_unplug_primary(n->primary_device_id);
3420
- atomic_set(&n->primary_should_be_hidden, true);
3421
+ qatomic_set(&n->primary_should_be_hidden, true);
3422
} else {
3423
warn_report("couldn't unplug primary device");
3424
}
3425
@@ -XXX,XX +XXX,XX @@ static int virtio_net_primary_should_be_hidden(DeviceListener *listener,
3426
n->primary_device_opts = device_opts;
3427
3428
/* primary_should_be_hidden is set during feature negotiation */
3429
- hide = atomic_read(&n->primary_should_be_hidden);
3430
+ hide = qatomic_read(&n->primary_should_be_hidden);
3431
3432
if (n->primary_device_dict) {
3433
g_free(n->primary_device_id);
3434
@@ -XXX,XX +XXX,XX @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
3435
if (n->failover) {
3436
n->primary_listener.should_be_hidden =
3437
virtio_net_primary_should_be_hidden;
3438
- atomic_set(&n->primary_should_be_hidden, true);
3439
+ qatomic_set(&n->primary_should_be_hidden, true);
3440
device_listener_register(&n->primary_listener);
3441
n->migration_state.notify = virtio_net_migration_state_notifier;
3442
add_migration_state_change_notifier(&n->migration_state);
3443
diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c
3444
index XXXXXXX..XXXXXXX 100644
3445
--- a/hw/rdma/rdma_backend.c
3446
+++ b/hw/rdma/rdma_backend.c
3447
@@ -XXX,XX +XXX,XX @@ static void free_cqe_ctx(gpointer data, gpointer user_data)
3448
bctx = rdma_rm_get_cqe_ctx(rdma_dev_res, cqe_ctx_id);
3449
if (bctx) {
3450
rdma_rm_dealloc_cqe_ctx(rdma_dev_res, cqe_ctx_id);
3451
- atomic_dec(&rdma_dev_res->stats.missing_cqe);
3452
+ qatomic_dec(&rdma_dev_res->stats.missing_cqe);
3453
}
3454
g_free(bctx);
3455
}
3456
@@ -XXX,XX +XXX,XX @@ static void clean_recv_mads(RdmaBackendDev *backend_dev)
3457
cqe_ctx_id = rdma_protected_qlist_pop_int64(&backend_dev->
3458
recv_mads_list);
3459
if (cqe_ctx_id != -ENOENT) {
3460
- atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3461
+ qatomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3462
free_cqe_ctx(GINT_TO_POINTER(cqe_ctx_id),
3463
backend_dev->rdma_dev_res);
3464
}
3465
@@ -XXX,XX +XXX,XX @@ static int rdma_poll_cq(RdmaDeviceResources *rdma_dev_res, struct ibv_cq *ibcq)
3466
}
3467
total_ne += ne;
3468
} while (ne > 0);
3469
- atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
3470
+ qatomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
3471
}
3472
3473
if (ne < 0) {
3474
@@ -XXX,XX +XXX,XX @@ static void *comp_handler_thread(void *arg)
3475
3476
static inline void disable_rdmacm_mux_async(RdmaBackendDev *backend_dev)
3477
{
3478
- atomic_set(&backend_dev->rdmacm_mux.can_receive, 0);
3479
+ qatomic_set(&backend_dev->rdmacm_mux.can_receive, 0);
3480
}
3481
3482
static inline void enable_rdmacm_mux_async(RdmaBackendDev *backend_dev)
3483
{
3484
- atomic_set(&backend_dev->rdmacm_mux.can_receive, sizeof(RdmaCmMuxMsg));
3485
+ qatomic_set(&backend_dev->rdmacm_mux.can_receive, sizeof(RdmaCmMuxMsg));
3486
}
3487
3488
static inline int rdmacm_mux_can_process_async(RdmaBackendDev *backend_dev)
3489
{
3490
- return atomic_read(&backend_dev->rdmacm_mux.can_receive);
3491
+ return qatomic_read(&backend_dev->rdmacm_mux.can_receive);
3492
}
3493
3494
static int rdmacm_mux_check_op_status(CharBackend *mad_chr_be)
3495
@@ -XXX,XX +XXX,XX @@ void rdma_backend_post_send(RdmaBackendDev *backend_dev,
3496
goto err_dealloc_cqe_ctx;
3497
}
3498
3499
- atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3500
+ qatomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3501
backend_dev->rdma_dev_res->stats.tx++;
3502
3503
return;
3504
@@ -XXX,XX +XXX,XX @@ void rdma_backend_post_recv(RdmaBackendDev *backend_dev,
3505
goto err_dealloc_cqe_ctx;
3506
}
3507
3508
- atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3509
+ qatomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3510
backend_dev->rdma_dev_res->stats.rx_bufs++;
3511
3512
return;
3513
@@ -XXX,XX +XXX,XX @@ void rdma_backend_post_srq_recv(RdmaBackendDev *backend_dev,
3514
goto err_dealloc_cqe_ctx;
3515
}
3516
3517
- atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3518
+ qatomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe);
3519
backend_dev->rdma_dev_res->stats.rx_bufs++;
3520
backend_dev->rdma_dev_res->stats.rx_srq++;
3521
3522
diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c
3523
index XXXXXXX..XXXXXXX 100644
3524
--- a/hw/rdma/rdma_rm.c
3525
+++ b/hw/rdma/rdma_rm.c
3526
@@ -XXX,XX +XXX,XX @@ int rdma_rm_init(RdmaDeviceResources *dev_res, struct ibv_device_attr *dev_attr)
3527
qemu_mutex_init(&dev_res->lock);
3528
3529
memset(&dev_res->stats, 0, sizeof(dev_res->stats));
3530
- atomic_set(&dev_res->stats.missing_cqe, 0);
3531
+ qatomic_set(&dev_res->stats.missing_cqe, 0);
3532
3533
return 0;
3534
}
3535
diff --git a/hw/rdma/vmw/pvrdma_dev_ring.c b/hw/rdma/vmw/pvrdma_dev_ring.c
3536
index XXXXXXX..XXXXXXX 100644
3537
--- a/hw/rdma/vmw/pvrdma_dev_ring.c
3538
+++ b/hw/rdma/vmw/pvrdma_dev_ring.c
3539
@@ -XXX,XX +XXX,XX @@ int pvrdma_ring_init(PvrdmaRing *ring, const char *name, PCIDevice *dev,
3540
ring->max_elems = max_elems;
3541
ring->elem_sz = elem_sz;
3542
/* TODO: Give a moment to think if we want to redo driver settings
3543
- atomic_set(&ring->ring_state->prod_tail, 0);
3544
- atomic_set(&ring->ring_state->cons_head, 0);
3545
+ qatomic_set(&ring->ring_state->prod_tail, 0);
3546
+ qatomic_set(&ring->ring_state->cons_head, 0);
3547
*/
3548
ring->npages = npages;
3549
ring->pages = g_malloc(npages * sizeof(void *));
3550
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
3551
index XXXXXXX..XXXXXXX 100644
3552
--- a/hw/s390x/s390-pci-bus.c
3553
+++ b/hw/s390x/s390-pci-bus.c
3554
@@ -XXX,XX +XXX,XX @@ static uint8_t set_ind_atomic(uint64_t ind_loc, uint8_t to_be_set)
3555
actual = *ind_addr;
3556
do {
3557
expected = actual;
3558
- actual = atomic_cmpxchg(ind_addr, expected, expected | to_be_set);
3559
+ actual = qatomic_cmpxchg(ind_addr, expected, expected | to_be_set);
3560
} while (actual != expected);
3561
cpu_physical_memory_unmap((void *)ind_addr, len, 1, len);
3562
3563
diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
3564
index XXXXXXX..XXXXXXX 100644
3565
--- a/hw/s390x/virtio-ccw.c
3566
+++ b/hw/s390x/virtio-ccw.c
3567
@@ -XXX,XX +XXX,XX @@ static uint8_t virtio_set_ind_atomic(SubchDev *sch, uint64_t ind_loc,
3568
actual = *ind_addr;
3569
do {
3570
expected = actual;
3571
- actual = atomic_cmpxchg(ind_addr, expected, expected | to_be_set);
3572
+ actual = qatomic_cmpxchg(ind_addr, expected, expected | to_be_set);
3573
} while (actual != expected);
3574
trace_virtio_ccw_set_ind(ind_loc, actual, actual | to_be_set);
3575
cpu_physical_memory_unmap((void *)ind_addr, len, 1, len);
3576
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
3577
index XXXXXXX..XXXXXXX 100644
3578
--- a/hw/virtio/vhost.c
3579
+++ b/hw/virtio/vhost.c
3580
@@ -XXX,XX +XXX,XX @@ static void vhost_dev_sync_region(struct vhost_dev *dev,
3581
}
3582
/* Data must be read atomically. We don't really need barrier semantics
3583
* but it's easier to use atomic_* than roll our own. */
3584
- log = atomic_xchg(from, 0);
3585
+ log = qatomic_xchg(from, 0);
3586
while (log) {
3587
int bit = ctzl(log);
3588
hwaddr page_addr;
3589
diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
3590
index XXXXXXX..XXXXXXX 100644
3591
--- a/hw/virtio/virtio-mmio.c
3592
+++ b/hw/virtio/virtio-mmio.c
3593
@@ -XXX,XX +XXX,XX @@ static uint64_t virtio_mmio_read(void *opaque, hwaddr offset, unsigned size)
3594
}
3595
return proxy->vqs[vdev->queue_sel].enabled;
3596
case VIRTIO_MMIO_INTERRUPT_STATUS:
3597
- return atomic_read(&vdev->isr);
3598
+ return qatomic_read(&vdev->isr);
3599
case VIRTIO_MMIO_STATUS:
3600
return vdev->status;
3601
case VIRTIO_MMIO_CONFIG_GENERATION:
3602
@@ -XXX,XX +XXX,XX @@ static void virtio_mmio_write(void *opaque, hwaddr offset, uint64_t value,
3603
}
3604
break;
3605
case VIRTIO_MMIO_INTERRUPT_ACK:
3606
- atomic_and(&vdev->isr, ~value);
3607
+ qatomic_and(&vdev->isr, ~value);
3608
virtio_update_irq(vdev);
3609
break;
3610
case VIRTIO_MMIO_STATUS:
3611
@@ -XXX,XX +XXX,XX @@ static void virtio_mmio_update_irq(DeviceState *opaque, uint16_t vector)
3612
if (!vdev) {
3613
return;
3614
}
3615
- level = (atomic_read(&vdev->isr) != 0);
3616
+ level = (qatomic_read(&vdev->isr) != 0);
3617
trace_virtio_mmio_setting_irq(level);
3618
qemu_set_irq(proxy->irq, level);
3619
}
3620
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
3621
index XXXXXXX..XXXXXXX 100644
3622
--- a/hw/virtio/virtio-pci.c
3623
+++ b/hw/virtio/virtio-pci.c
3624
@@ -XXX,XX +XXX,XX @@ static void virtio_pci_notify(DeviceState *d, uint16_t vector)
3625
msix_notify(&proxy->pci_dev, vector);
3626
else {
3627
VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
3628
- pci_set_irq(&proxy->pci_dev, atomic_read(&vdev->isr) & 1);
3629
+ pci_set_irq(&proxy->pci_dev, qatomic_read(&vdev->isr) & 1);
3630
}
3631
}
3632
3633
@@ -XXX,XX +XXX,XX @@ static uint32_t virtio_ioport_read(VirtIOPCIProxy *proxy, uint32_t addr)
3634
break;
3635
case VIRTIO_PCI_ISR:
3636
/* reading from the ISR also clears it. */
3637
- ret = atomic_xchg(&vdev->isr, 0);
3638
+ ret = qatomic_xchg(&vdev->isr, 0);
3639
pci_irq_deassert(&proxy->pci_dev);
3640
break;
3641
case VIRTIO_MSI_CONFIG_VECTOR:
3642
@@ -XXX,XX +XXX,XX @@ static uint64_t virtio_pci_isr_read(void *opaque, hwaddr addr,
3643
{
3644
VirtIOPCIProxy *proxy = opaque;
3645
VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
3646
- uint64_t val = atomic_xchg(&vdev->isr, 0);
3647
+ uint64_t val = qatomic_xchg(&vdev->isr, 0);
3648
pci_irq_deassert(&proxy->pci_dev);
3649
3650
return val;
3651
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
3652
index XXXXXXX..XXXXXXX 100644
3653
--- a/hw/virtio/virtio.c
3654
+++ b/hw/virtio/virtio.c
3655
@@ -XXX,XX +XXX,XX @@ static void virtio_virtqueue_reset_region_cache(struct VirtQueue *vq)
3656
{
3657
VRingMemoryRegionCaches *caches;
3658
3659
- caches = atomic_read(&vq->vring.caches);
3660
- atomic_rcu_set(&vq->vring.caches, NULL);
3661
+ caches = qatomic_read(&vq->vring.caches);
3662
+ qatomic_rcu_set(&vq->vring.caches, NULL);
3663
if (caches) {
3664
call_rcu(caches, virtio_free_region_cache, rcu);
3665
}
3666
@@ -XXX,XX +XXX,XX @@ static void virtio_init_region_cache(VirtIODevice *vdev, int n)
3667
goto err_avail;
3668
}
3669
3670
- atomic_rcu_set(&vq->vring.caches, new);
3671
+ qatomic_rcu_set(&vq->vring.caches, new);
3672
if (old) {
3673
call_rcu(old, virtio_free_region_cache, rcu);
3674
}
3675
@@ -XXX,XX +XXX,XX @@ static void vring_packed_flags_write(VirtIODevice *vdev,
3676
/* Called within rcu_read_lock(). */
3677
static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
3678
{
3679
- return atomic_rcu_read(&vq->vring.caches);
3680
+ return qatomic_rcu_read(&vq->vring.caches);
3681
}
3682
3683
/* Called within rcu_read_lock(). */
3684
@@ -XXX,XX +XXX,XX @@ void virtio_reset(void *opaque)
3685
vdev->queue_sel = 0;
3686
vdev->status = 0;
3687
vdev->disabled = false;
3688
- atomic_set(&vdev->isr, 0);
3689
+ qatomic_set(&vdev->isr, 0);
3690
vdev->config_vector = VIRTIO_NO_VECTOR;
3691
virtio_notify_vector(vdev, vdev->config_vector);
3692
3693
@@ -XXX,XX +XXX,XX @@ void virtio_del_queue(VirtIODevice *vdev, int n)
3694
3695
static void virtio_set_isr(VirtIODevice *vdev, int value)
3696
{
3697
- uint8_t old = atomic_read(&vdev->isr);
3698
+ uint8_t old = qatomic_read(&vdev->isr);
3699
3700
/* Do not write ISR if it does not change, so that its cacheline remains
3701
* shared in the common case where the guest does not read it.
3702
*/
3703
if ((old & value) != value) {
3704
- atomic_or(&vdev->isr, value);
3705
+ qatomic_or(&vdev->isr, value);
3706
}
3707
}
3708
3709
@@ -XXX,XX +XXX,XX @@ void virtio_init(VirtIODevice *vdev, const char *name,
3710
vdev->started = false;
3711
vdev->device_id = device_id;
3712
vdev->status = 0;
3713
- atomic_set(&vdev->isr, 0);
3714
+ qatomic_set(&vdev->isr, 0);
3715
vdev->queue_sel = 0;
3716
vdev->config_vector = VIRTIO_NO_VECTOR;
3717
vdev->vq = g_malloc0(sizeof(VirtQueue) * VIRTIO_QUEUE_MAX);
3718
diff --git a/hw/xtensa/pic_cpu.c b/hw/xtensa/pic_cpu.c
3719
index XXXXXXX..XXXXXXX 100644
3720
--- a/hw/xtensa/pic_cpu.c
3721
+++ b/hw/xtensa/pic_cpu.c
3722
@@ -XXX,XX +XXX,XX @@ static void xtensa_set_irq(void *opaque, int irq, int active)
3723
uint32_t irq_bit = 1 << irq;
3724
3725
if (active) {
3726
- atomic_or(&env->sregs[INTSET], irq_bit);
3727
+ qatomic_or(&env->sregs[INTSET], irq_bit);
3728
} else if (env->config->interrupt[irq].inttype == INTTYPE_LEVEL) {
3729
- atomic_and(&env->sregs[INTSET], ~irq_bit);
3730
+ qatomic_and(&env->sregs[INTSET], ~irq_bit);
3731
}
3732
3733
check_interrupts(env);
3734
diff --git a/iothread.c b/iothread.c
3735
index XXXXXXX..XXXXXXX 100644
3736
--- a/iothread.c
3737
+++ b/iothread.c
3738
@@ -XXX,XX +XXX,XX @@ static void *iothread_run(void *opaque)
3739
* We must check the running state again in case it was
3740
* changed in previous aio_poll()
3741
*/
3742
- if (iothread->running && atomic_read(&iothread->run_gcontext)) {
3743
+ if (iothread->running && qatomic_read(&iothread->run_gcontext)) {
3744
g_main_loop_run(iothread->main_loop);
3745
}
3746
}
3747
@@ -XXX,XX +XXX,XX @@ static void iothread_instance_init(Object *obj)
3748
iothread->thread_id = -1;
3749
qemu_sem_init(&iothread->init_done_sem, 0);
3750
/* By default, we don't run gcontext */
3751
- atomic_set(&iothread->run_gcontext, 0);
3752
+ qatomic_set(&iothread->run_gcontext, 0);
3753
}
3754
3755
static void iothread_instance_finalize(Object *obj)
3756
@@ -XXX,XX +XXX,XX @@ IOThreadInfoList *qmp_query_iothreads(Error **errp)
3757
3758
GMainContext *iothread_get_g_main_context(IOThread *iothread)
3759
{
3760
- atomic_set(&iothread->run_gcontext, 1);
3761
+ qatomic_set(&iothread->run_gcontext, 1);
3762
aio_notify(iothread->ctx);
3763
return iothread->worker_context;
3764
}
3765
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
3766
index XXXXXXX..XXXXXXX 100644
3767
--- a/linux-user/hppa/cpu_loop.c
3768
+++ b/linux-user/hppa/cpu_loop.c
3769
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
3770
}
3771
old = tswap32(old);
3772
new = tswap32(new);
3773
- ret = atomic_cmpxchg((uint32_t *)g2h(addr), old, new);
3774
+ ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
3775
ret = tswap32(ret);
3776
break;
3777
3778
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
3779
case 0:
3780
old = *(uint8_t *)g2h(old);
3781
new = *(uint8_t *)g2h(new);
3782
- ret = atomic_cmpxchg((uint8_t *)g2h(addr), old, new);
3783
+ ret = qatomic_cmpxchg((uint8_t *)g2h(addr), old, new);
3784
ret = ret != old;
3785
break;
3786
case 1:
3787
old = *(uint16_t *)g2h(old);
3788
new = *(uint16_t *)g2h(new);
3789
- ret = atomic_cmpxchg((uint16_t *)g2h(addr), old, new);
3790
+ ret = qatomic_cmpxchg((uint16_t *)g2h(addr), old, new);
3791
ret = ret != old;
3792
break;
3793
case 2:
3794
old = *(uint32_t *)g2h(old);
3795
new = *(uint32_t *)g2h(new);
3796
- ret = atomic_cmpxchg((uint32_t *)g2h(addr), old, new);
3797
+ ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
3798
ret = ret != old;
3799
break;
3800
case 3:
3801
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
3802
o64 = *(uint64_t *)g2h(old);
3803
n64 = *(uint64_t *)g2h(new);
3804
#ifdef CONFIG_ATOMIC64
3805
- r64 = atomic_cmpxchg__nocheck((uint64_t *)g2h(addr), o64, n64);
3806
+ r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(addr),
3807
+ o64, n64);
3808
ret = r64 != o64;
3809
#else
3810
start_exclusive();
3811
diff --git a/linux-user/signal.c b/linux-user/signal.c
3812
index XXXXXXX..XXXXXXX 100644
3813
--- a/linux-user/signal.c
3814
+++ b/linux-user/signal.c
3815
@@ -XXX,XX +XXX,XX @@ int block_signals(void)
3816
sigfillset(&set);
3817
sigprocmask(SIG_SETMASK, &set, 0);
3818
3819
- return atomic_xchg(&ts->signal_pending, 1);
3820
+ return qatomic_xchg(&ts->signal_pending, 1);
3821
}
3822
3823
/* Wrapper for sigprocmask function
3824
@@ -XXX,XX +XXX,XX @@ int queue_signal(CPUArchState *env, int sig, int si_type,
3825
ts->sync_signal.info = *info;
3826
ts->sync_signal.pending = sig;
3827
/* signal that a new signal is pending */
3828
- atomic_set(&ts->signal_pending, 1);
3829
+ qatomic_set(&ts->signal_pending, 1);
3830
return 1; /* indicates that the signal was queued */
3831
}
3832
3833
@@ -XXX,XX +XXX,XX @@ void process_pending_signals(CPUArchState *cpu_env)
3834
sigset_t set;
3835
sigset_t *blocked_set;
3836
3837
- while (atomic_read(&ts->signal_pending)) {
3838
+ while (qatomic_read(&ts->signal_pending)) {
3839
/* FIXME: This is not threadsafe. */
3840
sigfillset(&set);
3841
sigprocmask(SIG_SETMASK, &set, 0);
3842
@@ -XXX,XX +XXX,XX @@ void process_pending_signals(CPUArchState *cpu_env)
3843
* of unblocking might cause us to take another host signal which
3844
* will set signal_pending again).
3845
*/
3846
- atomic_set(&ts->signal_pending, 0);
3847
+ qatomic_set(&ts->signal_pending, 0);
3848
ts->in_sigsuspend = 0;
3849
set = ts->signal_mask;
3850
sigdelset(&set, SIGSEGV);
3851
diff --git a/migration/colo-failover.c b/migration/colo-failover.c
3852
index XXXXXXX..XXXXXXX 100644
3853
--- a/migration/colo-failover.c
3854
+++ b/migration/colo-failover.c
3855
@@ -XXX,XX +XXX,XX @@ FailoverStatus failover_set_state(FailoverStatus old_state,
3856
{
3857
FailoverStatus old;
3858
3859
- old = atomic_cmpxchg(&failover_state, old_state, new_state);
3860
+ old = qatomic_cmpxchg(&failover_state, old_state, new_state);
3861
if (old == old_state) {
3862
trace_colo_failover_set_state(FailoverStatus_str(new_state));
3863
}
3864
@@ -XXX,XX +XXX,XX @@ FailoverStatus failover_set_state(FailoverStatus old_state,
3865
3866
FailoverStatus failover_get_state(void)
3867
{
3868
- return atomic_read(&failover_state);
3869
+ return qatomic_read(&failover_state);
3870
}
3871
3872
void qmp_x_colo_lost_heartbeat(Error **errp)
3873
diff --git a/migration/migration.c b/migration/migration.c
3874
index XXXXXXX..XXXXXXX 100644
3875
--- a/migration/migration.c
3876
+++ b/migration/migration.c
3877
@@ -XXX,XX +XXX,XX @@ void qmp_migrate_start_postcopy(Error **errp)
3878
* we don't error if migration has finished since that would be racy
3879
* with issuing this command.
3880
*/
3881
- atomic_set(&s->start_postcopy, true);
3882
+ qatomic_set(&s->start_postcopy, true);
3883
}
3884
3885
/* shared migration helpers */
3886
@@ -XXX,XX +XXX,XX @@ void qmp_migrate_start_postcopy(Error **errp)
3887
void migrate_set_state(int *state, int old_state, int new_state)
3888
{
3889
assert(new_state < MIGRATION_STATUS__MAX);
3890
- if (atomic_cmpxchg(state, old_state, new_state) == old_state) {
3891
+ if (qatomic_cmpxchg(state, old_state, new_state) == old_state) {
3892
trace_migrate_set_state(MigrationStatus_str(new_state));
3893
migrate_generate_event(new_state);
3894
}
3895
@@ -XXX,XX +XXX,XX @@ void qmp_migrate_recover(const char *uri, Error **errp)
3896
return;
3897
}
3898
3899
- if (atomic_cmpxchg(&mis->postcopy_recover_triggered,
3900
+ if (qatomic_cmpxchg(&mis->postcopy_recover_triggered,
3901
false, true) == true) {
3902
error_setg(errp, "Migrate recovery is triggered already");
3903
return;
3904
@@ -XXX,XX +XXX,XX @@ static MigIterateState migration_iteration_run(MigrationState *s)
3905
if (pending_size && pending_size >= s->threshold_size) {
3906
/* Still a significant amount to transfer */
3907
if (!in_postcopy && pend_pre <= s->threshold_size &&
3908
- atomic_read(&s->start_postcopy)) {
3909
+ qatomic_read(&s->start_postcopy)) {
3910
if (postcopy_start(s)) {
3911
error_report("%s: postcopy failed to start", __func__);
3912
}
3913
diff --git a/migration/multifd.c b/migration/multifd.c
3914
index XXXXXXX..XXXXXXX 100644
3915
--- a/migration/multifd.c
3916
+++ b/migration/multifd.c
3917
@@ -XXX,XX +XXX,XX @@ static int multifd_send_pages(QEMUFile *f)
3918
MultiFDPages_t *pages = multifd_send_state->pages;
3919
uint64_t transferred;
3920
3921
- if (atomic_read(&multifd_send_state->exiting)) {
3922
+ if (qatomic_read(&multifd_send_state->exiting)) {
3923
return -1;
3924
}
3925
3926
@@ -XXX,XX +XXX,XX @@ static void multifd_send_terminate_threads(Error *err)
3927
* threads at the same time, we can end calling this function
3928
* twice.
3929
*/
3930
- if (atomic_xchg(&multifd_send_state->exiting, 1)) {
3931
+ if (qatomic_xchg(&multifd_send_state->exiting, 1)) {
3932
return;
3933
}
3934
3935
@@ -XXX,XX +XXX,XX @@ static void *multifd_send_thread(void *opaque)
3936
while (true) {
3937
qemu_sem_wait(&p->sem);
3938
3939
- if (atomic_read(&multifd_send_state->exiting)) {
3940
+ if (qatomic_read(&multifd_send_state->exiting)) {
3941
break;
3942
}
3943
qemu_mutex_lock(&p->mutex);
3944
@@ -XXX,XX +XXX,XX @@ int multifd_save_setup(Error **errp)
3945
multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
3946
multifd_send_state->pages = multifd_pages_init(page_count);
3947
qemu_sem_init(&multifd_send_state->channels_ready, 0);
3948
- atomic_set(&multifd_send_state->exiting, 0);
3949
+ qatomic_set(&multifd_send_state->exiting, 0);
3950
multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
3951
3952
for (i = 0; i < thread_count; i++) {
3953
@@ -XXX,XX +XXX,XX @@ int multifd_load_setup(Error **errp)
3954
thread_count = migrate_multifd_channels();
3955
multifd_recv_state = g_malloc0(sizeof(*multifd_recv_state));
3956
multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
3957
- atomic_set(&multifd_recv_state->count, 0);
3958
+ qatomic_set(&multifd_recv_state->count, 0);
3959
qemu_sem_init(&multifd_recv_state->sem_sync, 0);
3960
multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
3961
3962
@@ -XXX,XX +XXX,XX @@ bool multifd_recv_all_channels_created(void)
3963
return true;
3964
}
3965
3966
- return thread_count == atomic_read(&multifd_recv_state->count);
3967
+ return thread_count == qatomic_read(&multifd_recv_state->count);
3968
}
3969
3970
/*
3971
@@ -XXX,XX +XXX,XX @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
3972
error_propagate_prepend(errp, local_err,
3973
"failed to receive packet"
3974
" via multifd channel %d: ",
3975
- atomic_read(&multifd_recv_state->count));
3976
+ qatomic_read(&multifd_recv_state->count));
3977
return false;
3978
}
3979
trace_multifd_recv_new_channel(id);
3980
@@ -XXX,XX +XXX,XX @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
3981
p->running = true;
3982
qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
3983
QEMU_THREAD_JOINABLE);
3984
- atomic_inc(&multifd_recv_state->count);
3985
- return atomic_read(&multifd_recv_state->count) ==
3986
+ qatomic_inc(&multifd_recv_state->count);
3987
+ return qatomic_read(&multifd_recv_state->count) ==
3988
migrate_multifd_channels();
3989
}
3990
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
3991
index XXXXXXX..XXXXXXX 100644
3992
--- a/migration/postcopy-ram.c
3993
+++ b/migration/postcopy-ram.c
3994
@@ -XXX,XX +XXX,XX @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
3995
Error *local_err = NULL;
3996
3997
/* Let the fault thread quit */
3998
- atomic_set(&mis->fault_thread_quit, 1);
3999
+ qatomic_set(&mis->fault_thread_quit, 1);
4000
postcopy_fault_thread_notify(mis);
4001
trace_postcopy_ram_incoming_cleanup_join();
4002
qemu_thread_join(&mis->fault_thread);
4003
@@ -XXX,XX +XXX,XX @@ static void mark_postcopy_blocktime_begin(uintptr_t addr, uint32_t ptid,
4004
4005
low_time_offset = get_low_time_offset(dc);
4006
if (dc->vcpu_addr[cpu] == 0) {
4007
- atomic_inc(&dc->smp_cpus_down);
4008
+ qatomic_inc(&dc->smp_cpus_down);
4009
}
4010
4011
- atomic_xchg(&dc->last_begin, low_time_offset);
4012
- atomic_xchg(&dc->page_fault_vcpu_time[cpu], low_time_offset);
4013
- atomic_xchg(&dc->vcpu_addr[cpu], addr);
4014
+ qatomic_xchg(&dc->last_begin, low_time_offset);
4015
+ qatomic_xchg(&dc->page_fault_vcpu_time[cpu], low_time_offset);
4016
+ qatomic_xchg(&dc->vcpu_addr[cpu], addr);
4017
4018
/*
4019
* check it here, not at the beginning of the function,
4020
@@ -XXX,XX +XXX,XX @@ static void mark_postcopy_blocktime_begin(uintptr_t addr, uint32_t ptid,
4021
*/
4022
already_received = ramblock_recv_bitmap_test(rb, (void *)addr);
4023
if (already_received) {
4024
- atomic_xchg(&dc->vcpu_addr[cpu], 0);
4025
- atomic_xchg(&dc->page_fault_vcpu_time[cpu], 0);
4026
- atomic_dec(&dc->smp_cpus_down);
4027
+ qatomic_xchg(&dc->vcpu_addr[cpu], 0);
4028
+ qatomic_xchg(&dc->page_fault_vcpu_time[cpu], 0);
4029
+ qatomic_dec(&dc->smp_cpus_down);
4030
}
4031
trace_mark_postcopy_blocktime_begin(addr, dc, dc->page_fault_vcpu_time[cpu],
4032
cpu, already_received);
4033
@@ -XXX,XX +XXX,XX @@ static void mark_postcopy_blocktime_end(uintptr_t addr)
4034
for (i = 0; i < smp_cpus; i++) {
4035
uint32_t vcpu_blocktime = 0;
4036
4037
- read_vcpu_time = atomic_fetch_add(&dc->page_fault_vcpu_time[i], 0);
4038
- if (atomic_fetch_add(&dc->vcpu_addr[i], 0) != addr ||
4039
+ read_vcpu_time = qatomic_fetch_add(&dc->page_fault_vcpu_time[i], 0);
4040
+ if (qatomic_fetch_add(&dc->vcpu_addr[i], 0) != addr ||
4041
read_vcpu_time == 0) {
4042
continue;
4043
}
4044
- atomic_xchg(&dc->vcpu_addr[i], 0);
4045
+ qatomic_xchg(&dc->vcpu_addr[i], 0);
4046
vcpu_blocktime = low_time_offset - read_vcpu_time;
4047
affected_cpu += 1;
4048
/* we need to know is that mark_postcopy_end was due to
4049
* faulted page, another possible case it's prefetched
4050
* page and in that case we shouldn't be here */
4051
if (!vcpu_total_blocktime &&
4052
- atomic_fetch_add(&dc->smp_cpus_down, 0) == smp_cpus) {
4053
+ qatomic_fetch_add(&dc->smp_cpus_down, 0) == smp_cpus) {
4054
vcpu_total_blocktime = true;
4055
}
4056
/* continue cycle, due to one page could affect several vCPUs */
4057
dc->vcpu_blocktime[i] += vcpu_blocktime;
4058
}
4059
4060
- atomic_sub(&dc->smp_cpus_down, affected_cpu);
4061
+ qatomic_sub(&dc->smp_cpus_down, affected_cpu);
4062
if (vcpu_total_blocktime) {
4063
- dc->total_blocktime += low_time_offset - atomic_fetch_add(
4064
+ dc->total_blocktime += low_time_offset - qatomic_fetch_add(
4065
&dc->last_begin, 0);
4066
}
4067
trace_mark_postcopy_blocktime_end(addr, dc, dc->total_blocktime,
4068
@@ -XXX,XX +XXX,XX @@ static void *postcopy_ram_fault_thread(void *opaque)
4069
error_report("%s: read() failed", __func__);
4070
}
4071
4072
- if (atomic_read(&mis->fault_thread_quit)) {
4073
+ if (qatomic_read(&mis->fault_thread_quit)) {
4074
trace_postcopy_ram_fault_thread_quit();
4075
break;
4076
}
4077
@@ -XXX,XX +XXX,XX @@ static PostcopyState incoming_postcopy_state;
4078
4079
PostcopyState postcopy_state_get(void)
4080
{
4081
- return atomic_mb_read(&incoming_postcopy_state);
4082
+ return qatomic_mb_read(&incoming_postcopy_state);
4083
}
4084
4085
/* Set the state and return the old state */
4086
PostcopyState postcopy_state_set(PostcopyState new_state)
4087
{
4088
- return atomic_xchg(&incoming_postcopy_state, new_state);
4089
+ return qatomic_xchg(&incoming_postcopy_state, new_state);
4090
}
4091
4092
/* Register a handler for external shared memory postcopy
4093
diff --git a/migration/rdma.c b/migration/rdma.c
4094
index XXXXXXX..XXXXXXX 100644
4095
--- a/migration/rdma.c
4096
+++ b/migration/rdma.c
4097
@@ -XXX,XX +XXX,XX @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
4098
size_t len = 0;
4099
4100
RCU_READ_LOCK_GUARD();
4101
- rdma = atomic_rcu_read(&rioc->rdmaout);
4102
+ rdma = qatomic_rcu_read(&rioc->rdmaout);
4103
4104
if (!rdma) {
4105
return -EIO;
4106
@@ -XXX,XX +XXX,XX @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
4107
size_t done = 0;
4108
4109
RCU_READ_LOCK_GUARD();
4110
- rdma = atomic_rcu_read(&rioc->rdmain);
4111
+ rdma = qatomic_rcu_read(&rioc->rdmain);
4112
4113
if (!rdma) {
4114
return -EIO;
4115
@@ -XXX,XX +XXX,XX @@ qio_channel_rdma_source_prepare(GSource *source,
4116
4117
RCU_READ_LOCK_GUARD();
4118
if (rsource->condition == G_IO_IN) {
4119
- rdma = atomic_rcu_read(&rsource->rioc->rdmain);
4120
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmain);
4121
} else {
4122
- rdma = atomic_rcu_read(&rsource->rioc->rdmaout);
4123
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmaout);
4124
}
4125
4126
if (!rdma) {
4127
@@ -XXX,XX +XXX,XX @@ qio_channel_rdma_source_check(GSource *source)
4128
4129
RCU_READ_LOCK_GUARD();
4130
if (rsource->condition == G_IO_IN) {
4131
- rdma = atomic_rcu_read(&rsource->rioc->rdmain);
4132
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmain);
4133
} else {
4134
- rdma = atomic_rcu_read(&rsource->rioc->rdmaout);
4135
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmaout);
4136
}
4137
4138
if (!rdma) {
4139
@@ -XXX,XX +XXX,XX @@ qio_channel_rdma_source_dispatch(GSource *source,
4140
4141
RCU_READ_LOCK_GUARD();
4142
if (rsource->condition == G_IO_IN) {
4143
- rdma = atomic_rcu_read(&rsource->rioc->rdmain);
4144
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmain);
4145
} else {
4146
- rdma = atomic_rcu_read(&rsource->rioc->rdmaout);
4147
+ rdma = qatomic_rcu_read(&rsource->rioc->rdmaout);
4148
}
4149
4150
if (!rdma) {
4151
@@ -XXX,XX +XXX,XX @@ static int qio_channel_rdma_close(QIOChannel *ioc,
4152
4153
rdmain = rioc->rdmain;
4154
if (rdmain) {
4155
- atomic_rcu_set(&rioc->rdmain, NULL);
4156
+ qatomic_rcu_set(&rioc->rdmain, NULL);
4157
}
4158
4159
rdmaout = rioc->rdmaout;
4160
if (rdmaout) {
4161
- atomic_rcu_set(&rioc->rdmaout, NULL);
4162
+ qatomic_rcu_set(&rioc->rdmaout, NULL);
4163
}
4164
4165
rcu->rdmain = rdmain;
4166
@@ -XXX,XX +XXX,XX @@ qio_channel_rdma_shutdown(QIOChannel *ioc,
4167
4168
RCU_READ_LOCK_GUARD();
4169
4170
- rdmain = atomic_rcu_read(&rioc->rdmain);
4171
- rdmaout = atomic_rcu_read(&rioc->rdmain);
4172
+ rdmain = qatomic_rcu_read(&rioc->rdmain);
4173
+ rdmaout = qatomic_rcu_read(&rioc->rdmain);
4174
4175
switch (how) {
4176
case QIO_CHANNEL_SHUTDOWN_READ:
4177
@@ -XXX,XX +XXX,XX @@ static size_t qemu_rdma_save_page(QEMUFile *f, void *opaque,
4178
int ret;
4179
4180
RCU_READ_LOCK_GUARD();
4181
- rdma = atomic_rcu_read(&rioc->rdmaout);
4182
+ rdma = qatomic_rcu_read(&rioc->rdmaout);
4183
4184
if (!rdma) {
4185
return -EIO;
4186
@@ -XXX,XX +XXX,XX @@ static int qemu_rdma_registration_handle(QEMUFile *f, void *opaque)
4187
int i = 0;
4188
4189
RCU_READ_LOCK_GUARD();
4190
- rdma = atomic_rcu_read(&rioc->rdmain);
4191
+ rdma = qatomic_rcu_read(&rioc->rdmain);
4192
4193
if (!rdma) {
4194
return -EIO;
4195
@@ -XXX,XX +XXX,XX @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, const char *name)
4196
int found = -1;
4197
4198
RCU_READ_LOCK_GUARD();
4199
- rdma = atomic_rcu_read(&rioc->rdmain);
4200
+ rdma = qatomic_rcu_read(&rioc->rdmain);
4201
4202
if (!rdma) {
4203
return -EIO;
4204
@@ -XXX,XX +XXX,XX @@ static int qemu_rdma_registration_start(QEMUFile *f, void *opaque,
4205
RDMAContext *rdma;
4206
4207
RCU_READ_LOCK_GUARD();
4208
- rdma = atomic_rcu_read(&rioc->rdmaout);
4209
+ rdma = qatomic_rcu_read(&rioc->rdmaout);
4210
if (!rdma) {
4211
return -EIO;
4212
}
4213
@@ -XXX,XX +XXX,XX @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
4214
int ret = 0;
4215
4216
RCU_READ_LOCK_GUARD();
4217
- rdma = atomic_rcu_read(&rioc->rdmaout);
4218
+ rdma = qatomic_rcu_read(&rioc->rdmaout);
4219
if (!rdma) {
4220
return -EIO;
4221
}
4222
diff --git a/monitor/hmp.c b/monitor/hmp.c
4223
index XXXXXXX..XXXXXXX 100644
4224
--- a/monitor/hmp.c
4225
+++ b/monitor/hmp.c
4226
@@ -XXX,XX +XXX,XX @@ static void monitor_event(void *opaque, QEMUChrEvent event)
4227
monitor_resume(mon);
4228
monitor_flush(mon);
4229
} else {
4230
- atomic_mb_set(&mon->suspend_cnt, 0);
4231
+ qatomic_mb_set(&mon->suspend_cnt, 0);
4232
}
4233
break;
4234
4235
case CHR_EVENT_MUX_OUT:
4236
if (mon->reset_seen) {
4237
- if (atomic_mb_read(&mon->suspend_cnt) == 0) {
4238
+ if (qatomic_mb_read(&mon->suspend_cnt) == 0) {
4239
monitor_printf(mon, "\n");
4240
}
4241
monitor_flush(mon);
4242
monitor_suspend(mon);
4243
} else {
4244
- atomic_inc(&mon->suspend_cnt);
4245
+ qatomic_inc(&mon->suspend_cnt);
4246
}
4247
qemu_mutex_lock(&mon->mon_lock);
4248
mon->mux_out = 1;
4249
diff --git a/monitor/misc.c b/monitor/misc.c
4250
index XXXXXXX..XXXXXXX 100644
4251
--- a/monitor/misc.c
4252
+++ b/monitor/misc.c
4253
@@ -XXX,XX +XXX,XX @@ static uint64_t vtop(void *ptr, Error **errp)
4254
}
4255
4256
/* Force copy-on-write if necessary. */
4257
- atomic_add((uint8_t *)ptr, 0);
4258
+ qatomic_add((uint8_t *)ptr, 0);
4259
4260
if (pread(fd, &pinfo, sizeof(pinfo), offset) != sizeof(pinfo)) {
4261
error_setg_errno(errp, errno, "Cannot read pagemap");
4262
diff --git a/monitor/monitor.c b/monitor/monitor.c
4263
index XXXXXXX..XXXXXXX 100644
4264
--- a/monitor/monitor.c
4265
+++ b/monitor/monitor.c
4266
@@ -XXX,XX +XXX,XX @@ int monitor_suspend(Monitor *mon)
4267
return -ENOTTY;
4268
}
4269
4270
- atomic_inc(&mon->suspend_cnt);
4271
+ qatomic_inc(&mon->suspend_cnt);
4272
4273
if (mon->use_io_thread) {
4274
/*
4275
@@ -XXX,XX +XXX,XX @@ void monitor_resume(Monitor *mon)
4276
return;
4277
}
4278
4279
- if (atomic_dec_fetch(&mon->suspend_cnt) == 0) {
4280
+ if (qatomic_dec_fetch(&mon->suspend_cnt) == 0) {
4281
AioContext *ctx;
4282
4283
if (mon->use_io_thread) {
4284
@@ -XXX,XX +XXX,XX @@ int monitor_can_read(void *opaque)
4285
{
4286
Monitor *mon = opaque;
4287
4288
- return !atomic_mb_read(&mon->suspend_cnt);
4289
+ return !qatomic_mb_read(&mon->suspend_cnt);
4290
}
4291
4292
void monitor_list_append(Monitor *mon)
4293
diff --git a/qemu-nbd.c b/qemu-nbd.c
4294
index XXXXXXX..XXXXXXX 100644
4295
--- a/qemu-nbd.c
4296
+++ b/qemu-nbd.c
4297
@@ -XXX,XX +XXX,XX @@ QEMU_COPYRIGHT "\n"
4298
#if HAVE_NBD_DEVICE
4299
static void termsig_handler(int signum)
4300
{
4301
- atomic_cmpxchg(&state, RUNNING, TERMINATE);
4302
+ qatomic_cmpxchg(&state, RUNNING, TERMINATE);
4303
qemu_notify_event();
4304
}
4305
#endif /* HAVE_NBD_DEVICE */
4306
diff --git a/qga/commands.c b/qga/commands.c
4307
index XXXXXXX..XXXXXXX 100644
4308
--- a/qga/commands.c
4309
+++ b/qga/commands.c
4310
@@ -XXX,XX +XXX,XX @@ GuestExecStatus *qmp_guest_exec_status(int64_t pid, Error **errp)
4311
4312
ges = g_new0(GuestExecStatus, 1);
4313
4314
- bool finished = atomic_mb_read(&gei->finished);
4315
+ bool finished = qatomic_mb_read(&gei->finished);
4316
4317
/* need to wait till output channels are closed
4318
* to be sure we captured all output at this point */
4319
if (gei->has_output) {
4320
- finished = finished && atomic_mb_read(&gei->out.closed);
4321
- finished = finished && atomic_mb_read(&gei->err.closed);
4322
+ finished = finished && qatomic_mb_read(&gei->out.closed);
4323
+ finished = finished && qatomic_mb_read(&gei->err.closed);
4324
}
4325
4326
ges->exited = finished;
4327
@@ -XXX,XX +XXX,XX @@ static void guest_exec_child_watch(GPid pid, gint status, gpointer data)
4328
(int32_t)gpid_to_int64(pid), (uint32_t)status);
4329
4330
gei->status = status;
4331
- atomic_mb_set(&gei->finished, true);
4332
+ qatomic_mb_set(&gei->finished, true);
4333
4334
g_spawn_close_pid(pid);
4335
}
4336
@@ -XXX,XX +XXX,XX @@ static gboolean guest_exec_input_watch(GIOChannel *ch,
4337
done:
4338
g_io_channel_shutdown(ch, true, NULL);
4339
g_io_channel_unref(ch);
4340
- atomic_mb_set(&p->closed, true);
4341
+ qatomic_mb_set(&p->closed, true);
4342
g_free(p->data);
4343
4344
return false;
4345
@@ -XXX,XX +XXX,XX @@ static gboolean guest_exec_output_watch(GIOChannel *ch,
4346
close:
4347
g_io_channel_shutdown(ch, true, NULL);
4348
g_io_channel_unref(ch);
4349
- atomic_mb_set(&p->closed, true);
4350
+ qatomic_mb_set(&p->closed, true);
4351
return false;
4352
}
4353
4354
diff --git a/qom/object.c b/qom/object.c
4355
index XXXXXXX..XXXXXXX 100644
4356
--- a/qom/object.c
4357
+++ b/qom/object.c
4358
@@ -XXX,XX +XXX,XX @@ Object *object_dynamic_cast_assert(Object *obj, const char *typename,
4359
Object *inst;
4360
4361
for (i = 0; obj && i < OBJECT_CLASS_CAST_CACHE; i++) {
4362
- if (atomic_read(&obj->class->object_cast_cache[i]) == typename) {
4363
+ if (qatomic_read(&obj->class->object_cast_cache[i]) == typename) {
4364
goto out;
4365
}
4366
}
4367
@@ -XXX,XX +XXX,XX @@ Object *object_dynamic_cast_assert(Object *obj, const char *typename,
4368
4369
if (obj && obj == inst) {
4370
for (i = 1; i < OBJECT_CLASS_CAST_CACHE; i++) {
4371
- atomic_set(&obj->class->object_cast_cache[i - 1],
4372
- atomic_read(&obj->class->object_cast_cache[i]));
4373
+ qatomic_set(&obj->class->object_cast_cache[i - 1],
4374
+ qatomic_read(&obj->class->object_cast_cache[i]));
4375
}
4376
- atomic_set(&obj->class->object_cast_cache[i - 1], typename);
4377
+ qatomic_set(&obj->class->object_cast_cache[i - 1], typename);
4378
}
4379
4380
out:
4381
@@ -XXX,XX +XXX,XX @@ ObjectClass *object_class_dynamic_cast_assert(ObjectClass *class,
4382
int i;
4383
4384
for (i = 0; class && i < OBJECT_CLASS_CAST_CACHE; i++) {
4385
- if (atomic_read(&class->class_cast_cache[i]) == typename) {
4386
+ if (qatomic_read(&class->class_cast_cache[i]) == typename) {
4387
ret = class;
4388
goto out;
4389
}
4390
@@ -XXX,XX +XXX,XX @@ ObjectClass *object_class_dynamic_cast_assert(ObjectClass *class,
4391
#ifdef CONFIG_QOM_CAST_DEBUG
4392
if (class && ret == class) {
4393
for (i = 1; i < OBJECT_CLASS_CAST_CACHE; i++) {
4394
- atomic_set(&class->class_cast_cache[i - 1],
4395
- atomic_read(&class->class_cast_cache[i]));
4396
+ qatomic_set(&class->class_cast_cache[i - 1],
4397
+ qatomic_read(&class->class_cast_cache[i]));
4398
}
4399
- atomic_set(&class->class_cast_cache[i - 1], typename);
4400
+ qatomic_set(&class->class_cast_cache[i - 1], typename);
4401
}
4402
out:
4403
#endif
4404
@@ -XXX,XX +XXX,XX @@ Object *object_ref(void *objptr)
4405
if (!obj) {
4406
return NULL;
4407
}
4408
- atomic_inc(&obj->ref);
4409
+ qatomic_inc(&obj->ref);
4410
return obj;
4411
}
4412
4413
@@ -XXX,XX +XXX,XX @@ void object_unref(void *objptr)
4414
g_assert(obj->ref > 0);
4415
4416
/* parent always holds a reference to its children */
4417
- if (atomic_fetch_dec(&obj->ref) == 1) {
4418
+ if (qatomic_fetch_dec(&obj->ref) == 1) {
4419
object_finalize(obj);
4420
}
4421
}
4422
diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c
4423
index XXXXXXX..XXXXXXX 100644
4424
--- a/scsi/qemu-pr-helper.c
4425
+++ b/scsi/qemu-pr-helper.c
4426
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn prh_co_entry(void *opaque)
4427
goto out;
4428
}
4429
4430
- while (atomic_read(&state) == RUNNING) {
4431
+ while (qatomic_read(&state) == RUNNING) {
4432
PRHelperRequest req;
4433
PRHelperResponse resp;
4434
int sz;
4435
@@ -XXX,XX +XXX,XX @@ static gboolean accept_client(QIOChannel *ioc, GIOCondition cond, gpointer opaqu
4436
4437
static void termsig_handler(int signum)
4438
{
4439
- atomic_cmpxchg(&state, RUNNING, TERMINATE);
4440
+ qatomic_cmpxchg(&state, RUNNING, TERMINATE);
4441
qemu_notify_event();
4442
}
4443
4444
diff --git a/softmmu/cpu-throttle.c b/softmmu/cpu-throttle.c
4445
index XXXXXXX..XXXXXXX 100644
4446
--- a/softmmu/cpu-throttle.c
4447
+++ b/softmmu/cpu-throttle.c
4448
@@ -XXX,XX +XXX,XX @@ static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
4449
}
4450
sleeptime_ns = endtime_ns - qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
4451
}
4452
- atomic_set(&cpu->throttle_thread_scheduled, 0);
4453
+ qatomic_set(&cpu->throttle_thread_scheduled, 0);
4454
}
4455
4456
static void cpu_throttle_timer_tick(void *opaque)
4457
@@ -XXX,XX +XXX,XX @@ static void cpu_throttle_timer_tick(void *opaque)
4458
return;
4459
}
4460
CPU_FOREACH(cpu) {
4461
- if (!atomic_xchg(&cpu->throttle_thread_scheduled, 1)) {
4462
+ if (!qatomic_xchg(&cpu->throttle_thread_scheduled, 1)) {
4463
async_run_on_cpu(cpu, cpu_throttle_thread,
4464
RUN_ON_CPU_NULL);
4465
}
4466
@@ -XXX,XX +XXX,XX @@ void cpu_throttle_set(int new_throttle_pct)
4467
new_throttle_pct = MIN(new_throttle_pct, CPU_THROTTLE_PCT_MAX);
4468
new_throttle_pct = MAX(new_throttle_pct, CPU_THROTTLE_PCT_MIN);
4469
4470
- atomic_set(&throttle_percentage, new_throttle_pct);
4471
+ qatomic_set(&throttle_percentage, new_throttle_pct);
4472
4473
timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) +
4474
CPU_THROTTLE_TIMESLICE_NS);
4475
@@ -XXX,XX +XXX,XX @@ void cpu_throttle_set(int new_throttle_pct)
4476
4477
void cpu_throttle_stop(void)
4478
{
4479
- atomic_set(&throttle_percentage, 0);
4480
+ qatomic_set(&throttle_percentage, 0);
4481
}
4482
4483
bool cpu_throttle_active(void)
4484
@@ -XXX,XX +XXX,XX @@ bool cpu_throttle_active(void)
4485
4486
int cpu_throttle_get_percentage(void)
4487
{
4488
- return atomic_read(&throttle_percentage);
4489
+ return qatomic_read(&throttle_percentage);
4490
}
4491
4492
void cpu_throttle_init(void)
4493
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
4494
index XXXXXXX..XXXXXXX 100644
4495
--- a/softmmu/cpus.c
4496
+++ b/softmmu/cpus.c
4497
@@ -XXX,XX +XXX,XX @@ static void cpu_update_icount_locked(CPUState *cpu)
4498
int64_t executed = cpu_get_icount_executed(cpu);
4499
cpu->icount_budget -= executed;
4500
4501
- atomic_set_i64(&timers_state.qemu_icount,
4502
+ qatomic_set_i64(&timers_state.qemu_icount,
4503
timers_state.qemu_icount + executed);
4504
}
4505
4506
@@ -XXX,XX +XXX,XX @@ static int64_t cpu_get_icount_raw_locked(void)
4507
cpu_update_icount_locked(cpu);
4508
}
4509
/* The read is protected by the seqlock, but needs atomic64 to avoid UB */
4510
- return atomic_read_i64(&timers_state.qemu_icount);
4511
+ return qatomic_read_i64(&timers_state.qemu_icount);
4512
}
4513
4514
static int64_t cpu_get_icount_locked(void)
4515
{
4516
int64_t icount = cpu_get_icount_raw_locked();
4517
- return atomic_read_i64(&timers_state.qemu_icount_bias) +
4518
+ return qatomic_read_i64(&timers_state.qemu_icount_bias) +
4519
cpu_icount_to_ns(icount);
4520
}
4521
4522
@@ -XXX,XX +XXX,XX @@ int64_t cpu_get_icount(void)
4523
4524
int64_t cpu_icount_to_ns(int64_t icount)
4525
{
4526
- return icount << atomic_read(&timers_state.icount_time_shift);
4527
+ return icount << qatomic_read(&timers_state.icount_time_shift);
4528
}
4529
4530
static int64_t cpu_get_ticks_locked(void)
4531
@@ -XXX,XX +XXX,XX @@ static void icount_adjust(void)
4532
&& last_delta + ICOUNT_WOBBLE < delta * 2
4533
&& timers_state.icount_time_shift > 0) {
4534
/* The guest is getting too far ahead. Slow time down. */
4535
- atomic_set(&timers_state.icount_time_shift,
4536
+ qatomic_set(&timers_state.icount_time_shift,
4537
timers_state.icount_time_shift - 1);
4538
}
4539
if (delta < 0
4540
&& last_delta - ICOUNT_WOBBLE > delta * 2
4541
&& timers_state.icount_time_shift < MAX_ICOUNT_SHIFT) {
4542
/* The guest is getting too far behind. Speed time up. */
4543
- atomic_set(&timers_state.icount_time_shift,
4544
+ qatomic_set(&timers_state.icount_time_shift,
4545
timers_state.icount_time_shift + 1);
4546
}
4547
last_delta = delta;
4548
- atomic_set_i64(&timers_state.qemu_icount_bias,
4549
+ qatomic_set_i64(&timers_state.qemu_icount_bias,
4550
cur_icount - (timers_state.qemu_icount
4551
<< timers_state.icount_time_shift));
4552
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
4553
@@ -XXX,XX +XXX,XX @@ static void icount_adjust_vm(void *opaque)
4554
4555
static int64_t qemu_icount_round(int64_t count)
4556
{
4557
- int shift = atomic_read(&timers_state.icount_time_shift);
4558
+ int shift = qatomic_read(&timers_state.icount_time_shift);
4559
return (count + (1 << shift) - 1) >> shift;
4560
}
4561
4562
@@ -XXX,XX +XXX,XX @@ static void icount_warp_rt(void)
4563
int64_t delta = clock - cur_icount;
4564
warp_delta = MIN(warp_delta, delta);
4565
}
4566
- atomic_set_i64(&timers_state.qemu_icount_bias,
4567
+ qatomic_set_i64(&timers_state.qemu_icount_bias,
4568
timers_state.qemu_icount_bias + warp_delta);
4569
}
4570
timers_state.vm_clock_warp_start = -1;
4571
@@ -XXX,XX +XXX,XX @@ void qtest_clock_warp(int64_t dest)
4572
4573
seqlock_write_lock(&timers_state.vm_clock_seqlock,
4574
&timers_state.vm_clock_lock);
4575
- atomic_set_i64(&timers_state.qemu_icount_bias,
4576
+ qatomic_set_i64(&timers_state.qemu_icount_bias,
4577
timers_state.qemu_icount_bias + warp);
4578
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
4579
&timers_state.vm_clock_lock);
4580
@@ -XXX,XX +XXX,XX @@ void qemu_start_warp_timer(void)
4581
*/
4582
seqlock_write_lock(&timers_state.vm_clock_seqlock,
4583
&timers_state.vm_clock_lock);
4584
- atomic_set_i64(&timers_state.qemu_icount_bias,
4585
+ qatomic_set_i64(&timers_state.qemu_icount_bias,
4586
timers_state.qemu_icount_bias + deadline);
4587
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
4588
&timers_state.vm_clock_lock);
4589
@@ -XXX,XX +XXX,XX @@ static void qemu_cpu_kick_rr_next_cpu(void)
4590
{
4591
CPUState *cpu;
4592
do {
4593
- cpu = atomic_mb_read(&tcg_current_rr_cpu);
4594
+ cpu = qatomic_mb_read(&tcg_current_rr_cpu);
4595
if (cpu) {
4596
cpu_exit(cpu);
4597
}
4598
- } while (cpu != atomic_mb_read(&tcg_current_rr_cpu));
4599
+ } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu));
4600
}
4601
4602
/* Kick all RR vCPUs */
4603
@@ -XXX,XX +XXX,XX @@ static void qemu_cpu_stop(CPUState *cpu, bool exit)
4604
4605
static void qemu_wait_io_event_common(CPUState *cpu)
4606
{
4607
- atomic_mb_set(&cpu->thread_kicked, false);
4608
+ qatomic_mb_set(&cpu->thread_kicked, false);
4609
if (cpu->stop) {
4610
qemu_cpu_stop(cpu, false);
4611
}
4612
@@ -XXX,XX +XXX,XX @@ static int tcg_cpu_exec(CPUState *cpu)
4613
ret = cpu_exec(cpu);
4614
cpu_exec_end(cpu);
4615
#ifdef CONFIG_PROFILER
4616
- atomic_set(&tcg_ctx->prof.cpu_exec_time,
4617
+ qatomic_set(&tcg_ctx->prof.cpu_exec_time,
4618
tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti);
4619
#endif
4620
return ret;
4621
@@ -XXX,XX +XXX,XX @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
4622
4623
while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
4624
4625
- atomic_mb_set(&tcg_current_rr_cpu, cpu);
4626
+ qatomic_mb_set(&tcg_current_rr_cpu, cpu);
4627
current_cpu = cpu;
4628
4629
qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
4630
@@ -XXX,XX +XXX,XX @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
4631
cpu = CPU_NEXT(cpu);
4632
} /* while (cpu && !cpu->exit_request).. */
4633
4634
- /* Does not need atomic_mb_set because a spurious wakeup is okay. */
4635
- atomic_set(&tcg_current_rr_cpu, NULL);
4636
+ /* Does not need qatomic_mb_set because a spurious wakeup is okay. */
4637
+ qatomic_set(&tcg_current_rr_cpu, NULL);
4638
4639
if (cpu && cpu->exit_request) {
4640
- atomic_mb_set(&cpu->exit_request, 0);
4641
+ qatomic_mb_set(&cpu->exit_request, 0);
4642
}
4643
4644
if (use_icount && all_cpu_threads_idle()) {
4645
@@ -XXX,XX +XXX,XX @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
4646
}
4647
}
4648
4649
- atomic_mb_set(&cpu->exit_request, 0);
4650
+ qatomic_mb_set(&cpu->exit_request, 0);
4651
qemu_wait_io_event(cpu);
4652
} while (!cpu->unplug || cpu_can_run(cpu));
4653
4654
@@ -XXX,XX +XXX,XX @@ bool qemu_mutex_iothread_locked(void)
4655
*/
4656
void qemu_mutex_lock_iothread_impl(const char *file, int line)
4657
{
4658
- QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
4659
+ QemuMutexLockFunc bql_lock = qatomic_read(&qemu_bql_mutex_lock_func);
4660
4661
g_assert(!qemu_mutex_iothread_locked());
4662
bql_lock(&qemu_global_mutex, file, line);
4663
diff --git a/softmmu/memory.c b/softmmu/memory.c
4664
index XXXXXXX..XXXXXXX 100644
4665
--- a/softmmu/memory.c
4666
+++ b/softmmu/memory.c
4667
@@ -XXX,XX +XXX,XX @@ static void flatview_destroy(FlatView *view)
4668
4669
static bool flatview_ref(FlatView *view)
4670
{
4671
- return atomic_fetch_inc_nonzero(&view->ref) > 0;
4672
+ return qatomic_fetch_inc_nonzero(&view->ref) > 0;
4673
}
4674
4675
void flatview_unref(FlatView *view)
4676
{
4677
- if (atomic_fetch_dec(&view->ref) == 1) {
4678
+ if (qatomic_fetch_dec(&view->ref) == 1) {
4679
trace_flatview_destroy_rcu(view, view->root);
4680
assert(view->root);
4681
call_rcu(view, flatview_destroy, rcu);
4682
@@ -XXX,XX +XXX,XX @@ static void address_space_set_flatview(AddressSpace *as)
4683
}
4684
4685
/* Writes are protected by the BQL. */
4686
- atomic_rcu_set(&as->current_map, new_view);
4687
+ qatomic_rcu_set(&as->current_map, new_view);
4688
if (old_view) {
4689
flatview_unref(old_view);
4690
}
4691
diff --git a/softmmu/vl.c b/softmmu/vl.c
4692
index XXXXXXX..XXXXXXX 100644
4693
--- a/softmmu/vl.c
4694
+++ b/softmmu/vl.c
4695
@@ -XXX,XX +XXX,XX @@ ShutdownCause qemu_reset_requested_get(void)
4696
4697
static int qemu_shutdown_requested(void)
4698
{
4699
- return atomic_xchg(&shutdown_requested, SHUTDOWN_CAUSE_NONE);
4700
+ return qatomic_xchg(&shutdown_requested, SHUTDOWN_CAUSE_NONE);
4701
}
4702
4703
static void qemu_kill_report(void)
4704
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
4705
index XXXXXXX..XXXXXXX 100644
4706
--- a/target/arm/mte_helper.c
4707
+++ b/target/arm/mte_helper.c
4708
@@ -XXX,XX +XXX,XX @@ static void store_tag1(uint64_t ptr, uint8_t *mem, int tag)
4709
static void store_tag1_parallel(uint64_t ptr, uint8_t *mem, int tag)
4710
{
4711
int ofs = extract32(ptr, LOG2_TAG_GRANULE, 1) * 4;
4712
- uint8_t old = atomic_read(mem);
4713
+ uint8_t old = qatomic_read(mem);
4714
4715
while (1) {
4716
uint8_t new = deposit32(old, ofs, 4, tag);
4717
- uint8_t cmp = atomic_cmpxchg(mem, old, new);
4718
+ uint8_t cmp = qatomic_cmpxchg(mem, old, new);
4719
if (likely(cmp == old)) {
4720
return;
4721
}
4722
@@ -XXX,XX +XXX,XX @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
4723
2 * TAG_GRANULE, MMU_DATA_STORE, 1, ra);
4724
if (mem1) {
4725
tag |= tag << 4;
4726
- atomic_set(mem1, tag);
4727
+ qatomic_set(mem1, tag);
4728
}
4729
}
4730
}
4731
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
4732
index XXXXXXX..XXXXXXX 100644
4733
--- a/target/hppa/op_helper.c
4734
+++ b/target/hppa/op_helper.c
4735
@@ -XXX,XX +XXX,XX @@ static void atomic_store_3(CPUHPPAState *env, target_ulong addr, uint32_t val,
4736
old = *haddr;
4737
while (1) {
4738
new = (old & ~mask) | (val & mask);
4739
- cmp = atomic_cmpxchg(haddr, old, new);
4740
+ cmp = qatomic_cmpxchg(haddr, old, new);
4741
if (cmp == old) {
4742
return;
4743
}
4744
diff --git a/target/i386/mem_helper.c b/target/i386/mem_helper.c
4745
index XXXXXXX..XXXXXXX 100644
4746
--- a/target/i386/mem_helper.c
4747
+++ b/target/i386/mem_helper.c
4748
@@ -XXX,XX +XXX,XX @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0)
4749
uint64_t *haddr = g2h(a0);
4750
cmpv = cpu_to_le64(cmpv);
4751
newv = cpu_to_le64(newv);
4752
- oldv = atomic_cmpxchg__nocheck(haddr, cmpv, newv);
4753
+ oldv = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
4754
oldv = le64_to_cpu(oldv);
4755
}
4756
#else
4757
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
4758
index XXXXXXX..XXXXXXX 100644
4759
--- a/target/i386/whpx-all.c
4760
+++ b/target/i386/whpx-all.c
4761
@@ -XXX,XX +XXX,XX @@ static int whpx_vcpu_run(CPUState *cpu)
4762
whpx_vcpu_process_async_events(cpu);
4763
if (cpu->halted) {
4764
cpu->exception_index = EXCP_HLT;
4765
- atomic_set(&cpu->exit_request, false);
4766
+ qatomic_set(&cpu->exit_request, false);
4767
return 0;
4768
}
4769
4770
@@ -XXX,XX +XXX,XX @@ static int whpx_vcpu_run(CPUState *cpu)
4771
4772
whpx_vcpu_pre_run(cpu);
4773
4774
- if (atomic_read(&cpu->exit_request)) {
4775
+ if (qatomic_read(&cpu->exit_request)) {
4776
whpx_vcpu_kick(cpu);
4777
}
4778
4779
@@ -XXX,XX +XXX,XX @@ static int whpx_vcpu_run(CPUState *cpu)
4780
qemu_mutex_lock_iothread();
4781
current_cpu = cpu;
4782
4783
- atomic_set(&cpu->exit_request, false);
4784
+ qatomic_set(&cpu->exit_request, false);
4785
4786
return ret < 0;
4787
}
4788
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
4789
index XXXXXXX..XXXXXXX 100644
4790
--- a/target/riscv/cpu_helper.c
4791
+++ b/target/riscv/cpu_helper.c
4792
@@ -XXX,XX +XXX,XX @@ restart:
4793
*pte_pa = pte = updated_pte;
4794
#else
4795
target_ulong old_pte =
4796
- atomic_cmpxchg(pte_pa, pte, updated_pte);
4797
+ qatomic_cmpxchg(pte_pa, pte, updated_pte);
4798
if (old_pte != pte) {
4799
goto restart;
4800
} else {
4801
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
4802
index XXXXXXX..XXXXXXX 100644
4803
--- a/target/s390x/mem_helper.c
4804
+++ b/target/s390x/mem_helper.c
4805
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
4806
if (parallel) {
4807
#ifdef CONFIG_USER_ONLY
4808
uint32_t *haddr = g2h(a1);
4809
- ov = atomic_cmpxchg__nocheck(haddr, cv, nv);
4810
+ ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
4811
#else
4812
TCGMemOpIdx oi = make_memop_idx(MO_TEUL | MO_ALIGN, mem_idx);
4813
ov = helper_atomic_cmpxchgl_be_mmu(env, a1, cv, nv, oi, ra);
4814
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
4815
#ifdef CONFIG_ATOMIC64
4816
# ifdef CONFIG_USER_ONLY
4817
uint64_t *haddr = g2h(a1);
4818
- ov = atomic_cmpxchg__nocheck(haddr, cv, nv);
4819
+ ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
4820
# else
4821
TCGMemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN, mem_idx);
4822
ov = helper_atomic_cmpxchgq_be_mmu(env, a1, cv, nv, oi, ra);
4823
diff --git a/target/xtensa/exc_helper.c b/target/xtensa/exc_helper.c
4824
index XXXXXXX..XXXXXXX 100644
4825
--- a/target/xtensa/exc_helper.c
4826
+++ b/target/xtensa/exc_helper.c
4827
@@ -XXX,XX +XXX,XX @@ void HELPER(check_interrupts)(CPUXtensaState *env)
4828
4829
void HELPER(intset)(CPUXtensaState *env, uint32_t v)
4830
{
4831
- atomic_or(&env->sregs[INTSET],
4832
+ qatomic_or(&env->sregs[INTSET],
4833
v & env->config->inttype_mask[INTTYPE_SOFTWARE]);
4834
}
4835
4836
static void intclear(CPUXtensaState *env, uint32_t v)
4837
{
4838
- atomic_and(&env->sregs[INTSET], ~v);
4839
+ qatomic_and(&env->sregs[INTSET], ~v);
4840
}
4841
4842
void HELPER(intclear)(CPUXtensaState *env, uint32_t v)
4843
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
4844
index XXXXXXX..XXXXXXX 100644
4845
--- a/target/xtensa/op_helper.c
4846
+++ b/target/xtensa/op_helper.c
4847
@@ -XXX,XX +XXX,XX @@ void HELPER(update_ccompare)(CPUXtensaState *env, uint32_t i)
4848
{
4849
uint64_t dcc;
4850
4851
- atomic_and(&env->sregs[INTSET],
4852
+ qatomic_and(&env->sregs[INTSET],
4853
~(1u << env->config->timerint[i]));
4854
HELPER(update_ccount)(env);
4855
dcc = (uint64_t)(env->sregs[CCOMPARE + i] - env->sregs[CCOUNT] - 1) + 1;
4856
diff --git a/tcg/tcg.c b/tcg/tcg.c
4857
index XXXXXXX..XXXXXXX 100644
4858
--- a/tcg/tcg.c
4859
+++ b/tcg/tcg.c
4860
@@ -XXX,XX +XXX,XX @@ static inline bool tcg_region_initial_alloc__locked(TCGContext *s)
4861
/* Call from a safe-work context */
4862
void tcg_region_reset_all(void)
4863
{
4864
- unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
4865
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
4866
unsigned int i;
4867
4868
qemu_mutex_lock(&region.lock);
4869
@@ -XXX,XX +XXX,XX @@ void tcg_region_reset_all(void)
4870
region.agg_size_full = 0;
4871
4872
for (i = 0; i < n_ctxs; i++) {
4873
- TCGContext *s = atomic_read(&tcg_ctxs[i]);
4874
+ TCGContext *s = qatomic_read(&tcg_ctxs[i]);
4875
bool err = tcg_region_initial_alloc__locked(s);
4876
4877
g_assert(!err);
4878
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
4879
}
4880
4881
/* Claim an entry in tcg_ctxs */
4882
- n = atomic_fetch_inc(&n_tcg_ctxs);
4883
+ n = qatomic_fetch_inc(&n_tcg_ctxs);
4884
g_assert(n < ms->smp.max_cpus);
4885
- atomic_set(&tcg_ctxs[n], s);
4886
+ qatomic_set(&tcg_ctxs[n], s);
4887
4888
if (n > 0) {
4889
alloc_tcg_plugin_context(s);
4890
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
4891
*/
4892
size_t tcg_code_size(void)
4893
{
4894
- unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
4895
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
4896
unsigned int i;
4897
size_t total;
4898
4899
qemu_mutex_lock(&region.lock);
4900
total = region.agg_size_full;
4901
for (i = 0; i < n_ctxs; i++) {
4902
- const TCGContext *s = atomic_read(&tcg_ctxs[i]);
4903
+ const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
4904
size_t size;
4905
4906
- size = atomic_read(&s->code_gen_ptr) - s->code_gen_buffer;
4907
+ size = qatomic_read(&s->code_gen_ptr) - s->code_gen_buffer;
4908
g_assert(size <= s->code_gen_buffer_size);
4909
total += size;
4910
}
4911
@@ -XXX,XX +XXX,XX @@ size_t tcg_code_capacity(void)
4912
4913
size_t tcg_tb_phys_invalidate_count(void)
4914
{
4915
- unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
4916
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
4917
unsigned int i;
4918
size_t total = 0;
4919
4920
for (i = 0; i < n_ctxs; i++) {
4921
- const TCGContext *s = atomic_read(&tcg_ctxs[i]);
4922
+ const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
4923
4924
- total += atomic_read(&s->tb_phys_invalidate_count);
4925
+ total += qatomic_read(&s->tb_phys_invalidate_count);
4926
}
4927
return total;
4928
}
4929
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tcg_tb_alloc(TCGContext *s)
4930
}
4931
goto retry;
4932
}
4933
- atomic_set(&s->code_gen_ptr, next);
4934
+ qatomic_set(&s->code_gen_ptr, next);
4935
s->data_gen_ptr = NULL;
4936
return tb;
4937
}
4938
@@ -XXX,XX +XXX,XX @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
4939
QemuLogFile *logfile;
4940
4941
rcu_read_lock();
4942
- logfile = atomic_rcu_read(&qemu_logfile);
4943
+ logfile = qatomic_rcu_read(&qemu_logfile);
4944
if (logfile) {
4945
for (; col < 40; ++col) {
4946
putc(' ', logfile->fd);
4947
@@ -XXX,XX +XXX,XX @@ void tcg_op_remove(TCGContext *s, TCGOp *op)
4948
s->nb_ops--;
4949
4950
#ifdef CONFIG_PROFILER
4951
- atomic_set(&s->prof.del_op_count, s->prof.del_op_count + 1);
4952
+ qatomic_set(&s->prof.del_op_count, s->prof.del_op_count + 1);
4953
#endif
4954
}
4955
4956
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
4957
/* avoid copy/paste errors */
4958
#define PROF_ADD(to, from, field) \
4959
do { \
4960
- (to)->field += atomic_read(&((from)->field)); \
4961
+ (to)->field += qatomic_read(&((from)->field)); \
4962
} while (0)
4963
4964
#define PROF_MAX(to, from, field) \
4965
do { \
4966
- typeof((from)->field) val__ = atomic_read(&((from)->field)); \
4967
+ typeof((from)->field) val__ = qatomic_read(&((from)->field)); \
4968
if (val__ > (to)->field) { \
4969
(to)->field = val__; \
4970
} \
4971
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
4972
static inline
4973
void tcg_profile_snapshot(TCGProfile *prof, bool counters, bool table)
4974
{
4975
- unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
4976
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
4977
unsigned int i;
4978
4979
for (i = 0; i < n_ctxs; i++) {
4980
- TCGContext *s = atomic_read(&tcg_ctxs[i]);
4981
+ TCGContext *s = qatomic_read(&tcg_ctxs[i]);
4982
const TCGProfile *orig = &s->prof;
4983
4984
if (counters) {
4985
@@ -XXX,XX +XXX,XX @@ void tcg_dump_op_count(void)
4986
4987
int64_t tcg_cpu_exec_time(void)
4988
{
4989
- unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
4990
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
4991
unsigned int i;
4992
int64_t ret = 0;
4993
4994
for (i = 0; i < n_ctxs; i++) {
4995
- const TCGContext *s = atomic_read(&tcg_ctxs[i]);
4996
+ const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
4997
const TCGProfile *prof = &s->prof;
4998
4999
- ret += atomic_read(&prof->cpu_exec_time);
5000
+ ret += qatomic_read(&prof->cpu_exec_time);
5001
}
5002
return ret;
5003
}
5004
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
5005
QTAILQ_FOREACH(op, &s->ops, link) {
5006
n++;
5007
}
5008
- atomic_set(&prof->op_count, prof->op_count + n);
5009
+ qatomic_set(&prof->op_count, prof->op_count + n);
5010
if (n > prof->op_count_max) {
5011
- atomic_set(&prof->op_count_max, n);
5012
+ qatomic_set(&prof->op_count_max, n);
5013
}
5014
5015
n = s->nb_temps;
5016
- atomic_set(&prof->temp_count, prof->temp_count + n);
5017
+ qatomic_set(&prof->temp_count, prof->temp_count + n);
5018
if (n > prof->temp_count_max) {
5019
- atomic_set(&prof->temp_count_max, n);
5020
+ qatomic_set(&prof->temp_count_max, n);
5021
}
5022
}
5023
#endif
5024
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
5025
#endif
5026
5027
#ifdef CONFIG_PROFILER
5028
- atomic_set(&prof->opt_time, prof->opt_time - profile_getclock());
5029
+ qatomic_set(&prof->opt_time, prof->opt_time - profile_getclock());
5030
#endif
5031
5032
#ifdef USE_TCG_OPTIMIZATIONS
5033
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
5034
#endif
5035
5036
#ifdef CONFIG_PROFILER
5037
- atomic_set(&prof->opt_time, prof->opt_time + profile_getclock());
5038
- atomic_set(&prof->la_time, prof->la_time - profile_getclock());
5039
+ qatomic_set(&prof->opt_time, prof->opt_time + profile_getclock());
5040
+ qatomic_set(&prof->la_time, prof->la_time - profile_getclock());
5041
#endif
5042
5043
reachable_code_pass(s);
5044
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
5045
}
5046
5047
#ifdef CONFIG_PROFILER
5048
- atomic_set(&prof->la_time, prof->la_time + profile_getclock());
5049
+ qatomic_set(&prof->la_time, prof->la_time + profile_getclock());
5050
#endif
5051
5052
#ifdef DEBUG_DISAS
5053
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
5054
TCGOpcode opc = op->opc;
5055
5056
#ifdef CONFIG_PROFILER
5057
- atomic_set(&prof->table_op_count[opc], prof->table_op_count[opc] + 1);
5058
+ qatomic_set(&prof->table_op_count[opc], prof->table_op_count[opc] + 1);
5059
#endif
5060
5061
switch (opc) {
5062
diff --git a/tcg/tci.c b/tcg/tci.c
5063
index XXXXXXX..XXXXXXX 100644
5064
--- a/tcg/tci.c
5065
+++ b/tcg/tci.c
5066
@@ -XXX,XX +XXX,XX @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
5067
case INDEX_op_goto_tb:
5068
/* Jump address is aligned */
5069
tb_ptr = QEMU_ALIGN_PTR_UP(tb_ptr, 4);
5070
- t0 = atomic_read((int32_t *)tb_ptr);
5071
+ t0 = qatomic_read((int32_t *)tb_ptr);
5072
tb_ptr += sizeof(int32_t);
5073
tci_assert(tb_ptr == old_code_ptr + op_size);
5074
tb_ptr += (int32_t)t0;
5075
diff --git a/tests/atomic64-bench.c b/tests/atomic64-bench.c
5076
index XXXXXXX..XXXXXXX 100644
5077
--- a/tests/atomic64-bench.c
5078
+++ b/tests/atomic64-bench.c
5079
@@ -XXX,XX +XXX,XX @@ static void *thread_func(void *arg)
5080
{
5081
struct thread_info *info = arg;
5082
5083
- atomic_inc(&n_ready_threads);
5084
- while (!atomic_read(&test_start)) {
5085
+ qatomic_inc(&n_ready_threads);
5086
+ while (!qatomic_read(&test_start)) {
5087
cpu_relax();
5088
}
5089
5090
- while (!atomic_read(&test_stop)) {
5091
+ while (!qatomic_read(&test_stop)) {
5092
unsigned int index;
5093
5094
info->r = xorshift64star(info->r);
5095
index = info->r & (range - 1);
5096
- atomic_read_i64(&counts[index].i64);
5097
+ qatomic_read_i64(&counts[index].i64);
5098
info->accesses++;
5099
}
5100
return NULL;
5101
@@ -XXX,XX +XXX,XX @@ static void run_test(void)
5102
{
5103
unsigned int i;
5104
5105
- while (atomic_read(&n_ready_threads) != n_threads) {
5106
+ while (qatomic_read(&n_ready_threads) != n_threads) {
5107
cpu_relax();
5108
}
5109
5110
- atomic_set(&test_start, true);
5111
+ qatomic_set(&test_start, true);
5112
g_usleep(duration * G_USEC_PER_SEC);
5113
- atomic_set(&test_stop, true);
5114
+ qatomic_set(&test_stop, true);
5115
5116
for (i = 0; i < n_threads; i++) {
5117
qemu_thread_join(&threads[i]);
5118
diff --git a/tests/atomic_add-bench.c b/tests/atomic_add-bench.c
5119
index XXXXXXX..XXXXXXX 100644
5120
--- a/tests/atomic_add-bench.c
5121
+++ b/tests/atomic_add-bench.c
5122
@@ -XXX,XX +XXX,XX @@ static void *thread_func(void *arg)
5123
{
5124
struct thread_info *info = arg;
5125
5126
- atomic_inc(&n_ready_threads);
5127
- while (!atomic_read(&test_start)) {
5128
+ qatomic_inc(&n_ready_threads);
5129
+ while (!qatomic_read(&test_start)) {
5130
cpu_relax();
5131
}
5132
5133
- while (!atomic_read(&test_stop)) {
5134
+ while (!qatomic_read(&test_stop)) {
5135
unsigned int index;
5136
5137
info->r = xorshift64star(info->r);
5138
@@ -XXX,XX +XXX,XX @@ static void *thread_func(void *arg)
5139
counts[index].val += 1;
5140
qemu_mutex_unlock(&counts[index].lock);
5141
} else {
5142
- atomic_inc(&counts[index].val);
5143
+ qatomic_inc(&counts[index].val);
5144
}
5145
}
5146
return NULL;
5147
@@ -XXX,XX +XXX,XX @@ static void run_test(void)
5148
{
5149
unsigned int i;
5150
5151
- while (atomic_read(&n_ready_threads) != n_threads) {
5152
+ while (qatomic_read(&n_ready_threads) != n_threads) {
5153
cpu_relax();
5154
}
5155
5156
- atomic_set(&test_start, true);
5157
+ qatomic_set(&test_start, true);
5158
g_usleep(duration * G_USEC_PER_SEC);
5159
- atomic_set(&test_stop, true);
5160
+ qatomic_set(&test_stop, true);
5161
5162
for (i = 0; i < n_threads; i++) {
5163
qemu_thread_join(&threads[i]);
5164
diff --git a/tests/iothread.c b/tests/iothread.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/iothread.c
+++ b/tests/iothread.c
@@ -XXX,XX +XXX,XX @@ static void *iothread_run(void *opaque)
qemu_cond_signal(&iothread->init_done_cond);
qemu_mutex_unlock(&iothread->init_done_lock);

- while (!atomic_read(&iothread->stopping)) {
+ while (!qatomic_read(&iothread->stopping)) {
aio_poll(iothread->ctx, true);
}

diff --git a/tests/qht-bench.c b/tests/qht-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qht-bench.c
+++ b/tests/qht-bench.c
@@ -XXX,XX +XXX,XX @@ static void *thread_func(void *p)

rcu_register_thread();

- atomic_inc(&n_ready_threads);
- while (!atomic_read(&test_start)) {
+ qatomic_inc(&n_ready_threads);
+ while (!qatomic_read(&test_start)) {
cpu_relax();
}

rcu_read_lock();
- while (!atomic_read(&test_stop)) {
+ while (!qatomic_read(&test_stop)) {
info->seed = xorshift64star(info->seed);
info->func(info);
}
@@ -XXX,XX +XXX,XX @@ static void run_test(void)
{
int i;

- while (atomic_read(&n_ready_threads) != n_rw_threads + n_rz_threads) {
+ while (qatomic_read(&n_ready_threads) != n_rw_threads + n_rz_threads) {
cpu_relax();
}

- atomic_set(&test_start, true);
+ qatomic_set(&test_start, true);
g_usleep(duration * G_USEC_PER_SEC);
- atomic_set(&test_stop, true);
+ qatomic_set(&test_stop, true);

for (i = 0; i < n_rw_threads; i++) {
qemu_thread_join(&rw_threads[i]);
diff --git a/tests/rcutorture.c b/tests/rcutorture.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/rcutorture.c
+++ b/tests/rcutorture.c
@@ -XXX,XX +XXX,XX @@ static void *rcu_read_perf_test(void *arg)
rcu_register_thread();

*(struct rcu_reader_data **)arg = &rcu_reader;
- atomic_inc(&nthreadsrunning);
+ qatomic_inc(&nthreadsrunning);
while (goflag == GOFLAG_INIT) {
g_usleep(1000);
}
@@ -XXX,XX +XXX,XX @@ static void *rcu_update_perf_test(void *arg)
rcu_register_thread();

*(struct rcu_reader_data **)arg = &rcu_reader;
- atomic_inc(&nthreadsrunning);
+ qatomic_inc(&nthreadsrunning);
while (goflag == GOFLAG_INIT) {
g_usleep(1000);
}
@@ -XXX,XX +XXX,XX @@ static void perftestinit(void)

static void perftestrun(int nthreads, int duration, int nreaders, int nupdaters)
{
- while (atomic_read(&nthreadsrunning) < nthreads) {
+ while (qatomic_read(&nthreadsrunning) < nthreads) {
g_usleep(1000);
}
goflag = GOFLAG_RUN;
@@ -XXX,XX +XXX,XX @@ static void *rcu_read_stress_test(void *arg)
}
while (goflag == GOFLAG_RUN) {
rcu_read_lock();
- p = atomic_rcu_read(&rcu_stress_current);
- if (atomic_read(&p->mbtest) == 0) {
+ p = qatomic_rcu_read(&rcu_stress_current);
+ if (qatomic_read(&p->mbtest) == 0) {
n_mberror++;
}
rcu_read_lock();
@@ -XXX,XX +XXX,XX @@ static void *rcu_read_stress_test(void *arg)
garbage++;
}
rcu_read_unlock();
- pc = atomic_read(&p->age);
+ pc = qatomic_read(&p->age);
rcu_read_unlock();
if ((pc > RCU_STRESS_PIPE_LEN) || (pc < 0)) {
pc = RCU_STRESS_PIPE_LEN;
@@ -XXX,XX +XXX,XX @@ static void *rcu_read_stress_test(void *arg)
static void *rcu_update_stress_test(void *arg)
{
int i, rcu_stress_idx = 0;
- struct rcu_stress *cp = atomic_read(&rcu_stress_current);
+ struct rcu_stress *cp = qatomic_read(&rcu_stress_current);

rcu_register_thread();
*(struct rcu_reader_data **)arg = &rcu_reader;
@@ -XXX,XX +XXX,XX @@ static void *rcu_update_stress_test(void *arg)
p = &rcu_stress_array[rcu_stress_idx];
/* catching up with ourselves would be a bug */
assert(p != cp);
- atomic_set(&p->mbtest, 0);
+ qatomic_set(&p->mbtest, 0);
smp_mb();
- atomic_set(&p->age, 0);
- atomic_set(&p->mbtest, 1);
- atomic_rcu_set(&rcu_stress_current, p);
+ qatomic_set(&p->age, 0);
+ qatomic_set(&p->mbtest, 1);
+ qatomic_rcu_set(&rcu_stress_current, p);
cp = p;
/*
* New RCU structure is now live, update pipe counts on old
@@ -XXX,XX +XXX,XX @@ static void *rcu_update_stress_test(void *arg)
*/
for (i = 0; i < RCU_STRESS_PIPE_LEN; i++) {
if (i != rcu_stress_idx) {
- atomic_set(&rcu_stress_array[i].age,
+ qatomic_set(&rcu_stress_array[i].age,
rcu_stress_array[i].age + 1);
}
}
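rcutorture exercises the split the rename preserves: qatomic_rcu_set() publishes a pointer with the release ordering RCU readers rely on, and qatomic_rcu_read() on the reader side provides the matching dependency ordering. A reduced sketch of that publish/read pair (hypothetical struct, QEMU's qemu/atomic.h assumed):

    struct item { int mbtest; int age; };
    static struct item *current_item;

    /* updater: initialize fields first, then publish with release semantics */
    static void publish(struct item *p)
    {
        qatomic_set(&p->mbtest, 1);
        qatomic_rcu_set(&current_item, p);   /* readers now see a fully initialized item */
    }

    /* reader: dependency-ordered load makes the fields safe to inspect */
    static int peek(void)
    {
        struct item *p = qatomic_rcu_read(&current_item);
        return p ? qatomic_read(&p->mbtest) : -1;
    }
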
diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-aio-multithread.c
+++ b/tests/test-aio-multithread.c
@@ -XXX,XX +XXX,XX @@ static bool schedule_next(int n)
{
Coroutine *co;

- co = atomic_xchg(&to_schedule[n], NULL);
+ co = qatomic_xchg(&to_schedule[n], NULL);
if (!co) {
- atomic_inc(&count_retry);
+ qatomic_inc(&count_retry);
return false;
}

if (n == id) {
- atomic_inc(&count_here);
+ qatomic_inc(&count_here);
} else {
- atomic_inc(&count_other);
+ qatomic_inc(&count_other);
}

aio_co_schedule(ctx[n], co);
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void test_multi_co_schedule_entry(void *opaque)
{
g_assert(to_schedule[id] == NULL);

- while (!atomic_mb_read(&now_stopping)) {
+ while (!qatomic_mb_read(&now_stopping)) {
int n;

n = g_test_rand_int_range(0, NUM_CONTEXTS);
schedule_next(n);

- atomic_mb_set(&to_schedule[id], qemu_coroutine_self());
+ qatomic_mb_set(&to_schedule[id], qemu_coroutine_self());
qemu_coroutine_yield();
g_assert(to_schedule[id] == NULL);
}
@@ -XXX,XX +XXX,XX @@ static void test_multi_co_schedule(int seconds)

g_usleep(seconds * 1000000);

- atomic_mb_set(&now_stopping, true);
+ qatomic_mb_set(&now_stopping, true);
for (i = 0; i < NUM_CONTEXTS; i++) {
ctx_run(i, finish_cb, NULL);
to_schedule[i] = NULL;
@@ -XXX,XX +XXX,XX @@ static CoMutex comutex;

static void coroutine_fn test_multi_co_mutex_entry(void *opaque)
{
- while (!atomic_mb_read(&now_stopping)) {
+ while (!qatomic_mb_read(&now_stopping)) {
qemu_co_mutex_lock(&comutex);
counter++;
qemu_co_mutex_unlock(&comutex);
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn test_multi_co_mutex_entry(void *opaque)
* exits before the coroutine is woken up, causing a spurious
* assertion failure.
*/
- atomic_inc(&atomic_counter);
+ qatomic_inc(&atomic_counter);
}
- atomic_dec(&running);
+ qatomic_dec(&running);
}

static void test_multi_co_mutex(int threads, int seconds)
@@ -XXX,XX +XXX,XX @@ static void test_multi_co_mutex(int threads, int seconds)

g_usleep(seconds * 1000000);

- atomic_mb_set(&now_stopping, true);
+ qatomic_mb_set(&now_stopping, true);
while (running > 0) {
g_usleep(100000);
}
@@ -XXX,XX +XXX,XX @@ static void mcs_mutex_lock(void)

nodes[id].next = -1;
nodes[id].locked = 1;
- prev = atomic_xchg(&mutex_head, id);
+ prev = qatomic_xchg(&mutex_head, id);
if (prev != -1) {
- atomic_set(&nodes[prev].next, id);
+ qatomic_set(&nodes[prev].next, id);
qemu_futex_wait(&nodes[id].locked, 1);
}
}
@@ -XXX,XX +XXX,XX @@ static void mcs_mutex_lock(void)
static void mcs_mutex_unlock(void)
{
int next;
- if (atomic_read(&nodes[id].next) == -1) {
- if (atomic_read(&mutex_head) == id &&
- atomic_cmpxchg(&mutex_head, id, -1) == id) {
+ if (qatomic_read(&nodes[id].next) == -1) {
+ if (qatomic_read(&mutex_head) == id &&
+ qatomic_cmpxchg(&mutex_head, id, -1) == id) {
/* Last item in the list, exit. */
return;
}
- while (atomic_read(&nodes[id].next) == -1) {
+ while (qatomic_read(&nodes[id].next) == -1) {
/* mcs_mutex_lock did the xchg, but has not updated
* nodes[prev].next yet.
*/
@@ -XXX,XX +XXX,XX @@ static void mcs_mutex_unlock(void)
}

/* Wake up the next in line. */
- next = atomic_read(&nodes[id].next);
+ next = qatomic_read(&nodes[id].next);
nodes[next].locked = 0;
qemu_futex_wake(&nodes[next].locked, 1);
}

static void test_multi_fair_mutex_entry(void *opaque)
{
- while (!atomic_mb_read(&now_stopping)) {
+ while (!qatomic_mb_read(&now_stopping)) {
mcs_mutex_lock();
counter++;
mcs_mutex_unlock();
- atomic_inc(&atomic_counter);
+ qatomic_inc(&atomic_counter);
}
- atomic_dec(&running);
+ qatomic_dec(&running);
}

static void test_multi_fair_mutex(int threads, int seconds)
@@ -XXX,XX +XXX,XX @@ static void test_multi_fair_mutex(int threads, int seconds)

g_usleep(seconds * 1000000);

- atomic_mb_set(&now_stopping, true);
+ qatomic_mb_set(&now_stopping, true);
while (running > 0) {
g_usleep(100000);
}
@@ -XXX,XX +XXX,XX @@ static QemuMutex mutex;

static void test_multi_mutex_entry(void *opaque)
{
- while (!atomic_mb_read(&now_stopping)) {
+ while (!qatomic_mb_read(&now_stopping)) {
qemu_mutex_lock(&mutex);
counter++;
qemu_mutex_unlock(&mutex);
- atomic_inc(&atomic_counter);
+ qatomic_inc(&atomic_counter);
}
- atomic_dec(&running);
+ qatomic_dec(&running);
}

static void test_multi_mutex(int threads, int seconds)
@@ -XXX,XX +XXX,XX @@ static void test_multi_mutex(int threads, int seconds)

g_usleep(seconds * 1000000);

- atomic_mb_set(&now_stopping, true);
+ qatomic_mb_set(&now_stopping, true);
while (running > 0) {
g_usleep(100000);
}
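The MCS-style lock in this test leans on qatomic_xchg() to enqueue a waiter and qatomic_cmpxchg() to release when nobody queued behind it. A compressed sketch of just that handoff (hypothetical single-word lock, QEMU atomics assumed):

    static int mutex_head = -1;   /* id of the current tail, -1 when free */

    static int enqueue(int id)
    {
        /* swap ourselves in as the new tail; the old tail must link to us */
        return qatomic_xchg(&mutex_head, id);
    }

    static bool try_release(int id)
    {
        /* only succeeds if nobody enqueued behind us in the meantime */
        return qatomic_cmpxchg(&mutex_head, id, -1) == id;
    }
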
diff --git a/tests/test-logging.c b/tests/test-logging.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-logging.c
+++ b/tests/test-logging.c
@@ -XXX,XX +XXX,XX @@ static void test_logfile_write(gconstpointer data)
*/
qemu_set_log_filename(file_path, &error_abort);
rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
orig_fd = logfile->fd;
g_assert(logfile && logfile->fd);
fprintf(logfile->fd, "%s 1st write to file\n", __func__);
@@ -XXX,XX +XXX,XX @@ static void test_logfile_write(gconstpointer data)

/* Change the logfile and ensure that the handle is still valid. */
qemu_set_log_filename(file_path1, &error_abort);
- logfile2 = atomic_rcu_read(&qemu_logfile);
+ logfile2 = qatomic_rcu_read(&qemu_logfile);
g_assert(logfile->fd == orig_fd);
g_assert(logfile2->fd != logfile->fd);
fprintf(logfile->fd, "%s 2nd write to file\n", __func__);
diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -XXX,XX +XXX,XX @@ static void reclaim_list_el(struct rcu_head *prcu)
struct list_element *el = container_of(prcu, struct list_element, rcu);
g_free(el);
/* Accessed only from call_rcu thread. */
- atomic_set_i64(&n_reclaims, n_reclaims + 1);
+ qatomic_set_i64(&n_reclaims, n_reclaims + 1);
}

#if TEST_LIST_TYPE == 1
@@ -XXX,XX +XXX,XX @@ static void *rcu_q_reader(void *arg)
rcu_register_thread();

*(struct rcu_reader_data **)arg = &rcu_reader;
- atomic_inc(&nthreadsrunning);
- while (atomic_read(&goflag) == GOFLAG_INIT) {
+ qatomic_inc(&nthreadsrunning);
+ while (qatomic_read(&goflag) == GOFLAG_INIT) {
g_usleep(1000);
}

- while (atomic_read(&goflag) == GOFLAG_RUN) {
+ while (qatomic_read(&goflag) == GOFLAG_RUN) {
rcu_read_lock();
TEST_LIST_FOREACH_RCU(el, &Q_list_head, entry) {
n_reads_local++;
- if (atomic_read(&goflag) == GOFLAG_STOP) {
+ if (qatomic_read(&goflag) == GOFLAG_STOP) {
break;
}
}
@@ -XXX,XX +XXX,XX @@ static void *rcu_q_updater(void *arg)
struct list_element *el, *prev_el;

*(struct rcu_reader_data **)arg = &rcu_reader;
- atomic_inc(&nthreadsrunning);
- while (atomic_read(&goflag) == GOFLAG_INIT) {
+ qatomic_inc(&nthreadsrunning);
+ while (qatomic_read(&goflag) == GOFLAG_INIT) {
g_usleep(1000);
}

- while (atomic_read(&goflag) == GOFLAG_RUN) {
+ while (qatomic_read(&goflag) == GOFLAG_RUN) {
target_el = select_random_el(RCU_Q_LEN);
j = 0;
/* FOREACH_RCU could work here but let's use both macros */
@@ -XXX,XX +XXX,XX @@ static void *rcu_q_updater(void *arg)
break;
}
}
- if (atomic_read(&goflag) == GOFLAG_STOP) {
+ if (qatomic_read(&goflag) == GOFLAG_STOP) {
break;
}
target_el = select_random_el(RCU_Q_LEN);
@@ -XXX,XX +XXX,XX @@ static void *rcu_q_updater(void *arg)
qemu_mutex_lock(&counts_mutex);
n_nodes += n_nodes_local;
n_updates += n_updates_local;
- atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
+ qatomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
qemu_mutex_unlock(&counts_mutex);
return NULL;
}
@@ -XXX,XX +XXX,XX @@ static void rcu_qtest_init(void)
static void rcu_qtest_run(int duration, int nreaders)
{
int nthreads = nreaders + 1;
- while (atomic_read(&nthreadsrunning) < nthreads) {
+ while (qatomic_read(&nthreadsrunning) < nthreads) {
g_usleep(1000);
}

- atomic_set(&goflag, GOFLAG_RUN);
+ qatomic_set(&goflag, GOFLAG_RUN);
sleep(duration);
- atomic_set(&goflag, GOFLAG_STOP);
+ qatomic_set(&goflag, GOFLAG_STOP);
wait_all_threads();
}

@@ -XXX,XX +XXX,XX @@ static void rcu_qtest(const char *test, int duration, int nreaders)
n_removed_local++;
}
qemu_mutex_lock(&counts_mutex);
- atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
+ qatomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
qemu_mutex_unlock(&counts_mutex);
synchronize_rcu();
- while (atomic_read_i64(&n_nodes_removed) > atomic_read_i64(&n_reclaims)) {
+ while (qatomic_read_i64(&n_nodes_removed) >
+ qatomic_read_i64(&n_reclaims)) {
g_usleep(100);
synchronize_rcu();
}
if (g_test_in_charge) {
- g_assert_cmpint(atomic_read_i64(&n_nodes_removed), ==,
- atomic_read_i64(&n_reclaims));
+ g_assert_cmpint(qatomic_read_i64(&n_nodes_removed), ==,
+ qatomic_read_i64(&n_reclaims));
} else {
printf("%s: %d readers; 1 updater; nodes read: " \
"%lld, nodes removed: %"PRIi64"; nodes reclaimed: %"PRIi64"\n",
test, nthreadsrunning - 1, n_reads,
- atomic_read_i64(&n_nodes_removed), atomic_read_i64(&n_reclaims));
+ qatomic_read_i64(&n_nodes_removed),
+ qatomic_read_i64(&n_reclaims));
exit(0);
}
}
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
static int worker_cb(void *opaque)
{
WorkerTestData *data = opaque;
- return atomic_fetch_inc(&data->n);
+ return qatomic_fetch_inc(&data->n);
}

static int long_cb(void *opaque)
{
WorkerTestData *data = opaque;
- if (atomic_cmpxchg(&data->n, 0, 1) == 0) {
+ if (qatomic_cmpxchg(&data->n, 0, 1) == 0) {
g_usleep(2000000);
- atomic_or(&data->n, 2);
+ qatomic_or(&data->n, 2);
}
return 0;
}
@@ -XXX,XX +XXX,XX @@ static void do_test_cancel(bool sync)
/* Cancel the jobs that haven't been started yet. */
num_canceled = 0;
for (i = 0; i < 100; i++) {
- if (atomic_cmpxchg(&data[i].n, 0, 4) == 0) {
+ if (qatomic_cmpxchg(&data[i].n, 0, 4) == 0) {
data[i].ret = -ECANCELED;
if (sync) {
bdrv_aio_cancel(data[i].aiocb);
@@ -XXX,XX +XXX,XX @@ static void do_test_cancel(bool sync)
g_assert_cmpint(num_canceled, <, 100);

for (i = 0; i < 100; i++) {
- if (data[i].aiocb && atomic_read(&data[i].n) < 4) {
+ if (data[i].aiocb && qatomic_read(&data[i].n) < 4) {
if (sync) {
/* Canceling the others will be a blocking operation. */
bdrv_aio_cancel(data[i].aiocb);
diff --git a/util/aio-posix.c b/util/aio-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -XXX,XX +XXX,XX @@

bool aio_poll_disabled(AioContext *ctx)
{
- return atomic_read(&ctx->poll_disable_cnt);
+ return qatomic_read(&ctx->poll_disable_cnt);
}

void aio_add_ready_handler(AioHandlerList *ready_list,
@@ -XXX,XX +XXX,XX @@ void aio_set_fd_handler(AioContext *ctx,
* Changing handlers is a rare event, and a little wasted polling until
* the aio_notify below is not an issue.
*/
- atomic_set(&ctx->poll_disable_cnt,
- atomic_read(&ctx->poll_disable_cnt) + poll_disable_change);
+ qatomic_set(&ctx->poll_disable_cnt,
+ qatomic_read(&ctx->poll_disable_cnt) + poll_disable_change);

ctx->fdmon_ops->update(ctx, node, new_node);
if (node) {
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
*/
use_notify_me = timeout != 0;
if (use_notify_me) {
- atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+ qatomic_set(&ctx->notify_me, qatomic_read(&ctx->notify_me) + 2);
/*
* Write ctx->notify_me before reading ctx->notified. Pairs with
* smp_mb in aio_notify().
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
smp_mb();

/* Don't block if aio_notify() was called */
- if (atomic_read(&ctx->notified)) {
+ if (qatomic_read(&ctx->notified)) {
timeout = 0;
}
}
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)

if (use_notify_me) {
/* Finish the poll before clearing the flag. */
- atomic_store_release(&ctx->notify_me,
- atomic_read(&ctx->notify_me) - 2);
+ qatomic_store_release(&ctx->notify_me,
+ qatomic_read(&ctx->notify_me) - 2);
}

aio_notify_accept(ctx);
diff --git a/util/aio-wait.c b/util/aio-wait.c
index XXXXXXX..XXXXXXX 100644
--- a/util/aio-wait.c
+++ b/util/aio-wait.c
@@ -XXX,XX +XXX,XX @@ static void dummy_bh_cb(void *opaque)
void aio_wait_kick(void)
{
/* The barrier (or an atomic op) is in the caller. */
- if (atomic_read(&global_aio_wait.num_waiters)) {
+ if (qatomic_read(&global_aio_wait.num_waiters)) {
aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL);
}
}
diff --git a/util/aio-win32.c b/util/aio-win32.c
index XXXXXXX..XXXXXXX 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
* so disable the optimization now.
*/
if (blocking) {
- atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+ qatomic_set(&ctx->notify_me, qatomic_read(&ctx->notify_me) + 2);
/*
* Write ctx->notify_me before computing the timeout
* (reading bottom half flags, etc.). Pairs with
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
if (blocking) {
assert(first);
- atomic_store_release(&ctx->notify_me, atomic_read(&ctx->notify_me) - 2);
+ qatomic_store_release(&ctx->notify_me,
+ qatomic_read(&ctx->notify_me) - 2);
aio_notify_accept(ctx);
}

diff --git a/util/async.c b/util/async.c
index XXXXXXX..XXXXXXX 100644
--- a/util/async.c
+++ b/util/async.c
@@ -XXX,XX +XXX,XX @@ static void aio_bh_enqueue(QEMUBH *bh, unsigned new_flags)
unsigned old_flags;

/*
- * The memory barrier implicit in atomic_fetch_or makes sure that:
+ * The memory barrier implicit in qatomic_fetch_or makes sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before the callback has a chance to execute and bh
* could be freed.
*/
- old_flags = atomic_fetch_or(&bh->flags, BH_PENDING | new_flags);
+ old_flags = qatomic_fetch_or(&bh->flags, BH_PENDING | new_flags);
if (!(old_flags & BH_PENDING)) {
QSLIST_INSERT_HEAD_ATOMIC(&ctx->bh_list, bh, next);
}
@@ -XXX,XX +XXX,XX @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
QSLIST_REMOVE_HEAD(head, next);

/*
- * The atomic_and is paired with aio_bh_enqueue(). The implicit memory
+ * The qatomic_and is paired with aio_bh_enqueue(). The implicit memory
* barrier ensures that the callback sees all writes done by the scheduling
* thread. It also ensures that the scheduling thread sees the cleared
* flag before bh->cb has run, and thus will call aio_notify again if
* necessary.
*/
- *flags = atomic_fetch_and(&bh->flags,
+ *flags = qatomic_fetch_and(&bh->flags,
~(BH_PENDING | BH_SCHEDULED | BH_IDLE));
return bh;
}
@@ -XXX,XX +XXX,XX @@ void qemu_bh_schedule(QEMUBH *bh)
*/
void qemu_bh_cancel(QEMUBH *bh)
{
- atomic_and(&bh->flags, ~BH_SCHEDULED);
+ qatomic_and(&bh->flags, ~BH_SCHEDULED);
}

/* This func is async.The bottom half will do the delete action at the finial
@@ -XXX,XX +XXX,XX @@ aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;

- atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) | 1);
+ qatomic_set(&ctx->notify_me, qatomic_read(&ctx->notify_me) | 1);

/*
* Write ctx->notify_me before computing the timeout
@@ -XXX,XX +XXX,XX @@ aio_ctx_check(GSource *source)
BHListSlice *s;

/* Finish computing the timeout before clearing the flag. */
- atomic_store_release(&ctx->notify_me, atomic_read(&ctx->notify_me) & ~1);
+ qatomic_store_release(&ctx->notify_me, qatomic_read(&ctx->notify_me) & ~1);
aio_notify_accept(ctx);

QSLIST_FOREACH_RCU(bh, &ctx->bh_list, next) {
@@ -XXX,XX +XXX,XX @@ void aio_notify(AioContext *ctx)
* aio_notify_accept.
*/
smp_wmb();
- atomic_set(&ctx->notified, true);
+ qatomic_set(&ctx->notified, true);

/*
* Write ctx->notified before reading ctx->notify_me. Pairs
* with smp_mb in aio_ctx_prepare or aio_poll.
*/
smp_mb();
- if (atomic_read(&ctx->notify_me)) {
+ if (qatomic_read(&ctx->notify_me)) {
event_notifier_set(&ctx->notifier);
}
}

void aio_notify_accept(AioContext *ctx)
{
- atomic_set(&ctx->notified, false);
+ qatomic_set(&ctx->notified, false);

/*
* Write ctx->notified before reading e.g. bh->flags. Pairs with smp_wmb
@@ -XXX,XX +XXX,XX @@ static bool aio_context_notifier_poll(void *opaque)
EventNotifier *e = opaque;
AioContext *ctx = container_of(e, AioContext, notifier);

- return atomic_read(&ctx->notified);
+ return qatomic_read(&ctx->notified);
}

static void co_schedule_bh_cb(void *opaque)
@@ -XXX,XX +XXX,XX @@ static void co_schedule_bh_cb(void *opaque)
aio_context_acquire(ctx);

/* Protected by write barrier in qemu_aio_coroutine_enter */
- atomic_set(&co->scheduled, NULL);
+ qatomic_set(&co->scheduled, NULL);
qemu_aio_coroutine_enter(ctx, co);
aio_context_release(ctx);
}
@@ -XXX,XX +XXX,XX @@ fail:
void aio_co_schedule(AioContext *ctx, Coroutine *co)
{
trace_aio_co_schedule(ctx, co);
- const char *scheduled = atomic_cmpxchg(&co->scheduled, NULL,
+ const char *scheduled = qatomic_cmpxchg(&co->scheduled, NULL,
__func__);

if (scheduled) {
@@ -XXX,XX +XXX,XX @@ void aio_co_wake(struct Coroutine *co)
* qemu_coroutine_enter.
*/
smp_read_barrier_depends();
- ctx = atomic_read(&co->ctx);
+ ctx = qatomic_read(&co->ctx);

aio_co_enter(ctx, co);
}
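The bottom-half comments above describe the idiom the rename keeps intact: qatomic_fetch_or() both sets BH_PENDING and acts as the full barrier that orders the callback's data against the list insertion. A stripped-down sketch of that pattern (hypothetical flag word, QEMU atomics assumed):

    #define PENDING 1

    static unsigned flags;

    static void enqueue_once(void)
    {
        /* full barrier: prior writes become visible before the node is published */
        unsigned old = qatomic_fetch_or(&flags, PENDING);

        if (!(old & PENDING)) {
            /* only the thread that set the bit inserts into the shared list */
        }
    }
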
diff --git a/util/atomic64.c b/util/atomic64.c
index XXXXXXX..XXXXXXX 100644
--- a/util/atomic64.c
+++ b/util/atomic64.c
@@ -XXX,XX +XXX,XX @@ static QemuSpin *addr_to_lock(const void *addr)
return ret; \
}

-GEN_READ(atomic_read_i64, int64_t)
-GEN_READ(atomic_read_u64, uint64_t)
+GEN_READ(qatomic_read_i64, int64_t)
+GEN_READ(qatomic_read_u64, uint64_t)
#undef GEN_READ

#define GEN_SET(name, type) \
@@ -XXX,XX +XXX,XX @@ GEN_READ(atomic_read_u64, uint64_t)
qemu_spin_unlock(lock); \
}

-GEN_SET(atomic_set_i64, int64_t)
-GEN_SET(atomic_set_u64, uint64_t)
+GEN_SET(qatomic_set_i64, int64_t)
+GEN_SET(qatomic_set_u64, uint64_t)
#undef GEN_SET

-void atomic64_init(void)
+void qatomic64_init(void)
{
int i;

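util/atomic64.c is one of the few places where the renamed identifiers are real out-of-line functions rather than macros: on hosts without native 64-bit atomics, each access takes a spinlock picked by hashing the address. Roughly what GEN_READ expands to after the rename (simplified, using the file's addr_to_lock() helper):

    int64_t qatomic_read_i64(const int64_t *ptr)
    {
        QemuSpin *lock = addr_to_lock(ptr);   /* lock chosen by address hash */
        int64_t ret;

        qemu_spin_lock(lock);
        ret = *ptr;
        qemu_spin_unlock(lock);
        return ret;
    }
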
diff --git a/util/bitmap.c b/util/bitmap.c
index XXXXXXX..XXXXXXX 100644
--- a/util/bitmap.c
+++ b/util/bitmap.c
@@ -XXX,XX +XXX,XX @@ void bitmap_set_atomic(unsigned long *map, long start, long nr)

/* First word */
if (nr - bits_to_set > 0) {
- atomic_or(p, mask_to_set);
+ qatomic_or(p, mask_to_set);
nr -= bits_to_set;
bits_to_set = BITS_PER_LONG;
mask_to_set = ~0UL;
@@ -XXX,XX +XXX,XX @@ void bitmap_set_atomic(unsigned long *map, long start, long nr)
/* Last word */
if (nr) {
mask_to_set &= BITMAP_LAST_WORD_MASK(size);
- atomic_or(p, mask_to_set);
+ qatomic_or(p, mask_to_set);
} else {
- /* If we avoided the full barrier in atomic_or(), issue a
+ /* If we avoided the full barrier in qatomic_or(), issue a
* barrier to account for the assignments in the while loop.
*/
smp_mb();
@@ -XXX,XX +XXX,XX @@ bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)

/* First word */
if (nr - bits_to_clear > 0) {
- old_bits = atomic_fetch_and(p, ~mask_to_clear);
+ old_bits = qatomic_fetch_and(p, ~mask_to_clear);
dirty |= old_bits & mask_to_clear;
nr -= bits_to_clear;
bits_to_clear = BITS_PER_LONG;
@@ -XXX,XX +XXX,XX @@ bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
if (bits_to_clear == BITS_PER_LONG) {
while (nr >= BITS_PER_LONG) {
if (*p) {
- old_bits = atomic_xchg(p, 0);
+ old_bits = qatomic_xchg(p, 0);
dirty |= old_bits;
}
nr -= BITS_PER_LONG;
@@ -XXX,XX +XXX,XX @@ bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
/* Last word */
if (nr) {
mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
- old_bits = atomic_fetch_and(p, ~mask_to_clear);
+ old_bits = qatomic_fetch_and(p, ~mask_to_clear);
dirty |= old_bits & mask_to_clear;
} else {
if (!dirty) {
@@ -XXX,XX +XXX,XX @@ void bitmap_copy_and_clear_atomic(unsigned long *dst, unsigned long *src,
long nr)
{
while (nr > 0) {
- *dst = atomic_xchg(src, 0);
+ *dst = qatomic_xchg(src, 0);
dst++;
src++;
nr -= BITS_PER_LONG;
diff --git a/util/cacheinfo.c b/util/cacheinfo.c
index XXXXXXX..XXXXXXX 100644
--- a/util/cacheinfo.c
+++ b/util/cacheinfo.c
@@ -XXX,XX +XXX,XX @@ static void __attribute__((constructor)) init_cache_info(void)
qemu_dcache_linesize = dsize;
qemu_dcache_linesize_log = ctz32(dsize);

- atomic64_init();
+ qatomic64_init();
}
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index XXXXXXX..XXXXXXX 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -XXX,XX +XXX,XX @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
struct epoll_event events[128];

/* Fall back while external clients are disabled */
- if (atomic_read(&ctx->external_disable_cnt)) {
+ if (qatomic_read(&ctx->external_disable_cnt)) {
return fdmon_poll_ops.wait(ctx, ready_list, timeout);
}

@@ -XXX,XX +XXX,XX @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
}

/* Do not upgrade while external clients are disabled */
- if (atomic_read(&ctx->external_disable_cnt)) {
+ if (qatomic_read(&ctx->external_disable_cnt)) {
return false;
}

diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index XXXXXXX..XXXXXXX 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -XXX,XX +XXX,XX @@ static void enqueue(AioHandlerSList *head, AioHandler *node, unsigned flags)
{
unsigned old_flags;

- old_flags = atomic_fetch_or(&node->flags, FDMON_IO_URING_PENDING | flags);
+ old_flags = qatomic_fetch_or(&node->flags, FDMON_IO_URING_PENDING | flags);
if (!(old_flags & FDMON_IO_URING_PENDING)) {
QSLIST_INSERT_HEAD_ATOMIC(head, node, node_submitted);
}
@@ -XXX,XX +XXX,XX @@ static AioHandler *dequeue(AioHandlerSList *head, unsigned *flags)
* telling process_cqe() to delete the AioHandler when its
* IORING_OP_POLL_ADD completes.
*/
- *flags = atomic_fetch_and(&node->flags, ~(FDMON_IO_URING_PENDING |
+ *flags = qatomic_fetch_and(&node->flags, ~(FDMON_IO_URING_PENDING |
FDMON_IO_URING_ADD));
return node;
}
@@ -XXX,XX +XXX,XX @@ static bool process_cqe(AioContext *ctx,
* with enqueue() here then we can safely clear the FDMON_IO_URING_REMOVE
* bit before IORING_OP_POLL_REMOVE is submitted.
*/
- flags = atomic_fetch_and(&node->flags, ~FDMON_IO_URING_REMOVE);
+ flags = qatomic_fetch_and(&node->flags, ~FDMON_IO_URING_REMOVE);
if (flags & FDMON_IO_URING_REMOVE) {
QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_deleted);
return false;
@@ -XXX,XX +XXX,XX @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
int ret;

/* Fall back while external clients are disabled */
- if (atomic_read(&ctx->external_disable_cnt)) {
+ if (qatomic_read(&ctx->external_disable_cnt)) {
return fdmon_poll_ops.wait(ctx, ready_list, timeout);
}

@@ -XXX,XX +XXX,XX @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
}

/* Are we falling back to fdmon-poll? */
- return atomic_read(&ctx->external_disable_cnt);
+ return qatomic_read(&ctx->external_disable_cnt);
}

static const FDMonOps fdmon_io_uring_ops = {
@@ -XXX,XX +XXX,XX @@ void fdmon_io_uring_destroy(AioContext *ctx)

/* Move handlers due to be removed onto the deleted list */
while ((node = QSLIST_FIRST_RCU(&ctx->submit_list))) {
- unsigned flags = atomic_fetch_and(&node->flags,
+ unsigned flags = qatomic_fetch_and(&node->flags,
~(FDMON_IO_URING_PENDING |
FDMON_IO_URING_ADD |
FDMON_IO_URING_REMOVE));
diff --git a/util/lockcnt.c b/util/lockcnt.c
index XXXXXXX..XXXXXXX 100644
--- a/util/lockcnt.c
+++ b/util/lockcnt.c
@@ -XXX,XX +XXX,XX @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *lockcnt, int *val,
int expected = *val;

trace_lockcnt_fast_path_attempt(lockcnt, expected, new_if_free);
- *val = atomic_cmpxchg(&lockcnt->count, expected, new_if_free);
+ *val = qatomic_cmpxchg(&lockcnt->count, expected, new_if_free);
if (*val == expected) {
trace_lockcnt_fast_path_success(lockcnt, expected, new_if_free);
*val = new_if_free;
@@ -XXX,XX +XXX,XX @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *lockcnt, int *val,
int new = expected - QEMU_LOCKCNT_STATE_LOCKED + QEMU_LOCKCNT_STATE_WAITING;

trace_lockcnt_futex_wait_prepare(lockcnt, expected, new);
- *val = atomic_cmpxchg(&lockcnt->count, expected, new);
+ *val = qatomic_cmpxchg(&lockcnt->count, expected, new);
if (*val == expected) {
*val = new;
}
@@ -XXX,XX +XXX,XX @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *lockcnt, int *val,
*waited = true;
trace_lockcnt_futex_wait(lockcnt, *val);
qemu_futex_wait(&lockcnt->count, *val);
- *val = atomic_read(&lockcnt->count);
+ *val = qatomic_read(&lockcnt->count);
trace_lockcnt_futex_wait_resume(lockcnt, *val);
continue;
}
@@ -XXX,XX +XXX,XX @@ static void lockcnt_wake(QemuLockCnt *lockcnt)

void qemu_lockcnt_inc(QemuLockCnt *lockcnt)
{
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
bool waited = false;

for (;;) {
if (val >= QEMU_LOCKCNT_COUNT_STEP) {
int expected = val;
- val = atomic_cmpxchg(&lockcnt->count, val, val + QEMU_LOCKCNT_COUNT_STEP);
+ val = qatomic_cmpxchg(&lockcnt->count, val,
+ val + QEMU_LOCKCNT_COUNT_STEP);
if (val == expected) {
break;
}
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt)

void qemu_lockcnt_dec(QemuLockCnt *lockcnt)
{
- atomic_sub(&lockcnt->count, QEMU_LOCKCNT_COUNT_STEP);
+ qatomic_sub(&lockcnt->count, QEMU_LOCKCNT_COUNT_STEP);
}

/* Decrement a counter, and return locked if it is decremented to zero.
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_dec(QemuLockCnt *lockcnt)
*/
bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt)
{
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
int locked_state = QEMU_LOCKCNT_STATE_LOCKED;
bool waited = false;

for (;;) {
if (val >= 2 * QEMU_LOCKCNT_COUNT_STEP) {
int expected = val;
- val = atomic_cmpxchg(&lockcnt->count, val, val - QEMU_LOCKCNT_COUNT_STEP);
+ val = qatomic_cmpxchg(&lockcnt->count, val,
+ val - QEMU_LOCKCNT_COUNT_STEP);
if (val == expected) {
break;
}
@@ -XXX,XX +XXX,XX @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt)
*/
bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt)
{
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
int locked_state = QEMU_LOCKCNT_STATE_LOCKED;
bool waited = false;

@@ -XXX,XX +XXX,XX @@ bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt)

void qemu_lockcnt_lock(QemuLockCnt *lockcnt)
{
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
int step = QEMU_LOCKCNT_STATE_LOCKED;
bool waited = false;

@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_inc_and_unlock(QemuLockCnt *lockcnt)
{
int expected, new, val;

- val = atomic_read(&lockcnt->count);
+ val = qatomic_read(&lockcnt->count);
do {
expected = val;
new = (val + QEMU_LOCKCNT_COUNT_STEP) & ~QEMU_LOCKCNT_STATE_MASK;
trace_lockcnt_unlock_attempt(lockcnt, val, new);
- val = atomic_cmpxchg(&lockcnt->count, val, new);
+ val = qatomic_cmpxchg(&lockcnt->count, val, new);
} while (val != expected);

trace_lockcnt_unlock_success(lockcnt, val, new);
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt)
{
int expected, new, val;

- val = atomic_read(&lockcnt->count);
+ val = qatomic_read(&lockcnt->count);
do {
expected = val;
new = val & ~QEMU_LOCKCNT_STATE_MASK;
trace_lockcnt_unlock_attempt(lockcnt, val, new);
- val = atomic_cmpxchg(&lockcnt->count, val, new);
+ val = qatomic_cmpxchg(&lockcnt->count, val, new);
} while (val != expected);

trace_lockcnt_unlock_success(lockcnt, val, new);
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt)

unsigned qemu_lockcnt_count(QemuLockCnt *lockcnt)
{
- return atomic_read(&lockcnt->count) >> QEMU_LOCKCNT_COUNT_SHIFT;
+ return qatomic_read(&lockcnt->count) >> QEMU_LOCKCNT_COUNT_SHIFT;
}
#else
void qemu_lockcnt_init(QemuLockCnt *lockcnt)
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt)
{
int old;
for (;;) {
- old = atomic_read(&lockcnt->count);
+ old = qatomic_read(&lockcnt->count);
if (old == 0) {
qemu_lockcnt_lock(lockcnt);
qemu_lockcnt_inc_and_unlock(lockcnt);
return;
} else {
- if (atomic_cmpxchg(&lockcnt->count, old, old + 1) == old) {
+ if (qatomic_cmpxchg(&lockcnt->count, old, old + 1) == old) {
return;
}
}
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt)

void qemu_lockcnt_dec(QemuLockCnt *lockcnt)
{
- atomic_dec(&lockcnt->count);
+ qatomic_dec(&lockcnt->count);
}

/* Decrement a counter, and return locked if it is decremented to zero.
@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_dec(QemuLockCnt *lockcnt)
*/
bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt)
{
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
while (val > 1) {
- int old = atomic_cmpxchg(&lockcnt->count, val, val - 1);
+ int old = qatomic_cmpxchg(&lockcnt->count, val, val - 1);
if (old != val) {
val = old;
continue;
@@ -XXX,XX +XXX,XX @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt)
}

qemu_lockcnt_lock(lockcnt);
- if (atomic_fetch_dec(&lockcnt->count) == 1) {
+ if (qatomic_fetch_dec(&lockcnt->count) == 1) {
return true;
}

@@ -XXX,XX +XXX,XX @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt)
bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt)
{
/* No need for acquire semantics if we return false. */
- int val = atomic_read(&lockcnt->count);
+ int val = qatomic_read(&lockcnt->count);
if (val > 1) {
return false;
}

qemu_lockcnt_lock(lockcnt);
- if (atomic_fetch_dec(&lockcnt->count) == 1) {
+ if (qatomic_fetch_dec(&lockcnt->count) == 1) {
return true;
}

@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_lock(QemuLockCnt *lockcnt)

void qemu_lockcnt_inc_and_unlock(QemuLockCnt *lockcnt)
{
- atomic_inc(&lockcnt->count);
+ qatomic_inc(&lockcnt->count);
qemu_mutex_unlock(&lockcnt->mutex);
}

@@ -XXX,XX +XXX,XX @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt)

unsigned qemu_lockcnt_count(QemuLockCnt *lockcnt)
{
- return atomic_read(&lockcnt->count);
+ return qatomic_read(&lockcnt->count);
}
#endif
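lockcnt is a good tour of the canonical qatomic_cmpxchg() retry loop: read, compute, compare-and-swap, and on failure feed the returned value straight back into the next attempt. A generic sketch of that shape (hypothetical counter, QEMU atomics assumed):

    static int count;

    static void add_step(int step)
    {
        int val = qatomic_read(&count);

        for (;;) {
            int expected = val;

            /* returns the previous value; equality means our swap won */
            val = qatomic_cmpxchg(&count, expected, expected + step);
            if (val == expected) {
                break;
            }
            /* somebody raced us: val already holds the fresh value, retry */
        }
    }
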
diff --git a/util/log.c b/util/log.c
index XXXXXXX..XXXXXXX 100644
--- a/util/log.c
+++ b/util/log.c
@@ -XXX,XX +XXX,XX @@ int qemu_log(const char *fmt, ...)
QemuLogFile *logfile;

rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
va_list ap;
va_start(ap, fmt);
@@ -XXX,XX +XXX,XX @@ void qemu_set_log(int log_flags)
QEMU_LOCK_GUARD(&qemu_logfile_mutex);
if (qemu_logfile && !need_to_open_file) {
logfile = qemu_logfile;
- atomic_rcu_set(&qemu_logfile, NULL);
+ qatomic_rcu_set(&qemu_logfile, NULL);
call_rcu(logfile, qemu_logfile_free, rcu);
} else if (!qemu_logfile && need_to_open_file) {
logfile = g_new0(QemuLogFile, 1);
@@ -XXX,XX +XXX,XX @@ void qemu_set_log(int log_flags)
#endif
log_append = 1;
}
- atomic_rcu_set(&qemu_logfile, logfile);
+ qatomic_rcu_set(&qemu_logfile, logfile);
}
}

@@ -XXX,XX +XXX,XX @@ void qemu_log_flush(void)
QemuLogFile *logfile;

rcu_read_lock();
- logfile = atomic_rcu_read(&qemu_logfile);
+ logfile = qatomic_rcu_read(&qemu_logfile);
if (logfile) {
fflush(logfile->fd);
}
@@ -XXX,XX +XXX,XX @@ void qemu_log_close(void)
logfile = qemu_logfile;

if (logfile) {
- atomic_rcu_set(&qemu_logfile, NULL);
+ qatomic_rcu_set(&qemu_logfile, NULL);
call_rcu(logfile, qemu_logfile_free, rcu);
}
qemu_mutex_unlock(&qemu_logfile_mutex);
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(AioContext *ctx,
/* This is the "Responsibility Hand-Off" protocol; a lock() picks from
* a concurrent unlock() the responsibility of waking somebody up.
*/
- old_handoff = atomic_mb_read(&mutex->handoff);
+ old_handoff = qatomic_mb_read(&mutex->handoff);
if (old_handoff &&
has_waiters(mutex) &&
- atomic_cmpxchg(&mutex->handoff, old_handoff, 0) == old_handoff) {
+ qatomic_cmpxchg(&mutex->handoff, old_handoff, 0) == old_handoff) {
/* There can be no concurrent pops, because there can be only
* one active handoff at a time.
*/
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
*/
i = 0;
retry_fast_path:
- waiters = atomic_cmpxchg(&mutex->locked, 0, 1);
+ waiters = qatomic_cmpxchg(&mutex->locked, 0, 1);
if (waiters != 0) {
while (waiters == 1 && ++i < 1000) {
- if (atomic_read(&mutex->ctx) == ctx) {
+ if (qatomic_read(&mutex->ctx) == ctx) {
break;
}
- if (atomic_read(&mutex->locked) == 0) {
+ if (qatomic_read(&mutex->locked) == 0) {
goto retry_fast_path;
}
cpu_relax();
}
- waiters = atomic_fetch_inc(&mutex->locked);
+ waiters = qatomic_fetch_inc(&mutex->locked);
}

if (waiters == 0) {
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
mutex->ctx = NULL;
mutex->holder = NULL;
self->locks_held--;
- if (atomic_fetch_dec(&mutex->locked) == 1) {
+ if (qatomic_fetch_dec(&mutex->locked) == 1) {
/* No waiting qemu_co_mutex_lock(). Pfew, that was easy! */
return;
}
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
}

our_handoff = mutex->sequence;
- atomic_mb_set(&mutex->handoff, our_handoff);
+ qatomic_mb_set(&mutex->handoff, our_handoff);
if (!has_waiters(mutex)) {
/* The concurrent lock has not added itself yet, so it
* will be able to pick our handoff.
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
/* Try to do the handoff protocol ourselves; if somebody else has
* already taken it, however, we're done and they're responsible.
*/
- if (atomic_cmpxchg(&mutex->handoff, our_handoff, 0) != our_handoff) {
+ if (qatomic_cmpxchg(&mutex->handoff, our_handoff, 0) != our_handoff) {
break;
}
}
diff --git a/util/qemu-coroutine-sleep.c b/util/qemu-coroutine-sleep.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine-sleep.c
+++ b/util/qemu-coroutine-sleep.c
@@ -XXX,XX +XXX,XX @@ struct QemuCoSleepState {
void qemu_co_sleep_wake(QemuCoSleepState *sleep_state)
{
/* Write of schedule protected by barrier write in aio_co_schedule */
- const char *scheduled = atomic_cmpxchg(&sleep_state->co->scheduled,
+ const char *scheduled = qatomic_cmpxchg(&sleep_state->co->scheduled,
qemu_co_sleep_ns__scheduled, NULL);

assert(scheduled == qemu_co_sleep_ns__scheduled);
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_sleep_ns_wakeable(QEMUClockType type, int64_t ns,
.user_state_pointer = sleep_state,
};

- const char *scheduled = atomic_cmpxchg(&state.co->scheduled, NULL,
+ const char *scheduled = qatomic_cmpxchg(&state.co->scheduled, NULL,
qemu_co_sleep_ns__scheduled);
if (scheduled) {
fprintf(stderr,
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
* release_pool_size and the actual size of release_pool. But
* it is just a heuristic, it does not need to be perfect.
*/
- alloc_pool_size = atomic_xchg(&release_pool_size, 0);
+ alloc_pool_size = qatomic_xchg(&release_pool_size, 0);
QSLIST_MOVE_ATOMIC(&alloc_pool, &release_pool);
co = QSLIST_FIRST(&alloc_pool);
}
@@ -XXX,XX +XXX,XX @@ static void coroutine_delete(Coroutine *co)
if (CONFIG_COROUTINE_POOL) {
if (release_pool_size < POOL_BATCH_SIZE * 2) {
QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
- atomic_inc(&release_pool_size);
+ qatomic_inc(&release_pool_size);
return;
}
if (alloc_pool_size < POOL_BATCH_SIZE) {
@@ -XXX,XX +XXX,XX @@ void qemu_aio_coroutine_enter(AioContext *ctx, Coroutine *co)

/* Cannot rely on the read barrier for to in aio_co_wake(), as there are
* callers outside of aio_co_wake() */
- const char *scheduled = atomic_mb_read(&to->scheduled);
+ const char *scheduled = qatomic_mb_read(&to->scheduled);

QSIMPLEQ_REMOVE_HEAD(&pending, co_queue_next);

diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -XXX,XX +XXX,XX @@ static struct addrinfo *inet_parse_connect_saddr(InetSocketAddress *saddr,
memset(&ai, 0, sizeof(ai));

ai.ai_flags = AI_CANONNAME | AI_ADDRCONFIG;
- if (atomic_read(&useV4Mapped)) {
+ if (qatomic_read(&useV4Mapped)) {
ai.ai_flags |= AI_V4MAPPED;
}
ai.ai_family = inet_ai_family_from_address(saddr, &err);
@@ -XXX,XX +XXX,XX @@ static struct addrinfo *inet_parse_connect_saddr(InetSocketAddress *saddr,
*/
if (rc == EAI_BADFLAGS &&
(ai.ai_flags & AI_V4MAPPED)) {
- atomic_set(&useV4Mapped, 0);
+ qatomic_set(&useV4Mapped, 0);
ai.ai_flags &= ~AI_V4MAPPED;
rc = getaddrinfo(saddr->host, saddr->port, &ai, &res);
}
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -XXX,XX +XXX,XX @@ void qemu_event_set(QemuEvent *ev)
*/
assert(ev->initialized);
smp_mb();
- if (atomic_read(&ev->value) != EV_SET) {
- if (atomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
+ if (qatomic_read(&ev->value) != EV_SET) {
+ if (qatomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
/* There were waiters, wake them up. */
qemu_futex_wake(ev, INT_MAX);
}
@@ -XXX,XX +XXX,XX @@ void qemu_event_reset(QemuEvent *ev)
unsigned value;

assert(ev->initialized);
- value = atomic_read(&ev->value);
+ value = qatomic_read(&ev->value);
smp_mb_acquire();
if (value == EV_SET) {
/*
* If there was a concurrent reset (or even reset+wait),
* do nothing. Otherwise change EV_SET->EV_FREE.
*/
- atomic_or(&ev->value, EV_FREE);
+ qatomic_or(&ev->value, EV_FREE);
}
}

@@ -XXX,XX +XXX,XX @@ void qemu_event_wait(QemuEvent *ev)
unsigned value;

assert(ev->initialized);
- value = atomic_read(&ev->value);
+ value = qatomic_read(&ev->value);
smp_mb_acquire();
if (value != EV_SET) {
if (value == EV_FREE) {
@@ -XXX,XX +XXX,XX @@ void qemu_event_wait(QemuEvent *ev)
* a concurrent busy->free transition. After the CAS, the
* event will be either set or busy.
*/
- if (atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
+ if (qatomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
return;
}
}
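QemuEvent's EV_SET/EV_FREE/EV_BUSY state machine appears twice, here and in the Win32 file below; only the wakeup primitive differs. A condensed, illustrative view of the wait side (simplified from the real function, constants as defined in qemu-thread-posix.c):

    /* EV_SET: signalled    EV_FREE: no waiters    EV_BUSY: waiters present
     *
     * set():   any state --xchg--> EV_SET      (wake everyone if it was EV_BUSY)
     * reset(): EV_SET --or--> EV_FREE          (no-op in the other states)
     * wait():  EV_FREE --cmpxchg--> EV_BUSY, then block until set()
     */
    static void event_wait(QemuEvent *ev)
    {
        unsigned value = qatomic_read(&ev->value);

        smp_mb_acquire();
        if (value != EV_SET) {
            if (value == EV_FREE &&
                qatomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
                return;   /* set() won the race, nothing to wait for */
            }
            qemu_futex_wait(ev, EV_BUSY);   /* POSIX; Win32 blocks on ev->event */
        }
    }
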
diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -XXX,XX +XXX,XX @@ void qemu_event_set(QemuEvent *ev)
* ev->value we need a full memory barrier here.
*/
smp_mb();
- if (atomic_read(&ev->value) != EV_SET) {
- if (atomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
+ if (qatomic_read(&ev->value) != EV_SET) {
+ if (qatomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
/* There were waiters, wake them up. */
SetEvent(ev->event);
}
@@ -XXX,XX +XXX,XX @@ void qemu_event_reset(QemuEvent *ev)
unsigned value;

assert(ev->initialized);
- value = atomic_read(&ev->value);
+ value = qatomic_read(&ev->value);
smp_mb_acquire();
if (value == EV_SET) {
/* If there was a concurrent reset (or even reset+wait),
* do nothing. Otherwise change EV_SET->EV_FREE.
*/
- atomic_or(&ev->value, EV_FREE);
+ qatomic_or(&ev->value, EV_FREE);
}
}

@@ -XXX,XX +XXX,XX @@ void qemu_event_wait(QemuEvent *ev)
unsigned value;

assert(ev->initialized);
- value = atomic_read(&ev->value);
+ value = qatomic_read(&ev->value);
smp_mb_acquire();
if (value != EV_SET) {
if (value == EV_FREE) {
@@ -XXX,XX +XXX,XX @@ void qemu_event_wait(QemuEvent *ev)
* because there cannot be a concurrent busy->free transition.
* After the CAS, the event will be either set or busy.
*/
- if (atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
+ if (qatomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
value = EV_SET;
} else {
value = EV_BUSY;
diff --git a/util/qemu-timer.c b/util/qemu-timer.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-timer.c
+++ b/util/qemu-timer.c
@@ -XXX,XX +XXX,XX @@ void qemu_clock_enable(QEMUClockType type, bool enabled)

bool timerlist_has_timers(QEMUTimerList *timer_list)
{
- return !!atomic_read(&timer_list->active_timers);
+ return !!qatomic_read(&timer_list->active_timers);
}

bool qemu_clock_has_timers(QEMUClockType type)
@@ -XXX,XX +XXX,XX @@ bool timerlist_expired(QEMUTimerList *timer_list)
{
int64_t expire_time;

- if (!atomic_read(&timer_list->active_timers)) {
+ if (!qatomic_read(&timer_list->active_timers)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ int64_t timerlist_deadline_ns(QEMUTimerList *timer_list)
int64_t delta;
int64_t expire_time;

- if (!atomic_read(&timer_list->active_timers)) {
+ if (!qatomic_read(&timer_list->active_timers)) {
return -1;
}

@@ -XXX,XX +XXX,XX @@ static void timer_del_locked(QEMUTimerList *timer_list, QEMUTimer *ts)
if (!t)
break;
if (t == ts) {
- atomic_set(pt, t->next);
+ qatomic_set(pt, t->next);
break;
}
pt = &t->next;
@@ -XXX,XX +XXX,XX @@ static bool timer_mod_ns_locked(QEMUTimerList *timer_list,
}
ts->expire_time = MAX(expire_time, 0);
ts->next = *pt;
- atomic_set(pt, ts);
+ qatomic_set(pt, ts);

return pt == &timer_list->active_timers;
}
@@ -XXX,XX +XXX,XX @@ bool timerlist_run_timers(QEMUTimerList *timer_list)
QEMUTimerCB *cb;
void *opaque;

- if (!atomic_read(&timer_list->active_timers)) {
+ if (!qatomic_read(&timer_list->active_timers)) {
return false;
}

diff --git a/util/qht.c b/util/qht.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qht.c
+++ b/util/qht.c
@@ -XXX,XX +XXX,XX @@ static inline void qht_unlock(struct qht *ht)

/*
* Note: reading partially-updated pointers in @pointers could lead to
- * segfaults. We thus access them with atomic_read/set; this guarantees
+ * segfaults. We thus access them with qatomic_read/set; this guarantees
* that the compiler makes all those accesses atomic. We also need the
- * volatile-like behavior in atomic_read, since otherwise the compiler
+ * volatile-like behavior in qatomic_read, since otherwise the compiler
* might refetch the pointer.
- * atomic_read's are of course not necessary when the bucket lock is held.
+ * qatomic_read's are of course not necessary when the bucket lock is held.
*
* If both ht->lock and b->lock are grabbed, ht->lock should always
* be grabbed first.
@@ -XXX,XX +XXX,XX @@ void qht_map_lock_buckets__no_stale(struct qht *ht, struct qht_map **pmap)
{
struct qht_map *map;

- map = atomic_rcu_read(&ht->map);
+ map = qatomic_rcu_read(&ht->map);
qht_map_lock_buckets(map);
if (likely(!qht_map_is_stale__locked(ht, map))) {
*pmap = map;
@@ -XXX,XX +XXX,XX @@ struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht, uint32_t hash,
struct qht_bucket *b;
struct qht_map *map;

- map = atomic_rcu_read(&ht->map);
+ map = qatomic_rcu_read(&ht->map);
b = qht_map_to_bucket(map, hash);

qemu_spin_lock(&b->lock);
@@ -XXX,XX +XXX,XX @@ struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht, uint32_t hash,

static inline bool qht_map_needs_resize(const struct qht_map *map)
{
- return atomic_read(&map->n_added_buckets) > map->n_added_buckets_threshold;
+ return qatomic_read(&map->n_added_buckets) >
+ map->n_added_buckets_threshold;
}

static inline void qht_chain_destroy(const struct qht_bucket *head)
@@ -XXX,XX +XXX,XX @@ void qht_init(struct qht *ht, qht_cmp_func_t cmp, size_t n_elems,
ht->mode = mode;
qemu_mutex_init(&ht->lock);
map = qht_map_create(n_buckets);
- atomic_rcu_set(&ht->map, map);
+ qatomic_rcu_set(&ht->map, map);
}

/* call only when there are no readers/writers left */
@@ -XXX,XX +XXX,XX @@ static void qht_bucket_reset__locked(struct qht_bucket *head)
if (b->pointers[i] == NULL) {
goto done;
}
- atomic_set(&b->hashes[i], 0);
- atomic_set(&b->pointers[i], NULL);
+ qatomic_set(&b->hashes[i], 0);
+ qatomic_set(&b->pointers[i], NULL);
}
b = b->next;
} while (b);
@@ -XXX,XX +XXX,XX @@ void *qht_do_lookup(const struct qht_bucket *head, qht_lookup_func_t func,

do {
for (i = 0; i < QHT_BUCKET_ENTRIES; i++) {
- if (atomic_read(&b->hashes[i]) == hash) {
+ if (qatomic_read(&b->hashes[i]) == hash) {
/* The pointer is dereferenced before seqlock_read_retry,
* so (unlike qht_insert__locked) we need to use
- * atomic_rcu_read here.
+ * qatomic_rcu_read here.
*/
- void *p = atomic_rcu_read(&b->pointers[i]);
+ void *p = qatomic_rcu_read(&b->pointers[i]);

if (likely(p) && likely(func(p, userp))) {
return p;
}
}
}
- b = atomic_rcu_read(&b->next);
+ b = qatomic_rcu_read(&b->next);
} while (b);

return NULL;
@@ -XXX,XX +XXX,XX @@ void *qht_lookup_custom(const struct qht *ht, const void *userp, uint32_t hash,
unsigned int version;
void *ret;

- map = atomic_rcu_read(&ht->map);
+ map = qatomic_rcu_read(&ht->map);
b = qht_map_to_bucket(map, hash);

version = seqlock_read_begin(&b->sequence);
@@ -XXX,XX +XXX,XX @@ static void *qht_insert__locked(const struct qht *ht, struct qht_map *map,
memset(b, 0, sizeof(*b));
new = b;
i = 0;
- atomic_inc(&map->n_added_buckets);
+ qatomic_inc(&map->n_added_buckets);
if (unlikely(qht_map_needs_resize(map)) && needs_resize) {
*needs_resize = true;
}
@@ -XXX,XX +XXX,XX @@ static void *qht_insert__locked(const struct qht *ht, struct qht_map *map,
/* found an empty key: acquire the seqlock and write */
seqlock_write_begin(&head->sequence);
if (new) {
- atomic_rcu_set(&prev->next, b);
+ qatomic_rcu_set(&prev->next, b);
}
/* smp_wmb() implicit in seqlock_write_begin. */
- atomic_set(&b->hashes[i], hash);
- atomic_set(&b->pointers[i], p);
+ qatomic_set(&b->hashes[i], hash);
+ qatomic_set(&b->pointers[i], p);
seqlock_write_end(&head->sequence);
return NULL;
}
@@ -XXX,XX +XXX,XX @@ qht_entry_move(struct qht_bucket *to, int i, struct qht_bucket *from, int j)
qht_debug_assert(to->pointers[i]);
qht_debug_assert(from->pointers[j]);

- atomic_set(&to->hashes[i], from->hashes[j]);
- atomic_set(&to->pointers[i], from->pointers[j]);
+ qatomic_set(&to->hashes[i], from->hashes[j]);
+ qatomic_set(&to->pointers[i], from->pointers[j]);

- atomic_set(&from->hashes[j], 0);
- atomic_set(&from->pointers[j], NULL);
+ qatomic_set(&from->hashes[j], 0);
+ qatomic_set(&from->pointers[j], NULL);
}

/*
@@ -XXX,XX +XXX,XX @@ static inline void qht_bucket_remove_entry(struct qht_bucket *orig, int pos)

if (qht_entry_is_last(orig, pos)) {
orig->hashes[pos] = 0;
- atomic_set(&orig->pointers[pos], NULL);
+ qatomic_set(&orig->pointers[pos], NULL);
return;
}
do {
@@ -XXX,XX +XXX,XX @@ do_qht_iter(struct qht *ht, const struct qht_iter *iter, void *userp)
{
struct qht_map *map;

- map = atomic_rcu_read(&ht->map);
+ map = qatomic_rcu_read(&ht->map);
qht_map_lock_buckets(map);
qht_map_iter__all_locked(map, iter, userp);
qht_map_unlock_buckets(map);
@@ -XXX,XX +XXX,XX @@ static void qht_do_resize_reset(struct qht *ht, struct qht_map *new, bool reset)
qht_map_iter__all_locked(old, &iter, &data);
qht_map_debug__all_locked(new);

- atomic_rcu_set(&ht->map, new);
+ qatomic_rcu_set(&ht->map, new);
qht_map_unlock_buckets(old);
call_rcu(old, qht_map_destroy, rcu);
}
@@ -XXX,XX +XXX,XX @@ void qht_statistics_init(const struct qht *ht, struct qht_stats *stats)
const struct qht_map *map;
int i;

- map = atomic_rcu_read(&ht->map);
+ map = qatomic_rcu_read(&ht->map);

stats->used_head_buckets = 0;
stats->entries = 0;
@@ -XXX,XX +XXX,XX @@ void qht_statistics_init(const struct qht *ht, struct qht_stats *stats)
b = head;
do {
for (j = 0; j < QHT_BUCKET_ENTRIES; j++) {
- if (atomic_read(&b->pointers[j]) == NULL) {
+ if (qatomic_read(&b->pointers[j]) == NULL) {
break;
}
entries++;
}
buckets++;
- b = atomic_rcu_read(&b->next);
+ b = qatomic_rcu_read(&b->next);
} while (b);
} while (seqlock_read_retry(&head->sequence, version));

diff --git a/util/qsp.c b/util/qsp.c
6794
index XXXXXXX..XXXXXXX 100644
6795
--- a/util/qsp.c
6796
+++ b/util/qsp.c
6797
@@ -XXX,XX +XXX,XX @@ static void qsp_do_init(void)
6798
6799
static __attribute__((noinline)) void qsp_init__slowpath(void)
6800
{
6801
- if (atomic_cmpxchg(&qsp_initializing, false, true) == false) {
6802
+ if (qatomic_cmpxchg(&qsp_initializing, false, true) == false) {
6803
qsp_do_init();
6804
- atomic_set(&qsp_initialized, true);
6805
+ qatomic_set(&qsp_initialized, true);
6806
} else {
6807
- while (!atomic_read(&qsp_initialized)) {
6808
+ while (!qatomic_read(&qsp_initialized)) {
6809
cpu_relax();
6810
}
6811
}
6812
@@ -XXX,XX +XXX,XX @@ static __attribute__((noinline)) void qsp_init__slowpath(void)
6813
/* qsp_init() must be called from _all_ exported functions */
6814
static inline void qsp_init(void)
6815
{
6816
- if (likely(atomic_read(&qsp_initialized))) {
6817
+ if (likely(qatomic_read(&qsp_initialized))) {
6818
return;
6819
}
6820
qsp_init__slowpath();
6821
@@ -XXX,XX +XXX,XX @@ static QSPEntry *qsp_entry_get(const void *obj, const char *file, int line,
6822
*/
6823
static inline void do_qsp_entry_record(QSPEntry *e, int64_t delta, bool acq)
6824
{
6825
- atomic_set_u64(&e->ns, e->ns + delta);
6826
+ qatomic_set_u64(&e->ns, e->ns + delta);
6827
if (acq) {
6828
- atomic_set_u64(&e->n_acqs, e->n_acqs + 1);
6829
+ qatomic_set_u64(&e->n_acqs, e->n_acqs + 1);
6830
}
6831
}
6832
6833
@@ -XXX,XX +XXX,XX @@ qsp_cond_timedwait(QemuCond *cond, QemuMutex *mutex, int ms,
6834
6835
bool qsp_is_enabled(void)
6836
{
6837
- return atomic_read(&qemu_mutex_lock_func) == qsp_mutex_lock;
6838
+ return qatomic_read(&qemu_mutex_lock_func) == qsp_mutex_lock;
6839
}
6840
6841
void qsp_enable(void)
6842
{
6843
- atomic_set(&qemu_mutex_lock_func, qsp_mutex_lock);
6844
- atomic_set(&qemu_mutex_trylock_func, qsp_mutex_trylock);
6845
- atomic_set(&qemu_bql_mutex_lock_func, qsp_bql_mutex_lock);
6846
- atomic_set(&qemu_rec_mutex_lock_func, qsp_rec_mutex_lock);
6847
- atomic_set(&qemu_rec_mutex_trylock_func, qsp_rec_mutex_trylock);
6848
- atomic_set(&qemu_cond_wait_func, qsp_cond_wait);
6849
- atomic_set(&qemu_cond_timedwait_func, qsp_cond_timedwait);
6850
+ qatomic_set(&qemu_mutex_lock_func, qsp_mutex_lock);
6851
+ qatomic_set(&qemu_mutex_trylock_func, qsp_mutex_trylock);
6852
+ qatomic_set(&qemu_bql_mutex_lock_func, qsp_bql_mutex_lock);
6853
+ qatomic_set(&qemu_rec_mutex_lock_func, qsp_rec_mutex_lock);
6854
+ qatomic_set(&qemu_rec_mutex_trylock_func, qsp_rec_mutex_trylock);
6855
+ qatomic_set(&qemu_cond_wait_func, qsp_cond_wait);
6856
+ qatomic_set(&qemu_cond_timedwait_func, qsp_cond_timedwait);
6857
}
6858
6859
void qsp_disable(void)
6860
{
6861
- atomic_set(&qemu_mutex_lock_func, qemu_mutex_lock_impl);
6862
- atomic_set(&qemu_mutex_trylock_func, qemu_mutex_trylock_impl);
6863
- atomic_set(&qemu_bql_mutex_lock_func, qemu_mutex_lock_impl);
6864
- atomic_set(&qemu_rec_mutex_lock_func, qemu_rec_mutex_lock_impl);
6865
- atomic_set(&qemu_rec_mutex_trylock_func, qemu_rec_mutex_trylock_impl);
6866
- atomic_set(&qemu_cond_wait_func, qemu_cond_wait_impl);
6867
- atomic_set(&qemu_cond_timedwait_func, qemu_cond_timedwait_impl);
6868
+ qatomic_set(&qemu_mutex_lock_func, qemu_mutex_lock_impl);
6869
+ qatomic_set(&qemu_mutex_trylock_func, qemu_mutex_trylock_impl);
6870
+ qatomic_set(&qemu_bql_mutex_lock_func, qemu_mutex_lock_impl);
6871
+ qatomic_set(&qemu_rec_mutex_lock_func, qemu_rec_mutex_lock_impl);
6872
+ qatomic_set(&qemu_rec_mutex_trylock_func, qemu_rec_mutex_trylock_impl);
6873
+ qatomic_set(&qemu_cond_wait_func, qemu_cond_wait_impl);
6874
+ qatomic_set(&qemu_cond_timedwait_func, qemu_cond_timedwait_impl);
6875
}
6876
6877
static gint qsp_tree_cmp(gconstpointer ap, gconstpointer bp, gpointer up)
6878
@@ -XXX,XX +XXX,XX @@ static void qsp_aggregate(void *p, uint32_t h, void *up)
6879
* The entry is in the global hash table; read from it atomically (as in
6880
* "read once").
6881
*/
6882
- agg->ns += atomic_read_u64(&e->ns);
6883
- agg->n_acqs += atomic_read_u64(&e->n_acqs);
6884
+ agg->ns += qatomic_read_u64(&e->ns);
6885
+ agg->n_acqs += qatomic_read_u64(&e->n_acqs);
6886
}
6887
6888
static void qsp_iter_diff(void *p, uint32_t hash, void *htp)
6889
@@ -XXX,XX +XXX,XX @@ static void qsp_mktree(GTree *tree, bool callsite_coalesce)
6890
* with the snapshot.
6891
*/
6892
WITH_RCU_READ_LOCK_GUARD() {
6893
- QSPSnapshot *snap = atomic_rcu_read(&qsp_snapshot);
6894
+ QSPSnapshot *snap = qatomic_rcu_read(&qsp_snapshot);
6895
6896
/* Aggregate all results from the global hash table into a local one */
6897
qht_init(&ht, qsp_entry_no_thread_cmp, QSP_INITIAL_SIZE,
6898
@@ -XXX,XX +XXX,XX @@ void qsp_reset(void)
6899
qht_iter(&qsp_ht, qsp_aggregate, &new->ht);
6900
6901
/* replace the previous snapshot, if any */
6902
- old = atomic_xchg(&qsp_snapshot, new);
6903
+ old = qatomic_xchg(&qsp_snapshot, new);
6904
if (old) {
6905
call_rcu(old, qsp_snapshot_destroy, rcu);
6906
}
6907
diff --git a/util/rcu.c b/util/rcu.c
6908
index XXXXXXX..XXXXXXX 100644
6909
--- a/util/rcu.c
6910
+++ b/util/rcu.c
6911
@@ -XXX,XX +XXX,XX @@ static inline int rcu_gp_ongoing(unsigned long *ctr)
6912
{
6913
unsigned long v;
6914
6915
- v = atomic_read(ctr);
6916
+ v = qatomic_read(ctr);
6917
return v && (v != rcu_gp_ctr);
6918
}
6919
6920
@@ -XXX,XX +XXX,XX @@ static void wait_for_readers(void)
6921
*/
6922
qemu_event_reset(&rcu_gp_event);
6923
6924
- /* Instead of using atomic_mb_set for index->waiting, and
6925
- * atomic_mb_read for index->ctr, memory barriers are placed
6926
+ /* Instead of using qatomic_mb_set for index->waiting, and
6927
+ * qatomic_mb_read for index->ctr, memory barriers are placed
6928
* manually since writes to different threads are independent.
6929
* qemu_event_reset has acquire semantics, so no memory barrier
6930
* is needed here.
6931
*/
6932
QLIST_FOREACH(index, &registry, node) {
6933
- atomic_set(&index->waiting, true);
6934
+ qatomic_set(&index->waiting, true);
6935
}
6936
6937
/* Here, order the stores to index->waiting before the loads of
6938
@@ -XXX,XX +XXX,XX @@ static void wait_for_readers(void)
6939
/* No need for mb_set here, worst of all we
6940
* get some extra futex wakeups.
6941
*/
6942
- atomic_set(&index->waiting, false);
6943
+ qatomic_set(&index->waiting, false);
6944
}
6945
}
6946
6947
@@ -XXX,XX +XXX,XX @@ void synchronize_rcu(void)
6948
6949
QEMU_LOCK_GUARD(&rcu_registry_lock);
6950
if (!QLIST_EMPTY(&registry)) {
6951
- /* In either case, the atomic_mb_set below blocks stores that free
6952
+ /* In either case, the qatomic_mb_set below blocks stores that free
6953
* old RCU-protected pointers.
6954
*/
6955
if (sizeof(rcu_gp_ctr) < 8) {
6956
@@ -XXX,XX +XXX,XX @@ void synchronize_rcu(void)
6957
*
6958
* Switch parity: 0 -> 1, 1 -> 0.
6959
*/
6960
- atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR);
6961
+ qatomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR);
6962
wait_for_readers();
6963
- atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR);
6964
+ qatomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR);
6965
} else {
6966
/* Increment current grace period. */
6967
- atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr + RCU_GP_CTR);
6968
+ qatomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr + RCU_GP_CTR);
6969
}
6970
6971
wait_for_readers();
6972
@@ -XXX,XX +XXX,XX @@ static void enqueue(struct rcu_head *node)
6973
struct rcu_head **old_tail;
6974
6975
node->next = NULL;
6976
- old_tail = atomic_xchg(&tail, &node->next);
6977
- atomic_mb_set(old_tail, node);
6978
+ old_tail = qatomic_xchg(&tail, &node->next);
6979
+ qatomic_mb_set(old_tail, node);
6980
}
6981
6982
static struct rcu_head *try_dequeue(void)
6983
@@ -XXX,XX +XXX,XX @@ retry:
6984
* The tail, because it is the first step in the enqueuing.
6985
* It is only the next pointers that might be inconsistent.
6986
*/
6987
- if (head == &dummy && atomic_mb_read(&tail) == &dummy.next) {
6988
+ if (head == &dummy && qatomic_mb_read(&tail) == &dummy.next) {
6989
abort();
6990
}
6991
6992
@@ -XXX,XX +XXX,XX @@ retry:
6993
* wrong and we need to wait until its enqueuer finishes the update.
6994
*/
6995
node = head;
6996
- next = atomic_mb_read(&head->next);
6997
+ next = qatomic_mb_read(&head->next);
6998
if (!next) {
6999
return NULL;
7000
}
7001
@@ -XXX,XX +XXX,XX @@ static void *call_rcu_thread(void *opaque)
7002
7003
for (;;) {
7004
int tries = 0;
7005
- int n = atomic_read(&rcu_call_count);
7006
+ int n = qatomic_read(&rcu_call_count);
7007
7008
/* Heuristically wait for a decent number of callbacks to pile up.
7009
* Fetch rcu_call_count now, we only must process elements that were
7010
@@ -XXX,XX +XXX,XX @@ static void *call_rcu_thread(void *opaque)
7011
g_usleep(10000);
7012
if (n == 0) {
7013
qemu_event_reset(&rcu_call_ready_event);
7014
- n = atomic_read(&rcu_call_count);
7015
+ n = qatomic_read(&rcu_call_count);
7016
if (n == 0) {
7017
#if defined(CONFIG_MALLOC_TRIM)
7018
malloc_trim(4 * 1024 * 1024);
7019
@@ -XXX,XX +XXX,XX @@ static void *call_rcu_thread(void *opaque)
7020
qemu_event_wait(&rcu_call_ready_event);
7021
}
7022
}
7023
- n = atomic_read(&rcu_call_count);
7024
+ n = qatomic_read(&rcu_call_count);
7025
}
7026
7027
- atomic_sub(&rcu_call_count, n);
7028
+ qatomic_sub(&rcu_call_count, n);
7029
synchronize_rcu();
7030
qemu_mutex_lock_iothread();
7031
while (n > 0) {
7032
@@ -XXX,XX +XXX,XX @@ void call_rcu1(struct rcu_head *node, void (*func)(struct rcu_head *node))
7033
{
7034
node->func = func;
7035
enqueue(node);
7036
- atomic_inc(&rcu_call_count);
7037
+ qatomic_inc(&rcu_call_count);
7038
qemu_event_set(&rcu_call_ready_event);
7039
}
7040
7041
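[Editor's note: the util/rcu.c hunks above only rename the primitives. For readers unfamiliar with the API, here is a minimal sketch, not part of this series, of the reader/updater pattern that rcu_read_lock(), the qatomic_rcu_* accessors and g_free_rcu() support. The Config type and all names in it are invented for illustration.]

#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/atomic.h"

struct Config {
    struct rcu_head rcu;
    int value;
};

/* Assumed to be initialized to a valid object at startup. */
static struct Config *current_config;

static int read_config_value(void)
{
    int val;

    /* Readers only need the (cheap) RCU read-side critical section. */
    rcu_read_lock();
    val = qatomic_rcu_read(&current_config)->value;
    rcu_read_unlock();
    return val;
}

static void update_config(int value)
{
    struct Config *new_cfg = g_new0(struct Config, 1);
    struct Config *old_cfg;

    new_cfg->value = value;
    old_cfg = qatomic_rcu_read(&current_config);
    /* Publish the new version; pairs with qatomic_rcu_read() above. */
    qatomic_rcu_set(&current_config, new_cfg);
    /* Reclaim the old version after a grace period. */
    g_free_rcu(old_cfg, rcu);
}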
diff --git a/util/stats64.c b/util/stats64.c
index XXXXXXX..XXXXXXX 100644
--- a/util/stats64.c
+++ b/util/stats64.c
@@ -XXX,XX +XXX,XX @@
static inline void stat64_rdlock(Stat64 *s)
{
/* Keep out incoming writers to avoid them starving us. */
- atomic_add(&s->lock, 2);
+ qatomic_add(&s->lock, 2);

/* If there is a concurrent writer, wait for it. */
- while (atomic_read(&s->lock) & 1) {
+ while (qatomic_read(&s->lock) & 1) {
cpu_relax();
}
}

static inline void stat64_rdunlock(Stat64 *s)
{
- atomic_sub(&s->lock, 2);
+ qatomic_sub(&s->lock, 2);
}

static inline bool stat64_wrtrylock(Stat64 *s)
{
- return atomic_cmpxchg(&s->lock, 0, 1) == 0;
+ return qatomic_cmpxchg(&s->lock, 0, 1) == 0;
}

static inline void stat64_wrunlock(Stat64 *s)
{
- atomic_dec(&s->lock);
+ qatomic_dec(&s->lock);
}

uint64_t stat64_get(const Stat64 *s)
@@ -XXX,XX +XXX,XX @@ uint64_t stat64_get(const Stat64 *s)
/* 64-bit writes always take the lock, so we can read in
* any order.
*/
- high = atomic_read(&s->high);
- low = atomic_read(&s->low);
+ high = qatomic_read(&s->high);
+ low = qatomic_read(&s->low);
stat64_rdunlock((Stat64 *)s);

return ((uint64_t)high << 32) | low;
@@ -XXX,XX +XXX,XX @@ bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t high)
* order of our update. By updating s->low first, we can check
* whether we have to carry into s->high.
*/
- old = atomic_fetch_add(&s->low, low);
+ old = qatomic_fetch_add(&s->low, low);
high += (old + low) < old;
- atomic_add(&s->high, high);
+ qatomic_add(&s->high, high);
stat64_wrunlock(s);
return true;
}
@@ -XXX,XX +XXX,XX @@ bool stat64_min_slow(Stat64 *s, uint64_t value)
return false;
}

- high = atomic_read(&s->high);
- low = atomic_read(&s->low);
+ high = qatomic_read(&s->high);
+ low = qatomic_read(&s->low);

orig = ((uint64_t)high << 32) | low;
if (value < orig) {
@@ -XXX,XX +XXX,XX @@ bool stat64_min_slow(Stat64 *s, uint64_t value)
* effect on stat64_min is that the slow path may be triggered
* unnecessarily.
*/
- atomic_set(&s->low, (uint32_t)value);
+ qatomic_set(&s->low, (uint32_t)value);
smp_wmb();
- atomic_set(&s->high, value >> 32);
+ qatomic_set(&s->high, value >> 32);
}
stat64_wrunlock(s);
return true;
@@ -XXX,XX +XXX,XX @@ bool stat64_max_slow(Stat64 *s, uint64_t value)
return false;
}

- high = atomic_read(&s->high);
- low = atomic_read(&s->low);
+ high = qatomic_read(&s->high);
+ low = qatomic_read(&s->low);

orig = ((uint64_t)high << 32) | low;
if (value > orig) {
@@ -XXX,XX +XXX,XX @@ bool stat64_max_slow(Stat64 *s, uint64_t value)
* effect on stat64_max is that the slow path may be triggered
* unnecessarily.
*/
- atomic_set(&s->low, (uint32_t)value);
+ qatomic_set(&s->low, (uint32_t)value);
smp_wmb();
- atomic_set(&s->high, value >> 32);
+ qatomic_set(&s->high, value >> 32);
}
stat64_wrunlock(s);
return true;
diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/atomics.rst
+++ b/docs/devel/atomics.rst
@@ -XXX,XX +XXX,XX @@ provides macros that fall in three camps:

- compiler barriers: ``barrier()``;

-- weak atomic access and manual memory barriers: ``atomic_read()``,
- ``atomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``, ``smp_mb_acquire()``,
- ``smp_mb_release()``, ``smp_read_barrier_depends()``;
+- weak atomic access and manual memory barriers: ``qatomic_read()``,
+ ``qatomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
+ ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``;

- sequentially consistent atomic access: everything else.

@@ -XXX,XX +XXX,XX @@ in the order specified by its program".
``qemu/atomic.h`` provides the following set of atomic read-modify-write
operations::

- void atomic_inc(ptr)
- void atomic_dec(ptr)
- void atomic_add(ptr, val)
- void atomic_sub(ptr, val)
- void atomic_and(ptr, val)
- void atomic_or(ptr, val)
+ void qatomic_inc(ptr)
+ void qatomic_dec(ptr)
+ void qatomic_add(ptr, val)
+ void qatomic_sub(ptr, val)
+ void qatomic_and(ptr, val)
+ void qatomic_or(ptr, val)

- typeof(*ptr) atomic_fetch_inc(ptr)
- typeof(*ptr) atomic_fetch_dec(ptr)
- typeof(*ptr) atomic_fetch_add(ptr, val)
- typeof(*ptr) atomic_fetch_sub(ptr, val)
- typeof(*ptr) atomic_fetch_and(ptr, val)
- typeof(*ptr) atomic_fetch_or(ptr, val)
- typeof(*ptr) atomic_fetch_xor(ptr, val)
- typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
- typeof(*ptr) atomic_xchg(ptr, val)
- typeof(*ptr) atomic_cmpxchg(ptr, old, new)
+ typeof(*ptr) qatomic_fetch_inc(ptr)
+ typeof(*ptr) qatomic_fetch_dec(ptr)
+ typeof(*ptr) qatomic_fetch_add(ptr, val)
+ typeof(*ptr) qatomic_fetch_sub(ptr, val)
+ typeof(*ptr) qatomic_fetch_and(ptr, val)
+ typeof(*ptr) qatomic_fetch_or(ptr, val)
+ typeof(*ptr) qatomic_fetch_xor(ptr, val)
+ typeof(*ptr) qatomic_fetch_inc_nonzero(ptr)
+ typeof(*ptr) qatomic_xchg(ptr, val)
+ typeof(*ptr) qatomic_cmpxchg(ptr, old, new)

all of which return the old value of ``*ptr``. These operations are
polymorphic; they operate on any type that is as wide as a pointer or
@@ -XXX,XX +XXX,XX @@ smaller.

Similar operations return the new value of ``*ptr``::

- typeof(*ptr) atomic_inc_fetch(ptr)
- typeof(*ptr) atomic_dec_fetch(ptr)
- typeof(*ptr) atomic_add_fetch(ptr, val)
- typeof(*ptr) atomic_sub_fetch(ptr, val)
- typeof(*ptr) atomic_and_fetch(ptr, val)
- typeof(*ptr) atomic_or_fetch(ptr, val)
- typeof(*ptr) atomic_xor_fetch(ptr, val)
+ typeof(*ptr) qatomic_inc_fetch(ptr)
+ typeof(*ptr) qatomic_dec_fetch(ptr)
+ typeof(*ptr) qatomic_add_fetch(ptr, val)
+ typeof(*ptr) qatomic_sub_fetch(ptr, val)
+ typeof(*ptr) qatomic_and_fetch(ptr, val)
+ typeof(*ptr) qatomic_or_fetch(ptr, val)
+ typeof(*ptr) qatomic_xor_fetch(ptr, val)

``qemu/atomic.h`` also provides loads and stores that cannot be reordered
with each other::

- typeof(*ptr) atomic_mb_read(ptr)
- void atomic_mb_set(ptr, val)
+ typeof(*ptr) qatomic_mb_read(ptr)
+ void qatomic_mb_set(ptr, val)

However these do not provide sequential consistency and, in particular,
they do not participate in the total ordering enforced by
@@ -XXX,XX +XXX,XX @@ easiest to hardest):

- lightweight synchronization primitives such as ``QemuEvent``

-- RCU operations (``atomic_rcu_read``, ``atomic_rcu_set``) when publishing
+- RCU operations (``qatomic_rcu_read``, ``qatomic_rcu_set``) when publishing
or accessing a new version of a data structure

-- other atomic accesses: ``atomic_read`` and ``atomic_load_acquire`` for
- loads, ``atomic_set`` and ``atomic_store_release`` for stores, ``smp_mb``
+- other atomic accesses: ``qatomic_read`` and ``qatomic_load_acquire`` for
+ loads, ``qatomic_set`` and ``qatomic_store_release`` for stores, ``smp_mb``
to forbid reordering subsequent loads before a store.


@@ -XXX,XX +XXX,XX @@ The only guarantees that you can rely upon in this case are:

When using this model, variables are accessed with:

-- ``atomic_read()`` and ``atomic_set()``; these prevent the compiler from
+- ``qatomic_read()`` and ``qatomic_set()``; these prevent the compiler from
optimizing accesses out of existence and creating unsolicited
accesses, but do not otherwise impose any ordering on loads and
stores: both the compiler and the processor are free to reorder
them.

-- ``atomic_load_acquire()``, which guarantees the LOAD to appear to
+- ``qatomic_load_acquire()``, which guarantees the LOAD to appear to
happen, with respect to the other components of the system,
before all the LOAD or STORE operations specified afterwards.
- Operations coming before ``atomic_load_acquire()`` can still be
+ Operations coming before ``qatomic_load_acquire()`` can still be
reordered after it.

-- ``atomic_store_release()``, which guarantees the STORE to appear to
+- ``qatomic_store_release()``, which guarantees the STORE to appear to
happen, with respect to the other components of the system,
after all the LOAD or STORE operations specified before.
- Operations coming after ``atomic_store_release()`` can still be
+ Operations coming after ``qatomic_store_release()`` can still be
reordered before it.

Restrictions to the ordering of accesses can also be specified
@@ -XXX,XX +XXX,XX @@ They come in six kinds:
dependency and a full read barrier or better is required.


-Memory barriers and ``atomic_load_acquire``/``atomic_store_release`` are
+Memory barriers and ``qatomic_load_acquire``/``qatomic_store_release`` are
mostly used when a data structure has one thread that is always a writer
and one thread that is always a reader:

@@ -XXX,XX +XXX,XX @@ and one thread that is always a reader:
+==================================+==================================+
| :: | :: |
| | |
- | atomic_store_release(&a, x); | y = atomic_load_acquire(&b); |
- | atomic_store_release(&b, y); | x = atomic_load_acquire(&a); |
+ | qatomic_store_release(&a, x); | y = qatomic_load_acquire(&b); |
+ | qatomic_store_release(&b, y); | x = qatomic_load_acquire(&a); |
+----------------------------------+----------------------------------+

In this case, correctness is easy to check for using the "pairing"
@@ -XXX,XX +XXX,XX @@ outside a loop. For example:
| | |
| n = 0; | n = 0; |
| for (i = 0; i < 10; i++) | for (i = 0; i < 10; i++) |
- | n += atomic_load_acquire(&a[i]); | n += atomic_read(&a[i]); |
+ | n += qatomic_load_acquire(&a[i]); | n += qatomic_read(&a[i]); |
| | smp_mb_acquire(); |
+------------------------------------------+----------------------------------+
| :: | :: |
| | |
| | smp_mb_release(); |
| for (i = 0; i < 10; i++) | for (i = 0; i < 10; i++) |
- | atomic_store_release(&a[i], false); | atomic_set(&a[i], false); |
+ | qatomic_store_release(&a[i], false); | qatomic_set(&a[i], false); |
+------------------------------------------+----------------------------------+

Splitting a loop can also be useful to reduce the number of barriers:
@@ -XXX,XX +XXX,XX @@ Splitting a loop can also be useful to reduce the number of barriers:
| | |
| n = 0; | smp_mb_release(); |
| for (i = 0; i < 10; i++) { | for (i = 0; i < 10; i++) |
- | atomic_store_release(&a[i], false); | atomic_set(&a[i], false); |
+ | qatomic_store_release(&a[i], false); | qatomic_set(&a[i], false); |
| smp_mb(); | smb_mb(); |
- | n += atomic_read(&b[i]); | n = 0; |
+ | n += qatomic_read(&b[i]); | n = 0; |
| } | for (i = 0; i < 10; i++) |
- | | n += atomic_read(&b[i]); |
+ | | n += qatomic_read(&b[i]); |
+------------------------------------------+----------------------------------+

In this case, a ``smp_mb_release()`` is also replaced with a (possibly cheaper, and clearer
@@ -XXX,XX +XXX,XX @@ as well) ``smp_wmb()``:
| | |
| | smp_mb_release(); |
| for (i = 0; i < 10; i++) { | for (i = 0; i < 10; i++) |
- | atomic_store_release(&a[i], false); | atomic_set(&a[i], false); |
- | atomic_store_release(&b[i], false); | smb_wmb(); |
+ | qatomic_store_release(&a[i], false); | qatomic_set(&a[i], false); |
+ | qatomic_store_release(&b[i], false); | smb_wmb(); |
| } | for (i = 0; i < 10; i++) |
- | | atomic_set(&b[i], false); |
+ | | qatomic_set(&b[i], false); |
+------------------------------------------+----------------------------------+


@@ -XXX,XX +XXX,XX @@ as well) ``smp_wmb()``:
Acquire/release pairing and the *synchronizes-with* relation
------------------------------------------------------------

-Atomic operations other than ``atomic_set()`` and ``atomic_read()`` have
+Atomic operations other than ``qatomic_set()`` and ``qatomic_read()`` have
either *acquire* or *release* semantics [#rmw]_. This has two effects:

.. [#rmw] Read-modify-write operations can have both---acquire applies to the
@@ -XXX,XX +XXX,XX @@ thread 2 is relying on the *synchronizes-with* relation between ``pthread_exit``

Synchronization between threads basically descends from this pairing of
a release operation and an acquire operation. Therefore, atomic operations
-other than ``atomic_set()`` and ``atomic_read()`` will almost always be
+other than ``qatomic_set()`` and ``qatomic_read()`` will almost always be
paired with another operation of the opposite kind: an acquire operation
will pair with a release operation and vice versa. This rule of thumb is
extremely useful; in the case of QEMU, however, note that the other
operation may actually be in a driver that runs in the guest!

``smp_read_barrier_depends()``, ``smp_rmb()``, ``smp_mb_acquire()``,
-``atomic_load_acquire()`` and ``atomic_rcu_read()`` all count
+``qatomic_load_acquire()`` and ``qatomic_rcu_read()`` all count
as acquire operations. ``smp_wmb()``, ``smp_mb_release()``,
-``atomic_store_release()`` and ``atomic_rcu_set()`` all count as release
+``qatomic_store_release()`` and ``qatomic_rcu_set()`` all count as release
operations. ``smp_mb()`` counts as both acquire and release, therefore
it can pair with any other atomic operation. Here is an example:

@@ -XXX,XX +XXX,XX @@ it can pair with any other atomic operation. Here is an example:
+======================+==============================+
| :: | :: |
| | |
- | atomic_set(&a, 1); | |
+ | qatomic_set(&a, 1);| |
| smp_wmb(); | |
- | atomic_set(&b, 2); | x = atomic_read(&b); |
+ | qatomic_set(&b, 2);| x = qatomic_read(&b); |
| | smp_rmb(); |
- | | y = atomic_read(&a); |
+ | | y = qatomic_read(&a); |
+----------------------+------------------------------+

Note that a load-store pair only counts if the two operations access the
@@ -XXX,XX +XXX,XX @@ correct synchronization:
+================================+================================+
| :: | :: |
| | |
- | atomic_set(&a, 1); | |
- | atomic_store_release(&b, 2); | x = atomic_load_acquire(&b); |
- | | y = atomic_read(&a); |
+ | qatomic_set(&a, 1); | |
+ | qatomic_store_release(&b, 2);| x = qatomic_load_acquire(&b);|
+ | | y = qatomic_read(&a); |
+--------------------------------+--------------------------------+

Acquire and release semantics of higher-level primitives can also be
@@ -XXX,XX +XXX,XX @@ cannot be a data race:
| smp_wmb(); | |
| x->i = 2; | |
| smp_wmb(); | |
- | atomic_set(&a, x); | x = atomic_read(&a); |
+ | qatomic_set(&a, x);| x = qatomic_read(&a); |
| | smp_read_barrier_depends(); |
| | y = x->i; |
| | smp_read_barrier_depends(); |
@@ -XXX,XX +XXX,XX @@ and memory barriers, and the equivalents in QEMU:
at all. Linux 4.1 updated them to implement volatile
semantics via ``ACCESS_ONCE`` (or the more recent ``READ``/``WRITE_ONCE``).

- QEMU's ``atomic_read`` and ``atomic_set`` implement C11 atomic relaxed
+ QEMU's ``qatomic_read`` and ``qatomic_set`` implement C11 atomic relaxed
semantics if the compiler supports it, and volatile semantics otherwise.
Both semantics prevent the compiler from doing certain transformations;
the difference is that atomic accesses are guaranteed to be atomic,
@@ -XXX,XX +XXX,XX @@ and memory barriers, and the equivalents in QEMU:
since we assume the variables passed are machine-word sized and
properly aligned.

- No barriers are implied by ``atomic_read`` and ``atomic_set`` in either Linux
- or QEMU.
+ No barriers are implied by ``qatomic_read`` and ``qatomic_set`` in either
+ Linux or QEMU.

- atomic read-modify-write operations in Linux are of three kinds:

@@ -XXX,XX +XXX,XX @@ and memory barriers, and the equivalents in QEMU:
a different set of memory barriers; in QEMU, all of them enforce
sequential consistency.

-- in QEMU, ``atomic_read()`` and ``atomic_set()`` do not participate in
+- in QEMU, ``qatomic_read()`` and ``qatomic_set()`` do not participate in
the total ordering enforced by sequentially-consistent operations.
This is because QEMU uses the C11 memory model. The following example
is correct in Linux but not in QEMU:

@@ -XXX,XX +XXX,XX @@ and memory barriers, and the equivalents in QEMU:
+==================================+================================+
| :: | :: |
| | |
- | a = atomic_fetch_add(&x, 2); | a = atomic_fetch_add(&x, 2); |
- | b = READ_ONCE(&y); | b = atomic_read(&y); |
+ | a = atomic_fetch_add(&x, 2); | a = qatomic_fetch_add(&x, 2);|
+ | b = READ_ONCE(&y); | b = qatomic_read(&y); |
+----------------------------------+--------------------------------+

because the read of ``y`` can be moved (by either the processor or the
@@ -XXX,XX +XXX,XX @@ and memory barriers, and the equivalents in QEMU:
+================================+
| :: |
| |
- | a = atomic_read(&x); |
- | atomic_set(&x, a + 2); |
+ | a = qatomic_read(&x); |
+ | qatomic_set(&x, a + 2); |
| smp_mb(); |
- | b = atomic_read(&y); |
+ | b = qatomic_read(&y); |
+--------------------------------+

Sources
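[Editor's note: as a concrete illustration of the renamed acquire/release API that the documentation above describes, here is a minimal sketch; it is not from the patch, and the variable names are invented. The writer publishes data and then raises a flag with release semantics; the reader acquire-loads the flag before touching the data.]

#include "qemu/osdep.h"
#include "qemu/atomic.h"

static int data;
static int ready;

/* Writer thread: the release store orders the store to data before it. */
static void producer(void)
{
    qatomic_set(&data, 42);            /* plain relaxed atomic store */
    qatomic_store_release(&ready, 1);  /* publish: pairs with the acquire load */
}

/* Reader thread: if the flag is seen, the store to data is also visible. */
static int consumer(void)
{
    if (qatomic_load_acquire(&ready)) {
        return qatomic_read(&data);    /* guaranteed to observe 42 */
    }
    return -1;                         /* not published yet */
}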
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index XXXXXXX..XXXXXXX 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -XXX,XX +XXX,XX @@ sub dump_function($$) {
# If you mess with these regexps, it's a good idea to check that
# the following functions' documentation still comes out right:
# - parport_register_device (function pointer parameters)
- # - atomic_set (macro)
+ # - qatomic_set (macro)
# - pci_match_device, __copy_to_user (long return type)

if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) {
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
i2 = I3401_ADDI | rt << 31 | (addr & 0xfff) << 10 | rd << 5 | rd;
}
pair = (uint64_t)i2 << 32 | i1;
- atomic_set((uint64_t *)jmp_addr, pair);
+ qatomic_set((uint64_t *)jmp_addr, pair);
flush_icache_range(jmp_addr, jmp_addr + 8);
}

diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void tcg_target_init(TCGContext *s)
void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
uintptr_t addr)
{
- atomic_set((uint32_t *)jmp_addr, deposit32(OPC_J, 0, 26, addr >> 2));
+ qatomic_set((uint32_t *)jmp_addr, deposit32(OPC_J, 0, 26, addr >> 2));
flush_icache_range(jmp_addr, jmp_addr + 4);
}

diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
#endif

/* As per the enclosing if, this is ppc64. Avoid the _Static_assert
- within atomic_set that would fail to build a ppc32 host. */
- atomic_set__nocheck((uint64_t *)jmp_addr, pair);
+ within qatomic_set that would fail to build a ppc32 host. */
+ qatomic_set__nocheck((uint64_t *)jmp_addr, pair);
flush_icache_range(jmp_addr, jmp_addr + 8);
} else {
intptr_t diff = addr - jmp_addr;
tcg_debug_assert(in_range_b(diff));
- atomic_set((uint32_t *)jmp_addr, B | (diff & 0x3fffffc));
+ qatomic_set((uint32_t *)jmp_addr, B | (diff & 0x3fffffc));
flush_icache_range(jmp_addr, jmp_addr + 4);
}
}
diff --git a/tcg/sparc/tcg-target.c.inc b/tcg/sparc/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/sparc/tcg-target.c.inc
+++ b/tcg/sparc/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
tcg_debug_assert(br_disp == (int32_t)br_disp);

if (!USE_REG_TB) {
- atomic_set((uint32_t *)jmp_addr, deposit32(CALL, 0, 30, br_disp >> 2));
+ qatomic_set((uint32_t *)jmp_addr,
+ deposit32(CALL, 0, 30, br_disp >> 2));
flush_icache_range(jmp_addr, jmp_addr + 4);
return;
}
@@ -XXX,XX +XXX,XX @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
| INSN_IMM13((tb_disp & 0x3ff) | -0x400));
}

- atomic_set((uint64_t *)jmp_addr, deposit64(i2, 32, 32, i1));
+ qatomic_set((uint64_t *)jmp_addr, deposit64(i2, 32, 32, i1));
flush_icache_range(jmp_addr, jmp_addr + 8);
}
--
2.26.2

--
2.29.2
New patch

From: Jagannathan Raman <jag.raman@oracle.com>

Add configuration options to enable or disable multiprocess QEMU code

Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 6cc37253e35418ebd7b675a31a3df6e3c7a12dc1.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
configure | 10 ++++++++++
meson.build | 4 +++-
Kconfig.host | 4 ++++
hw/Kconfig | 1 +
hw/remote/Kconfig | 3 +++
5 files changed, 21 insertions(+), 1 deletion(-)
create mode 100644 hw/remote/Kconfig

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ skip_meson=no
gettext="auto"
fuse="auto"
fuse_lseek="auto"
+multiprocess="no"

malloc_trim="auto"

@@ -XXX,XX +XXX,XX @@ Linux)
linux="yes"
linux_user="yes"
vhost_user=${default_feature:-yes}
+ multiprocess=${default_feature:-yes}
;;
esac

@@ -XXX,XX +XXX,XX @@ for opt do
;;
--disable-fuse-lseek) fuse_lseek="disabled"
;;
+ --enable-multiprocess) multiprocess="yes"
+ ;;
+ --disable-multiprocess) multiprocess="no"
+ ;;
*)
echo "ERROR: unknown option $opt"
echo "Try '$0 --help' for more information"
@@ -XXX,XX +XXX,XX @@ disabled with --disable-FEATURE, default is enabled if available
libdaxctl libdaxctl support
fuse FUSE block device export
fuse-lseek SEEK_HOLE/SEEK_DATA support for FUSE exports
+ multiprocess Multiprocess QEMU support

NOTE: The object files are built at the place where configure is launched
EOF
@@ -XXX,XX +XXX,XX @@ fi
if test "$have_mlockall" = "yes" ; then
echo "HAVE_MLOCKALL=y" >> $config_host_mak
fi
+if test "$multiprocess" = "yes" ; then
+ echo "CONFIG_MULTIPROCESS_ALLOWED=y" >> $config_host_mak
+fi
if test "$fuzzing" = "yes" ; then
# If LIB_FUZZING_ENGINE is set, assume we are running on OSS-Fuzz, and the
# needed CFLAGS have already been provided
diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ host_kconfig = \
('CONFIG_VHOST_KERNEL' in config_host ? ['CONFIG_VHOST_KERNEL=y'] : []) + \
(have_virtfs ? ['CONFIG_VIRTFS=y'] : []) + \
('CONFIG_LINUX' in config_host ? ['CONFIG_LINUX=y'] : []) + \
- ('CONFIG_PVRDMA' in config_host ? ['CONFIG_PVRDMA=y'] : [])
+ ('CONFIG_PVRDMA' in config_host ? ['CONFIG_PVRDMA=y'] : []) + \
+ ('CONFIG_MULTIPROCESS_ALLOWED' in config_host ? ['CONFIG_MULTIPROCESS_ALLOWED=y'] : [])

ignored = [ 'TARGET_XML_FILES', 'TARGET_ABI_DIR', 'TARGET_ARCH' ]

@@ -XXX,XX +XXX,XX @@ summary_info += {'libpmem support': config_host.has_key('CONFIG_LIBPMEM')}
summary_info += {'libdaxctl support': config_host.has_key('CONFIG_LIBDAXCTL')}
summary_info += {'libudev': libudev.found()}
summary_info += {'FUSE lseek': fuse_lseek.found()}
+summary_info += {'Multiprocess QEMU': config_host.has_key('CONFIG_MULTIPROCESS_ALLOWED')}
summary(summary_info, bool_yn: true, section: 'Dependencies')

if not supported_cpus.contains(cpu)
diff --git a/Kconfig.host b/Kconfig.host
index XXXXXXX..XXXXXXX 100644
--- a/Kconfig.host
+++ b/Kconfig.host
@@ -XXX,XX +XXX,XX @@ config VIRTFS

config PVRDMA
bool
+
+config MULTIPROCESS_ALLOWED
+ bool
+ imply MULTIPROCESS
diff --git a/hw/Kconfig b/hw/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ source pci-host/Kconfig
source pcmcia/Kconfig
source pci/Kconfig
source rdma/Kconfig
+source remote/Kconfig
source rtc/Kconfig
source scsi/Kconfig
source sd/Kconfig
diff --git a/hw/remote/Kconfig b/hw/remote/Kconfig
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/remote/Kconfig
@@ -XXX,XX +XXX,XX @@
+config MULTIPROCESS
+ bool
+ depends on PCI && KVM
--
2.29.2
From: Halil Pasic <pasic@linux.ibm.com>

Wire up the CCW device for vhost-user-fs.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Message-id: 20200901150019.29229-2-mhartmay@linux.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/s390x/vhost-user-fs-ccw.c | 75 ++++++++++++++++++++++++++++++++++++
hw/s390x/meson.build | 1 +
2 files changed, 76 insertions(+)
create mode 100644 hw/s390x/vhost-user-fs-ccw.c

diff --git a/hw/s390x/vhost-user-fs-ccw.c b/hw/s390x/vhost-user-fs-ccw.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/s390x/vhost-user-fs-ccw.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * virtio ccw vhost-user-fs implementation
+ *
+ * Copyright 2020 IBM Corp.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+#include "qemu/osdep.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+#include "hw/virtio/vhost-user-fs.h"
+#include "virtio-ccw.h"
+
+typedef struct VHostUserFSCcw {
+ VirtioCcwDevice parent_obj;
+ VHostUserFS vdev;
+} VHostUserFSCcw;
+
+#define TYPE_VHOST_USER_FS_CCW "vhost-user-fs-ccw"
+#define VHOST_USER_FS_CCW(obj) \
+ OBJECT_CHECK(VHostUserFSCcw, (obj), TYPE_VHOST_USER_FS_CCW)
+
+
+static Property vhost_user_fs_ccw_properties[] = {
+ DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags,
+ VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true),
+ DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev,
+ VIRTIO_CCW_MAX_REV),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void vhost_user_fs_ccw_realize(VirtioCcwDevice *ccw_dev, Error **errp)
+{
+ VHostUserFSCcw *dev = VHOST_USER_FS_CCW(ccw_dev);
+ DeviceState *vdev = DEVICE(&dev->vdev);
+
+ qdev_realize(vdev, BUS(&ccw_dev->bus), errp);
+}
+
+static void vhost_user_fs_ccw_instance_init(Object *obj)
+{
+ VHostUserFSCcw *dev = VHOST_USER_FS_CCW(obj);
+ VirtioCcwDevice *ccw_dev = VIRTIO_CCW_DEVICE(obj);
+
+ ccw_dev->force_revision_1 = true;
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VHOST_USER_FS);
+}
+
+static void vhost_user_fs_ccw_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->realize = vhost_user_fs_ccw_realize;
+ device_class_set_props(dc, vhost_user_fs_ccw_properties);
+ set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
+}
+
+static const TypeInfo vhost_user_fs_ccw = {
+ .name = TYPE_VHOST_USER_FS_CCW,
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VHostUserFSCcw),
+ .instance_init = vhost_user_fs_ccw_instance_init,
+ .class_init = vhost_user_fs_ccw_class_init,
+};
+
+static void vhost_user_fs_ccw_register(void)
+{
+ type_register_static(&vhost_user_fs_ccw);
+}
+
+type_init(vhost_user_fs_ccw_register)
diff --git a/hw/s390x/meson.build b/hw/s390x/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/meson.build
+++ b/hw/s390x/meson.build
@@ -XXX,XX +XXX,XX @@ virtio_ss.add(when: 'CONFIG_VIRTIO_SCSI', if_true: files('virtio-ccw-scsi.c'))
virtio_ss.add(when: 'CONFIG_VIRTIO_SERIAL', if_true: files('virtio-ccw-serial.c'))
virtio_ss.add(when: ['CONFIG_VIRTIO_9P', 'CONFIG_VIRTFS'], if_true: files('virtio-ccw-blk.c'))
virtio_ss.add(when: 'CONFIG_VHOST_VSOCK', if_true: files('vhost-vsock-ccw.c'))
+virtio_ss.add(when: 'CONFIG_VHOST_USER_FS', if_true: files('vhost-user-fs-ccw.c'))
s390x_ss.add_all(when: 'CONFIG_VIRTIO_CCW', if_true: virtio_ss)

hw_arch += {'s390x': s390x_ss}
--
2.26.2

From: Jagannathan Raman <jag.raman@oracle.com>

PCI host bridge is setup for the remote device process. It is
implemented using remote-pcihost object. It is an extension of the PCI
host bridge setup by QEMU.
Remote-pcihost configures a PCI bus which could be used by the remote
PCI device to latch on to.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 0871ba857abb2eafacde07e7fe66a3f12415bfb2.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS | 2 +
include/hw/pci-host/remote.h | 29 ++++++++++++++
hw/pci-host/remote.c | 75 ++++++++++++++++++++++++++++++++++++
hw/pci-host/Kconfig | 3 ++
hw/pci-host/meson.build | 1 +
hw/remote/Kconfig | 1 +
6 files changed, 111 insertions(+)
create mode 100644 include/hw/pci-host/remote.h
create mode 100644 hw/pci-host/remote.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: John G Johnson <john.g.johnson@oracle.com>
S: Maintained
F: docs/devel/multi-process.rst
F: docs/system/multi-process.rst
+F: hw/pci-host/remote.c
+F: include/hw/pci-host/remote.h

Build and test automation
-------------------------
diff --git a/include/hw/pci-host/remote.h b/include/hw/pci-host/remote.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/pci-host/remote.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * PCI Host for remote device
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef REMOTE_PCIHOST_H
+#define REMOTE_PCIHOST_H
+
+#include "exec/memory.h"
+#include "hw/pci/pcie_host.h"
+
+#define TYPE_REMOTE_PCIHOST "remote-pcihost"
+OBJECT_DECLARE_SIMPLE_TYPE(RemotePCIHost, REMOTE_PCIHOST)
+
+struct RemotePCIHost {
+ /*< private >*/
+ PCIExpressHost parent_obj;
+ /*< public >*/
+
+ MemoryRegion *mr_pci_mem;
+ MemoryRegion *mr_sys_io;
+};
+
+#endif
diff --git a/hw/pci-host/remote.c b/hw/pci-host/remote.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/pci-host/remote.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Remote PCI host device
+ *
+ * Unlike PCI host devices that model physical hardware, the purpose
+ * of this PCI host is to host multi-process QEMU devices.
+ *
+ * Multi-process QEMU extends the PCI host of a QEMU machine into a
+ * remote process. Any PCI device attached to the remote process is
+ * visible in the QEMU guest. This allows existing QEMU device models
+ * to be reused in the remote process.
+ *
+ * This PCI host is purely a container for PCI devices. It's fake in the
+ * sense that the guest never sees this PCI host and has no way of
+ * accessing it. Its job is just to provide the environment that QEMU
+ * PCI device models need when running in a remote process.
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_host.h"
+#include "hw/pci/pcie_host.h"
+#include "hw/qdev-properties.h"
+#include "hw/pci-host/remote.h"
+#include "exec/memory.h"
+
+static const char *remote_pcihost_root_bus_path(PCIHostState *host_bridge,
+ PCIBus *rootbus)
+{
+ return "0000:00";
+}
+
+static void remote_pcihost_realize(DeviceState *dev, Error **errp)
+{
+ PCIHostState *pci = PCI_HOST_BRIDGE(dev);
+ RemotePCIHost *s = REMOTE_PCIHOST(dev);
+
+ pci->bus = pci_root_bus_new(DEVICE(s), "remote-pci",
+ s->mr_pci_mem, s->mr_sys_io,
+ 0, TYPE_PCIE_BUS);
+}
+
+static void remote_pcihost_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
+
+ hc->root_bus_path = remote_pcihost_root_bus_path;
+ dc->realize = remote_pcihost_realize;
+
+ dc->user_creatable = false;
+ set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+ dc->fw_name = "pci";
+}
+
+static const TypeInfo remote_pcihost_info = {
+ .name = TYPE_REMOTE_PCIHOST,
+ .parent = TYPE_PCIE_HOST_BRIDGE,
+ .instance_size = sizeof(RemotePCIHost),
+ .class_init = remote_pcihost_class_init,
+};
+
+static void remote_pcihost_register(void)
+{
+ type_register_static(&remote_pcihost_info);
+}
+
+type_init(remote_pcihost_register)
diff --git a/hw/pci-host/Kconfig b/hw/pci-host/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/pci-host/Kconfig
+++ b/hw/pci-host/Kconfig
@@ -XXX,XX +XXX,XX @@ config PCI_POWERNV
select PCI_EXPRESS
select MSI_NONBROKEN
select PCIE_PORT
+
+config REMOTE_PCIHOST
+ bool
diff --git a/hw/pci-host/meson.build b/hw/pci-host/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/pci-host/meson.build
+++ b/hw/pci-host/meson.build
@@ -XXX,XX +XXX,XX @@ pci_ss.add(when: 'CONFIG_PCI_EXPRESS_XILINX', if_true: files('xilinx-pcie.c'))
pci_ss.add(when: 'CONFIG_PCI_I440FX', if_true: files('i440fx.c'))
pci_ss.add(when: 'CONFIG_PCI_SABRE', if_true: files('sabre.c'))
pci_ss.add(when: 'CONFIG_XEN_IGD_PASSTHROUGH', if_true: files('xen_igd_pt.c'))
+pci_ss.add(when: 'CONFIG_REMOTE_PCIHOST', if_true: files('remote.c'))

# PPC devices
pci_ss.add(when: 'CONFIG_PREP_PCI', if_true: files('prep.c'))
diff --git a/hw/remote/Kconfig b/hw/remote/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/Kconfig
+++ b/hw/remote/Kconfig
@@ -XXX,XX +XXX,XX @@
config MULTIPROCESS
bool
depends on PCI && KVM
+ select REMOTE_PCIHOST
--
2.29.2
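[Editor's note: for context, a hypothetical sketch of how a stock QEMU PCI device model could later be plugged onto the bus that remote-pcihost creates. It is not part of this patch; the device name "lsi53c895a" and the helper function are invented for illustration.]

#include "qemu/osdep.h"
#include "hw/pci/pci.h"
#include "hw/pci/pci_host.h"
#include "hw/pci-host/remote.h"
#include "qapi/error.h"

static void attach_example_device(RemotePCIHost *host)
{
    PCIHostState *pci = PCI_HOST_BRIDGE(host);
    /* devfn of -1 asks for automatic slot assignment on the root bus. */
    PCIDevice *dev = pci_new(-1, "lsi53c895a");

    pci_realize_and_unref(dev, pci->bus, &error_fatal);
}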
New patch

From: Jagannathan Raman <jag.raman@oracle.com>

x-remote-machine object sets up various subsystems of the remote
device process. Instantiate PCI host bridge object and initialize RAM, IO &
PCI memory regions.

Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: c537f38d17f90453ca610c6b70cf3480274e0ba1.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS | 2 ++
include/hw/pci-host/remote.h | 1 +
include/hw/remote/machine.h | 27 ++++++++++++++
hw/remote/machine.c | 70 ++++++++++++++++++++++++++++++++++++
hw/meson.build | 1 +
hw/remote/meson.build | 5 +++
6 files changed, 106 insertions(+)
create mode 100644 include/hw/remote/machine.h
create mode 100644 hw/remote/machine.c
create mode 100644 hw/remote/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: docs/devel/multi-process.rst
F: docs/system/multi-process.rst
F: hw/pci-host/remote.c
F: include/hw/pci-host/remote.h
+F: hw/remote/machine.c
+F: include/hw/remote/machine.h

Build and test automation
-------------------------
diff --git a/include/hw/pci-host/remote.h b/include/hw/pci-host/remote.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/pci-host/remote.h
+++ b/include/hw/pci-host/remote.h
@@ -XXX,XX +XXX,XX @@ struct RemotePCIHost {

MemoryRegion *mr_pci_mem;
MemoryRegion *mr_sys_io;
+ MemoryRegion *mr_sys_mem;
};

#endif
diff --git a/include/hw/remote/machine.h b/include/hw/remote/machine.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/remote/machine.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Remote machine configuration
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef REMOTE_MACHINE_H
+#define REMOTE_MACHINE_H
+
+#include "qom/object.h"
+#include "hw/boards.h"
+#include "hw/pci-host/remote.h"
+
+struct RemoteMachineState {
+ MachineState parent_obj;
+
+ RemotePCIHost *host;
+};
+
+#define TYPE_REMOTE_MACHINE "x-remote-machine"
+OBJECT_DECLARE_SIMPLE_TYPE(RemoteMachineState, REMOTE_MACHINE)
+
+#endif
diff --git a/hw/remote/machine.c b/hw/remote/machine.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/remote/machine.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Machine for remote device
+ *
+ * This machine type is used by the remote device process in multi-process
+ * QEMU. QEMU device models depend on parent busses, interrupt controllers,
+ * memory regions, etc. The remote machine type offers this environment so
+ * that QEMU device models can be used as remote devices.
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+
+#include "hw/remote/machine.h"
+#include "exec/address-spaces.h"
+#include "exec/memory.h"
+#include "qapi/error.h"
+
+static void remote_machine_init(MachineState *machine)
+{
+ MemoryRegion *system_memory, *system_io, *pci_memory;
+ RemoteMachineState *s = REMOTE_MACHINE(machine);
+ RemotePCIHost *rem_host;
+
+ system_memory = get_system_memory();
+ system_io = get_system_io();
+
+ pci_memory = g_new(MemoryRegion, 1);
+ memory_region_init(pci_memory, NULL, "pci", UINT64_MAX);
+
+ rem_host = REMOTE_PCIHOST(qdev_new(TYPE_REMOTE_PCIHOST));
+
+ rem_host->mr_pci_mem = pci_memory;
+ rem_host->mr_sys_mem = system_memory;
+ rem_host->mr_sys_io = system_io;
+
+ s->host = rem_host;
+
+ object_property_add_child(OBJECT(s), "remote-pcihost", OBJECT(rem_host));
+ memory_region_add_subregion_overlap(system_memory, 0x0, pci_memory, -1);
+
+ qdev_realize(DEVICE(rem_host), sysbus_get_default(), &error_fatal);
+}
+
+static void remote_machine_class_init(ObjectClass *oc, void *data)
+{
+ MachineClass *mc = MACHINE_CLASS(oc);
+
+ mc->init = remote_machine_init;
+ mc->desc = "Experimental remote machine";
+}
+
+static const TypeInfo remote_machine = {
+ .name = TYPE_REMOTE_MACHINE,
+ .parent = TYPE_MACHINE,
+ .instance_size = sizeof(RemoteMachineState),
+ .class_init = remote_machine_class_init,
+};
+
+static void remote_machine_register_types(void)
+{
+ type_register_static(&remote_machine);
+}
+
+type_init(remote_machine_register_types);
diff --git a/hw/meson.build b/hw/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/meson.build
+++ b/hw/meson.build
@@ -XXX,XX +XXX,XX @@ subdir('moxie')
subdir('nios2')
subdir('openrisc')
subdir('ppc')
+subdir('remote')
subdir('riscv')
subdir('rx')
subdir('s390x')
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/remote/meson.build
@@ -XXX,XX +XXX,XX @@
+remote_ss = ss.source_set()
+
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
+
+softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
--
2.29.2
New patch

From: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Adds qio_channel_writev_full_all() to transmit both data and FDs.
Refactors existing code to use this helper.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 480fbf1fe4152495d60596c9b665124549b426a5.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
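[Editor's note: a usage sketch, not from the patch, of the new helper: sending a buffer together with one file descriptor over a channel that supports FD passing, such as a QIOChannelSocket on a UNIX socket. The helper name send_buf_and_fd and its error handling are invented for illustration.]

#include "qemu/osdep.h"
#include "io/channel.h"
#include "qapi/error.h"

static int send_buf_and_fd(QIOChannel *ioc, void *buf, size_t len, int fd)
{
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    Error *local_err = NULL;

    /* Writes all bytes and sends the fd alongside the first chunk. */
    if (qio_channel_writev_full_all(ioc, &iov, 1, &fd, 1, &local_err) < 0) {
        error_report_err(local_err);
        return -1;
    }
    return 0;
}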
include/io/channel.h | 25 +++++++++++++++++++++++++
io/channel.c | 15 ++++++++++++++-
2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/io/channel.h b/include/io/channel.h
index XXXXXXX..XXXXXXX 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -XXX,XX +XXX,XX @@ void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
IOHandler *io_write,
void *opaque);

+/**
+ * qio_channel_writev_full_all:
+ * @ioc: the channel object
+ * @iov: the array of memory regions to write data from
+ * @niov: the length of the @iov array
+ * @fds: an array of file handles to send
+ * @nfds: number of file handles in @fds
+ * @errp: pointer to a NULL-initialized error object
+ *
+ *
+ * Behaves like qio_channel_writev_full but will attempt
+ * to send all data passed (file handles and memory regions).
+ * The function will wait for all requested data
+ * to be written, yielding from the current coroutine
+ * if required.
+ *
+ * Returns: 0 if all bytes were written, or -1 on error
+ */
+
+int qio_channel_writev_full_all(QIOChannel *ioc,
+ const struct iovec *iov,
+ size_t niov,
+ int *fds, size_t nfds,
+ Error **errp);
+
#endif /* QIO_CHANNEL_H */
diff --git a/io/channel.c b/io/channel.c
index XXXXXXX..XXXXXXX 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -XXX,XX +XXX,XX @@ int qio_channel_writev_all(QIOChannel *ioc,
const struct iovec *iov,
size_t niov,
Error **errp)
+{
+ return qio_channel_writev_full_all(ioc, iov, niov, NULL, 0, errp);
+}
+
+int qio_channel_writev_full_all(QIOChannel *ioc,
+ const struct iovec *iov,
+ size_t niov,
+ int *fds, size_t nfds,
+ Error **errp)
{
int ret = -1;
struct iovec *local_iov = g_new(struct iovec, niov);
@@ -XXX,XX +XXX,XX @@ int qio_channel_writev_all(QIOChannel *ioc,

while (nlocal_iov > 0) {
ssize_t len;
- len = qio_channel_writev(ioc, local_iov, nlocal_iov, errp);
+ len = qio_channel_writev_full(ioc, local_iov, nlocal_iov, fds, nfds,
+ errp);
if (len == QIO_CHANNEL_ERR_BLOCK) {
if (qemu_in_coroutine()) {
qio_channel_yield(ioc, G_IO_OUT);
@@ -XXX,XX +XXX,XX @@ int qio_channel_writev_all(QIOChannel *ioc,
}

iov_discard_front(&local_iov, &nlocal_iov, len);
+
+ fds = NULL;
+ nfds = 0;
}

ret = 0;
--
2.29.2
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Adds qio_channel_readv_full_all_eof() and qio_channel_readv_full_all()
to read both data and FDs. Refactors existing code to use these helpers.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: b059c4cc0fb741e794d644c144cc21372cad877d.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
include/io/channel.h | 53 +++++++++++++++++++++++
io/channel.c | 101 ++++++++++++++++++++++++++++++++++---------
2 files changed, 134 insertions(+), 20 deletions(-)

From the earlier pull request, shown for comparison:

A number of iov_discard_front/back() operations are made by
virtio-crypto. The elem->in/out_sg iovec arrays are modified by these
operations, resulting in virtqueue_unmap_sg() calls on different addresses
than were originally mapped.

This is problematic because dirty memory may not be logged correctly,
MemoryRegion refcounts may be leaked, and the non-RAM bounce buffer can
be leaked.

Take a copy of the elem->in/out_sg arrays so that the originals are
preserved. The iov_discard_undo() API could be used instead (with better
performance) but requires careful auditing of the code, so do the simple
thing instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Message-Id: <20200917094455.822379-4-stefanha@redhat.com>
---
hw/virtio/virtio-crypto.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
16
17
diff --git a/include/io/channel.h b/include/io/channel.h
23
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/virtio/virtio-crypto.c
19
--- a/include/io/channel.h
25
+++ b/hw/virtio/virtio-crypto.c
20
+++ b/include/io/channel.h
26
@@ -XXX,XX +XXX,XX @@ static void virtio_crypto_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
21
@@ -XXX,XX +XXX,XX @@ void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
27
size_t s;
22
IOHandler *io_write,
28
23
void *opaque);
29
for (;;) {
24
30
+ g_autofree struct iovec *out_iov_copy = NULL;
25
+/**
31
+
26
+ * qio_channel_readv_full_all_eof:
32
elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
27
+ * @ioc: the channel object
33
if (!elem) {
28
+ * @iov: the array of memory regions to read data to
34
break;
29
+ * @niov: the length of the @iov array
35
@@ -XXX,XX +XXX,XX @@ static void virtio_crypto_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
30
+ * @fds: an array of file handles to read
31
+ * @nfds: number of file handles in @fds
32
+ * @errp: pointer to a NULL-initialized error object
33
+ *
34
+ *
35
+ * Performs same function as qio_channel_readv_all_eof.
36
+ * Additionally, attempts to read file descriptors shared
37
+ * over the channel. The function will wait for all
38
+ * requested data to be read, yielding from the current
39
+ * coroutine if required. data refers to both file
40
+ * descriptors and the iovs.
41
+ *
42
+ * Returns: 1 if all bytes were read, 0 if end-of-file
43
+ * occurs without data, or -1 on error
44
+ */
45
+
46
+int qio_channel_readv_full_all_eof(QIOChannel *ioc,
47
+ const struct iovec *iov,
48
+ size_t niov,
49
+ int **fds, size_t *nfds,
50
+ Error **errp);
51
+
52
+/**
53
+ * qio_channel_readv_full_all:
54
+ * @ioc: the channel object
55
+ * @iov: the array of memory regions to read data to
56
+ * @niov: the length of the @iov array
57
+ * @fds: an array of file handles to read
58
+ * @nfds: number of file handles in @fds
59
+ * @errp: pointer to a NULL-initialized error object
60
+ *
61
+ *
62
+ * Performs same function as qio_channel_readv_all_eof.
63
+ * Additionally, attempts to read file descriptors shared
64
+ * over the channel. The function will wait for all
65
+ * requested data to be read, yielding from the current
66
+ * coroutine if required. data refers to both file
67
+ * descriptors and the iovs.
68
+ *
69
+ * Returns: 0 if all bytes were read, or -1 on error
70
+ */
71
+
72
+int qio_channel_readv_full_all(QIOChannel *ioc,
73
+ const struct iovec *iov,
74
+ size_t niov,
75
+ int **fds, size_t *nfds,
76
+ Error **errp);
77
+
78
/**
79
* qio_channel_writev_full_all:
80
* @ioc: the channel object
81
diff --git a/io/channel.c b/io/channel.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/io/channel.c
84
+++ b/io/channel.c
85
@@ -XXX,XX +XXX,XX @@ int qio_channel_readv_all_eof(QIOChannel *ioc,
86
const struct iovec *iov,
87
size_t niov,
88
Error **errp)
89
+{
90
+ return qio_channel_readv_full_all_eof(ioc, iov, niov, NULL, NULL, errp);
91
+}
92
+
93
+int qio_channel_readv_all(QIOChannel *ioc,
94
+ const struct iovec *iov,
95
+ size_t niov,
96
+ Error **errp)
97
+{
98
+ return qio_channel_readv_full_all(ioc, iov, niov, NULL, NULL, errp);
99
+}
100
+
101
+int qio_channel_readv_full_all_eof(QIOChannel *ioc,
102
+ const struct iovec *iov,
103
+ size_t niov,
104
+ int **fds, size_t *nfds,
105
+ Error **errp)
106
{
107
int ret = -1;
108
struct iovec *local_iov = g_new(struct iovec, niov);
109
struct iovec *local_iov_head = local_iov;
110
unsigned int nlocal_iov = niov;
111
+ int **local_fds = fds;
112
+ size_t *local_nfds = nfds;
113
bool partial = false;
114
115
+ if (nfds) {
116
+ *nfds = 0;
117
+ }
118
+
119
+ if (fds) {
120
+ *fds = NULL;
121
+ }
122
+
123
nlocal_iov = iov_copy(local_iov, nlocal_iov,
124
iov, niov,
125
0, iov_size(iov, niov));
126
127
- while (nlocal_iov > 0) {
128
+ while ((nlocal_iov > 0) || local_fds) {
129
ssize_t len;
130
- len = qio_channel_readv(ioc, local_iov, nlocal_iov, errp);
131
+ len = qio_channel_readv_full(ioc, local_iov, nlocal_iov, local_fds,
132
+ local_nfds, errp);
133
if (len == QIO_CHANNEL_ERR_BLOCK) {
134
if (qemu_in_coroutine()) {
135
qio_channel_yield(ioc, G_IO_IN);
136
@@ -XXX,XX +XXX,XX @@ int qio_channel_readv_all_eof(QIOChannel *ioc,
137
qio_channel_wait(ioc, G_IO_IN);
138
}
139
continue;
140
- } else if (len < 0) {
141
- goto cleanup;
142
- } else if (len == 0) {
143
- if (partial) {
144
- error_setg(errp,
145
- "Unexpected end-of-file before all bytes were read");
146
- } else {
147
+ }
148
+
149
+ if (len == 0) {
150
+ if (local_nfds && *local_nfds) {
151
+ /*
152
+ * Got some FDs, but no data yet. This isn't an EOF
153
+ * scenario (yet), so carry on to try to read data
154
+ * on next loop iteration
155
+ */
156
+ goto next_iter;
157
+ } else if (!partial) {
158
+ /* No fds and no data - EOF before any data read */
159
ret = 0;
160
+ goto cleanup;
161
+ } else {
162
+ len = -1;
163
+ error_setg(errp,
164
+ "Unexpected end-of-file before all data were read");
165
+ /* Fallthrough into len < 0 handling */
166
+ }
167
+ }
168
+
169
+ if (len < 0) {
170
+ /* Close any FDs we previously received */
171
+ if (nfds && fds) {
172
+ size_t i;
173
+ for (i = 0; i < (*nfds); i++) {
174
+ close((*fds)[i]);
175
+ }
176
+ g_free(*fds);
177
+ *fds = NULL;
178
+ *nfds = 0;
179
}
180
goto cleanup;
36
}
181
}
37
182
38
out_num = elem->out_num;
183
+ if (nlocal_iov) {
39
- out_iov = elem->out_sg;
184
+ iov_discard_front(&local_iov, &nlocal_iov, len);
40
+ out_iov_copy = g_memdup(elem->out_sg, sizeof(out_iov[0]) * out_num);
185
+ }
41
+ out_iov = out_iov_copy;
186
+
42
+
187
+next_iter:
43
in_num = elem->in_num;
188
partial = true;
44
in_iov = elem->in_sg;
189
- iov_discard_front(&local_iov, &nlocal_iov, len);
45
+
190
+ local_fds = NULL;
46
if (unlikely(iov_to_buf(out_iov, out_num, 0, &ctrl, sizeof(ctrl))
191
+ local_nfds = NULL;
47
!= sizeof(ctrl))) {
48
virtio_error(vdev, "virtio-crypto request ctrl_hdr too short");
49
@@ -XXX,XX +XXX,XX @@ virtio_crypto_handle_request(VirtIOCryptoReq *request)
50
int queue_index = virtio_crypto_vq2q(virtio_get_queue_index(request->vq));
51
struct virtio_crypto_op_data_req req;
52
int ret;
53
+ g_autofree struct iovec *in_iov_copy = NULL;
54
+ g_autofree struct iovec *out_iov_copy = NULL;
55
struct iovec *in_iov;
56
struct iovec *out_iov;
57
unsigned in_num;
58
@@ -XXX,XX +XXX,XX @@ virtio_crypto_handle_request(VirtIOCryptoReq *request)
59
}
192
}
60
193
61
out_num = elem->out_num;
194
ret = 1;
62
- out_iov = elem->out_sg;
195
@@ -XXX,XX +XXX,XX @@ int qio_channel_readv_all_eof(QIOChannel *ioc,
63
+ out_iov_copy = g_memdup(elem->out_sg, sizeof(out_iov[0]) * out_num);
196
return ret;
64
+ out_iov = out_iov_copy;
197
}
65
+
198
66
in_num = elem->in_num;
199
-int qio_channel_readv_all(QIOChannel *ioc,
67
- in_iov = elem->in_sg;
200
- const struct iovec *iov,
68
+ in_iov_copy = g_memdup(elem->in_sg, sizeof(in_iov[0]) * in_num);
201
- size_t niov,
69
+ in_iov = in_iov_copy;
202
- Error **errp)
70
+
203
+int qio_channel_readv_full_all(QIOChannel *ioc,
71
if (unlikely(iov_to_buf(out_iov, out_num, 0, &req, sizeof(req))
204
+ const struct iovec *iov,
72
!= sizeof(req))) {
205
+ size_t niov,
73
virtio_error(vdev, "virtio-crypto request outhdr too short");
206
+ int **fds, size_t *nfds,
207
+ Error **errp)
208
{
209
- int ret = qio_channel_readv_all_eof(ioc, iov, niov, errp);
210
+ int ret = qio_channel_readv_full_all_eof(ioc, iov, niov, fds, nfds, errp);
211
212
if (ret == 0) {
213
- ret = -1;
214
- error_setg(errp,
215
- "Unexpected end-of-file before all bytes were read");
216
- } else if (ret == 1) {
217
- ret = 0;
218
+ error_prepend(errp,
219
+ "Unexpected end-of-file before all data were read.");
220
+ return -1;
221
}
222
+ if (ret == 1) {
223
+ return 0;
224
+ }
225
+
226
return ret;
227
}
228
74
--
229
--
75
2.26.2
230
2.29.2
76
231
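As a usage sketch for the new read helpers (not part of the patch; buf, len and the established QIOChannel *ioc are assumptions for the example), note that the caller becomes the owner of any file descriptors returned through the fds/nfds out-parameters:

    /* Sketch: read a fixed-size buffer and collect any passed fds. */
    static int recv_with_fds(QIOChannel *ioc, void *buf, size_t len)
    {
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        int *fds = NULL;
        size_t nfds = 0;
        Error *local_err = NULL;

        if (qio_channel_readv_full_all(ioc, &iov, 1, &fds, &nfds,
                                       &local_err) < 0) {
            error_report_err(local_err);
            return -1;
        }

        /* The caller owns fds[]: use the descriptors, then release them. */
        for (size_t i = 0; i < nfds; i++) {
            close(fds[i]);
        }
        g_free(fds);
        return 0;
    }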
1
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
1
2
3
Defines MPQemuMsg, which is the message that is sent to the remote
4
process. This message is sent over QIOChannel and is used to
5
command the remote process to perform various tasks.
6
Define transmission functions used by proxy and by remote.
7
8
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
9
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
10
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
11
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Message-id: 56ca8bcf95195b2b195b08f6b9565b6d7410bce5.1611938319.git.jag.raman@oracle.com
13
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
14
---
15
MAINTAINERS | 2 +
16
meson.build | 1 +
17
hw/remote/trace.h | 1 +
18
include/hw/remote/mpqemu-link.h | 63 ++++++++++
19
include/sysemu/iothread.h | 6 +
20
hw/remote/mpqemu-link.c | 205 ++++++++++++++++++++++++++++++++
21
iothread.c | 6 +
22
hw/remote/meson.build | 1 +
23
hw/remote/trace-events | 4 +
24
9 files changed, 289 insertions(+)
25
create mode 100644 hw/remote/trace.h
26
create mode 100644 include/hw/remote/mpqemu-link.h
27
create mode 100644 hw/remote/mpqemu-link.c
28
create mode 100644 hw/remote/trace-events
29
30
diff --git a/MAINTAINERS b/MAINTAINERS
31
index XXXXXXX..XXXXXXX 100644
32
--- a/MAINTAINERS
33
+++ b/MAINTAINERS
34
@@ -XXX,XX +XXX,XX @@ F: hw/pci-host/remote.c
35
F: include/hw/pci-host/remote.h
36
F: hw/remote/machine.c
37
F: include/hw/remote/machine.h
38
+F: hw/remote/mpqemu-link.c
39
+F: include/hw/remote/mpqemu-link.h
40
41
Build and test automation
42
-------------------------
43
diff --git a/meson.build b/meson.build
44
index XXXXXXX..XXXXXXX 100644
45
--- a/meson.build
46
+++ b/meson.build
47
@@ -XXX,XX +XXX,XX @@ if have_system
48
'net',
49
'softmmu',
50
'ui',
51
+ 'hw/remote',
52
]
53
endif
54
trace_events_subdirs += [
55
diff --git a/hw/remote/trace.h b/hw/remote/trace.h
56
new file mode 100644
57
index XXXXXXX..XXXXXXX
58
--- /dev/null
59
+++ b/hw/remote/trace.h
60
@@ -0,0 +1 @@
61
+#include "trace/trace-hw_remote.h"
62
diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
63
new file mode 100644
64
index XXXXXXX..XXXXXXX
65
--- /dev/null
66
+++ b/include/hw/remote/mpqemu-link.h
67
@@ -XXX,XX +XXX,XX @@
68
+/*
69
+ * Communication channel between QEMU and remote device process
70
+ *
71
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
72
+ *
73
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
74
+ * See the COPYING file in the top-level directory.
75
+ *
76
+ */
77
+
78
+#ifndef MPQEMU_LINK_H
79
+#define MPQEMU_LINK_H
80
+
81
+#include "qom/object.h"
82
+#include "qemu/thread.h"
83
+#include "io/channel.h"
84
+
85
+#define REMOTE_MAX_FDS 8
86
+
87
+#define MPQEMU_MSG_HDR_SIZE offsetof(MPQemuMsg, data.u64)
88
+
89
+/**
90
+ * MPQemuCmd:
91
+ *
92
+ * MPQemuCmd enum type to specify the command to be executed on the remote
93
+ * device.
94
+ *
95
+ * This uses a private protocol between QEMU and the remote process. vfio-user
96
+ * protocol would supersede this in the future.
97
+ *
98
+ */
99
+typedef enum {
100
+ MPQEMU_CMD_MAX,
101
+} MPQemuCmd;
102
+
103
+/**
104
+ * MPQemuMsg:
105
+ * @cmd: The remote command
106
+ * @size: Size of the data to be shared
107
+ * @data: Structured data
108
+ * @fds: File descriptors to be shared with remote device
109
+ *
110
+ * MPQemuMsg Format of the message sent to the remote device from QEMU.
111
+ *
112
+ */
113
+typedef struct {
114
+ int cmd;
115
+ size_t size;
116
+
117
+ union {
118
+ uint64_t u64;
119
+ } data;
120
+
121
+ int fds[REMOTE_MAX_FDS];
122
+ int num_fds;
123
+} MPQemuMsg;
124
+
125
+bool mpqemu_msg_send(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
126
+bool mpqemu_msg_recv(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
127
+
128
+bool mpqemu_msg_valid(MPQemuMsg *msg);
129
+
130
+#endif
131
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
132
index XXXXXXX..XXXXXXX 100644
133
--- a/include/sysemu/iothread.h
134
+++ b/include/sysemu/iothread.h
135
@@ -XXX,XX +XXX,XX @@ IOThread *iothread_create(const char *id, Error **errp);
136
void iothread_stop(IOThread *iothread);
137
void iothread_destroy(IOThread *iothread);
138
139
+/*
140
+ * Returns true if executing within IOThread context,
141
+ * false otherwise.
142
+ */
143
+bool qemu_in_iothread(void);
144
+
145
#endif /* IOTHREAD_H */
146
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
147
new file mode 100644
148
index XXXXXXX..XXXXXXX
149
--- /dev/null
150
+++ b/hw/remote/mpqemu-link.c
151
@@ -XXX,XX +XXX,XX @@
152
+/*
153
+ * Communication channel between QEMU and remote device process
154
+ *
155
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
156
+ *
157
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
158
+ * See the COPYING file in the top-level directory.
159
+ *
160
+ */
161
+
162
+#include "qemu/osdep.h"
163
+#include "qemu-common.h"
164
+
165
+#include "qemu/module.h"
166
+#include "hw/remote/mpqemu-link.h"
167
+#include "qapi/error.h"
168
+#include "qemu/iov.h"
169
+#include "qemu/error-report.h"
170
+#include "qemu/main-loop.h"
171
+#include "io/channel.h"
172
+#include "sysemu/iothread.h"
173
+#include "trace.h"
174
+
175
+/*
176
+ * Send message over the ioc QIOChannel.
177
+ * This function is safe to call from:
178
+ * - main loop in co-routine context. Will block the main loop if not in
179
+ * co-routine context;
180
+ * - vCPU thread with no co-routine context and if the channel is not part
181
+ * of the main loop handling;
182
+ * - IOThread within co-routine context, outside of co-routine context
183
+ * will block IOThread;
184
+ * Returns true if no errors were encountered, false otherwise.
185
+ */
186
+bool mpqemu_msg_send(MPQemuMsg *msg, QIOChannel *ioc, Error **errp)
187
+{
188
+ ERRP_GUARD();
189
+ bool iolock = qemu_mutex_iothread_locked();
190
+ bool iothread = qemu_in_iothread();
191
+ struct iovec send[2] = {0};
192
+ int *fds = NULL;
193
+ size_t nfds = 0;
194
+ bool ret = false;
195
+
196
+ send[0].iov_base = msg;
197
+ send[0].iov_len = MPQEMU_MSG_HDR_SIZE;
198
+
199
+ send[1].iov_base = (void *)&msg->data;
200
+ send[1].iov_len = msg->size;
201
+
202
+ if (msg->num_fds) {
203
+ nfds = msg->num_fds;
204
+ fds = msg->fds;
205
+ }
206
+
207
+ /*
208
+ * Don't use in IOThread out of co-routine context as
209
+ * it will block IOThread.
210
+ */
211
+ assert(qemu_in_coroutine() || !iothread);
212
+
213
+ /*
214
+ * Skip unlocking/locking iothread lock when the IOThread is running
215
+ * in co-routine context. Co-routine context is asserted above
216
+ * for IOThread case.
217
+ * Also skip lock handling while in a co-routine in the main context.
218
+ */
219
+ if (iolock && !iothread && !qemu_in_coroutine()) {
220
+ qemu_mutex_unlock_iothread();
221
+ }
222
+
223
+ if (!qio_channel_writev_full_all(ioc, send, G_N_ELEMENTS(send),
224
+ fds, nfds, errp)) {
225
+ ret = true;
226
+ } else {
227
+ trace_mpqemu_send_io_error(msg->cmd, msg->size, nfds);
228
+ }
229
+
230
+ if (iolock && !iothread && !qemu_in_coroutine()) {
231
+ /* See above comment why skip locking here. */
232
+ qemu_mutex_lock_iothread();
233
+ }
234
+
235
+ return ret;
236
+}
237
+
238
+/*
239
+ * Read message from the ioc QIOChannel.
240
+ * This function is safe to call from:
241
+ * - From main loop in co-routine context. Will block the main loop if not in
242
+ * co-routine context;
243
+ * - From vCPU thread with no co-routine context and if the channel is not part
244
+ * of the main loop handling;
245
+ * - From IOThread within co-routine context, outside of co-routine context
246
+ * will block IOThread;
247
+ */
248
+static ssize_t mpqemu_read(QIOChannel *ioc, void *buf, size_t len, int **fds,
249
+ size_t *nfds, Error **errp)
250
+{
251
+ ERRP_GUARD();
252
+ struct iovec iov = { .iov_base = buf, .iov_len = len };
253
+ bool iolock = qemu_mutex_iothread_locked();
254
+ bool iothread = qemu_in_iothread();
255
+ int ret = -1;
256
+
257
+ /*
258
+ * Dont use in IOThread out of co-routine context as
259
+ * it will block IOThread.
260
+ */
261
+ assert(qemu_in_coroutine() || !iothread);
262
+
263
+ if (iolock && !iothread && !qemu_in_coroutine()) {
264
+ qemu_mutex_unlock_iothread();
265
+ }
266
+
267
+ ret = qio_channel_readv_full_all_eof(ioc, &iov, 1, fds, nfds, errp);
268
+
269
+ if (iolock && !iothread && !qemu_in_coroutine()) {
270
+ qemu_mutex_lock_iothread();
271
+ }
272
+
273
+ return (ret <= 0) ? ret : iov.iov_len;
274
+}
275
+
276
+bool mpqemu_msg_recv(MPQemuMsg *msg, QIOChannel *ioc, Error **errp)
277
+{
278
+ ERRP_GUARD();
279
+ g_autofree int *fds = NULL;
280
+ size_t nfds = 0;
281
+ ssize_t len;
282
+ bool ret = false;
283
+
284
+ len = mpqemu_read(ioc, msg, MPQEMU_MSG_HDR_SIZE, &fds, &nfds, errp);
285
+ if (len <= 0) {
286
+ goto fail;
287
+ } else if (len != MPQEMU_MSG_HDR_SIZE) {
288
+ error_setg(errp, "Message header corrupted");
289
+ goto fail;
290
+ }
291
+
292
+ if (msg->size > sizeof(msg->data)) {
293
+ error_setg(errp, "Invalid size for message");
294
+ goto fail;
295
+ }
296
+
297
+ if (!msg->size) {
298
+ goto copy_fds;
299
+ }
300
+
301
+ len = mpqemu_read(ioc, &msg->data, msg->size, NULL, NULL, errp);
302
+ if (len <= 0) {
303
+ goto fail;
304
+ }
305
+ if (len != msg->size) {
306
+ error_setg(errp, "Unable to read full message");
307
+ goto fail;
308
+ }
309
+
310
+copy_fds:
311
+ msg->num_fds = nfds;
312
+ if (nfds > G_N_ELEMENTS(msg->fds)) {
313
+ error_setg(errp,
314
+ "Overflow error: received %zu fds, more than max of %d fds",
315
+ nfds, REMOTE_MAX_FDS);
316
+ goto fail;
317
+ }
318
+ if (nfds) {
319
+ memcpy(msg->fds, fds, nfds * sizeof(int));
320
+ }
321
+
322
+ ret = true;
323
+
324
+fail:
325
+ if (*errp) {
326
+ trace_mpqemu_recv_io_error(msg->cmd, msg->size, nfds);
327
+ }
328
+ while (*errp && nfds) {
329
+ close(fds[nfds - 1]);
330
+ nfds--;
331
+ }
332
+
333
+ return ret;
334
+}
335
+
336
+bool mpqemu_msg_valid(MPQemuMsg *msg)
337
+{
338
+ if (msg->cmd >= MPQEMU_CMD_MAX || msg->cmd < 0) {
339
+ return false;
340
+ }
341
+
342
+ /* Verify FDs. */
343
+ if (msg->num_fds >= REMOTE_MAX_FDS) {
344
+ return false;
345
+ }
346
+
347
+ if (msg->num_fds > 0) {
348
+ for (int i = 0; i < msg->num_fds; i++) {
349
+ if (fcntl(msg->fds[i], F_GETFL) == -1) {
350
+ return false;
351
+ }
352
+ }
353
+ }
354
+
355
+ return true;
356
+}
357
diff --git a/iothread.c b/iothread.c
358
index XXXXXXX..XXXXXXX 100644
359
--- a/iothread.c
360
+++ b/iothread.c
361
@@ -XXX,XX +XXX,XX @@ IOThread *iothread_by_id(const char *id)
362
{
363
return IOTHREAD(object_resolve_path_type(id, TYPE_IOTHREAD, NULL));
364
}
365
+
366
+bool qemu_in_iothread(void)
367
+{
368
+ return qemu_get_current_aio_context() != qemu_get_aio_context();
370
+}
371
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
372
index XXXXXXX..XXXXXXX 100644
373
--- a/hw/remote/meson.build
374
+++ b/hw/remote/meson.build
375
@@ -XXX,XX +XXX,XX @@
376
remote_ss = ss.source_set()
377
378
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
379
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
380
381
softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
382
diff --git a/hw/remote/trace-events b/hw/remote/trace-events
383
new file mode 100644
384
index XXXXXXX..XXXXXXX
385
--- /dev/null
386
+++ b/hw/remote/trace-events
387
@@ -XXX,XX +XXX,XX @@
388
+# multi-process trace events
389
+
390
+mpqemu_send_io_error(int cmd, int size, int nfds) "send command %d size %d, %d file descriptors to remote process"
391
+mpqemu_recv_io_error(int cmd, int size, int nfds) "failed to receive %d size %d, %d file descriptors to remote process"
392
--
393
2.29.2
394
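To make the message format concrete, a sender might fill in an MPQemuMsg as in the sketch below (illustrative only; the command value is a placeholder since concrete commands are introduced by later patches, and ioc is an established QIOChannel):

    /* Sketch: transmit a message carrying a u64 payload and no fds. */
    static bool send_u64(QIOChannel *ioc, int cmd, uint64_t value,
                         Error **errp)
    {
        MPQemuMsg msg = {
            .cmd = cmd,
            .size = sizeof(msg.data.u64),
            .data.u64 = value,
            .num_fds = 0,
        };

        /* Header and payload are sent as separate iovecs internally. */
        return mpqemu_msg_send(&msg, ioc, errp);
    }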
1
From: Jagannathan Raman <jag.raman@oracle.com>
1
2
3
Initializes the message handler function in the remote process. It is
4
called whenever there is an event pending on the QIOChannel that registered
5
this function.
6
7
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
8
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
9
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Message-id: 99d38d8b93753a6409ac2340e858858cda59ab1b.1611938319.git.jag.raman@oracle.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
13
---
14
MAINTAINERS | 1 +
15
include/hw/remote/machine.h | 9 ++++++
16
hw/remote/message.c | 57 +++++++++++++++++++++++++++++++++++++
17
hw/remote/meson.build | 1 +
18
4 files changed, 68 insertions(+)
19
create mode 100644 hw/remote/message.c
20
21
diff --git a/MAINTAINERS b/MAINTAINERS
22
index XXXXXXX..XXXXXXX 100644
23
--- a/MAINTAINERS
24
+++ b/MAINTAINERS
25
@@ -XXX,XX +XXX,XX @@ F: hw/remote/machine.c
26
F: include/hw/remote/machine.h
27
F: hw/remote/mpqemu-link.c
28
F: include/hw/remote/mpqemu-link.h
29
+F: hw/remote/message.c
30
31
Build and test automation
32
-------------------------
33
diff --git a/include/hw/remote/machine.h b/include/hw/remote/machine.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/include/hw/remote/machine.h
36
+++ b/include/hw/remote/machine.h
37
@@ -XXX,XX +XXX,XX @@
38
#include "qom/object.h"
39
#include "hw/boards.h"
40
#include "hw/pci-host/remote.h"
41
+#include "io/channel.h"
42
43
struct RemoteMachineState {
44
MachineState parent_obj;
45
@@ -XXX,XX +XXX,XX @@ struct RemoteMachineState {
46
RemotePCIHost *host;
47
};
48
49
+/* Used to pass the device and ioc to the co-routine. */
50
+typedef struct RemoteCommDev {
51
+ PCIDevice *dev;
52
+ QIOChannel *ioc;
53
+} RemoteCommDev;
54
+
55
#define TYPE_REMOTE_MACHINE "x-remote-machine"
56
OBJECT_DECLARE_SIMPLE_TYPE(RemoteMachineState, REMOTE_MACHINE)
57
58
+void coroutine_fn mpqemu_remote_msg_loop_co(void *data);
59
+
60
#endif
61
diff --git a/hw/remote/message.c b/hw/remote/message.c
62
new file mode 100644
63
index XXXXXXX..XXXXXXX
64
--- /dev/null
65
+++ b/hw/remote/message.c
66
@@ -XXX,XX +XXX,XX @@
67
+/*
68
+ * Copyright © 2020, 2021 Oracle and/or its affiliates.
69
+ *
70
+ * This work is licensed under the terms of the GNU GPL-v2, version 2 or later.
71
+ *
72
+ * See the COPYING file in the top-level directory.
73
+ *
74
+ */
75
+
76
+#include "qemu/osdep.h"
77
+#include "qemu-common.h"
78
+
79
+#include "hw/remote/machine.h"
80
+#include "io/channel.h"
81
+#include "hw/remote/mpqemu-link.h"
82
+#include "qapi/error.h"
83
+#include "sysemu/runstate.h"
84
+
85
+void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
86
+{
87
+ g_autofree RemoteCommDev *com = (RemoteCommDev *)data;
88
+ PCIDevice *pci_dev = NULL;
89
+ Error *local_err = NULL;
90
+
91
+ assert(com->ioc);
92
+
93
+ pci_dev = com->dev;
94
+ for (; !local_err;) {
95
+ MPQemuMsg msg = {0};
96
+
97
+ if (!mpqemu_msg_recv(&msg, com->ioc, &local_err)) {
98
+ break;
99
+ }
100
+
101
+ if (!mpqemu_msg_valid(&msg)) {
102
+ error_setg(&local_err, "Received invalid message from proxy"
103
+ "in remote process pid="FMT_pid"",
104
+ getpid());
105
+ break;
106
+ }
107
+
108
+ switch (msg.cmd) {
109
+ default:
110
+ error_setg(&local_err,
111
+ "Unknown command (%d) received for device %s"
112
+ " (pid="FMT_pid")",
113
+ msg.cmd, DEVICE(pci_dev)->id, getpid());
114
+ }
115
+ }
116
+
117
+ if (local_err) {
118
+ error_report_err(local_err);
119
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
120
+ } else {
121
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
122
+ }
123
+}
124
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
125
index XXXXXXX..XXXXXXX 100644
126
--- a/hw/remote/meson.build
127
+++ b/hw/remote/meson.build
128
@@ -XXX,XX +XXX,XX @@ remote_ss = ss.source_set()
129
130
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
131
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
132
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('message.c'))
133
134
softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
135
--
136
2.29.2
137
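For orientation, the loop is meant to be driven from a coroutine; a sketch of how a caller could start it for one device/channel pair (mirroring what a later patch in this series does) is:

    /* Sketch: hand a device and its channel to the remote message loop. */
    static void start_msg_loop(PCIDevice *pci_dev, QIOChannel *ioc)
    {
        RemoteCommDev *comdev = g_new0(RemoteCommDev, 1);
        Coroutine *co;

        *comdev = (RemoteCommDev) {
            .ioc = ioc,
            .dev = pci_dev,
        };

        /* mpqemu_remote_msg_loop_co() frees comdev when it returns. */
        co = qemu_coroutine_create(mpqemu_remote_msg_loop_co, comdev);
        qemu_coroutine_enter(co);
    }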
1
1
From: Jagannathan Raman <jag.raman@oracle.com>
2
3
Associate the file descriptor for a PCIDevice in remote process with
4
DeviceState object.
5
6
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
7
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
8
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Message-id: f405a2ed5d7518b87bea7c59cfdf334d67e5ee51.1611938319.git.jag.raman@oracle.com
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
13
MAINTAINERS | 1 +
14
hw/remote/remote-obj.c | 203 +++++++++++++++++++++++++++++++++++++++++
15
hw/remote/meson.build | 1 +
16
3 files changed, 205 insertions(+)
17
create mode 100644 hw/remote/remote-obj.c
18
19
diff --git a/MAINTAINERS b/MAINTAINERS
20
index XXXXXXX..XXXXXXX 100644
21
--- a/MAINTAINERS
22
+++ b/MAINTAINERS
23
@@ -XXX,XX +XXX,XX @@ F: include/hw/remote/machine.h
24
F: hw/remote/mpqemu-link.c
25
F: include/hw/remote/mpqemu-link.h
26
F: hw/remote/message.c
27
+F: hw/remote/remote-obj.c
28
29
Build and test automation
30
-------------------------
31
diff --git a/hw/remote/remote-obj.c b/hw/remote/remote-obj.c
32
new file mode 100644
33
index XXXXXXX..XXXXXXX
34
--- /dev/null
35
+++ b/hw/remote/remote-obj.c
36
@@ -XXX,XX +XXX,XX @@
37
+/*
38
+ * Copyright © 2020, 2021 Oracle and/or its affiliates.
39
+ *
40
+ * This work is licensed under the terms of the GNU GPL-v2, version 2 or later.
41
+ *
42
+ * See the COPYING file in the top-level directory.
43
+ *
44
+ */
45
+
46
+#include "qemu/osdep.h"
47
+#include "qemu-common.h"
48
+
49
+#include "qemu/error-report.h"
50
+#include "qemu/notify.h"
51
+#include "qom/object_interfaces.h"
52
+#include "hw/qdev-core.h"
53
+#include "io/channel.h"
54
+#include "hw/qdev-core.h"
55
+#include "hw/remote/machine.h"
56
+#include "io/channel-util.h"
57
+#include "qapi/error.h"
58
+#include "sysemu/sysemu.h"
59
+#include "hw/pci/pci.h"
60
+#include "qemu/sockets.h"
61
+#include "monitor/monitor.h"
62
+
63
+#define TYPE_REMOTE_OBJECT "x-remote-object"
64
+OBJECT_DECLARE_TYPE(RemoteObject, RemoteObjectClass, REMOTE_OBJECT)
65
+
66
+struct RemoteObjectClass {
67
+ ObjectClass parent_class;
68
+
69
+ unsigned int nr_devs;
70
+ unsigned int max_devs;
71
+};
72
+
73
+struct RemoteObject {
74
+ /* private */
75
+ Object parent;
76
+
77
+ Notifier machine_done;
78
+
79
+ int32_t fd;
80
+ char *devid;
81
+
82
+ QIOChannel *ioc;
83
+
84
+ DeviceState *dev;
85
+ DeviceListener listener;
86
+};
87
+
88
+static void remote_object_set_fd(Object *obj, const char *str, Error **errp)
89
+{
90
+ RemoteObject *o = REMOTE_OBJECT(obj);
91
+ int fd = -1;
92
+
93
+ fd = monitor_fd_param(monitor_cur(), str, errp);
94
+ if (fd == -1) {
95
+ error_prepend(errp, "Could not parse remote object fd %s:", str);
96
+ return;
97
+ }
98
+
99
+ if (!fd_is_socket(fd)) {
100
+ error_setg(errp, "File descriptor '%s' is not a socket", str);
101
+ close(fd);
102
+ return;
103
+ }
104
+
105
+ o->fd = fd;
106
+}
107
+
108
+static void remote_object_set_devid(Object *obj, const char *str, Error **errp)
109
+{
110
+ RemoteObject *o = REMOTE_OBJECT(obj);
111
+
112
+ g_free(o->devid);
113
+
114
+ o->devid = g_strdup(str);
115
+}
116
+
117
+static void remote_object_unrealize_listener(DeviceListener *listener,
118
+ DeviceState *dev)
119
+{
120
+ RemoteObject *o = container_of(listener, RemoteObject, listener);
121
+
122
+ if (o->dev == dev) {
123
+ object_unref(OBJECT(o));
124
+ }
125
+}
126
+
127
+static void remote_object_machine_done(Notifier *notifier, void *data)
128
+{
129
+ RemoteObject *o = container_of(notifier, RemoteObject, machine_done);
130
+ DeviceState *dev = NULL;
131
+ QIOChannel *ioc = NULL;
132
+ Coroutine *co = NULL;
133
+ RemoteCommDev *comdev = NULL;
134
+ Error *err = NULL;
135
+
136
+ dev = qdev_find_recursive(sysbus_get_default(), o->devid);
137
+ if (!dev || !object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
138
+ error_report("%s is not a PCI device", o->devid);
139
+ return;
140
+ }
141
+
142
+ ioc = qio_channel_new_fd(o->fd, &err);
143
+ if (!ioc) {
144
+ error_report_err(err);
145
+ return;
146
+ }
147
+ qio_channel_set_blocking(ioc, false, NULL);
148
+
149
+ o->dev = dev;
150
+
151
+ o->listener.unrealize = remote_object_unrealize_listener;
152
+ device_listener_register(&o->listener);
153
+
154
+ /* co-routine should free this. */
155
+ comdev = g_new0(RemoteCommDev, 1);
156
+ *comdev = (RemoteCommDev) {
157
+ .ioc = ioc,
158
+ .dev = PCI_DEVICE(dev),
159
+ };
160
+
161
+ co = qemu_coroutine_create(mpqemu_remote_msg_loop_co, comdev);
162
+ qemu_coroutine_enter(co);
163
+}
164
+
165
+static void remote_object_init(Object *obj)
166
+{
167
+ RemoteObjectClass *k = REMOTE_OBJECT_GET_CLASS(obj);
168
+ RemoteObject *o = REMOTE_OBJECT(obj);
169
+
170
+ if (k->nr_devs >= k->max_devs) {
171
+ error_report("Reached maximum number of devices: %u", k->max_devs);
172
+ return;
173
+ }
174
+
175
+ o->ioc = NULL;
176
+ o->fd = -1;
177
+ o->devid = NULL;
178
+
179
+ k->nr_devs++;
180
+
181
+ o->machine_done.notify = remote_object_machine_done;
182
+ qemu_add_machine_init_done_notifier(&o->machine_done);
183
+}
184
+
185
+static void remote_object_finalize(Object *obj)
186
+{
187
+ RemoteObjectClass *k = REMOTE_OBJECT_GET_CLASS(obj);
188
+ RemoteObject *o = REMOTE_OBJECT(obj);
189
+
190
+ device_listener_unregister(&o->listener);
191
+
192
+ if (o->ioc) {
193
+ qio_channel_shutdown(o->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
194
+ qio_channel_close(o->ioc, NULL);
195
+ }
196
+
197
+ object_unref(OBJECT(o->ioc));
198
+
199
+ k->nr_devs--;
200
+ g_free(o->devid);
201
+}
202
+
203
+static void remote_object_class_init(ObjectClass *klass, void *data)
204
+{
205
+ RemoteObjectClass *k = REMOTE_OBJECT_CLASS(klass);
206
+
207
+ /*
208
+ * Limit the number of supported devices to 1. This prevents devices
209
+ * from one VM from accessing the RAM of another VM, until we
210
+ * start using separate address spaces for individual devices.
211
+ */
212
+ k->max_devs = 1;
213
+ k->nr_devs = 0;
214
+
215
+ object_class_property_add_str(klass, "fd", NULL, remote_object_set_fd);
216
+ object_class_property_add_str(klass, "devid", NULL,
217
+ remote_object_set_devid);
218
+}
219
+
220
+static const TypeInfo remote_object_info = {
221
+ .name = TYPE_REMOTE_OBJECT,
222
+ .parent = TYPE_OBJECT,
223
+ .instance_size = sizeof(RemoteObject),
224
+ .instance_init = remote_object_init,
225
+ .instance_finalize = remote_object_finalize,
226
+ .class_size = sizeof(RemoteObjectClass),
227
+ .class_init = remote_object_class_init,
228
+ .interfaces = (InterfaceInfo[]) {
229
+ { TYPE_USER_CREATABLE },
230
+ { }
231
+ }
232
+};
233
+
234
+static void register_types(void)
235
+{
236
+ type_register_static(&remote_object_info);
237
+}
238
+
239
+type_init(register_types);
240
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
241
index XXXXXXX..XXXXXXX 100644
242
--- a/hw/remote/meson.build
243
+++ b/hw/remote/meson.build
244
@@ -XXX,XX +XXX,XX @@ remote_ss = ss.source_set()
245
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
246
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
247
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('message.c'))
248
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('remote-obj.c'))
249
250
softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
251
--
252
2.29.2
253
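For context, the remote process is expected to be launched roughly as follows (a sketch based on the usage documented by this series; the machine type, device model, ids and fd number are examples, and fd must name an already-connected socket inherited by the process):

    qemu-system-x86_64 -machine x-remote \
        -device lsi53c895a,id=lsi0 \
        -object x-remote-object,id=robj1,devid=lsi0,fd=4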
From: Jagannathan Raman <jag.raman@oracle.com>

SyncSysMemMsg message format is defined. It is used to send
file descriptors of the RAM regions to remote device.
RAM on the remote device is configured with a set of file descriptors.
Old RAM regions are deleted and new regions, each with an fd, are
added to the RAM.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 7d2d1831d812e85f681e7a8ab99e032cf4704689.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS | 2 +
include/hw/remote/memory.h | 19 ++++++++++
include/hw/remote/mpqemu-link.h | 10 +++++
hw/remote/memory.c | 65 +++++++++++++++++++++++++++++++++
hw/remote/mpqemu-link.c | 11 ++++++
hw/remote/meson.build | 2 +
6 files changed, 109 insertions(+)
create mode 100644 include/hw/remote/memory.h
create mode 100644 hw/remote/memory.c

From the earlier pull request, shown for comparison:

npfd keeps track of how many pollfds are currently being monitored. It
must be reset to 0 when fdmon_poll_wait() returns.

When npfd reaches a threshold we switch to fdmon-epoll because it scales
better.

This patch resets npfd in the case where we switch to fdmon-epoll.
Forgetting to do so results in the following assertion failure:

util/fdmon-poll.c:65: fdmon_poll_wait: Assertion `npfd == 0' failed.

Fixes: 1f050a4690f62a1e7dabc4f44141e9f762c3769f ("aio-posix: extract ppoll(2) and epoll(7) fd monitoring")
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1869952
Message-Id: <20200915120339.702938-2-stefanha@redhat.com>
---
util/fdmon-poll.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index XXXXXXX..XXXXXXX 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -XXX,XX +XXX,XX @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
27
/* epoll(7) is faster above a certain number of fds */
25
28
if (fdmon_epoll_try_upgrade(ctx, npfd)) {
26
diff --git a/MAINTAINERS b/MAINTAINERS
29
+ npfd = 0; /* we won't need pollfds[], reset npfd */
27
index XXXXXXX..XXXXXXX 100644
30
return ctx->fdmon_ops->wait(ctx, ready_list, timeout);
28
--- a/MAINTAINERS
29
+++ b/MAINTAINERS
30
@@ -XXX,XX +XXX,XX @@ F: hw/remote/mpqemu-link.c
31
F: include/hw/remote/mpqemu-link.h
32
F: hw/remote/message.c
33
F: hw/remote/remote-obj.c
34
+F: include/hw/remote/memory.h
35
+F: hw/remote/memory.c
36
37
Build and test automation
38
-------------------------
39
diff --git a/include/hw/remote/memory.h b/include/hw/remote/memory.h
40
new file mode 100644
41
index XXXXXXX..XXXXXXX
42
--- /dev/null
43
+++ b/include/hw/remote/memory.h
44
@@ -XXX,XX +XXX,XX @@
45
+/*
46
+ * Memory manager for remote device
47
+ *
48
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
49
+ *
50
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
51
+ * See the COPYING file in the top-level directory.
52
+ *
53
+ */
54
+
55
+#ifndef REMOTE_MEMORY_H
56
+#define REMOTE_MEMORY_H
57
+
58
+#include "exec/hwaddr.h"
59
+#include "hw/remote/mpqemu-link.h"
60
+
61
+void remote_sysmem_reconfig(MPQemuMsg *msg, Error **errp);
62
+
63
+#endif
64
diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
65
index XXXXXXX..XXXXXXX 100644
66
--- a/include/hw/remote/mpqemu-link.h
67
+++ b/include/hw/remote/mpqemu-link.h
68
@@ -XXX,XX +XXX,XX @@
69
#include "qom/object.h"
70
#include "qemu/thread.h"
71
#include "io/channel.h"
72
+#include "exec/hwaddr.h"
73
74
#define REMOTE_MAX_FDS 8
75
76
@@ -XXX,XX +XXX,XX @@
77
*
78
*/
79
typedef enum {
80
+ MPQEMU_CMD_SYNC_SYSMEM,
81
MPQEMU_CMD_MAX,
82
} MPQemuCmd;
83
84
+typedef struct {
85
+ hwaddr gpas[REMOTE_MAX_FDS];
86
+ uint64_t sizes[REMOTE_MAX_FDS];
87
+ off_t offsets[REMOTE_MAX_FDS];
88
+} SyncSysmemMsg;
89
+
90
/**
91
* MPQemuMsg:
92
* @cmd: The remote command
93
@@ -XXX,XX +XXX,XX @@ typedef enum {
94
* MPQemuMsg Format of the message sent to the remote device from QEMU.
95
*
96
*/
97
+
98
typedef struct {
99
int cmd;
100
size_t size;
101
102
union {
103
uint64_t u64;
104
+ SyncSysmemMsg sync_sysmem;
105
} data;
106
107
int fds[REMOTE_MAX_FDS];
108
diff --git a/hw/remote/memory.c b/hw/remote/memory.c
109
new file mode 100644
110
index XXXXXXX..XXXXXXX
111
--- /dev/null
112
+++ b/hw/remote/memory.c
113
@@ -XXX,XX +XXX,XX @@
114
+/*
115
+ * Memory manager for remote device
116
+ *
117
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
118
+ *
119
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
120
+ * See the COPYING file in the top-level directory.
121
+ *
122
+ */
123
+
124
+#include "qemu/osdep.h"
125
+#include "qemu-common.h"
126
+
127
+#include "hw/remote/memory.h"
128
+#include "exec/address-spaces.h"
129
+#include "exec/ram_addr.h"
130
+#include "qapi/error.h"
131
+
132
+static void remote_sysmem_reset(void)
133
+{
134
+ MemoryRegion *sysmem, *subregion, *next;
135
+
136
+ sysmem = get_system_memory();
137
+
138
+ QTAILQ_FOREACH_SAFE(subregion, &sysmem->subregions, subregions_link, next) {
139
+ if (subregion->ram) {
140
+ memory_region_del_subregion(sysmem, subregion);
141
+ object_unparent(OBJECT(subregion));
142
+ }
143
+ }
144
+}
145
+
146
+void remote_sysmem_reconfig(MPQemuMsg *msg, Error **errp)
147
+{
148
+ ERRP_GUARD();
149
+ SyncSysmemMsg *sysmem_info = &msg->data.sync_sysmem;
150
+ MemoryRegion *sysmem, *subregion;
151
+ static unsigned int suffix;
152
+ int region;
153
+
154
+ sysmem = get_system_memory();
155
+
156
+ remote_sysmem_reset();
157
+
158
+ for (region = 0; region < msg->num_fds; region++) {
159
+ g_autofree char *name;
160
+ subregion = g_new(MemoryRegion, 1);
161
+ name = g_strdup_printf("remote-mem-%u", suffix++);
162
+ memory_region_init_ram_from_fd(subregion, NULL,
163
+ name, sysmem_info->sizes[region],
164
+ true, msg->fds[region],
165
+ sysmem_info->offsets[region],
166
+ errp);
167
+
168
+ if (*errp) {
169
+ g_free(subregion);
170
+ remote_sysmem_reset();
171
+ return;
172
+ }
173
+
174
+ memory_region_add_subregion(sysmem, sysmem_info->gpas[region],
175
+ subregion);
176
+
177
+ }
178
+}
179
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
180
index XXXXXXX..XXXXXXX 100644
181
--- a/hw/remote/mpqemu-link.c
182
+++ b/hw/remote/mpqemu-link.c
183
@@ -XXX,XX +XXX,XX @@ bool mpqemu_msg_valid(MPQemuMsg *msg)
184
}
31
}
185
}
32
186
187
+ /* Verify message specific fields. */
188
+ switch (msg->cmd) {
189
+ case MPQEMU_CMD_SYNC_SYSMEM:
190
+ if (msg->num_fds == 0 || msg->size != sizeof(SyncSysmemMsg)) {
191
+ return false;
192
+ }
193
+ break;
194
+ default:
195
+ break;
196
+ }
197
+
198
return true;
199
}
200
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
201
index XXXXXXX..XXXXXXX 100644
202
--- a/hw/remote/meson.build
203
+++ b/hw/remote/meson.build
204
@@ -XXX,XX +XXX,XX @@ remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
205
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('message.c'))
206
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('remote-obj.c'))
207
208
+specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('memory.c'))
209
+
210
softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
33
--
211
--
34
2.26.2
212
2.29.2
35
213
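As an illustration of the new command (a sketch, not part of the patch; ram_fd is assumed to be a memfd/shm-backed RAM fd and ioc an established channel), the proxy side could describe one RAM region to the remote process like this:

    /* Sketch: publish one RAM region (gpa, size, fd offset) to the remote. */
    static bool sync_one_region(QIOChannel *ioc, int ram_fd, uint64_t size,
                                Error **errp)
    {
        MPQemuMsg msg = {
            .cmd = MPQEMU_CMD_SYNC_SYSMEM,
            .size = sizeof(SyncSysmemMsg),
            .num_fds = 1,
            .fds = { ram_fd },
        };

        msg.data.sync_sysmem.gpas[0] = 0x0;        /* guest-physical base */
        msg.data.sync_sysmem.sizes[0] = size;
        msg.data.sync_sysmem.offsets[0] = 0;

        return mpqemu_msg_send(&msg, ioc, errp);
    }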
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Defines a PCI Device proxy object as a child of TYPE_PCI_DEVICE.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: b5186ebfedf8e557044d09a768846c59230ad3a7.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS | 2 +
include/hw/remote/proxy.h | 33 +++++++++++++
hw/remote/proxy.c | 99 +++++++++++++++++++++++++++++++++++++++
hw/remote/meson.build | 1 +
4 files changed, 135 insertions(+)
create mode 100644 include/hw/remote/proxy.h
create mode 100644 hw/remote/proxy.c

From the earlier pull request, shown for comparison:

Test aio_disable_external(), which switches from fdmon-epoll back to
fdmon-poll. This resulted in an assertion failure that was fixed in the
previous patch.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200915120339.702938-3-stefanha@redhat.com>
---
MAINTAINERS | 1 +
tests/test-fdmon-epoll.c | 73 ++++++++++++++++++++++++++++++++++++++++
tests/meson.build | 3 ++
3 files changed, 77 insertions(+)
create mode 100644 tests/test-fdmon-epoll.c
diff --git a/MAINTAINERS b/MAINTAINERS
20
diff --git a/MAINTAINERS b/MAINTAINERS
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/MAINTAINERS
22
--- a/MAINTAINERS
17
+++ b/MAINTAINERS
23
+++ b/MAINTAINERS
18
@@ -XXX,XX +XXX,XX @@ F: migration/block*
24
@@ -XXX,XX +XXX,XX @@ F: hw/remote/message.c
19
F: include/block/aio.h
25
F: hw/remote/remote-obj.c
20
F: include/block/aio-wait.h
26
F: include/hw/remote/memory.h
21
F: scripts/qemugdb/aio.py
27
F: hw/remote/memory.c
22
+F: tests/test-fdmon-epoll.c
28
+F: hw/remote/proxy.c
23
T: git https://github.com/stefanha/qemu.git block
29
+F: include/hw/remote/proxy.h
24
30
25
Block SCSI subsystem
31
Build and test automation
26
diff --git a/tests/test-fdmon-epoll.c b/tests/test-fdmon-epoll.c
32
-------------------------
33
diff --git a/include/hw/remote/proxy.h b/include/hw/remote/proxy.h
27
new file mode 100644
34
new file mode 100644
28
index XXXXXXX..XXXXXXX
35
index XXXXXXX..XXXXXXX
29
--- /dev/null
36
--- /dev/null
30
+++ b/tests/test-fdmon-epoll.c
37
+++ b/include/hw/remote/proxy.h
31
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@
32
+/* SPDX-License-Identifier: GPL-2.0-or-later */
33
+/*
39
+/*
34
+ * fdmon-epoll tests
40
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
35
+ *
41
+ *
36
+ * Copyright (c) 2020 Red Hat, Inc.
42
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
43
+ * See the COPYING file in the top-level directory.
44
+ *
45
+ */
46
+
47
+#ifndef PROXY_H
48
+#define PROXY_H
49
+
50
+#include "hw/pci/pci.h"
51
+#include "io/channel.h"
52
+
53
+#define TYPE_PCI_PROXY_DEV "x-pci-proxy-dev"
54
+OBJECT_DECLARE_SIMPLE_TYPE(PCIProxyDev, PCI_PROXY_DEV)
55
+
56
+struct PCIProxyDev {
57
+ PCIDevice parent_dev;
58
+ char *fd;
59
+
60
+ /*
61
+ * Mutex used to protect the QIOChannel fd from
62
+ * the concurrent access by the VCPUs since proxy
63
+ * blocks while awaiting replies from the
64
+ * remote process.
65
+ */
66
+ QemuMutex io_mutex;
67
+ QIOChannel *ioc;
68
+ Error *migration_blocker;
69
+};
70
+
71
+#endif /* PROXY_H */
72
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
73
new file mode 100644
74
index XXXXXXX..XXXXXXX
75
--- /dev/null
76
+++ b/hw/remote/proxy.c
77
@@ -XXX,XX +XXX,XX @@
78
+/*
79
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
80
+ *
81
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
82
+ * See the COPYING file in the top-level directory.
83
+ *
37
+ */
84
+ */
38
+
85
+
39
+#include "qemu/osdep.h"
86
+#include "qemu/osdep.h"
40
+#include "block/aio.h"
87
+#include "qemu-common.h"
88
+
89
+#include "hw/remote/proxy.h"
90
+#include "hw/pci/pci.h"
41
+#include "qapi/error.h"
91
+#include "qapi/error.h"
42
+#include "qemu/main-loop.h"
92
+#include "io/channel-util.h"
93
+#include "hw/qdev-properties.h"
94
+#include "monitor/monitor.h"
95
+#include "migration/blocker.h"
96
+#include "qemu/sockets.h"
43
+
97
+
44
+static AioContext *ctx;
98
+static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
99
+{
100
+ ERRP_GUARD();
101
+ PCIProxyDev *dev = PCI_PROXY_DEV(device);
102
+ int fd;
45
+
103
+
46
+static void dummy_fd_handler(EventNotifier *notifier)
104
+ if (!dev->fd) {
47
+{
105
+ error_setg(errp, "fd parameter not specified for %s",
48
+ event_notifier_test_and_clear(notifier);
106
+ DEVICE(device)->id);
107
+ return;
108
+ }
109
+
110
+ fd = monitor_fd_param(monitor_cur(), dev->fd, errp);
111
+ if (fd == -1) {
112
+ error_prepend(errp, "proxy: unable to parse fd %s: ", dev->fd);
113
+ return;
114
+ }
115
+
116
+ if (!fd_is_socket(fd)) {
117
+ error_setg(errp, "proxy: fd %d is not a socket", fd);
118
+ close(fd);
119
+ return;
120
+ }
121
+
122
+ dev->ioc = qio_channel_new_fd(fd, errp);
123
+
124
+ error_setg(&dev->migration_blocker, "%s does not support migration",
125
+ TYPE_PCI_PROXY_DEV);
126
+ migrate_add_blocker(dev->migration_blocker, errp);
127
+
128
+ qemu_mutex_init(&dev->io_mutex);
129
+ qio_channel_set_blocking(dev->ioc, true, NULL);
49
+}
130
+}
50
+
131
+
51
+static void add_event_notifiers(EventNotifier *notifiers, size_t n)
132
+static void pci_proxy_dev_exit(PCIDevice *pdev)
52
+{
133
+{
53
+ for (size_t i = 0; i < n; i++) {
134
+ PCIProxyDev *dev = PCI_PROXY_DEV(pdev);
54
+ event_notifier_init(&notifiers[i], false);
135
+
55
+ aio_set_event_notifier(ctx, &notifiers[i], false,
136
+ if (dev->ioc) {
56
+ dummy_fd_handler, NULL);
137
+ qio_channel_close(dev->ioc, NULL);
57
+ }
138
+ }
139
+
140
+ migrate_del_blocker(dev->migration_blocker);
141
+
142
+ error_free(dev->migration_blocker);
58
+}
143
+}
59
+
144
+
60
+static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
145
+static Property proxy_properties[] = {
146
+ DEFINE_PROP_STRING("fd", PCIProxyDev, fd),
147
+ DEFINE_PROP_END_OF_LIST(),
148
+};
149
+
150
+static void pci_proxy_dev_class_init(ObjectClass *klass, void *data)
61
+{
151
+{
62
+ for (size_t i = 0; i < n; i++) {
152
+ DeviceClass *dc = DEVICE_CLASS(klass);
63
+ aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL);
153
+ PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
64
+ event_notifier_cleanup(&notifiers[i]);
154
+
65
+ }
155
+ k->realize = pci_proxy_dev_realize;
156
+ k->exit = pci_proxy_dev_exit;
157
+ device_class_set_props(dc, proxy_properties);
66
+}
158
+}
67
+
159
+
68
+/* Check that fd handlers work when external clients are disabled */
160
+static const TypeInfo pci_proxy_dev_type_info = {
69
+static void test_external_disabled(void)
161
+ .name = TYPE_PCI_PROXY_DEV,
162
+ .parent = TYPE_PCI_DEVICE,
163
+ .instance_size = sizeof(PCIProxyDev),
164
+ .class_init = pci_proxy_dev_class_init,
165
+ .interfaces = (InterfaceInfo[]) {
166
+ { INTERFACE_CONVENTIONAL_PCI_DEVICE },
167
+ { },
168
+ },
169
+};
170
+
171
+static void pci_proxy_dev_register_types(void)
70
+{
172
+{
71
+ EventNotifier notifiers[100];
173
+ type_register_static(&pci_proxy_dev_type_info);
72
+
73
+ /* fdmon-epoll is only enabled when many fd handlers are registered */
74
+ add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
75
+
76
+ event_notifier_set(&notifiers[0]);
77
+ assert(aio_poll(ctx, true));
78
+
79
+ aio_disable_external(ctx);
80
+ event_notifier_set(&notifiers[0]);
81
+ assert(aio_poll(ctx, true));
82
+ aio_enable_external(ctx);
83
+
84
+ remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
85
+}
174
+}
86
+
175
+
87
+int main(int argc, char **argv)
176
+type_init(pci_proxy_dev_register_types)
88
+{
177
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
89
+ /*
90
+ * This code relies on the fact that fdmon-io_uring disables itself when
91
+ * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
92
+ * to fdmon-epoll when the number of fds exceeds a threshold.
93
+ */
94
+ qemu_init_main_loop(&error_fatal);
95
+ ctx = qemu_get_aio_context();
96
+
97
+ while (g_main_context_iteration(NULL, false)) {
98
+ /* Do nothing */
99
+ }
100
+
101
+ g_test_init(&argc, &argv, NULL);
102
+ g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
103
+ return g_test_run();
104
+}
105
diff --git a/tests/meson.build b/tests/meson.build
106
index XXXXXXX..XXXXXXX 100644
178
index XXXXXXX..XXXXXXX 100644
107
--- a/tests/meson.build
179
--- a/hw/remote/meson.build
108
+++ b/tests/meson.build
180
+++ b/hw/remote/meson.build
109
@@ -XXX,XX +XXX,XX @@ if have_block
181
@@ -XXX,XX +XXX,XX @@ remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
110
if 'CONFIG_NETTLE' in config_host or 'CONFIG_GCRYPT' in config_host
182
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
111
tests += {'test-crypto-pbkdf': [io]}
183
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('message.c'))
112
endif
184
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('remote-obj.c'))
113
+ if 'CONFIG_EPOLL_CREATE1' in config_host
185
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('proxy.c'))
114
+ tests += {'test-fdmon-epoll': [testblock]}
186
115
+ endif
187
specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('memory.c'))
116
benchs += {
188
117
'benchmark-crypto-hash': [crypto],
118
'benchmark-crypto-hmac': [crypto],
119
--
189
--
120
2.26.2
190
2.29.2
121
191
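On the QEMU side, the proxy device is instantiated with the matching socket fd, along these lines (a sketch based on the series' documentation; the id and fd number are examples):

    qemu-system-x86_64 ... \
        -device x-pci-proxy-dev,id=lsi0,fd=4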
1
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
1
2
3
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
4
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
5
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
6
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Message-id: d54edb4176361eed86b903e8f27058363b6c83b3.1611938319.git.jag.raman@oracle.com
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
10
include/hw/remote/mpqemu-link.h | 4 ++++
11
hw/remote/mpqemu-link.c | 34 +++++++++++++++++++++++++++++++++
12
2 files changed, 38 insertions(+)
13
14
diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/remote/mpqemu-link.h
17
+++ b/include/hw/remote/mpqemu-link.h
18
@@ -XXX,XX +XXX,XX @@
19
#include "qemu/thread.h"
20
#include "io/channel.h"
21
#include "exec/hwaddr.h"
22
+#include "io/channel-socket.h"
23
+#include "hw/remote/proxy.h"
24
25
#define REMOTE_MAX_FDS 8
26
27
@@ -XXX,XX +XXX,XX @@ typedef struct {
28
bool mpqemu_msg_send(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
29
bool mpqemu_msg_recv(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
30
31
+uint64_t mpqemu_msg_send_and_await_reply(MPQemuMsg *msg, PCIProxyDev *pdev,
32
+ Error **errp);
33
bool mpqemu_msg_valid(MPQemuMsg *msg);
34
35
#endif
36
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/remote/mpqemu-link.c
39
+++ b/hw/remote/mpqemu-link.c
40
@@ -XXX,XX +XXX,XX @@ fail:
41
return ret;
42
}
43
44
+/*
45
+ * Send msg and wait for a reply with command code MPQEMU_CMD_RET.
46
+ * Returns the u64 payload of the reply message, or UINT64_MAX
47
+ * on error.
48
+ * Called from VCPU thread in non-coroutine context.
49
+ * Used by the Proxy object to communicate to remote processes.
50
+ */
51
+uint64_t mpqemu_msg_send_and_await_reply(MPQemuMsg *msg, PCIProxyDev *pdev,
52
+ Error **errp)
53
+{
54
+ ERRP_GUARD();
55
+ MPQemuMsg msg_reply = {0};
56
+ uint64_t ret = UINT64_MAX;
57
+
58
+ assert(!qemu_in_coroutine());
59
+
60
+ QEMU_LOCK_GUARD(&pdev->io_mutex);
61
+ if (!mpqemu_msg_send(msg, pdev->ioc, errp)) {
62
+ return ret;
63
+ }
64
+
65
+ if (!mpqemu_msg_recv(&msg_reply, pdev->ioc, errp)) {
66
+ return ret;
67
+ }
68
+
69
+ if (!mpqemu_msg_valid(&msg_reply)) {
70
+ error_setg(errp, "ERROR: Invalid reply received for command %d",
71
+ msg->cmd);
72
+ return ret;
73
+ }
74
+
75
+ return msg_reply.data.u64;
76
+}
77
+
78
bool mpqemu_msg_valid(MPQemuMsg *msg)
79
{
80
if (msg->cmd >= MPQEMU_CMD_MAX || msg->cmd < 0) {
81
--
82
2.29.2
83
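To show the round trip in one place (a sketch only; the command value is a placeholder and pdev is a realized PCIProxyDev), a blocking request issued from a vCPU thread would look like:

    /* Sketch: synchronous request/reply over the proxy's channel. */
    static uint64_t proxy_request(PCIProxyDev *pdev, int cmd, Error **errp)
    {
        MPQemuMsg msg = {
            .cmd = cmd,
            .size = sizeof(msg.data.u64),
        };

        /* Serialized by pdev->io_mutex; returns UINT64_MAX on error. */
        return mpqemu_msg_send_and_await_reply(&msg, pdev, errp);
    }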
1
Fuzzing discovered that virtqueue_unmap_sg() is being called on modified
1
From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
2
req->in/out_sg iovecs. This means dma_memory_map() and
2
3
dma_memory_unmap() calls do not have matching memory addresses.
3
The Proxy Object sends the PCI config space accesses as messages
4
4
to the remote process over the communication channel.
5
Fuzzing discovered that non-RAM addresses trigger a bug:
5
6
6
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: d3c94f4618813234655356c60e6f0d0362ff42d6.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/remote/mpqemu-link.h | 10 ++++++
 hw/remote/message.c             | 60 +++++++++++++++++++++++++++++++++
 hw/remote/mpqemu-link.c         |  8 ++++-
 hw/remote/proxy.c               | 55 ++++++++++++++++++++++++++++++
 4 files changed, 132 insertions(+), 1 deletion(-)

diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/mpqemu-link.h
+++ b/include/hw/remote/mpqemu-link.h
@@ -XXX,XX +XXX,XX @@
  */
 typedef enum {
     MPQEMU_CMD_SYNC_SYSMEM,
+    MPQEMU_CMD_RET,
+    MPQEMU_CMD_PCI_CFGWRITE,
+    MPQEMU_CMD_PCI_CFGREAD,
     MPQEMU_CMD_MAX,
 } MPQemuCmd;
 
@@ -XXX,XX +XXX,XX @@ typedef struct {
     off_t offsets[REMOTE_MAX_FDS];
 } SyncSysmemMsg;
 
+typedef struct {
+    uint32_t addr;
+    uint32_t val;
+    int len;
+} PciConfDataMsg;
+
 /**
  * MPQemuMsg:
  * @cmd: The remote command
@@ -XXX,XX +XXX,XX @@ typedef struct {
 
     union {
         uint64_t u64;
+        PciConfDataMsg pci_conf_data;
         SyncSysmemMsg sync_sysmem;
     } data;
 
diff --git a/hw/remote/message.c b/hw/remote/message.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/message.c
+++ b/hw/remote/message.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/remote/mpqemu-link.h"
 #include "qapi/error.h"
 #include "sysemu/runstate.h"
+#include "hw/pci/pci.h"
+
+static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
+                                 MPQemuMsg *msg, Error **errp);
+static void process_config_read(QIOChannel *ioc, PCIDevice *dev,
+                                MPQemuMsg *msg, Error **errp);
 
 void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
 {
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
         }
 
         switch (msg.cmd) {
+        case MPQEMU_CMD_PCI_CFGWRITE:
+            process_config_write(com->ioc, pci_dev, &msg, &local_err);
+            break;
+        case MPQEMU_CMD_PCI_CFGREAD:
+            process_config_read(com->ioc, pci_dev, &msg, &local_err);
+            break;
         default:
             error_setg(&local_err,
                        "Unknown command (%d) received for device %s"
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
     qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
 }
 
+static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
+                                 MPQemuMsg *msg, Error **errp)
+{
+    ERRP_GUARD();
+    PciConfDataMsg *conf = (PciConfDataMsg *)&msg->data.pci_conf_data;
+    MPQemuMsg ret = { 0 };
+
+    if ((conf->addr + sizeof(conf->val)) > pci_config_size(dev)) {
+        error_setg(errp, "Bad address for PCI config write, pid "FMT_pid".",
+                   getpid());
+        ret.data.u64 = UINT64_MAX;
+    } else {
+        pci_default_write_config(dev, conf->addr, conf->val, conf->len);
+    }
+
+    ret.cmd = MPQEMU_CMD_RET;
+    ret.size = sizeof(ret.data.u64);
+
+    if (!mpqemu_msg_send(&ret, ioc, NULL)) {
+        error_prepend(errp, "Error returning code to proxy, pid "FMT_pid": ",
+                      getpid());
+    }
+}
+
+static void process_config_read(QIOChannel *ioc, PCIDevice *dev,
+                                MPQemuMsg *msg, Error **errp)
+{
+    ERRP_GUARD();
+    PciConfDataMsg *conf = (PciConfDataMsg *)&msg->data.pci_conf_data;
+    MPQemuMsg ret = { 0 };
+
+    if ((conf->addr + sizeof(conf->val)) > pci_config_size(dev)) {
+        error_setg(errp, "Bad address for PCI config read, pid "FMT_pid".",
+                   getpid());
+        ret.data.u64 = UINT64_MAX;
+    } else {
+        ret.data.u64 = pci_default_read_config(dev, conf->addr, conf->len);
+    }
+
+    ret.cmd = MPQEMU_CMD_RET;
+    ret.size = sizeof(ret.data.u64);
+
+    if (!mpqemu_msg_send(&ret, ioc, NULL)) {
+        error_prepend(errp, "Error returning code to proxy, pid "FMT_pid": ",
+                      getpid());
+    }
+}
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/mpqemu-link.c
+++ b/hw/remote/mpqemu-link.c
@@ -XXX,XX +XXX,XX @@ uint64_t mpqemu_msg_send_and_await_reply(MPQemuMsg *msg, PCIProxyDev *pdev,
         return ret;
     }
 
-    if (!mpqemu_msg_valid(&msg_reply)) {
+    if (!mpqemu_msg_valid(&msg_reply) || msg_reply.cmd != MPQEMU_CMD_RET) {
         error_setg(errp, "ERROR: Invalid reply received for command %d",
                    msg->cmd);
         return ret;
@@ -XXX,XX +XXX,XX @@ bool mpqemu_msg_valid(MPQemuMsg *msg)
             return false;
         }
         break;
+    case MPQEMU_CMD_PCI_CFGWRITE:
+    case MPQEMU_CMD_PCI_CFGREAD:
+        if (msg->size != sizeof(PciConfDataMsg)) {
+            return false;
+        }
+        break;
     default:
         break;
     }
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@
 #include "monitor/monitor.h"
 #include "migration/blocker.h"
 #include "qemu/sockets.h"
+#include "hw/remote/mpqemu-link.h"
+#include "qemu/error-report.h"
 
 static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
 {
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_exit(PCIDevice *pdev)
     error_free(dev->migration_blocker);
 }
 
+static void config_op_send(PCIProxyDev *pdev, uint32_t addr, uint32_t *val,
+                           int len, unsigned int op)
+{
+    MPQemuMsg msg = { 0 };
+    uint64_t ret = -EINVAL;
+    Error *local_err = NULL;
+
+    msg.cmd = op;
+    msg.data.pci_conf_data.addr = addr;
+    msg.data.pci_conf_data.val = (op == MPQEMU_CMD_PCI_CFGWRITE) ? *val : 0;
+    msg.data.pci_conf_data.len = len;
+    msg.size = sizeof(PciConfDataMsg);
+
+    ret = mpqemu_msg_send_and_await_reply(&msg, pdev, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
+    if (ret == UINT64_MAX) {
+        error_report("Failed to perform PCI config %s operation",
+                     (op == MPQEMU_CMD_PCI_CFGREAD) ? "READ" : "WRITE");
+    }
+
+    if (op == MPQEMU_CMD_PCI_CFGREAD) {
+        *val = (uint32_t)ret;
+    }
+}
+
+static uint32_t pci_proxy_read_config(PCIDevice *d, uint32_t addr, int len)
+{
+    uint32_t val;
+
+    config_op_send(PCI_PROXY_DEV(d), addr, &val, len, MPQEMU_CMD_PCI_CFGREAD);
+
+    return val;
+}
+
+static void pci_proxy_write_config(PCIDevice *d, uint32_t addr, uint32_t val,
+                                   int len)
+{
+    /*
+     * Some of the functions access the copy of the remote device's PCI
+     * config space which is cached in the proxy device. Therefore, keep
+     * it updated.
+     */
+    pci_default_write_config(d, addr, val, len);
+
+    config_op_send(PCI_PROXY_DEV(d), addr, &val, len, MPQEMU_CMD_PCI_CFGWRITE);
+}
+
 static Property proxy_properties[] = {
     DEFINE_PROP_STRING("fd", PCIProxyDev, fd),
     DEFINE_PROP_END_OF_LIST(),
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_class_init(ObjectClass *klass, void *data)
 
     k->realize = pci_proxy_dev_realize;
     k->exit = pci_proxy_dev_exit;
+    k->config_read = pci_proxy_read_config;
+    k->config_write = pci_proxy_write_config;
+
     device_class_set_props(dc, proxy_properties);
 }
 
--
2.29.2
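A note on the bounds check above: process_config_write() and process_config_read() both reject any access whose 4-byte window would run past the end of config space. The following is a minimal standalone sketch of just that check, not QEMU code; PciConfDataMsg is taken from the patch, while CONFIG_SIZE (256 bytes, the conventional config-space size) and the sample accesses are illustrative assumptions.

    /* Standalone model of the config-space bounds check; illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t addr;
        uint32_t val;
        int len;
    } PciConfDataMsg;

    /* Conventional PCI config space is 256 bytes (assumed here). */
    #define CONFIG_SIZE 256u

    /*
     * Mirrors the guard in process_config_read()/process_config_write():
     * the full 4-byte window starting at addr must fit in config space.
     */
    static int conf_access_ok(const PciConfDataMsg *conf)
    {
        return (conf->addr + sizeof(conf->val)) <= CONFIG_SIZE;
    }

    int main(void)
    {
        PciConfDataMsg ok = { .addr = 0x00, .val = 0, .len = 2 };  /* vendor ID */
        PciConfDataMsg bad = { .addr = 0xfd, .val = 0, .len = 4 }; /* runs past end */

        printf("addr 0x%02x: %s\n", (unsigned)ok.addr,
               conf_access_ok(&ok) ? "ok" : "rejected");
        printf("addr 0x%02x: %s\n", (unsigned)bad.addr,
               conf_access_ok(&bad) ? "ok" : "rejected");
        return 0;
    }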
From: Jagannathan Raman <jag.raman@oracle.com>

The proxy device object implements handlers for PCI BAR writes and
reads. The handlers use BAR_WRITE/BAR_READ messages to pass the BAR
address and the value to be written/read to the remote process, which
implements the corresponding handlers for these messages.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: a8b76714a9688be5552c4c92d089bc9e8a4707ff.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/remote/mpqemu-link.h | 10 ++++
 include/hw/remote/proxy.h       |  9 ++++
 hw/remote/message.c             | 83 +++++++++++++++++++++++++++++++++
 hw/remote/mpqemu-link.c         |  6 +++
 hw/remote/proxy.c               | 60 ++++++++++++++++++++++++
 5 files changed, 168 insertions(+)

diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/mpqemu-link.h
+++ b/include/hw/remote/mpqemu-link.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     MPQEMU_CMD_RET,
     MPQEMU_CMD_PCI_CFGWRITE,
     MPQEMU_CMD_PCI_CFGREAD,
+    MPQEMU_CMD_BAR_WRITE,
+    MPQEMU_CMD_BAR_READ,
     MPQEMU_CMD_MAX,
 } MPQemuCmd;
 
@@ -XXX,XX +XXX,XX @@ typedef struct {
     int len;
 } PciConfDataMsg;
 
+typedef struct {
+    hwaddr addr;
+    uint64_t val;
+    unsigned size;
+    bool memory;
+} BarAccessMsg;
+
 /**
  * MPQemuMsg:
  * @cmd: The remote command
@@ -XXX,XX +XXX,XX @@ typedef struct {
         uint64_t u64;
         PciConfDataMsg pci_conf_data;
         SyncSysmemMsg sync_sysmem;
+        BarAccessMsg bar_access;
     } data;
 
     int fds[REMOTE_MAX_FDS];
diff --git a/include/hw/remote/proxy.h b/include/hw/remote/proxy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/proxy.h
+++ b/include/hw/remote/proxy.h
@@ -XXX,XX +XXX,XX @@
 #define TYPE_PCI_PROXY_DEV "x-pci-proxy-dev"
 OBJECT_DECLARE_SIMPLE_TYPE(PCIProxyDev, PCI_PROXY_DEV)
 
+typedef struct ProxyMemoryRegion {
+    PCIProxyDev *dev;
+    MemoryRegion mr;
+    bool memory;
+    bool present;
+    uint8_t type;
+} ProxyMemoryRegion;
+
 struct PCIProxyDev {
     PCIDevice parent_dev;
     char *fd;
@@ -XXX,XX +XXX,XX @@ struct PCIProxyDev {
     QemuMutex io_mutex;
     QIOChannel *ioc;
     Error *migration_blocker;
+    ProxyMemoryRegion region[PCI_NUM_REGIONS];
 };
 
 #endif /* PROXY_H */
diff --git a/hw/remote/message.c b/hw/remote/message.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/message.c
+++ b/hw/remote/message.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/error.h"
 #include "sysemu/runstate.h"
 #include "hw/pci/pci.h"
+#include "exec/memattrs.h"
 
 static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
                                  MPQemuMsg *msg, Error **errp);
 static void process_config_read(QIOChannel *ioc, PCIDevice *dev,
                                 MPQemuMsg *msg, Error **errp);
+static void process_bar_write(QIOChannel *ioc, MPQemuMsg *msg, Error **errp);
+static void process_bar_read(QIOChannel *ioc, MPQemuMsg *msg, Error **errp);
 
 void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
 {
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
         case MPQEMU_CMD_PCI_CFGREAD:
             process_config_read(com->ioc, pci_dev, &msg, &local_err);
             break;
+        case MPQEMU_CMD_BAR_WRITE:
+            process_bar_write(com->ioc, &msg, &local_err);
+            break;
+        case MPQEMU_CMD_BAR_READ:
+            process_bar_read(com->ioc, &msg, &local_err);
+            break;
         default:
             error_setg(&local_err,
                        "Unknown command (%d) received for device %s"
@@ -XXX,XX +XXX,XX @@ static void process_config_read(QIOChannel *ioc, PCIDevice *dev,
                       getpid());
     }
 }
+
+static void process_bar_write(QIOChannel *ioc, MPQemuMsg *msg, Error **errp)
+{
+    ERRP_GUARD();
+    BarAccessMsg *bar_access = &msg->data.bar_access;
+    AddressSpace *as =
+        bar_access->memory ? &address_space_memory : &address_space_io;
+    MPQemuMsg ret = { 0 };
+    MemTxResult res;
+    uint64_t val;
+
+    if (!is_power_of_2(bar_access->size) ||
+        (bar_access->size > sizeof(uint64_t))) {
+        ret.data.u64 = UINT64_MAX;
+        goto fail;
+    }
+
+    val = cpu_to_le64(bar_access->val);
+
+    res = address_space_rw(as, bar_access->addr, MEMTXATTRS_UNSPECIFIED,
+                           (void *)&val, bar_access->size, true);
+
+    if (res != MEMTX_OK) {
+        error_setg(errp, "Bad address %"PRIx64" for mem write, pid "FMT_pid".",
+                   bar_access->addr, getpid());
+        ret.data.u64 = -1;
+    }
+
+fail:
+    ret.cmd = MPQEMU_CMD_RET;
+    ret.size = sizeof(ret.data.u64);
+
+    if (!mpqemu_msg_send(&ret, ioc, NULL)) {
+        error_prepend(errp, "Error returning code to proxy, pid "FMT_pid": ",
+                      getpid());
+    }
+}
+
+static void process_bar_read(QIOChannel *ioc, MPQemuMsg *msg, Error **errp)
+{
+    ERRP_GUARD();
+    BarAccessMsg *bar_access = &msg->data.bar_access;
+    MPQemuMsg ret = { 0 };
+    AddressSpace *as;
+    MemTxResult res;
+    uint64_t val = 0;
+
+    as = bar_access->memory ? &address_space_memory : &address_space_io;
+
+    if (!is_power_of_2(bar_access->size) ||
+        (bar_access->size > sizeof(uint64_t))) {
+        val = UINT64_MAX;
+        goto fail;
+    }
+
+    res = address_space_rw(as, bar_access->addr, MEMTXATTRS_UNSPECIFIED,
+                           (void *)&val, bar_access->size, false);
+
+    if (res != MEMTX_OK) {
+        error_setg(errp, "Bad address %"PRIx64" for mem read, pid "FMT_pid".",
+                   bar_access->addr, getpid());
+        val = UINT64_MAX;
+    }
+
+fail:
+    ret.cmd = MPQEMU_CMD_RET;
+    ret.data.u64 = le64_to_cpu(val);
+    ret.size = sizeof(ret.data.u64);
+
+    if (!mpqemu_msg_send(&ret, ioc, NULL)) {
+        error_prepend(errp, "Error returning code to proxy, pid "FMT_pid": ",
+                      getpid());
+    }
+}
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/mpqemu-link.c
+++ b/hw/remote/mpqemu-link.c
@@ -XXX,XX +XXX,XX @@ bool mpqemu_msg_valid(MPQemuMsg *msg)
             return false;
         }
         break;
+    case MPQEMU_CMD_BAR_WRITE:
+    case MPQEMU_CMD_BAR_READ:
+        if ((msg->size != sizeof(BarAccessMsg)) || (msg->num_fds != 0)) {
+            return false;
+        }
+        break;
     default:
         break;
     }
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_register_types(void)
 }
 
 type_init(pci_proxy_dev_register_types)
+
+static void send_bar_access_msg(PCIProxyDev *pdev, MemoryRegion *mr,
+                                bool write, hwaddr addr, uint64_t *val,
+                                unsigned size, bool memory)
+{
+    MPQemuMsg msg = { 0 };
+    long ret = -EINVAL;
+    Error *local_err = NULL;
+
+    msg.size = sizeof(BarAccessMsg);
+    msg.data.bar_access.addr = mr->addr + addr;
+    msg.data.bar_access.size = size;
+    msg.data.bar_access.memory = memory;
+
+    if (write) {
+        msg.cmd = MPQEMU_CMD_BAR_WRITE;
+        msg.data.bar_access.val = *val;
+    } else {
+        msg.cmd = MPQEMU_CMD_BAR_READ;
+    }
+
+    ret = mpqemu_msg_send_and_await_reply(&msg, pdev, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
+    if (!write) {
+        *val = ret;
+    }
+}
+
+static void proxy_bar_write(void *opaque, hwaddr addr, uint64_t val,
+                            unsigned size)
+{
+    ProxyMemoryRegion *pmr = opaque;
+
+    send_bar_access_msg(pmr->dev, &pmr->mr, true, addr, &val, size,
+                        pmr->memory);
+}
+
+static uint64_t proxy_bar_read(void *opaque, hwaddr addr, unsigned size)
+{
+    ProxyMemoryRegion *pmr = opaque;
+    uint64_t val;
+
+    send_bar_access_msg(pmr->dev, &pmr->mr, false, addr, &val, size,
+                        pmr->memory);
+
+    return val;
+}
+
+const MemoryRegionOps proxy_mr_ops = {
+    .read = proxy_bar_read,
+    .write = proxy_bar_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
--
2.29.2
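Worth noting: the remote-side handlers only pass naturally sized power-of-two accesses of at most 8 bytes on to address_space_rw(). A minimal standalone sketch of that guard follows; is_power_of_2() is re-implemented locally to keep the sketch self-contained, and the sample sizes are chosen for illustration. It is not QEMU code.

    /* Standalone model of the BAR access-size check; illustrative only. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same result as QEMU's is_power_of_2() for the sizes used here. */
    static bool is_power_of_2(uint64_t n)
    {
        return n && !(n & (n - 1));
    }

    /*
     * Mirrors the guard in process_bar_write()/process_bar_read(): only
     * power-of-two sizes up to 8 bytes reach address_space_rw().
     */
    static bool bar_access_ok(unsigned size)
    {
        return is_power_of_2(size) && size <= sizeof(uint64_t);
    }

    int main(void)
    {
        unsigned sizes[] = { 1, 2, 3, 4, 8, 16 };
        unsigned i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            printf("size %u: %s\n", sizes[i],
                   bar_access_ok(sizes[i]) ? "ok" : "rejected");
        }
        return 0;
    }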
From: Jagannathan Raman <jag.raman@oracle.com>

Add a ProxyMemoryListener object, which is used to keep the view of
RAM in sync between QEMU and the remote process. A MemoryListener is
registered for the system-memory AddressSpace. When the listener
commits changes to memory, it sends a SYNC_SYSMEM message to the
remote process, which processes the message in its handler for the
SYNC_SYSMEM message.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 04fe4e6a9ca90d4f11ab6f59be7652f5b086a071.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS                               |   2 +
 include/hw/remote/proxy-memory-listener.h |  28 +++
 include/hw/remote/proxy.h                 |   2 +
 hw/remote/message.c                       |   4 +
 hw/remote/proxy-memory-listener.c         | 227 ++++++++++++++++++++++
 hw/remote/proxy.c                         |   6 +
 hw/remote/meson.build                     |   1 +
 7 files changed, 270 insertions(+)
 create mode 100644 include/hw/remote/proxy-memory-listener.h
 create mode 100644 hw/remote/proxy-memory-listener.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/remote/memory.h
 F: hw/remote/memory.c
 F: hw/remote/proxy.c
 F: include/hw/remote/proxy.h
+F: hw/remote/proxy-memory-listener.c
+F: include/hw/remote/proxy-memory-listener.h
 
 Build and test automation
 -------------------------
diff --git a/include/hw/remote/proxy-memory-listener.h b/include/hw/remote/proxy-memory-listener.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/remote/proxy-memory-listener.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef PROXY_MEMORY_LISTENER_H
+#define PROXY_MEMORY_LISTENER_H
+
+#include "exec/memory.h"
+#include "io/channel.h"
+
+typedef struct ProxyMemoryListener {
+    MemoryListener listener;
+
+    int n_mr_sections;
+    MemoryRegionSection *mr_sections;
+
+    QIOChannel *ioc;
+} ProxyMemoryListener;
+
+void proxy_memory_listener_configure(ProxyMemoryListener *proxy_listener,
+                                     QIOChannel *ioc);
+void proxy_memory_listener_deconfigure(ProxyMemoryListener *proxy_listener);
+
+#endif
diff --git a/include/hw/remote/proxy.h b/include/hw/remote/proxy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/proxy.h
+++ b/include/hw/remote/proxy.h
@@ -XXX,XX +XXX,XX @@
 
 #include "hw/pci/pci.h"
 #include "io/channel.h"
+#include "hw/remote/proxy-memory-listener.h"
 
 #define TYPE_PCI_PROXY_DEV "x-pci-proxy-dev"
 OBJECT_DECLARE_SIMPLE_TYPE(PCIProxyDev, PCI_PROXY_DEV)
@@ -XXX,XX +XXX,XX @@ struct PCIProxyDev {
     QemuMutex io_mutex;
     QIOChannel *ioc;
     Error *migration_blocker;
+    ProxyMemoryListener proxy_listener;
     ProxyMemoryRegion region[PCI_NUM_REGIONS];
 };
 
diff --git a/hw/remote/message.c b/hw/remote/message.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/message.c
+++ b/hw/remote/message.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/runstate.h"
 #include "hw/pci/pci.h"
 #include "exec/memattrs.h"
+#include "hw/remote/memory.h"
 
 static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
                                  MPQemuMsg *msg, Error **errp);
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
         case MPQEMU_CMD_BAR_READ:
             process_bar_read(com->ioc, &msg, &local_err);
             break;
+        case MPQEMU_CMD_SYNC_SYSMEM:
+            remote_sysmem_reconfig(&msg, &local_err);
+            break;
         default:
             error_setg(&local_err,
                        "Unknown command (%d) received for device %s"
diff --git a/hw/remote/proxy-memory-listener.c b/hw/remote/proxy-memory-listener.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/remote/proxy-memory-listener.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+
+#include "qemu/compiler.h"
+#include "qemu/int128.h"
+#include "qemu/range.h"
+#include "exec/memory.h"
+#include "exec/cpu-common.h"
+#include "cpu.h"
+#include "exec/ram_addr.h"
+#include "exec/address-spaces.h"
+#include "qapi/error.h"
+#include "hw/remote/mpqemu-link.h"
+#include "hw/remote/proxy-memory-listener.h"
+
+/*
+ * TODO: get_fd_from_hostaddr(), proxy_mrs_can_merge() and
+ * proxy_memory_listener_commit() defined below perform tasks similar to the
+ * functions defined in vhost-user.c. These functions are good candidates
+ * for refactoring.
+ *
+ */
+
+static void proxy_memory_listener_reset(MemoryListener *listener)
+{
+    ProxyMemoryListener *proxy_listener = container_of(listener,
+                                                       ProxyMemoryListener,
+                                                       listener);
+    int mrs;
+
+    for (mrs = 0; mrs < proxy_listener->n_mr_sections; mrs++) {
+        memory_region_unref(proxy_listener->mr_sections[mrs].mr);
+    }
+
+    g_free(proxy_listener->mr_sections);
+    proxy_listener->mr_sections = NULL;
+    proxy_listener->n_mr_sections = 0;
+}
+
+static int get_fd_from_hostaddr(uint64_t host, ram_addr_t *offset)
+{
+    MemoryRegion *mr;
+    ram_addr_t off;
+
+    /**
+     * Assumes that the host address is a valid address as it's
+     * coming from the MemoryListener system. In the case the host
+     * address is not valid, the following call would return
+     * the default subregion of the "system_memory" region, and
+     * not NULL. So it's not possible to check for NULL here.
+     */
+    mr = memory_region_from_host((void *)(uintptr_t)host, &off);
+
+    if (offset) {
+        *offset = off;
+    }
+
+    return memory_region_get_fd(mr);
+}
+
+static bool proxy_mrs_can_merge(uint64_t host, uint64_t prev_host, size_t size)
+{
+    if (((prev_host + size) != host)) {
+        return false;
+    }
+
+    if (get_fd_from_hostaddr(host, NULL) !=
+            get_fd_from_hostaddr(prev_host, NULL)) {
+        return false;
+    }
+
+    return true;
+}
+
+static bool try_merge(ProxyMemoryListener *proxy_listener,
+                      MemoryRegionSection *section)
+{
+    uint64_t mrs_size, mrs_gpa, mrs_page;
+    MemoryRegionSection *prev_sec;
+    bool merged = false;
+    uintptr_t mrs_host;
+    RAMBlock *mrs_rb;
+
+    if (!proxy_listener->n_mr_sections) {
+        return false;
+    }
+
+    mrs_rb = section->mr->ram_block;
+    mrs_page = (uint64_t)qemu_ram_pagesize(mrs_rb);
+    mrs_size = int128_get64(section->size);
+    mrs_gpa = section->offset_within_address_space;
+    mrs_host = (uintptr_t)memory_region_get_ram_ptr(section->mr) +
+               section->offset_within_region;
+
+    if (get_fd_from_hostaddr(mrs_host, NULL) < 0) {
+        return true;
+    }
+
+    mrs_host = mrs_host & ~(mrs_page - 1);
+    mrs_gpa = mrs_gpa & ~(mrs_page - 1);
+    mrs_size = ROUND_UP(mrs_size, mrs_page);
+
+    prev_sec = proxy_listener->mr_sections +
+               (proxy_listener->n_mr_sections - 1);
+    uint64_t prev_gpa_start = prev_sec->offset_within_address_space;
+    uint64_t prev_size = int128_get64(prev_sec->size);
+    uint64_t prev_gpa_end = range_get_last(prev_gpa_start, prev_size);
+    uint64_t prev_host_start =
+        (uintptr_t)memory_region_get_ram_ptr(prev_sec->mr) +
+        prev_sec->offset_within_region;
+    uint64_t prev_host_end = range_get_last(prev_host_start, prev_size);
+
+    if (mrs_gpa <= (prev_gpa_end + 1)) {
+        g_assert(mrs_gpa > prev_gpa_start);
+
+        if ((section->mr == prev_sec->mr) &&
+            proxy_mrs_can_merge(mrs_host, prev_host_start,
+                                (mrs_gpa - prev_gpa_start))) {
+            uint64_t max_end = MAX(prev_host_end, mrs_host + mrs_size);
+            merged = true;
+            prev_sec->offset_within_address_space =
+                MIN(prev_gpa_start, mrs_gpa);
+            prev_sec->offset_within_region =
+                MIN(prev_host_start, mrs_host) -
+                (uintptr_t)memory_region_get_ram_ptr(prev_sec->mr);
+            prev_sec->size = int128_make64(max_end - MIN(prev_host_start,
+                                                         mrs_host));
+        }
+    }
+
+    return merged;
+}
+
+static void proxy_memory_listener_region_addnop(MemoryListener *listener,
+                                                MemoryRegionSection *section)
+{
+    ProxyMemoryListener *proxy_listener = container_of(listener,
+                                                       ProxyMemoryListener,
+                                                       listener);
+
+    if (!memory_region_is_ram(section->mr) ||
+            memory_region_is_rom(section->mr)) {
+        return;
+    }
+
+    if (try_merge(proxy_listener, section)) {
+        return;
+    }
+
+    ++proxy_listener->n_mr_sections;
+    proxy_listener->mr_sections = g_renew(MemoryRegionSection,
+                                          proxy_listener->mr_sections,
+                                          proxy_listener->n_mr_sections);
+    proxy_listener->mr_sections[proxy_listener->n_mr_sections - 1] = *section;
+    proxy_listener->mr_sections[proxy_listener->n_mr_sections - 1].fv = NULL;
+    memory_region_ref(section->mr);
+}
+
+static void proxy_memory_listener_commit(MemoryListener *listener)
+{
+    ProxyMemoryListener *proxy_listener = container_of(listener,
+                                                       ProxyMemoryListener,
+                                                       listener);
+    MPQemuMsg msg;
+    MemoryRegionSection *section;
+    ram_addr_t offset;
+    uintptr_t host_addr;
+    int region;
+    Error *local_err = NULL;
+
+    memset(&msg, 0, sizeof(MPQemuMsg));
+
+    msg.cmd = MPQEMU_CMD_SYNC_SYSMEM;
+    msg.num_fds = proxy_listener->n_mr_sections;
+    msg.size = sizeof(SyncSysmemMsg);
+    if (msg.num_fds > REMOTE_MAX_FDS) {
+        error_report("Number of fds is more than %d", REMOTE_MAX_FDS);
+        return;
+    }
+
+    for (region = 0; region < proxy_listener->n_mr_sections; region++) {
+        section = &proxy_listener->mr_sections[region];
+        msg.data.sync_sysmem.gpas[region] =
+            section->offset_within_address_space;
+        msg.data.sync_sysmem.sizes[region] = int128_get64(section->size);
+        host_addr = (uintptr_t)memory_region_get_ram_ptr(section->mr) +
+                    section->offset_within_region;
+        msg.fds[region] = get_fd_from_hostaddr(host_addr, &offset);
+        msg.data.sync_sysmem.offsets[region] = offset;
+    }
+    if (!mpqemu_msg_send(&msg, proxy_listener->ioc, &local_err)) {
+        error_report_err(local_err);
+    }
+}
+
+void proxy_memory_listener_deconfigure(ProxyMemoryListener *proxy_listener)
+{
+    memory_listener_unregister(&proxy_listener->listener);
+
+    proxy_memory_listener_reset(&proxy_listener->listener);
+}
+
+void proxy_memory_listener_configure(ProxyMemoryListener *proxy_listener,
+                                     QIOChannel *ioc)
+{
+    proxy_listener->n_mr_sections = 0;
+    proxy_listener->mr_sections = NULL;
+
+    proxy_listener->ioc = ioc;
+
+    proxy_listener->listener.begin = proxy_memory_listener_reset;
+    proxy_listener->listener.commit = proxy_memory_listener_commit;
+    proxy_listener->listener.region_add = proxy_memory_listener_region_addnop;
+    proxy_listener->listener.region_nop = proxy_memory_listener_region_addnop;
+    proxy_listener->listener.priority = 10;
+
+    memory_listener_register(&proxy_listener->listener,
+                             &address_space_memory);
+}
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/sockets.h"
 #include "hw/remote/mpqemu-link.h"
 #include "qemu/error-report.h"
+#include "hw/remote/proxy-memory-listener.h"
+#include "qom/object.h"
 
 static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
 {
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
 
     qemu_mutex_init(&dev->io_mutex);
     qio_channel_set_blocking(dev->ioc, true, NULL);
+
+    proxy_memory_listener_configure(&dev->proxy_listener, dev->ioc);
 }
 
 static void pci_proxy_dev_exit(PCIDevice *pdev)
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_exit(PCIDevice *pdev)
     migrate_del_blocker(dev->migration_blocker);
 
     error_free(dev->migration_blocker);
+
+    proxy_memory_listener_deconfigure(&dev->proxy_listener);
 }
 
 static void config_op_send(PCIProxyDev *pdev, uint32_t addr, uint32_t *val,
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/meson.build
+++ b/hw/remote/meson.build
@@ -XXX,XX +XXX,XX @@ remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('remote-obj.c'))
 remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('proxy.c'))
 
 specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('memory.c'))
+specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('proxy-memory-listener.c'))
 
 softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
--
2.29.2
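The merge step above coalesces adjacent RAM sections so that fewer file descriptors have to travel with each SYNC_SYSMEM message. A minimal standalone sketch of the proxy_mrs_can_merge() condition follows; fd_for_host() is a stand-in invented here for memory_region_get_fd() (it pretends each 1 GiB of host address space is backed by its own memfd), and the addresses are purely illustrative.

    /* Standalone model of the section-merge test; illustrative only. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for get_fd_from_hostaddr(): assume one backing fd per
     * 1 GiB of host address space (an assumption for this sketch). */
    static int fd_for_host(uint64_t host)
    {
        return (int)(host >> 30);
    }

    /*
     * Mirrors proxy_mrs_can_merge(): two sections merge only when the
     * previous host range ends exactly where the new one begins and
     * both ranges are backed by the same file descriptor.
     */
    static bool mrs_can_merge(uint64_t host, uint64_t prev_host, size_t size)
    {
        if (prev_host + size != host) {
            return false;
        }
        return fd_for_host(host) == fd_for_host(prev_host);
    }

    int main(void)
    {
        uint64_t prev = 0x10000000;

        printf("adjacent, same fd: %d\n",
               mrs_can_merge(prev + 0x1000, prev, 0x1000));
        printf("gap in host range: %d\n",
               mrs_can_merge(prev + 0x2000, prev, 0x1000));
        return 0;
    }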
From: Jagannathan Raman <jag.raman@oracle.com>

An IOHUB object is added to manage PCI IRQs. It uses the KVM_IRQFD
ioctl to create an irqfd for injecting PCI interrupts into the guest.
The IOHUB object forwards the irqfd to the remote process, which uses
this fd to send interrupts directly to the guest, bypassing QEMU.

Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 51d5c3d54e28a68b002e3875c59599c9f5a424a1.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS                     |   2 +
 include/hw/pci/pci_ids.h        |   3 +
 include/hw/remote/iohub.h       |  42 +++++++++++
 include/hw/remote/machine.h     |   2 +
 include/hw/remote/mpqemu-link.h |   1 +
 include/hw/remote/proxy.h       |   4 ++
 hw/remote/iohub.c               | 119 ++++++++++++++++++++++++++++++++
 hw/remote/machine.c             |  10 +++
 hw/remote/message.c             |   4 ++
 hw/remote/mpqemu-link.c         |   5 ++
 hw/remote/proxy.c               |  56 +++++++++++++++
 hw/remote/meson.build           |   1 +
 12 files changed, 249 insertions(+)
 create mode 100644 include/hw/remote/iohub.h
 create mode 100644 hw/remote/iohub.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/remote/proxy.c
 F: include/hw/remote/proxy.h
 F: hw/remote/proxy-memory-listener.c
 F: include/hw/remote/proxy-memory-listener.h
+F: hw/remote/iohub.c
+F: include/hw/remote/iohub.h
 
 Build and test automation
 -------------------------
diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/pci/pci_ids.h
+++ b/include/hw/pci/pci_ids.h
@@ -XXX,XX +XXX,XX @@
 #define PCI_DEVICE_ID_SUN_SIMBA 0x5000
 #define PCI_DEVICE_ID_SUN_SABRE 0xa000
 
+#define PCI_VENDOR_ID_ORACLE 0x108e
+#define PCI_DEVICE_ID_REMOTE_IOHUB 0xb000
+
 #define PCI_VENDOR_ID_CMD 0x1095
 #define PCI_DEVICE_ID_CMD_646 0x0646
 
diff --git a/include/hw/remote/iohub.h b/include/hw/remote/iohub.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/remote/iohub.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * IO Hub for remote device
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef REMOTE_IOHUB_H
+#define REMOTE_IOHUB_H
+
+#include "hw/pci/pci.h"
+#include "qemu/event_notifier.h"
+#include "qemu/thread-posix.h"
+#include "hw/remote/mpqemu-link.h"
+
+#define REMOTE_IOHUB_NB_PIRQS    PCI_DEVFN_MAX
+
+typedef struct ResampleToken {
+    void *iohub;
+    int pirq;
+} ResampleToken;
+
+typedef struct RemoteIOHubState {
+    PCIDevice d;
+    EventNotifier irqfds[REMOTE_IOHUB_NB_PIRQS];
+    EventNotifier resamplefds[REMOTE_IOHUB_NB_PIRQS];
+    unsigned int irq_level[REMOTE_IOHUB_NB_PIRQS];
+    ResampleToken token[REMOTE_IOHUB_NB_PIRQS];
+    QemuMutex irq_level_lock[REMOTE_IOHUB_NB_PIRQS];
+} RemoteIOHubState;
+
+int remote_iohub_map_irq(PCIDevice *pci_dev, int intx);
+void remote_iohub_set_irq(void *opaque, int pirq, int level);
+void process_set_irqfd_msg(PCIDevice *pci_dev, MPQemuMsg *msg);
+
+void remote_iohub_init(RemoteIOHubState *iohub);
+void remote_iohub_finalize(RemoteIOHubState *iohub);
+
+#endif
diff --git a/include/hw/remote/machine.h b/include/hw/remote/machine.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/machine.h
+++ b/include/hw/remote/machine.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #include "hw/pci-host/remote.h"
 #include "io/channel.h"
+#include "hw/remote/iohub.h"
 
 struct RemoteMachineState {
     MachineState parent_obj;
 
     RemotePCIHost *host;
+    RemoteIOHubState iohub;
 };
 
 /* Used to pass to co-routine device and ioc. */
diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/mpqemu-link.h
+++ b/include/hw/remote/mpqemu-link.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     MPQEMU_CMD_PCI_CFGREAD,
     MPQEMU_CMD_BAR_WRITE,
     MPQEMU_CMD_BAR_READ,
+    MPQEMU_CMD_SET_IRQFD,
     MPQEMU_CMD_MAX,
 } MPQemuCmd;
 
diff --git a/include/hw/remote/proxy.h b/include/hw/remote/proxy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/proxy.h
+++ b/include/hw/remote/proxy.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/pci/pci.h"
 #include "io/channel.h"
 #include "hw/remote/proxy-memory-listener.h"
+#include "qemu/event_notifier.h"
 
 #define TYPE_PCI_PROXY_DEV "x-pci-proxy-dev"
 OBJECT_DECLARE_SIMPLE_TYPE(PCIProxyDev, PCI_PROXY_DEV)
@@ -XXX,XX +XXX,XX @@ struct PCIProxyDev {
     QIOChannel *ioc;
     Error *migration_blocker;
     ProxyMemoryListener proxy_listener;
+    int virq;
+    EventNotifier intr;
+    EventNotifier resample;
     ProxyMemoryRegion region[PCI_NUM_REGIONS];
 };
 
diff --git a/hw/remote/iohub.c b/hw/remote/iohub.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/remote/iohub.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Remote IO Hub
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_ids.h"
+#include "hw/pci/pci_bus.h"
+#include "qemu/thread.h"
+#include "hw/boards.h"
+#include "hw/remote/machine.h"
+#include "hw/remote/iohub.h"
+#include "qemu/main-loop.h"
+
+void remote_iohub_init(RemoteIOHubState *iohub)
+{
+    int pirq;
+
+    memset(&iohub->irqfds, 0, sizeof(iohub->irqfds));
+    memset(&iohub->resamplefds, 0, sizeof(iohub->resamplefds));
+
+    for (pirq = 0; pirq < REMOTE_IOHUB_NB_PIRQS; pirq++) {
+        qemu_mutex_init(&iohub->irq_level_lock[pirq]);
+        iohub->irq_level[pirq] = 0;
+        event_notifier_init_fd(&iohub->irqfds[pirq], -1);
+        event_notifier_init_fd(&iohub->resamplefds[pirq], -1);
+    }
+}
+
+void remote_iohub_finalize(RemoteIOHubState *iohub)
+{
+    int pirq;
+
+    for (pirq = 0; pirq < REMOTE_IOHUB_NB_PIRQS; pirq++) {
+        qemu_set_fd_handler(event_notifier_get_fd(&iohub->resamplefds[pirq]),
+                            NULL, NULL, NULL);
+        event_notifier_cleanup(&iohub->irqfds[pirq]);
+        event_notifier_cleanup(&iohub->resamplefds[pirq]);
+        qemu_mutex_destroy(&iohub->irq_level_lock[pirq]);
+    }
+}
+
+int remote_iohub_map_irq(PCIDevice *pci_dev, int intx)
+{
+    return pci_dev->devfn;
+}
+
+void remote_iohub_set_irq(void *opaque, int pirq, int level)
+{
+    RemoteIOHubState *iohub = opaque;
+
+    assert(pirq >= 0);
+    assert(pirq < PCI_DEVFN_MAX);
+
+    QEMU_LOCK_GUARD(&iohub->irq_level_lock[pirq]);
+
+    if (level) {
+        if (++iohub->irq_level[pirq] == 1) {
+            event_notifier_set(&iohub->irqfds[pirq]);
+        }
+    } else if (iohub->irq_level[pirq] > 0) {
+        iohub->irq_level[pirq]--;
+    }
+}
+
+static void intr_resample_handler(void *opaque)
+{
+    ResampleToken *token = opaque;
+    RemoteIOHubState *iohub = token->iohub;
+    int pirq, s;
+
+    pirq = token->pirq;
+
+    s = event_notifier_test_and_clear(&iohub->resamplefds[pirq]);
+
+    assert(s >= 0);
+
+    QEMU_LOCK_GUARD(&iohub->irq_level_lock[pirq]);
+
+    if (iohub->irq_level[pirq]) {
+        event_notifier_set(&iohub->irqfds[pirq]);
+    }
+}
+
+void process_set_irqfd_msg(PCIDevice *pci_dev, MPQemuMsg *msg)
+{
+    RemoteMachineState *machine = REMOTE_MACHINE(current_machine);
+    RemoteIOHubState *iohub = &machine->iohub;
+    int pirq, intx;
+
+    intx = pci_get_byte(pci_dev->config + PCI_INTERRUPT_PIN) - 1;
+
+    pirq = remote_iohub_map_irq(pci_dev, intx);
+
+    if (event_notifier_get_fd(&iohub->irqfds[pirq]) != -1) {
+        qemu_set_fd_handler(event_notifier_get_fd(&iohub->resamplefds[pirq]),
+                            NULL, NULL, NULL);
+        event_notifier_cleanup(&iohub->irqfds[pirq]);
+        event_notifier_cleanup(&iohub->resamplefds[pirq]);
+        memset(&iohub->token[pirq], 0, sizeof(ResampleToken));
+    }
+
+    event_notifier_init_fd(&iohub->irqfds[pirq], msg->fds[0]);
+    event_notifier_init_fd(&iohub->resamplefds[pirq], msg->fds[1]);
+
+    iohub->token[pirq].iohub = iohub;
+    iohub->token[pirq].pirq = pirq;
+
+    qemu_set_fd_handler(msg->fds[1], intr_resample_handler, NULL,
+                        &iohub->token[pirq]);
+}
diff --git a/hw/remote/machine.c b/hw/remote/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/machine.c
+++ b/hw/remote/machine.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/address-spaces.h"
 #include "exec/memory.h"
 #include "qapi/error.h"
+#include "hw/pci/pci_host.h"
+#include "hw/remote/iohub.h"
 
 static void remote_machine_init(MachineState *machine)
 {
     MemoryRegion *system_memory, *system_io, *pci_memory;
     RemoteMachineState *s = REMOTE_MACHINE(machine);
     RemotePCIHost *rem_host;
+    PCIHostState *pci_host;
 
     system_memory = get_system_memory();
     system_io = get_system_io();
@@ -XXX,XX +XXX,XX @@ static void remote_machine_init(MachineState *machine)
     memory_region_add_subregion_overlap(system_memory, 0x0, pci_memory, -1);
 
     qdev_realize(DEVICE(rem_host), sysbus_get_default(), &error_fatal);
+
+    pci_host = PCI_HOST_BRIDGE(rem_host);
+
+    remote_iohub_init(&s->iohub);
+
+    pci_bus_irqs(pci_host->bus, remote_iohub_set_irq, remote_iohub_map_irq,
+                 &s->iohub, REMOTE_IOHUB_NB_PIRQS);
 }
 
 static void remote_machine_class_init(ObjectClass *oc, void *data)
diff --git a/hw/remote/message.c b/hw/remote/message.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/message.c
+++ b/hw/remote/message.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/pci/pci.h"
 #include "exec/memattrs.h"
 #include "hw/remote/memory.h"
+#include "hw/remote/iohub.h"
 
 static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
                                  MPQemuMsg *msg, Error **errp);
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
         case MPQEMU_CMD_SYNC_SYSMEM:
             remote_sysmem_reconfig(&msg, &local_err);
             break;
+        case MPQEMU_CMD_SET_IRQFD:
+            process_set_irqfd_msg(pci_dev, &msg);
+            break;
         default:
             error_setg(&local_err,
                        "Unknown command (%d) received for device %s"
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/mpqemu-link.c
+++ b/hw/remote/mpqemu-link.c
@@ -XXX,XX +XXX,XX @@ bool mpqemu_msg_valid(MPQemuMsg *msg)
             return false;
         }
         break;
+    case MPQEMU_CMD_SET_IRQFD:
+        if (msg->size || (msg->num_fds != 2)) {
+            return false;
+        }
+        break;
     default:
         break;
     }
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/error-report.h"
 #include "hw/remote/proxy-memory-listener.h"
 #include "qom/object.h"
+#include "qemu/event_notifier.h"
+#include "sysemu/kvm.h"
+#include "util/event_notifier-posix.c"
+
+static void proxy_intx_update(PCIDevice *pci_dev)
+{
+    PCIProxyDev *dev = PCI_PROXY_DEV(pci_dev);
+    PCIINTxRoute route;
+    int pin = pci_get_byte(pci_dev->config + PCI_INTERRUPT_PIN) - 1;
+
+    if (dev->virq != -1) {
+        kvm_irqchip_remove_irqfd_notifier_gsi(kvm_state, &dev->intr, dev->virq);
+        dev->virq = -1;
+    }
+
+    route = pci_device_route_intx_to_irq(pci_dev, pin);
+
+    dev->virq = route.irq;
+
+    if (dev->virq != -1) {
+        kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, &dev->intr,
+                                           &dev->resample, dev->virq);
+    }
+}
+
+static void setup_irqfd(PCIProxyDev *dev)
+{
+    PCIDevice *pci_dev = PCI_DEVICE(dev);
+    MPQemuMsg msg;
+    Error *local_err = NULL;
+
+    event_notifier_init(&dev->intr, 0);
+    event_notifier_init(&dev->resample, 0);
+
+    memset(&msg, 0, sizeof(MPQemuMsg));
+    msg.cmd = MPQEMU_CMD_SET_IRQFD;
+    msg.num_fds = 2;
+    msg.fds[0] = event_notifier_get_fd(&dev->intr);
+    msg.fds[1] = event_notifier_get_fd(&dev->resample);
+    msg.size = 0;
+
+    if (!mpqemu_msg_send(&msg, dev->ioc, &local_err)) {
+        error_report_err(local_err);
+    }
+
+    dev->virq = -1;
+
+    proxy_intx_update(pci_dev);
+
+    pci_device_set_intx_routing_notifier(pci_dev, proxy_intx_update);
+}
 
 static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
 {
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
     qio_channel_set_blocking(dev->ioc, true, NULL);
 
     proxy_memory_listener_configure(&dev->proxy_listener, dev->ioc);
+
+    setup_irqfd(dev);
 }
 
 static void pci_proxy_dev_exit(PCIDevice *pdev)
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_exit(PCIDevice *pdev)
     error_free(dev->migration_blocker);
 
     proxy_memory_listener_deconfigure(&dev->proxy_listener);
+
+    event_notifier_cleanup(&dev->intr);
+    event_notifier_cleanup(&dev->resample);
 }
 
 static void config_op_send(PCIProxyDev *pdev, uint32_t addr, uint32_t *val,
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/meson.build
+++ b/hw/remote/meson.build
@@ -XXX,XX +XXX,XX @@ remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
 remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('message.c'))
 remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('remote-obj.c'))
 remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('proxy.c'))
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('iohub.c'))
 
 specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('memory.c'))
 specific_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('proxy-memory-listener.c'))
--
2.29.2
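The IOHUB treats INTx as a level-triggered line that several sources may share: the irqfd fires only on the 0 to 1 transition of the level count, and the resample handler re-raises the interrupt after the guest EOI while the level is still non-zero. A minimal standalone model of that accounting follows; it is not QEMU code, and notify() stands in for event_notifier_set() on the irqfd.

    /* Standalone model of the IOHUB level-triggered IRQ accounting;
     * illustrative only. */
    #include <stdio.h>

    static unsigned irq_level;
    static int fired;

    /* Stand-in for event_notifier_set() on the irqfd. */
    static void notify(void) { fired++; }

    /*
     * Mirrors remote_iohub_set_irq(): the irqfd is signalled only on the
     * 0 -> 1 transition; further asserts just bump the count.
     */
    static void set_irq(int level)
    {
        if (level) {
            if (++irq_level == 1) {
                notify();
            }
        } else if (irq_level > 0) {
            irq_level--;
        }
    }

    /*
     * Mirrors intr_resample_handler(): after the guest EOI, re-raise the
     * interrupt if any source still holds the line high.
     */
    static void resample(void)
    {
        if (irq_level) {
            notify();
        }
    }

    int main(void)
    {
        set_irq(1);  /* line raised: fires irqfd            */
        set_irq(1);  /* second source: no extra fire        */
        resample();  /* EOI with line still high: re-fires  */
        set_irq(0);
        set_irq(0);
        resample();  /* line low: nothing fires             */
        printf("irqfd fired %d times, level %u\n", fired, irq_level);
        return 0;
    }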
The sentence explaining the deprecation schedule is ambiguous. Make it
clear that a feature deprecated in the Nth release is guaranteed to
remain available in the N+1th release. Removal can occur in the N+2nd
release or later.

As an example of this in action, see commit
25956af3fe5dd0385ad8017bc768a6afe41e2a74 ("block: Finish deprecation of
'qemu-img convert -n -o'"). The feature was deprecated in QEMU 4.2.0. It
was present in the 5.0.0 release and removed in the 5.1.0 release.

Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200915150734.711426-1-stefanha@redhat.com>
---
 docs/system/deprecated.rst | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/system/deprecated.rst b/docs/system/deprecated.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/deprecated.rst
+++ b/docs/system/deprecated.rst
@@ -XXX,XX +XXX,XX @@ Deprecated features

 In general features are intended to be supported indefinitely once
 introduced into QEMU. In the event that a feature needs to be removed,
-it will be listed in this section. The feature will remain functional
-for 2 releases prior to actual removal. Deprecated features may also
-generate warnings on the console when QEMU starts up, or if activated
-via a monitor command, however, this is not a mandatory requirement.
+it will be listed in this section. The feature will remain functional for the
+release in which it was deprecated and one further release. After these two
+releases, the feature is liable to be removed. Deprecated features may also
+generate warnings on the console when QEMU starts up, or if activated via a
+monitor command, however, this is not a mandatory requirement.

 Prior to the 2.10.0 release there was no official policy on how
 long features would be deprecated prior to their removal, nor
--
2.26.2

From: Jagannathan Raman <jag.raman@oracle.com>

Retrieve PCI configuration info about the remote device and
configure the Proxy PCI object based on the returned information.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 85ee367bbb993aa23699b44cfedd83b4ea6d5221.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/remote/proxy.c | 84 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)

diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/kvm.h"
 #include "util/event_notifier-posix.c"

+static void probe_pci_info(PCIDevice *dev, Error **errp);
+
 static void proxy_intx_update(PCIDevice *pci_dev)
 {
     PCIProxyDev *dev = PCI_PROXY_DEV(pci_dev);
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
 {
     ERRP_GUARD();
     PCIProxyDev *dev = PCI_PROXY_DEV(device);
+    uint8_t *pci_conf = device->config;
     int fd;

     if (!dev->fd) {
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
     qemu_mutex_init(&dev->io_mutex);
     qio_channel_set_blocking(dev->ioc, true, NULL);

+    pci_conf[PCI_LATENCY_TIMER] = 0xff;
+    pci_conf[PCI_INTERRUPT_PIN] = 0x01;
+
     proxy_memory_listener_configure(&dev->proxy_listener, dev->ioc);

     setup_irqfd(dev);
+
+    probe_pci_info(PCI_DEVICE(dev), errp);
 }

 static void pci_proxy_dev_exit(PCIDevice *pdev)
@@ -XXX,XX +XXX,XX @@ const MemoryRegionOps proxy_mr_ops = {
         .max_access_size = 8,
     },
 };
+
+static void probe_pci_info(PCIDevice *dev, Error **errp)
+{
+    PCIDeviceClass *pc = PCI_DEVICE_GET_CLASS(dev);
+    uint32_t orig_val, new_val, base_class, val;
+    PCIProxyDev *pdev = PCI_PROXY_DEV(dev);
+    DeviceClass *dc = DEVICE_CLASS(pc);
+    uint8_t type;
+    int i, size;
+
+    config_op_send(pdev, PCI_VENDOR_ID, &val, 2, MPQEMU_CMD_PCI_CFGREAD);
+    pc->vendor_id = (uint16_t)val;
+
+    config_op_send(pdev, PCI_DEVICE_ID, &val, 2, MPQEMU_CMD_PCI_CFGREAD);
+    pc->device_id = (uint16_t)val;
+
+    config_op_send(pdev, PCI_CLASS_DEVICE, &val, 2, MPQEMU_CMD_PCI_CFGREAD);
+    pc->class_id = (uint16_t)val;
+
+    config_op_send(pdev, PCI_SUBSYSTEM_ID, &val, 2, MPQEMU_CMD_PCI_CFGREAD);
+    pc->subsystem_id = (uint16_t)val;
+
+    base_class = pc->class_id >> 4;
+    switch (base_class) {
+    case PCI_BASE_CLASS_BRIDGE:
+        set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+        break;
+    case PCI_BASE_CLASS_STORAGE:
+        set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
+        break;
+    case PCI_BASE_CLASS_NETWORK:
+        set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+        break;
+    case PCI_BASE_CLASS_INPUT:
+        set_bit(DEVICE_CATEGORY_INPUT, dc->categories);
+        break;
+    case PCI_BASE_CLASS_DISPLAY:
+        set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
+        break;
+    case PCI_BASE_CLASS_PROCESSOR:
+        set_bit(DEVICE_CATEGORY_CPU, dc->categories);
+        break;
+    default:
+        set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+        break;
+    }
+
+    for (i = 0; i < PCI_NUM_REGIONS; i++) {
+        config_op_send(pdev, PCI_BASE_ADDRESS_0 + (4 * i), &orig_val, 4,
+                       MPQEMU_CMD_PCI_CFGREAD);
+        new_val = 0xffffffff;
+        config_op_send(pdev, PCI_BASE_ADDRESS_0 + (4 * i), &new_val, 4,
+                       MPQEMU_CMD_PCI_CFGWRITE);
+        config_op_send(pdev, PCI_BASE_ADDRESS_0 + (4 * i), &new_val, 4,
+                       MPQEMU_CMD_PCI_CFGREAD);
+        size = (~(new_val & 0xFFFFFFF0)) + 1;
+        config_op_send(pdev, PCI_BASE_ADDRESS_0 + (4 * i), &orig_val, 4,
+                       MPQEMU_CMD_PCI_CFGWRITE);
+        type = (new_val & 0x1) ?
+                   PCI_BASE_ADDRESS_SPACE_IO : PCI_BASE_ADDRESS_SPACE_MEMORY;
+
+        if (size) {
+            g_autofree char *name;
+            pdev->region[i].dev = pdev;
+            pdev->region[i].present = true;
+            if (type == PCI_BASE_ADDRESS_SPACE_MEMORY) {
+                pdev->region[i].memory = true;
+            }
+            name = g_strdup_printf("bar-region-%d", i);
+            memory_region_init_io(&pdev->region[i].mr, OBJECT(pdev),
+                                  &proxy_mr_ops, &pdev->region[i],
+                                  name, size);
+            pci_register_bar(dev, i, type, &pdev->region[i].mr);
+        }
+    }
+}
--
2.29.2
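A note on the BAR loop in probe_pci_info() above: it uses the standard PCI
sizing handshake, i.e. save the original BAR value, write all 1s, read the
value back (the device hardwires to 0 every address bit it does not decode),
derive the size, then restore the original value. A minimal standalone sketch
of the arithmetic for a 32-bit memory BAR (hypothetical helper written for
illustration, not part of the patch):

    #include <stdint.h>

    /*
     * Size of a 32-bit memory BAR, computed from the value read back
     * after writing all 1s. Bits [3:0] of a memory BAR are flag bits,
     * so mask them off before inverting.
     */
    static uint32_t bar_size_from_readback(uint32_t readback)
    {
        return ~(readback & 0xFFFFFFF0u) + 1;
    }

For example, a readback of 0xfffff000 gives ~0xfffff000 + 1 = 0x1000, a 4 KiB
region. A readback of 0 (BAR not implemented) wraps around to a size of 0,
which is why the loop above only registers a memory region when size is
non-zero.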
The iov_discard_front/back() operations are useful for parsing iovecs
but they modify the array elements. If the original array is needed
after parsing finishes there is currently no way to restore it.

Although g_memdup() can be used before performing destructive
iov_discard_front/back() operations, this is inefficient.

Introduce iov_discard_undo() to restore the array to the state prior to
an iov_discard_front/back() operation.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Message-Id: <20200917094455.822379-2-stefanha@redhat.com>
---
 include/qemu/iov.h |  23 +++++++
 tests/test-iov.c   | 165 +++++++++++++++++++++++++++++++++++++++++++++
 util/iov.c         |  50 ++++++++++++--
 3 files changed, 234 insertions(+), 4 deletions(-)

diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -XXX,XX +XXX,XX @@ size_t iov_discard_front(struct iovec **iov, unsigned int *iov_cnt,
 size_t iov_discard_back(struct iovec *iov, unsigned int *iov_cnt,
                         size_t bytes);

+/* Information needed to undo an iov_discard_*() operation */
+typedef struct {
+    struct iovec *modified_iov;
+    struct iovec orig;
+} IOVDiscardUndo;
+
+/*
+ * Undo an iov_discard_front_undoable() or iov_discard_back_undoable()
+ * operation. If multiple operations are made then each one needs a separate
+ * IOVDiscardUndo and iov_discard_undo() must be called in the reverse order
+ * that the operations were made.
+ */
+void iov_discard_undo(IOVDiscardUndo *undo);
+
+/*
+ * Undoable versions of iov_discard_front() and iov_discard_back(). Use
+ * iov_discard_undo() to reset to the state before the discard operations.
+ */
+size_t iov_discard_front_undoable(struct iovec **iov, unsigned int *iov_cnt,
+                                  size_t bytes, IOVDiscardUndo *undo);
+size_t iov_discard_back_undoable(struct iovec *iov, unsigned int *iov_cnt,
+                                 size_t bytes, IOVDiscardUndo *undo);
+
 typedef struct QEMUIOVector {
     struct iovec *iov;
     int niov;
diff --git a/tests/test-iov.c b/tests/test-iov.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-iov.c
+++ b/tests/test-iov.c
@@ -XXX,XX +XXX,XX @@ static void iov_free(struct iovec *iov, unsigned niov)
     g_free(iov);
 }

+static bool iov_equals(const struct iovec *a, const struct iovec *b,
+                       unsigned niov)
+{
+    return memcmp(a, b, sizeof(a[0]) * niov) == 0;
+}
+
 static void test_iov_bytes(struct iovec *iov, unsigned niov,
                            size_t offset, size_t bytes)
 {
@@ -XXX,XX +XXX,XX @@ static void test_discard_front(void)
     iov_free(iov, iov_cnt);
 }

+static void test_discard_front_undo(void)
+{
+    IOVDiscardUndo undo;
+    struct iovec *iov;
+    struct iovec *iov_tmp;
+    struct iovec *iov_orig;
+    unsigned int iov_cnt;
+    unsigned int iov_cnt_tmp;
+    size_t size;
+
+    /* Discard zero bytes */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, 0, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard more bytes than vector size */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    size = iov_size(iov, iov_cnt);
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, size + 1, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard entire vector */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    size = iov_size(iov, iov_cnt);
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard within first element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    size = g_test_rand_int_range(1, iov->iov_len);
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard entire first element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, iov->iov_len, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard within second element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_tmp = iov;
+    iov_cnt_tmp = iov_cnt;
+    size = iov->iov_len + g_test_rand_int_range(1, iov[1].iov_len);
+    iov_discard_front_undoable(&iov_tmp, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+}
+
 static void test_discard_back(void)
 {
     struct iovec *iov;
@@ -XXX,XX +XXX,XX @@ static void test_discard_back(void)
     iov_free(iov, iov_cnt);
 }

+static void test_discard_back_undo(void)
+{
+    IOVDiscardUndo undo;
+    struct iovec *iov;
+    struct iovec *iov_orig;
+    unsigned int iov_cnt;
+    unsigned int iov_cnt_tmp;
+    size_t size;
+
+    /* Discard zero bytes */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, 0, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard more bytes than vector size */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    size = iov_size(iov, iov_cnt);
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, size + 1, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard entire vector */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    size = iov_size(iov, iov_cnt);
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard within last element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    size = g_test_rand_int_range(1, iov[iov_cnt - 1].iov_len);
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard entire last element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    size = iov[iov_cnt - 1].iov_len;
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+
+    /* Discard within second-to-last element */
+    iov_random(&iov, &iov_cnt);
+    iov_orig = g_memdup(iov, sizeof(iov[0]) * iov_cnt);
+    iov_cnt_tmp = iov_cnt;
+    size = iov[iov_cnt - 1].iov_len +
+           g_test_rand_int_range(1, iov[iov_cnt - 2].iov_len);
+    iov_discard_back_undoable(iov, &iov_cnt_tmp, size, &undo);
+    iov_discard_undo(&undo);
+    assert(iov_equals(iov, iov_orig, iov_cnt));
+    g_free(iov_orig);
+    iov_free(iov, iov_cnt);
+}
+
 int main(int argc, char **argv)
 {
     g_test_init(&argc, &argv, NULL);
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
     g_test_add_func("/basic/iov/io", test_io);
     g_test_add_func("/basic/iov/discard-front", test_discard_front);
     g_test_add_func("/basic/iov/discard-back", test_discard_back);
+    g_test_add_func("/basic/iov/discard-front-undo", test_discard_front_undo);
+    g_test_add_func("/basic/iov/discard-back-undo", test_discard_back_undo);
     return g_test_run();
 }
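To make the intended calling convention concrete before the util/iov.c diff
below, here is a minimal usage sketch of the new API (a hypothetical caller
written for illustration, not part of the patch; it assumes only the
declarations added to include/qemu/iov.h above):

    #include "qemu/osdep.h"
    #include "qemu/iov.h"

    /*
     * Hypothetical caller: temporarily strip a 4-byte header from the
     * vector, consume the payload, then restore the original layout
     * without copying the whole iovec array.
     */
    static void parse_then_restore(struct iovec *iov, unsigned int iov_cnt)
    {
        IOVDiscardUndo undo;
        struct iovec *cur = iov;
        unsigned int cur_cnt = iov_cnt;

        /* Destructively skip the header; remember how to undo it */
        iov_discard_front_undoable(&cur, &cur_cnt, 4, &undo);

        /* ... parse the payload in cur[0..cur_cnt) ... */

        /* Put the modified element back exactly as it was */
        iov_discard_undo(&undo);
    }

Note the ordering rule from the header comment: when several undoable
discards are stacked, each needs its own IOVDiscardUndo and the undos must
run in reverse order.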
diff --git a/util/iov.c b/util/iov.c
index XXXXXXX..XXXXXXX 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -XXX,XX +XXX,XX @@ void qemu_iovec_clone(QEMUIOVector *dest, const QEMUIOVector *src, void *buf)
         }
     }
 }

-size_t iov_discard_front(struct iovec **iov, unsigned int *iov_cnt,
-                         size_t bytes)
+void iov_discard_undo(IOVDiscardUndo *undo)
+{
+    /* Restore original iovec if it was modified */
+    if (undo->modified_iov) {
+        *undo->modified_iov = undo->orig;
+    }
+}
+
+size_t iov_discard_front_undoable(struct iovec **iov,
+                                  unsigned int *iov_cnt,
+                                  size_t bytes,
+                                  IOVDiscardUndo *undo)
 {
     size_t total = 0;
     struct iovec *cur;

+    if (undo) {
+        undo->modified_iov = NULL;
+    }
+
     for (cur = *iov; *iov_cnt > 0; cur++) {
         if (cur->iov_len > bytes) {
+            if (undo) {
+                undo->modified_iov = cur;
+                undo->orig = *cur;
+            }
+
             cur->iov_base += bytes;
             cur->iov_len -= bytes;
             total += bytes;
@@ -XXX,XX +XXX,XX @@ size_t iov_discard_front(struct iovec **iov, unsigned int *iov_cnt,
     return total;
 }

-size_t iov_discard_back(struct iovec *iov, unsigned int *iov_cnt,
-                        size_t bytes)
+size_t iov_discard_front(struct iovec **iov, unsigned int *iov_cnt,
+                         size_t bytes)
+{
+    return iov_discard_front_undoable(iov, iov_cnt, bytes, NULL);
+}
+
+size_t iov_discard_back_undoable(struct iovec *iov,
+                                 unsigned int *iov_cnt,
+                                 size_t bytes,
+                                 IOVDiscardUndo *undo)
 {
     size_t total = 0;
     struct iovec *cur;

+    if (undo) {
+        undo->modified_iov = NULL;
+    }
+
     if (*iov_cnt == 0) {
         return 0;
     }
@@ -XXX,XX +XXX,XX @@ size_t iov_discard_back(struct iovec *iov, unsigned int *iov_cnt,

     while (*iov_cnt > 0) {
         if (cur->iov_len > bytes) {
+            if (undo) {
+                undo->modified_iov = cur;
+                undo->orig = *cur;
+            }
+
             cur->iov_len -= bytes;
             total += bytes;
             break;
@@ -XXX,XX +XXX,XX @@ size_t iov_discard_back(struct iovec *iov, unsigned int *iov_cnt,
     return total;
 }

+size_t iov_discard_back(struct iovec *iov, unsigned int *iov_cnt,
+                        size_t bytes)
+{
+    return iov_discard_back_undoable(iov, iov_cnt, bytes, NULL);
+}
+
 void qemu_iovec_discard_back(QEMUIOVector *qiov, size_t bytes)
 {
     size_t total;
--
2.26.2

From: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Perform device reset in the remote process when QEMU performs
device reset. This is required to reset the internal state
(like registers, etc.) of emulated devices.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 7cb220a51f565dc0817bd76e2f540e89c2d2b850.1611938319.git.jag.raman@oracle.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/remote/mpqemu-link.h |  1 +
 hw/remote/message.c             | 22 ++++++++++++++++++++++
 hw/remote/proxy.c               | 19 +++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/remote/mpqemu-link.h
+++ b/include/hw/remote/mpqemu-link.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     MPQEMU_CMD_BAR_WRITE,
     MPQEMU_CMD_BAR_READ,
     MPQEMU_CMD_SET_IRQFD,
+    MPQEMU_CMD_DEVICE_RESET,
     MPQEMU_CMD_MAX,
 } MPQemuCmd;

diff --git a/hw/remote/message.c b/hw/remote/message.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/message.c
+++ b/hw/remote/message.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/memattrs.h"
 #include "hw/remote/memory.h"
 #include "hw/remote/iohub.h"
+#include "sysemu/reset.h"

 static void process_config_write(QIOChannel *ioc, PCIDevice *dev,
                                  MPQemuMsg *msg, Error **errp);
@@ -XXX,XX +XXX,XX @@ static void process_config_read(QIOChannel *ioc, PCIDevice *dev,
                                 MPQemuMsg *msg, Error **errp);
 static void process_bar_write(QIOChannel *ioc, MPQemuMsg *msg, Error **errp);
 static void process_bar_read(QIOChannel *ioc, MPQemuMsg *msg, Error **errp);
+static void process_device_reset_msg(QIOChannel *ioc, PCIDevice *dev,
+                                     Error **errp);

 void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
 {
@@ -XXX,XX +XXX,XX @@ void coroutine_fn mpqemu_remote_msg_loop_co(void *data)
         case MPQEMU_CMD_SET_IRQFD:
             process_set_irqfd_msg(pci_dev, &msg);
             break;
+        case MPQEMU_CMD_DEVICE_RESET:
+            process_device_reset_msg(com->ioc, pci_dev, &local_err);
+            break;
         default:
             error_setg(&local_err,
                        "Unknown command (%d) received for device %s"
@@ -XXX,XX +XXX,XX @@ fail:
                getpid());
     }
 }
+
+static void process_device_reset_msg(QIOChannel *ioc, PCIDevice *dev,
+                                     Error **errp)
+{
+    DeviceClass *dc = DEVICE_GET_CLASS(dev);
+    DeviceState *s = DEVICE(dev);
+    MPQemuMsg ret = { 0 };
+
+    if (dc->reset) {
+        dc->reset(s);
+    }
+
+    ret.cmd = MPQEMU_CMD_RET;
+
+    mpqemu_msg_send(&ret, ioc, errp);
+}
diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -XXX,XX +XXX,XX @@
 #include "util/event_notifier-posix.c"

 static void probe_pci_info(PCIDevice *dev, Error **errp);
+static void proxy_device_reset(DeviceState *dev);

 static void proxy_intx_update(PCIDevice *pci_dev)
 {
@@ -XXX,XX +XXX,XX @@ static void pci_proxy_dev_class_init(ObjectClass *klass, void *data)
     k->config_read = pci_proxy_read_config;
     k->config_write = pci_proxy_write_config;

+    dc->reset = proxy_device_reset;
+
     device_class_set_props(dc, proxy_properties);
 }

@@ -XXX,XX +XXX,XX @@ static void probe_pci_info(PCIDevice *dev, Error **errp)
         }
     }
 }
+
+static void proxy_device_reset(DeviceState *dev)
+{
+    PCIProxyDev *pdev = PCI_PROXY_DEV(dev);
+    MPQemuMsg msg = { 0 };
+    Error *local_err = NULL;
+
+    msg.cmd = MPQEMU_CMD_DEVICE_RESET;
+    msg.size = 0;
+
+    mpqemu_msg_send_and_await_reply(&msg, pdev, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
+}
--
2.29.2
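The reset patch above doubles as a template for any synchronous
proxy-to-remote operation in the mpqemu protocol: add an MPQemuCmd value,
send it from the proxy with mpqemu_msg_send_and_await_reply(), dispatch it
in mpqemu_remote_msg_loop_co(), and answer with an MPQEMU_CMD_RET message.
A sketch of a hypothetical new command following that pattern (the command
name is invented for illustration and does not exist in the series):

    /* Proxy side: a blocking request, modelled on proxy_device_reset() */
    static void proxy_device_suspend(PCIProxyDev *pdev, Error **errp)
    {
        MPQemuMsg msg = { 0 };

        msg.cmd = MPQEMU_CMD_DEVICE_SUSPEND;    /* hypothetical command */
        msg.size = 0;

        /* Blocks until the remote replies with MPQEMU_CMD_RET */
        mpqemu_msg_send_and_await_reply(&msg, pdev, errp);
    }

The remote side would add a matching case to the switch in
mpqemu_remote_msg_loop_co() and reply via mpqemu_msg_send() with
ret.cmd = MPQEMU_CMD_RET, exactly as process_device_reset_msg() does.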
QEMU now hosts a mirror of qboot.git. QEMU mirrors third-party code to
ensure that users can always build QEMU even if the dependency goes
offline and so QEMU meets its responsibilities to provide full source
code under software licenses.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200915130834.706758-2-stefanha@redhat.com>
---
 .gitmodules | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.gitmodules b/.gitmodules
index XXXXXXX..XXXXXXX 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -XXX,XX +XXX,XX @@
 	url = https://git.qemu.org/git/opensbi.git
 [submodule "roms/qboot"]
 	path = roms/qboot
-	url = https://github.com/bonzini/qboot
+	url = https://git.qemu.org/git/qboot.git
 [submodule "meson"]
 	path = meson
 	url = https://github.com/mesonbuild/meson/
--
2.26.2

From: "Denis V. Lunev" <den@openvz.org>

The original specification says that the L1 table size is 64 * l1_size,
which is obviously wrong. The size of an L1 entry is 64 _bits_, not
bytes, so 64 should be replaced with 8, since the specification gives
sizes in bytes.

There is also a minor tweak: the field is renamed from l1 to l1_table,
which matches the later text.

Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20210128171313.2210947-1-den@openvz.org
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

[Replace the original commit message "docs: fix mistake in dirty bitmap
feature description" as suggested by Eric Blake.
--Stefan]

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 docs/interop/parallels.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/interop/parallels.txt b/docs/interop/parallels.txt
index XXXXXXX..XXXXXXX 100644
--- a/docs/interop/parallels.txt
+++ b/docs/interop/parallels.txt
@@ -XXX,XX +XXX,XX @@ of its data area are:
   28 - 31:    l1_size
               The number of entries in the L1 table of the bitmap.

- variable:   l1 (64 * l1_size bytes)
+ variable:   l1_table (8 * l1_size bytes)
               L1 offset table (in bytes)

 A dirty bitmap is stored using a one-level structure for the mapping to host
--
2.29.2
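To spell out the arithmetic being corrected: each L1 entry is a 64-bit
(8-byte) host offset, so a table of l1_size entries occupies 8 * l1_size
bytes. For example, with l1_size = 16 the L1 table is 128 bytes, where the
old wording would have wrongly implied 64 * 16 = 1024 bytes.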