From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Max Filippov, Eric Blake, Peter Lieven, "Michael S. Tsirkin",
    Cornelia Huck, Jason Wang, David Hildenbrand, Marcel Apfelbaum,
    Sagar Karandikar, Alberto Garcia, qemu-s390x@nongnu.org,
    kvm@vger.kernel.org, Liu Yuan, Jiri Slaby, Fam Zheng, Paul Durrant,
    Eduardo Habkost, Jiaxun Yang, Aurelien Jarno, Max Reitz, Halil Pasic,
    Michael Roth, Stefan Hajnoczi, Kevin Wolf, Christian Borntraeger,
    Thomas Huth, John Snow, Alistair Francis, Aleksandar Markovic,
    Paolo Bonzini, Anthony Perard, qemu-arm@nongnu.org, Yuval Shaia,
    Aleksandar Rikalo, sheepdog@lists.wpkg.org, Bastian Koppelmann,
    Yoshinori Sato, Gerd Hoffmann, Daniel P. Berrangé, Stefano Stabellini,
    Juan Quintela, qemu-riscv@nongnu.org, "Dr. David Alan Gilbert",
    Stefan Weil, Matthew Rosato, Sunil Muthuswamy, Markus Armbruster,
    Palmer Dabbelt, qemu-block@nongnu.org, Marc-André Lureau,
    xen-devel@lists.xenproject.org, Laurent Vivier, Hailiang Zhang,
    Richard Henderson, Peter Maydell, Huacai Chen
Subject: [PATCH v2] qemu/atomic.h: prefix qemu_ to solve collisions
Date: Tue, 22 Sep 2020 09:58:38 +0100
Message-Id: <20200922085838.230505-1-stefanha@redhat.com>

clang's C11 atomic_fetch_*() functions only take a C11 atomic type
pointer argument. QEMU uses direct types (int, etc.) and this causes a
compiler error when QEMU code calls these functions in a source file
that also includes <stdatomic.h> via a system header file:

  $ CC=clang CXX=clang++ ./configure ... && make
  ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
used by <stdatomic.h>. Prefix QEMU's APIs with qemu_ so that atomic.h
and <stdatomic.h> can co-exist.

This patch was generated using:

  $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
    sort -u >/tmp/changed_identifiers
  $ for identifier in $(</tmp/changed_identifiers); do
        sed -i "s%\<$identifier\>%qemu_$identifier%g" \
            $(git grep -I -l "\<$identifier\>")
    done

I manually fixed line-wrap issues and misaligned rST tables.
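For illustration only (this sketch is not part of the patch, and the file
and variable names are made up): the #ifndef guard below mimics the
pre-patch qemu/atomic.h, so once clang's <stdatomic.h> is in scope the
unprefixed name resolves to the C11 generic function, which rejects plain
non-_Atomic integers, while the qemu_-prefixed macro cannot clash:

  /* collision-sketch.c -- illustrative only, not QEMU source code */
  #include <stdatomic.h>      /* stands in for an indirect system include */

  /*
   * Pre-patch qemu/atomic.h guarded its definition like this, so when
   * <stdatomic.h> already defines atomic_fetch_add() the QEMU macro is
   * skipped and calls fall through to the C11 generic function.
   */
  #ifndef atomic_fetch_add
  #define atomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
  #endif

  /* Post-patch shape: the prefixed name cannot be taken by <stdatomic.h>. */
  #define qemu_atomic_fetch_add(ptr, n) \
      __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)

  static unsigned int counter;    /* plain type, not _Atomic, as in QEMU */

  int main(void)
  {
      /*
       * With clang the following line fails to compile because
       * atomic_fetch_add() is the C11 generic function here and needs a
       * pointer to an _Atomic type:
       *
       *   atomic_fetch_add(&counter, 1);
       */

      qemu_atomic_fetch_add(&counter, 1);   /* compiles and works */
      return counter != 1;
  }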
Signed-off-by: Stefan Hajnoczi
---
v2:
 * The diff of my manual fixups is available here:
   https://vmsplice.net/~stefan/atomic-namespace-pre-fixups.diff
   - Dropping #ifndef qemu_atomic_fetch_add in atomic.h
   - atomic_##X(haddr, val) glue macros not caught by grep
   - Keep atomic_add-bench name
   - C preprocessor backslash-newline ('\') column alignment
   - Line wrapping
 * Use grep -I to avoid accidentally modifying binary files
   (RISC-V OpenSBI ELFs) [Eric Blake]
 * Tweak .gitorder to show atomic.h changes first [Eric Blake]
 * Update grep commands in commit description so reviewers can reproduce
   mechanical changes [Eric Blake]
---
 include/qemu/atomic.h | 258 +++++++-------
 docs/devel/lockcnt.txt | 14 +-
 docs/devel/rcu.txt | 40 +--
 accel/tcg/atomic_template.h | 20 +-
 include/block/aio-wait.h | 4 +-
 include/block/aio.h | 8 +-
 include/exec/cpu_ldst.h | 2 +-
 include/exec/exec-all.h | 6 +-
 include/exec/log.h | 6 +-
 include/exec/memory.h | 2 +-
 include/exec/ram_addr.h | 27 +-
 include/exec/ramlist.h | 2 +-
 include/exec/tb-lookup.h | 4 +-
 include/hw/core/cpu.h | 2 +-
 include/qemu/atomic128.h | 6 +-
 include/qemu/bitops.h | 2 +-
 include/qemu/coroutine.h | 2 +-
 include/qemu/log.h | 6 +-
 include/qemu/queue.h | 8 +-
 include/qemu/rcu.h | 10 +-
 include/qemu/rcu_queue.h | 109 +++---
 include/qemu/seqlock.h | 8 +-
 include/qemu/stats64.h | 28 +-
 include/qemu/thread.h | 37 +-
 .../infiniband/hw/vmw_pvrdma/pvrdma_ring.h | 14 +-
 linux-user/qemu.h | 4 +-
 tcg/i386/tcg-target.h | 2 +-
 tcg/s390/tcg-target.h | 2 +-
 tcg/tci/tcg-target.h | 2 +-
 accel/kvm/kvm-all.c | 12 +-
 accel/tcg/cpu-exec.c | 16 +-
 accel/tcg/cputlb.c | 24 +-
 accel/tcg/tcg-all.c | 2 +-
 accel/tcg/translate-all.c | 56 +--
 audio/jackaudio.c | 20 +-
 block.c | 4 +-
 block/block-backend.c | 15 +-
 block/io.c | 48 +--
 block/nfs.c | 2 +-
 block/sheepdog.c | 2 +-
 block/throttle-groups.c | 13 +-
 block/throttle.c | 4 +-
 blockdev.c | 2 +-
 blockjob.c | 2 +-
 contrib/libvhost-user/libvhost-user.c | 2 +-
 cpus-common.c | 26 +-
 dump/dump.c | 8 +-
 exec.c | 49 +--
 hw/core/cpu.c | 6 +-
 hw/display/qxl.c | 7 +-
 hw/hyperv/hyperv.c | 11 +-
 hw/hyperv/vmbus.c | 2 +-
 hw/i386/xen/xen-hvm.c | 2 +-
 hw/intc/rx_icu.c | 12 +-
 hw/intc/sifive_plic.c | 4 +-
 hw/misc/edu.c | 16 +-
 hw/net/virtio-net.c | 10 +-
 hw/rdma/rdma_backend.c | 19 +-
 hw/rdma/rdma_rm.c | 2 +-
 hw/rdma/vmw/pvrdma_dev_ring.c | 4 +-
 hw/s390x/s390-pci-bus.c | 2 +-
 hw/s390x/virtio-ccw.c | 2 +-
 hw/virtio/vhost.c | 4 +-
 hw/virtio/virtio-mmio.c | 6 +-
 hw/virtio/virtio-pci.c | 6 +-
 hw/virtio/virtio.c | 16 +-
 hw/xtensa/pic_cpu.c | 4 +-
 iothread.c | 6 +-
 linux-user/hppa/cpu_loop.c | 11 +-
 linux-user/signal.c | 8 +-
 migration/colo-failover.c | 4 +-
 migration/migration.c | 8 +-
 migration/multifd.c | 18 +-
 migration/postcopy-ram.c | 35 +-
 migration/rdma.c | 34 +-
 monitor/hmp.c | 6 +-
 monitor/misc.c | 2 +-
 monitor/monitor.c | 6 +-
 qemu-nbd.c | 2 +-
 qga/commands.c | 12 +-
 qom/object.c | 20 +-
 scsi/qemu-pr-helper.c | 4 +-
 softmmu/cpu-throttle.c | 10 +-
 softmmu/cpus.c | 42 +--
 softmmu/memory.c | 6 +-
 softmmu/vl.c | 2 +-
 target/arm/mte_helper.c | 6 +-
 target/hppa/op_helper.c | 2 +-
 target/i386/mem_helper.c | 2 +-
 target/i386/whpx-all.c | 6 +-
 target/riscv/cpu_helper.c | 2 +-
 target/s390x/mem_helper.c | 4 +-
 target/xtensa/exc_helper.c | 4 +-
 target/xtensa/op_helper.c | 2 +-
 tcg/tcg.c | 59 ++--
 tcg/tci.c | 2 +-
 tests/atomic64-bench.c | 14 +-
 tests/atomic_add-bench.c | 14 +-
 tests/iothread.c | 2 +-
 tests/qht-bench.c | 12 +-
 tests/rcutorture.c | 24 +-
 tests/test-aio-multithread.c | 52 +--
 tests/test-logging.c | 4 +-
 tests/test-rcu-list.c | 38 +-
tests/test-thread-pool.c | 10 +- util/aio-posix.c | 15 +- util/aio-wait.c | 2 +- util/aio-win32.c | 6 +- util/async.c | 37 +- util/atomic64.c | 10 +- util/bitmap.c | 14 +- util/cacheinfo.c | 2 +- util/fdmon-epoll.c | 4 +- util/fdmon-io_uring.c | 13 +- util/lockcnt.c | 59 ++-- util/log.c | 10 +- util/qemu-coroutine-lock.c | 19 +- util/qemu-coroutine-sleep.c | 4 +- util/qemu-coroutine.c | 6 +- util/qemu-sockets.c | 4 +- util/qemu-thread-posix.c | 12 +- util/qemu-thread-win32.c | 12 +- util/qemu-timer.c | 12 +- util/qht.c | 57 +-- util/qsp.c | 50 +-- util/rcu.c | 36 +- util/stats64.c | 34 +- docs/devel/atomics.rst | 328 +++++++++--------- scripts/kernel-doc | 2 +- tcg/aarch64/tcg-target.c.inc | 2 +- tcg/mips/tcg-target.c.inc | 2 +- tcg/ppc/tcg-target.c.inc | 6 +- tcg/sparc/tcg-target.c.inc | 5 +- 133 files changed, 1200 insertions(+), 1135 deletions(-) diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h index ff72db5115..f2c406eb49 100644 --- a/include/qemu/atomic.h +++ b/include/qemu/atomic.h @@ -125,49 +125,49 @@ * no effect on the generated code but not using the atomic primitives * will get flagged by sanitizers as a violation. */ -#define atomic_read__nocheck(ptr) \ +#define qemu_atomic_read__nocheck(ptr) \ __atomic_load_n(ptr, __ATOMIC_RELAXED) =20 -#define atomic_read(ptr) \ +#define qemu_atomic_read(ptr) \ ({ \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ - atomic_read__nocheck(ptr); \ + qemu_atomic_read__nocheck(ptr); \ }) =20 -#define atomic_set__nocheck(ptr, i) \ +#define qemu_atomic_set__nocheck(ptr, i) \ __atomic_store_n(ptr, i, __ATOMIC_RELAXED) =20 -#define atomic_set(ptr, i) do { \ +#define qemu_atomic_set(ptr, i) do { \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ - atomic_set__nocheck(ptr, i); \ + qemu_atomic_set__nocheck(ptr, i); \ } while(0) =20 /* See above: most compilers currently treat consume and acquire the - * same, but this slows down atomic_rcu_read unnecessarily. + * same, but this slows down qemu_atomic_rcu_read unnecessarily. 
*/ #ifdef __SANITIZE_THREAD__ -#define atomic_rcu_read__nocheck(ptr, valptr) \ +#define qemu_atomic_rcu_read__nocheck(ptr, valptr) \ __atomic_load(ptr, valptr, __ATOMIC_CONSUME); #else -#define atomic_rcu_read__nocheck(ptr, valptr) \ +#define qemu_atomic_rcu_read__nocheck(ptr, valptr) \ __atomic_load(ptr, valptr, __ATOMIC_RELAXED); \ smp_read_barrier_depends(); #endif =20 -#define atomic_rcu_read(ptr) \ +#define qemu_atomic_rcu_read(ptr) \ ({ \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ typeof_strip_qual(*ptr) _val; \ - atomic_rcu_read__nocheck(ptr, &_val); \ + qemu_atomic_rcu_read__nocheck(ptr, &_val); \ _val; \ }) =20 -#define atomic_rcu_set(ptr, i) do { \ +#define qemu_atomic_rcu_set(ptr, i) do { \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ __atomic_store_n(ptr, i, __ATOMIC_RELEASE); \ } while(0) =20 -#define atomic_load_acquire(ptr) \ +#define qemu_atomic_load_acquire(ptr) \ ({ \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ typeof_strip_qual(*ptr) _val; \ @@ -175,7 +175,7 @@ _val; \ }) =20 -#define atomic_store_release(ptr, i) do { \ +#define qemu_atomic_store_release(ptr, i) do { \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ __atomic_store_n(ptr, i, __ATOMIC_RELEASE); \ } while(0) @@ -183,56 +183,75 @@ =20 /* All the remaining operations are fully sequentially consistent */ =20 -#define atomic_xchg__nocheck(ptr, i) ({ \ +#define qemu_atomic_xchg__nocheck(ptr, i) ({ \ __atomic_exchange_n(ptr, (i), __ATOMIC_SEQ_CST); \ }) =20 -#define atomic_xchg(ptr, i) ({ \ +#define qemu_atomic_xchg(ptr, i) ({ \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ - atomic_xchg__nocheck(ptr, i); \ + qemu_atomic_xchg__nocheck(ptr, i); \ }) =20 /* Returns the eventual value, failed or not */ -#define atomic_cmpxchg__nocheck(ptr, old, new) ({ \ +#define qemu_atomic_cmpxchg__nocheck(ptr, old, new) ({ \ typeof_strip_qual(*ptr) _old =3D (old); \ (void)__atomic_compare_exchange_n(ptr, &_old, new, false, \ __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \ _old; \ }) =20 -#define atomic_cmpxchg(ptr, old, new) ({ \ +#define qemu_atomic_cmpxchg(ptr, old, new) ({ \ QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \ - atomic_cmpxchg__nocheck(ptr, old, new); \ + qemu_atomic_cmpxchg__nocheck(ptr, old, new); \ }) =20 /* Provide shorter names for GCC atomic builtins, return old value */ -#define atomic_fetch_inc(ptr) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST) -#define atomic_fetch_dec(ptr) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_inc(ptr) \ + __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_dec(ptr) \ + __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST) =20 -#ifndef atomic_fetch_add -#define atomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_fetch_sub(ptr, n) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_fetch_and(ptr, n) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_fetch_or(ptr, n) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CS= T) -#define atomic_fetch_xor(ptr, n) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_C= ST) -#endif +#define qemu_atomic_fetch_add(ptr, n) \ + __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_sub(ptr, n) \ + __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_and(ptr, n) \ + __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_or(ptr, n) \ + __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_fetch_xor(ptr, n) \ + __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST) =20 -#define 
atomic_inc_fetch(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_SEQ_C= ST) -#define atomic_dec_fetch(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_SEQ_C= ST) -#define atomic_add_fetch(ptr, n) __atomic_add_fetch(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_sub_fetch(ptr, n) __atomic_sub_fetch(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_and_fetch(ptr, n) __atomic_and_fetch(ptr, n, __ATOMIC_SEQ_C= ST) -#define atomic_or_fetch(ptr, n) __atomic_or_fetch(ptr, n, __ATOMIC_SEQ_CS= T) -#define atomic_xor_fetch(ptr, n) __atomic_xor_fetch(ptr, n, __ATOMIC_SEQ_C= ST) +#define qemu_atomic_inc_fetch(ptr) \ + __atomic_add_fetch(ptr, 1, __ATOMIC_SEQ_CST) +#define qemu_atomic_dec_fetch(ptr) \ + __atomic_sub_fetch(ptr, 1, __ATOMIC_SEQ_CST) +#define qemu_atomic_add_fetch(ptr, n) \ + __atomic_add_fetch(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_sub_fetch(ptr, n) \ + __atomic_sub_fetch(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_and_fetch(ptr, n) \ + __atomic_and_fetch(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_or_fetch(ptr, n) \ + __atomic_or_fetch(ptr, n, __ATOMIC_SEQ_CST) +#define qemu_atomic_xor_fetch(ptr, n) \ + __atomic_xor_fetch(ptr, n, __ATOMIC_SEQ_CST) =20 /* And even shorter names that return void. */ -#define atomic_inc(ptr) ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ= _CST)) -#define atomic_dec(ptr) ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ= _CST)) -#define atomic_add(ptr, n) ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ= _CST)) -#define atomic_sub(ptr, n) ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ= _CST)) -#define atomic_and(ptr, n) ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ= _CST)) -#define atomic_or(ptr, n) ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_= CST)) -#define atomic_xor(ptr, n) ((void) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ= _CST)) +#define qemu_atomic_inc(ptr) \ + ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)) +#define qemu_atomic_dec(ptr) \ + ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST)) +#define qemu_atomic_add(ptr, n) \ + ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)) +#define qemu_atomic_sub(ptr, n) \ + ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST)) +#define qemu_atomic_and(ptr, n) \ + ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST)) +#define qemu_atomic_or(ptr, n) \ + ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST)) +#define qemu_atomic_xor(ptr, n) \ + ((void) __atomic_fetch_xor(ptr, n, __ATOMIC_SEQ_CST)) =20 #else /* __ATOMIC_RELAXED */ =20 @@ -272,7 +291,7 @@ * but it is a full barrier at the hardware level. Add a compiler barrier * to make it a full barrier also at the compiler level. 
*/ -#define atomic_xchg(ptr, i) (barrier(), __sync_lock_test_and_set(ptr, i= )) +#define qemu_atomic_xchg(ptr, i) (barrier(), __sync_lock_test_and_set(ptr,= i)) =20 #elif defined(_ARCH_PPC) =20 @@ -325,14 +344,15 @@ /* These will only be atomic if the processor does the fetch or store * in a single issue memory operation */ -#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile*) (p)) -#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile*) (p)) =3D= (i)) +#define qemu_atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile*) (p)) +#define qemu_atomic_set__nocheck(p, i) \ + ((*(__typeof__(*(p)) volatile*) (p)) =3D (i)) =20 -#define atomic_read(ptr) atomic_read__nocheck(ptr) -#define atomic_set(ptr, i) atomic_set__nocheck(ptr,i) +#define qemu_atomic_read(ptr) qemu_atomic_read__nocheck(ptr) +#define qemu_atomic_set(ptr, i) qemu_atomic_set__nocheck(ptr,i) =20 /** - * atomic_rcu_read - reads a RCU-protected pointer to a local variable + * qemu_atomic_rcu_read - reads a RCU-protected pointer to a local variable * into a RCU read-side critical section. The pointer can later be safely * dereferenced within the critical section. * @@ -342,21 +362,22 @@ * Inserts memory barriers on architectures that require them (currently o= nly * Alpha) and documents which pointers are protected by RCU. * - * atomic_rcu_read also includes a compiler barrier to ensure that + * qemu_atomic_rcu_read also includes a compiler barrier to ensure that * value-speculative optimizations (e.g. VSS: Value Speculation * Scheduling) does not perform the data read before the pointer read * by speculating the value of the pointer. * - * Should match atomic_rcu_set(), atomic_xchg(), atomic_cmpxchg(). + * Should match qemu_atomic_rcu_set(), qemu_atomic_xchg(), + * and qemu_atomic_cmpxchg(). */ -#define atomic_rcu_read(ptr) ({ \ - typeof(*ptr) _val =3D atomic_read(ptr); \ +#define qemu_atomic_rcu_read(ptr) ({ \ + typeof(*ptr) _val =3D qemu_atomic_read(ptr); \ smp_read_barrier_depends(); \ _val; \ }) =20 /** - * atomic_rcu_set - assigns (publicizes) a pointer to a new data structure + * qemu_atomic_rcu_set - assigns (publicizes) a pointer to a new data stru= cture * meant to be read by RCU read-side critical sections. * * Documents which pointers will be dereferenced by RCU read-side critical @@ -364,65 +385,63 @@ * them. It also makes sure the compiler does not reorder code initializin= g the * data structure before its publication. * - * Should match atomic_rcu_read(). + * Should match qemu_atomic_rcu_read(). 
*/ -#define atomic_rcu_set(ptr, i) do { \ +#define qemu_atomic_rcu_set(ptr, i) do { \ smp_wmb(); \ - atomic_set(ptr, i); \ + qemu_atomic_set(ptr, i); \ } while (0) =20 -#define atomic_load_acquire(ptr) ({ \ - typeof(*ptr) _val =3D atomic_read(ptr); \ - smp_mb_acquire(); \ - _val; \ +#define qemu_atomic_load_acquire(ptr) ({ \ + typeof(*ptr) _val =3D qemu_atomic_read(ptr); \ + smp_mb_acquire(); \ + _val; \ }) =20 -#define atomic_store_release(ptr, i) do { \ - smp_mb_release(); \ - atomic_set(ptr, i); \ +#define qemu_atomic_store_release(ptr, i) do { \ + smp_mb_release(); \ + qemu_atomic_set(ptr, i); \ } while (0) =20 -#ifndef atomic_xchg #if defined(__clang__) -#define atomic_xchg(ptr, i) __sync_swap(ptr, i) +#define qemu_atomic_xchg(ptr, i) __sync_swap(ptr, i) #else /* __sync_lock_test_and_set() is documented to be an acquire barrier only.= */ -#define atomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i)) +#define qemu_atomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, = i)) #endif -#endif -#define atomic_xchg__nocheck atomic_xchg +#define qemu_atomic_xchg__nocheck qemu_atomic_xchg =20 /* Provide shorter names for GCC atomic builtins. */ -#define atomic_fetch_inc(ptr) __sync_fetch_and_add(ptr, 1) -#define atomic_fetch_dec(ptr) __sync_fetch_and_add(ptr, -1) +#define qemu_atomic_fetch_inc(ptr) __sync_fetch_and_add(ptr, 1) +#define qemu_atomic_fetch_dec(ptr) __sync_fetch_and_add(ptr, -1) =20 -#ifndef atomic_fetch_add -#define atomic_fetch_add(ptr, n) __sync_fetch_and_add(ptr, n) -#define atomic_fetch_sub(ptr, n) __sync_fetch_and_sub(ptr, n) -#define atomic_fetch_and(ptr, n) __sync_fetch_and_and(ptr, n) -#define atomic_fetch_or(ptr, n) __sync_fetch_and_or(ptr, n) -#define atomic_fetch_xor(ptr, n) __sync_fetch_and_xor(ptr, n) -#endif +#define qemu_atomic_fetch_add(ptr, n) __sync_fetch_and_add(ptr, n) +#define qemu_atomic_fetch_sub(ptr, n) __sync_fetch_and_sub(ptr, n) +#define qemu_atomic_fetch_and(ptr, n) __sync_fetch_and_and(ptr, n) +#define qemu_atomic_fetch_or(ptr, n) __sync_fetch_and_or(ptr, n) +#define qemu_atomic_fetch_xor(ptr, n) __sync_fetch_and_xor(ptr, n) =20 -#define atomic_inc_fetch(ptr) __sync_add_and_fetch(ptr, 1) -#define atomic_dec_fetch(ptr) __sync_add_and_fetch(ptr, -1) -#define atomic_add_fetch(ptr, n) __sync_add_and_fetch(ptr, n) -#define atomic_sub_fetch(ptr, n) __sync_sub_and_fetch(ptr, n) -#define atomic_and_fetch(ptr, n) __sync_and_and_fetch(ptr, n) -#define atomic_or_fetch(ptr, n) __sync_or_and_fetch(ptr, n) -#define atomic_xor_fetch(ptr, n) __sync_xor_and_fetch(ptr, n) +#define qemu_atomic_inc_fetch(ptr) __sync_add_and_fetch(ptr, 1) +#define qemu_atomic_dec_fetch(ptr) __sync_add_and_fetch(ptr, -1) +#define qemu_atomic_add_fetch(ptr, n) __sync_add_and_fetch(ptr, n) +#define qemu_atomic_sub_fetch(ptr, n) __sync_sub_and_fetch(ptr, n) +#define qemu_atomic_and_fetch(ptr, n) __sync_and_and_fetch(ptr, n) +#define qemu_atomic_or_fetch(ptr, n) __sync_or_and_fetch(ptr, n) +#define qemu_atomic_xor_fetch(ptr, n) __sync_xor_and_fetch(ptr, n) =20 -#define atomic_cmpxchg(ptr, old, new) __sync_val_compare_and_swap(ptr, old= , new) -#define atomic_cmpxchg__nocheck(ptr, old, new) atomic_cmpxchg(ptr, old, n= ew) +#define qemu_atomic_cmpxchg(ptr, old, new) \ + __sync_val_compare_and_swap(ptr, old, new) +#define qemu_atomic_cmpxchg__nocheck(ptr, old, new) \ + qemu_atomic_cmpxchg(ptr, old, new) =20 /* And even shorter names that return void. 
*/ -#define atomic_inc(ptr) ((void) __sync_fetch_and_add(ptr, 1)) -#define atomic_dec(ptr) ((void) __sync_fetch_and_add(ptr, -1)) -#define atomic_add(ptr, n) ((void) __sync_fetch_and_add(ptr, n)) -#define atomic_sub(ptr, n) ((void) __sync_fetch_and_sub(ptr, n)) -#define atomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n)) -#define atomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n)) -#define atomic_xor(ptr, n) ((void) __sync_fetch_and_xor(ptr, n)) +#define qemu_atomic_inc(ptr) ((void) __sync_fetch_and_add(ptr, 1)) +#define qemu_atomic_dec(ptr) ((void) __sync_fetch_and_add(ptr, -1)) +#define qemu_atomic_add(ptr, n) ((void) __sync_fetch_and_add(ptr, n)) +#define qemu_atomic_sub(ptr, n) ((void) __sync_fetch_and_sub(ptr, n)) +#define qemu_atomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n)) +#define qemu_atomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n)) +#define qemu_atomic_xor(ptr, n) ((void) __sync_fetch_and_xor(ptr, n)) =20 #endif /* __ATOMIC_RELAXED */ =20 @@ -436,11 +455,11 @@ /* This is more efficient than a store plus a fence. */ #if !defined(__SANITIZE_THREAD__) #if defined(__i386__) || defined(__x86_64__) || defined(__s390x__) -#define atomic_mb_set(ptr, i) ((void)atomic_xchg(ptr, i)) +#define qemu_atomic_mb_set(ptr, i) ((void)qemu_atomic_xchg(ptr, i)) #endif #endif =20 -/* atomic_mb_read/set semantics map Java volatile variables. They are +/* qemu_atomic_mb_read/set semantics map Java volatile variables. They are * less expensive on some platforms (notably POWER) than fully * sequentially consistent operations. * @@ -448,58 +467,55 @@ * use. See docs/devel/atomics.txt for more discussion. */ =20 -#ifndef atomic_mb_read -#define atomic_mb_read(ptr) \ - atomic_load_acquire(ptr) -#endif +#define qemu_atomic_mb_read(ptr) qemu_atomic_load_acquire(ptr) =20 -#ifndef atomic_mb_set -#define atomic_mb_set(ptr, i) do { \ - atomic_store_release(ptr, i); \ +#ifndef qemu_atomic_mb_set +#define qemu_atomic_mb_set(ptr, i) do { \ + qemu_atomic_store_release(ptr, i); \ smp_mb(); \ } while(0) #endif =20 -#define atomic_fetch_inc_nonzero(ptr) ({ \ - typeof_strip_qual(*ptr) _oldn =3D atomic_read(ptr); \ - while (_oldn && atomic_cmpxchg(ptr, _oldn, _oldn + 1) !=3D _oldn) { \ - _oldn =3D atomic_read(ptr); \ +#define qemu_atomic_fetch_inc_nonzero(ptr) ({ \ + typeof_strip_qual(*ptr) _oldn =3D qemu_atomic_read(ptr); \ + while (_oldn && qemu_atomic_cmpxchg(ptr, _oldn, _oldn + 1) !=3D _oldn)= { \ + _oldn =3D qemu_atomic_read(ptr); \ } \ _oldn; \ }) =20 /* Abstractions to access atomically (i.e. 
"once") i64/u64 variables */ #ifdef CONFIG_ATOMIC64 -static inline int64_t atomic_read_i64(const int64_t *ptr) +static inline int64_t qemu_atomic_read_i64(const int64_t *ptr) { /* use __nocheck because sizeof(void *) might be < sizeof(u64) */ - return atomic_read__nocheck(ptr); + return qemu_atomic_read__nocheck(ptr); } =20 -static inline uint64_t atomic_read_u64(const uint64_t *ptr) +static inline uint64_t qemu_atomic_read_u64(const uint64_t *ptr) { - return atomic_read__nocheck(ptr); + return qemu_atomic_read__nocheck(ptr); } =20 -static inline void atomic_set_i64(int64_t *ptr, int64_t val) +static inline void qemu_atomic_set_i64(int64_t *ptr, int64_t val) { - atomic_set__nocheck(ptr, val); + qemu_atomic_set__nocheck(ptr, val); } =20 -static inline void atomic_set_u64(uint64_t *ptr, uint64_t val) +static inline void qemu_atomic_set_u64(uint64_t *ptr, uint64_t val) { - atomic_set__nocheck(ptr, val); + qemu_atomic_set__nocheck(ptr, val); } =20 -static inline void atomic64_init(void) +static inline void qemu_atomic64_init(void) { } #else /* !CONFIG_ATOMIC64 */ -int64_t atomic_read_i64(const int64_t *ptr); -uint64_t atomic_read_u64(const uint64_t *ptr); -void atomic_set_i64(int64_t *ptr, int64_t val); -void atomic_set_u64(uint64_t *ptr, uint64_t val); -void atomic64_init(void); +int64_t qemu_atomic_read_i64(const int64_t *ptr); +uint64_t qemu_atomic_read_u64(const uint64_t *ptr); +void qemu_atomic_set_i64(int64_t *ptr, int64_t val); +void qemu_atomic_set_u64(uint64_t *ptr, uint64_t val); +void qemu_atomic64_init(void); #endif /* !CONFIG_ATOMIC64 */ =20 #endif /* QEMU_ATOMIC_H */ diff --git a/docs/devel/lockcnt.txt b/docs/devel/lockcnt.txt index 7c099bc6c8..dc928f85f9 100644 --- a/docs/devel/lockcnt.txt +++ b/docs/devel/lockcnt.txt @@ -95,10 +95,10 @@ not just frees, though there could be cases where this = is not necessary. =20 Reads, instead, can be done without taking the mutex, as long as the readers and writers use the same macros that are used for RCU, for -example atomic_rcu_read, atomic_rcu_set, QLIST_FOREACH_RCU, etc. This is -because the reads are done outside a lock and a set or QLIST_INSERT_HEAD -can happen concurrently with the read. The RCU API ensures that the -processor and the compiler see all required memory barriers. +example qemu_atomic_rcu_read, qemu_atomic_rcu_set, QLIST_FOREACH_RCU, etc. +This is because the reads are done outside a lock and a set or +QLIST_INSERT_HEAD can happen concurrently with the read. The RCU API ensu= res +that the processor and the compiler see all required memory barriers. =20 This could be implemented simply by protecting the counter with the mutex, for example: @@ -189,7 +189,7 @@ qemu_lockcnt_lock and qemu_lockcnt_unlock: if (!xyz) { new_xyz =3D g_new(XYZ, 1); ... - atomic_rcu_set(&xyz, new_xyz); + qemu_atomic_rcu_set(&xyz, new_xyz); } qemu_lockcnt_unlock(&xyz_lockcnt); =20 @@ -198,7 +198,7 @@ qemu_lockcnt_dec: =20 qemu_lockcnt_inc(&xyz_lockcnt); if (xyz) { - XYZ *p =3D atomic_rcu_read(&xyz); + XYZ *p =3D qemu_atomic_rcu_read(&xyz); ... /* Accesses can now be done through "p". */ } @@ -222,7 +222,7 @@ the decrement, the locking and the check on count as fo= llows: =20 qemu_lockcnt_inc(&xyz_lockcnt); if (xyz) { - XYZ *p =3D atomic_rcu_read(&xyz); + XYZ *p =3D qemu_atomic_rcu_read(&xyz); ... /* Accesses can now be done through "p". 
*/ } diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt index 0ce15ba198..d04791c915 100644 --- a/docs/devel/rcu.txt +++ b/docs/devel/rcu.txt @@ -130,13 +130,13 @@ The core RCU API is small: =20 g_free_rcu(&foo, rcu); =20 - typeof(*p) atomic_rcu_read(p); + typeof(*p) qemu_atomic_rcu_read(p); =20 - atomic_rcu_read() is similar to atomic_load_acquire(), but it makes - some assumptions on the code that calls it. This allows a more + qemu_atomic_rcu_read() is similar to qemu_atomic_load_acquire(), but it + makes some assumptions on the code that calls it. This allows a more optimized implementation. =20 - atomic_rcu_read assumes that whenever a single RCU critical + qemu_atomic_rcu_read assumes that whenever a single RCU critical section reads multiple shared data, these reads are either data-dependent or need no ordering. This is almost always the case when using RCU, because read-side critical sections typically @@ -144,7 +144,7 @@ The core RCU API is small: every update) until reaching a data structure of interest, and then read from there. =20 - RCU read-side critical sections must use atomic_rcu_read() to + RCU read-side critical sections must use qemu_atomic_rcu_read() to read data, unless concurrent writes are prevented by another synchronization mechanism. =20 @@ -152,18 +152,18 @@ The core RCU API is small: data structure in a single direction, opposite to the direction in which the updater initializes it. =20 - void atomic_rcu_set(p, typeof(*p) v); + void qemu_atomic_rcu_set(p, typeof(*p) v); =20 - atomic_rcu_set() is similar to atomic_store_release(), though it a= lso - makes assumptions on the code that calls it in order to allow a mo= re - optimized implementation. + qemu_atomic_rcu_set() is similar to qemu_atomic_store_release(), though + it also makes assumptions on the code that calls it in order to allow a + more optimized implementation. =20 - In particular, atomic_rcu_set() suffices for synchronization + In particular, qemu_atomic_rcu_set() suffices for synchronization with readers, if the updater never mutates a field within a data item that is already accessible to readers. This is the case when initializing a new copy of the RCU-protected data structure; just ensure that initialization of *p is carried out - before atomic_rcu_set() makes the data item visible to readers. + before qemu_atomic_rcu_set() makes the data item visible to reader= s. If this rule is observed, writes will happen in the opposite order as reads in the RCU read-side critical sections (or if there is just one update), and there will be no need for other @@ -212,7 +212,7 @@ DIFFERENCES WITH LINUX programming; not allowing this would prevent upgrading an RCU read-side critical section to become an updater. =20 -- atomic_rcu_read and atomic_rcu_set replace rcu_dereference and +- qemu_atomic_rcu_read and qemu_atomic_rcu_set replace rcu_dereference and rcu_assign_pointer. They take a _pointer_ to the variable being accesse= d. =20 - call_rcu is a macro that has an extra argument (the name of the first @@ -257,7 +257,7 @@ may be used as a restricted reference-counting mechanis= m. For example, consider the following code fragment: =20 rcu_read_lock(); - p =3D atomic_rcu_read(&foo); + p =3D qemu_atomic_rcu_read(&foo); /* do something with p. 
*/ rcu_read_unlock(); =20 @@ -268,7 +268,7 @@ The write side looks simply like this (with appropriate= locking): =20 qemu_mutex_lock(&foo_mutex); old =3D foo; - atomic_rcu_set(&foo, new); + qemu_atomic_rcu_set(&foo, new); qemu_mutex_unlock(&foo_mutex); synchronize_rcu(); free(old); @@ -277,7 +277,7 @@ If the processing cannot be done purely within the crit= ical section, it is possible to combine this idiom with a "real" reference count: =20 rcu_read_lock(); - p =3D atomic_rcu_read(&foo); + p =3D qemu_atomic_rcu_read(&foo); foo_ref(p); rcu_read_unlock(); /* do something with p. */ @@ -287,7 +287,7 @@ The write side can be like this: =20 qemu_mutex_lock(&foo_mutex); old =3D foo; - atomic_rcu_set(&foo, new); + qemu_atomic_rcu_set(&foo, new); qemu_mutex_unlock(&foo_mutex); synchronize_rcu(); foo_unref(old); @@ -296,7 +296,7 @@ or with call_rcu: =20 qemu_mutex_lock(&foo_mutex); old =3D foo; - atomic_rcu_set(&foo, new); + qemu_atomic_rcu_set(&foo, new); qemu_mutex_unlock(&foo_mutex); call_rcu(foo_unref, old, rcu); =20 @@ -307,7 +307,7 @@ last reference may be dropped on the read side. Hence = you can use call_rcu() instead: =20 foo_unref(struct foo *p) { - if (atomic_fetch_dec(&p->refcount) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&p->refcount) =3D=3D 1) { call_rcu(foo_destroy, p, rcu); } } @@ -375,7 +375,7 @@ Instead, we store the size of the array with the array = itself: =20 read side: rcu_read_lock(); - struct arr *array =3D atomic_rcu_read(&global_array); + struct arr *array =3D qemu_atomic_rcu_read(&global_array); x =3D i < array->size ? array->data[i] : -1; rcu_read_unlock(); return x; @@ -392,7 +392,7 @@ Instead, we store the size of the array with the array = itself: =20 /* Removal phase. */ old_array =3D global_array; - atomic_rcu_set(&new_array->data, new_array); + qemu_atomic_rcu_set(&new_array->data, new_array); synchronize_rcu(); =20 /* Reclamation phase. 
*/ diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h index 26969487d6..6e9e221f78 100644 --- a/accel/tcg/atomic_template.h +++ b/accel/tcg/atomic_template.h @@ -83,7 +83,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_u= long addr, #if DATA_SIZE =3D=3D 16 ret =3D atomic16_cmpxchg(haddr, cmpv, newv); #else - ret =3D atomic_cmpxchg__nocheck(haddr, cmpv, newv); + ret =3D qemu_atomic_cmpxchg__nocheck(haddr, cmpv, newv); #endif ATOMIC_MMU_CLEANUP; atomic_trace_rmw_post(env, addr, info); @@ -131,7 +131,7 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ul= ong addr, ATOMIC_MMU_IDX); =20 atomic_trace_rmw_pre(env, addr, info); - ret =3D atomic_xchg__nocheck(haddr, val); + ret =3D qemu_atomic_xchg__nocheck(haddr, val); ATOMIC_MMU_CLEANUP; atomic_trace_rmw_post(env, addr, info); return ret; @@ -147,7 +147,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong= addr, \ uint16_t info =3D trace_mem_build_info(SHIFT, false, 0, false, \ ATOMIC_MMU_IDX); \ atomic_trace_rmw_pre(env, addr, info); \ - ret =3D atomic_##X(haddr, val); \ + ret =3D qemu_atomic_##X(haddr, val); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, info); \ return ret; \ @@ -182,10 +182,10 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulo= ng addr, \ ATOMIC_MMU_IDX); \ atomic_trace_rmw_pre(env, addr, info); \ smp_mb(); \ - cmp =3D atomic_read__nocheck(haddr); \ + cmp =3D qemu_atomic_read__nocheck(haddr); \ do { \ old =3D cmp; new =3D FN(old, val); \ - cmp =3D atomic_cmpxchg__nocheck(haddr, old, new); \ + cmp =3D qemu_atomic_cmpxchg__nocheck(haddr, old, new); \ } while (cmp !=3D old); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, info); \ @@ -230,7 +230,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target= _ulong addr, #if DATA_SIZE =3D=3D 16 ret =3D atomic16_cmpxchg(haddr, BSWAP(cmpv), BSWAP(newv)); #else - ret =3D atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv)); + ret =3D qemu_atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv)); #endif ATOMIC_MMU_CLEANUP; atomic_trace_rmw_post(env, addr, info); @@ -280,7 +280,7 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ul= ong addr, ATOMIC_MMU_IDX); =20 atomic_trace_rmw_pre(env, addr, info); - ret =3D atomic_xchg__nocheck(haddr, BSWAP(val)); + ret =3D qemu_atomic_xchg__nocheck(haddr, BSWAP(val)); ATOMIC_MMU_CLEANUP; atomic_trace_rmw_post(env, addr, info); return BSWAP(ret); @@ -296,7 +296,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong= addr, \ uint16_t info =3D trace_mem_build_info(SHIFT, false, MO_BSWAP, \ false, ATOMIC_MMU_IDX); \ atomic_trace_rmw_pre(env, addr, info); \ - ret =3D atomic_##X(haddr, BSWAP(val)); \ + ret =3D qemu_atomic_##X(haddr, BSWAP(val)); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, info); \ return BSWAP(ret); \ @@ -329,10 +329,10 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulo= ng addr, \ false, ATOMIC_MMU_IDX); \ atomic_trace_rmw_pre(env, addr, info); \ smp_mb(); \ - ldn =3D atomic_read__nocheck(haddr); \ + ldn =3D qemu_atomic_read__nocheck(haddr); \ do { \ ldo =3D ldn; old =3D BSWAP(ldo); new =3D FN(old, val); \ - ldn =3D atomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \ + ldn =3D qemu_atomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \ } while (ldo !=3D ldn); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, info); \ diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h index 716d2639df..8f5a890666 100644 --- a/include/block/aio-wait.h +++ b/include/block/aio-wait.h @@ -80,7 +80,7 @@ extern AioWait global_aio_wait; 
AioWait *wait_ =3D &global_aio_wait; \ AioContext *ctx_ =3D (ctx); \ /* Increment wait_->num_waiters before evaluating cond. */ \ - atomic_inc(&wait_->num_waiters); \ + qemu_atomic_inc(&wait_->num_waiters); \ if (ctx_ && in_aio_context_home_thread(ctx_)) { \ while ((cond)) { \ aio_poll(ctx_, true); \ @@ -100,7 +100,7 @@ extern AioWait global_aio_wait; waited_ =3D true; \ } \ } \ - atomic_dec(&wait_->num_waiters); \ + qemu_atomic_dec(&wait_->num_waiters); \ waited_; }) =20 /** diff --git a/include/block/aio.h b/include/block/aio.h index b2f703fa3f..057e73c48c 100644 --- a/include/block/aio.h +++ b/include/block/aio.h @@ -595,7 +595,7 @@ int64_t aio_compute_timeout(AioContext *ctx); */ static inline void aio_disable_external(AioContext *ctx) { - atomic_inc(&ctx->external_disable_cnt); + qemu_atomic_inc(&ctx->external_disable_cnt); } =20 /** @@ -608,7 +608,7 @@ static inline void aio_enable_external(AioContext *ctx) { int old; =20 - old =3D atomic_fetch_dec(&ctx->external_disable_cnt); + old =3D qemu_atomic_fetch_dec(&ctx->external_disable_cnt); assert(old > 0); if (old =3D=3D 1) { /* Kick event loop so it re-arms file descriptors */ @@ -624,7 +624,7 @@ static inline void aio_enable_external(AioContext *ctx) */ static inline bool aio_external_disabled(AioContext *ctx) { - return atomic_read(&ctx->external_disable_cnt); + return qemu_atomic_read(&ctx->external_disable_cnt); } =20 /** @@ -637,7 +637,7 @@ static inline bool aio_external_disabled(AioContext *ct= x) */ static inline bool aio_node_check(AioContext *ctx, bool is_external) { - return !is_external || !atomic_read(&ctx->external_disable_cnt); + return !is_external || !qemu_atomic_read(&ctx->external_disable_cnt); } =20 /** diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h index c14a48f65e..6dcf17f37b 100644 --- a/include/exec/cpu_ldst.h +++ b/include/exec/cpu_ldst.h @@ -299,7 +299,7 @@ static inline target_ulong tlb_addr_write(const CPUTLBE= ntry *entry) #if TCG_OVERSIZED_GUEST return entry->addr_write; #else - return atomic_read(&entry->addr_write); + return qemu_atomic_read(&entry->addr_write); #endif } =20 diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h index 3cf88272df..05ad65a714 100644 --- a/include/exec/exec-all.h +++ b/include/exec/exec-all.h @@ -89,7 +89,7 @@ void QEMU_NORETURN cpu_loop_exit_atomic(CPUState *cpu, ui= ntptr_t pc); */ static inline bool cpu_loop_exit_requested(CPUState *cpu) { - return (int32_t)atomic_read(&cpu_neg(cpu)->icount_decr.u32) < 0; + return (int32_t)qemu_atomic_read(&cpu_neg(cpu)->icount_decr.u32) < 0; } =20 #if !defined(CONFIG_USER_ONLY) @@ -487,10 +487,10 @@ struct TranslationBlock { =20 extern bool parallel_cpus; =20 -/* Hide the atomic_read to make code a little easier on the eyes */ +/* Hide the qemu_atomic_read to make code a little easier on the eyes */ static inline uint32_t tb_cflags(const TranslationBlock *tb) { - return atomic_read(&tb->cflags); + return qemu_atomic_read(&tb->cflags); } =20 /* current cflags for hashing/comparison */ diff --git a/include/exec/log.h b/include/exec/log.h index 3ed797c1c8..2ff0ac0f7f 100644 --- a/include/exec/log.h +++ b/include/exec/log.h @@ -19,7 +19,7 @@ static inline void log_cpu_state(CPUState *cpu, int flags) =20 if (qemu_log_enabled()) { rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { cpu_dump_state(cpu, logfile->fd, flags); } @@ -49,7 +49,7 @@ static inline void log_target_disas(CPUState *cpu, target= _ulong start, { QemuLogFile *logfile; 
rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { target_disas(logfile->fd, cpu, start, len); } @@ -60,7 +60,7 @@ static inline void log_disas(void *code, unsigned long si= ze, const char *note) { QemuLogFile *logfile; rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { disas(logfile->fd, code, size, note); } diff --git a/include/exec/memory.h b/include/exec/memory.h index f1bb2a7df5..d879d82d0f 100644 --- a/include/exec/memory.h +++ b/include/exec/memory.h @@ -685,7 +685,7 @@ struct FlatView { =20 static inline FlatView *address_space_to_flatview(AddressSpace *as) { - return atomic_rcu_read(&as->current_map); + return qemu_atomic_rcu_read(&as->current_map); } =20 =20 diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h index 3ef729a23c..6e7313d736 100644 --- a/include/exec/ram_addr.h +++ b/include/exec/ram_addr.h @@ -164,7 +164,7 @@ static inline bool cpu_physical_memory_get_dirty(ram_ad= dr_t start, page =3D start >> TARGET_PAGE_BITS; =20 WITH_RCU_READ_LOCK_GUARD() { - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[client]); =20 idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; @@ -205,7 +205,7 @@ static inline bool cpu_physical_memory_all_dirty(ram_ad= dr_t start, =20 RCU_READ_LOCK_GUARD(); =20 - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[client]); =20 idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; @@ -278,7 +278,7 @@ static inline void cpu_physical_memory_set_dirty_flag(r= am_addr_t addr, =20 RCU_READ_LOCK_GUARD(); =20 - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[client]); =20 set_bit_atomic(offset, blocks->blocks[idx]); } @@ -301,7 +301,7 @@ static inline void cpu_physical_memory_set_dirty_range(= ram_addr_t start, =20 WITH_RCU_READ_LOCK_GUARD() { for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { - blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i]); + blocks[i] =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[i]); } =20 idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; @@ -361,23 +361,26 @@ static inline void cpu_physical_memory_set_dirty_lebi= tmap(unsigned long *bitmap, =20 WITH_RCU_READ_LOCK_GUARD() { for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { - blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i])->= blocks; + blocks[i] =3D + qemu_atomic_rcu_read(&ram_list.dirty_memory[i])->block= s; } =20 for (k =3D 0; k < nr; k++) { if (bitmap[k]) { unsigned long temp =3D leul_to_cpu(bitmap[k]); =20 - atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp= ); + qemu_atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], + temp); =20 if (global_dirty_log) { - atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][off= set], - temp); + qemu_atomic_or( + &blocks[DIRTY_MEMORY_MIGRATION][idx][offset], + temp); } =20 if (tcg_enabled()) { - atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], - temp); + qemu_atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][off= set], + temp); } } =20 @@ -461,12 +464,12 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlo= ck *rb, DIRTY_MEMORY_BLOCK_SIZE); unsigned long page =3D BIT_WORD(start >> TARGET_PAGE_BITS); =20 - src =3D atomic_rcu_read( + src =3D qemu_atomic_rcu_read( &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks; =20 for (k =3D 
page; k < page + nr; k++) { if (src[idx][offset]) { - unsigned long bits =3D atomic_xchg(&src[idx][offset], 0); + unsigned long bits =3D qemu_atomic_xchg(&src[idx][offset],= 0); unsigned long new_dirty; new_dirty =3D ~dest[k]; dest[k] |=3D bits; diff --git a/include/exec/ramlist.h b/include/exec/ramlist.h index bc4faa1b00..af5806f143 100644 --- a/include/exec/ramlist.h +++ b/include/exec/ramlist.h @@ -19,7 +19,7 @@ typedef struct RAMBlockNotifier RAMBlockNotifier; * rcu_read_lock(); * * DirtyMemoryBlocks *blocks =3D - * atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]); + * qemu_atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATIO= N]); * * ram_addr_t idx =3D (addr >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SI= ZE; * unsigned long *block =3D blocks.blocks[idx]; diff --git a/include/exec/tb-lookup.h b/include/exec/tb-lookup.h index 26921b6daf..85eaded201 100644 --- a/include/exec/tb-lookup.h +++ b/include/exec/tb-lookup.h @@ -27,7 +27,7 @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, tar= get_ulong *cs_base, =20 cpu_get_tb_cpu_state(env, pc, cs_base, flags); hash =3D tb_jmp_cache_hash_func(*pc); - tb =3D atomic_rcu_read(&cpu->tb_jmp_cache[hash]); + tb =3D qemu_atomic_rcu_read(&cpu->tb_jmp_cache[hash]); =20 cf_mask &=3D ~CF_CLUSTER_MASK; cf_mask |=3D cpu->cluster_index << CF_CLUSTER_SHIFT; @@ -44,7 +44,7 @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, tar= get_ulong *cs_base, if (tb =3D=3D NULL) { return NULL; } - atomic_set(&cpu->tb_jmp_cache[hash], tb); + qemu_atomic_set(&cpu->tb_jmp_cache[hash], tb); return tb; } =20 diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h index 99dc33ffeb..3231caa860 100644 --- a/include/hw/core/cpu.h +++ b/include/hw/core/cpu.h @@ -482,7 +482,7 @@ static inline void cpu_tb_jmp_cache_clear(CPUState *cpu) unsigned int i; =20 for (i =3D 0; i < TB_JMP_CACHE_SIZE; i++) { - atomic_set(&cpu->tb_jmp_cache[i], NULL); + qemu_atomic_set(&cpu->tb_jmp_cache[i], NULL); } } =20 diff --git a/include/qemu/atomic128.h b/include/qemu/atomic128.h index 6b34484e15..1fbb514c59 100644 --- a/include/qemu/atomic128.h +++ b/include/qemu/atomic128.h @@ -44,7 +44,7 @@ #if defined(CONFIG_ATOMIC128) static inline Int128 atomic16_cmpxchg(Int128 *ptr, Int128 cmp, Int128 new) { - return atomic_cmpxchg__nocheck(ptr, cmp, new); + return qemu_atomic_cmpxchg__nocheck(ptr, cmp, new); } # define HAVE_CMPXCHG128 1 #elif defined(CONFIG_CMPXCHG128) @@ -89,12 +89,12 @@ Int128 QEMU_ERROR("unsupported atomic") #if defined(CONFIG_ATOMIC128) static inline Int128 atomic16_read(Int128 *ptr) { - return atomic_read__nocheck(ptr); + return qemu_atomic_read__nocheck(ptr); } =20 static inline void atomic16_set(Int128 *ptr, Int128 val) { - atomic_set__nocheck(ptr, val); + qemu_atomic_set__nocheck(ptr, val); } =20 # define HAVE_ATOMIC128 1 diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h index f55ce8b320..f74600de90 100644 --- a/include/qemu/bitops.h +++ b/include/qemu/bitops.h @@ -51,7 +51,7 @@ static inline void set_bit_atomic(long nr, unsigned long = *addr) unsigned long mask =3D BIT_MASK(nr); unsigned long *p =3D addr + BIT_WORD(nr); =20 - atomic_or(p, mask); + qemu_atomic_or(p, mask); } =20 /** diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h index dfd261c5b1..4d7020d188 100644 --- a/include/qemu/coroutine.h +++ b/include/qemu/coroutine.h @@ -179,7 +179,7 @@ static inline coroutine_fn void qemu_co_mutex_assert_lo= cked(CoMutex *mutex) * because the condition will be false no matter whether we read NULL = or * the pointer for any other 
coroutine. */ - assert(atomic_read(&mutex->locked) && + assert(qemu_atomic_read(&mutex->locked) && mutex->holder =3D=3D qemu_coroutine_self()); } =20 diff --git a/include/qemu/log.h b/include/qemu/log.h index f4724f7330..e4d454ccfa 100644 --- a/include/qemu/log.h +++ b/include/qemu/log.h @@ -36,7 +36,7 @@ static inline bool qemu_log_separate(void) bool res =3D false; =20 rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile && logfile->fd !=3D stderr) { res =3D true; } @@ -75,7 +75,7 @@ static inline FILE *qemu_log_lock(void) { QemuLogFile *logfile; rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { qemu_flockfile(logfile->fd); return logfile->fd; @@ -102,7 +102,7 @@ qemu_log_vprintf(const char *fmt, va_list va) QemuLogFile *logfile; =20 rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { vfprintf(logfile->fd, fmt, va); } diff --git a/include/qemu/queue.h b/include/qemu/queue.h index 456a5b01ee..707ae10062 100644 --- a/include/qemu/queue.h +++ b/include/qemu/queue.h @@ -218,12 +218,13 @@ struct { = \ typeof(elm) save_sle_next; = \ do { = \ save_sle_next =3D (elm)->field.sle_next =3D (head)->slh_first;= \ - } while (atomic_cmpxchg(&(head)->slh_first, save_sle_next, (elm)) = !=3D \ + } while (qemu_atomic_cmpxchg(&(head)->slh_first, = \ + save_sle_next, (elm)) !=3D = \ save_sle_next); = \ } while (/*CONSTCOND*/0) =20 #define QSLIST_MOVE_ATOMIC(dest, src) do { \ - (dest)->slh_first =3D atomic_xchg(&(src)->slh_first, NULL); = \ + (dest)->slh_first =3D qemu_atomic_xchg(&(src)->slh_first, NULL); = \ } while (/*CONSTCOND*/0) =20 #define QSLIST_REMOVE_HEAD(head, field) do { \ @@ -376,7 +377,8 @@ struct { = \ /* * Simple queue access methods. */ -#define QSIMPLEQ_EMPTY_ATOMIC(head) (atomic_read(&((head)->sqh_first)) =3D= =3D NULL) +#define QSIMPLEQ_EMPTY_ATOMIC(head) \ + (qemu_atomic_read(&((head)->sqh_first)) =3D=3D NULL) #define QSIMPLEQ_EMPTY(head) ((head)->sqh_first =3D=3D NULL) #define QSIMPLEQ_FIRST(head) ((head)->sqh_first) #define QSIMPLEQ_NEXT(elm, field) ((elm)->field.sqe_next) diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h index 0e375ebe13..96a1bb9039 100644 --- a/include/qemu/rcu.h +++ b/include/qemu/rcu.h @@ -79,8 +79,8 @@ static inline void rcu_read_lock(void) return; } =20 - ctr =3D atomic_read(&rcu_gp_ctr); - atomic_set(&p_rcu_reader->ctr, ctr); + ctr =3D qemu_atomic_read(&rcu_gp_ctr); + qemu_atomic_set(&p_rcu_reader->ctr, ctr); =20 /* Write p_rcu_reader->ctr before reading RCU-protected pointers. */ smp_mb_placeholder(); @@ -100,12 +100,12 @@ static inline void rcu_read_unlock(void) * smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr * are sequentially consistent. */ - atomic_store_release(&p_rcu_reader->ctr, 0); + qemu_atomic_store_release(&p_rcu_reader->ctr, 0); =20 /* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting. */ smp_mb_placeholder(); - if (unlikely(atomic_read(&p_rcu_reader->waiting))) { - atomic_set(&p_rcu_reader->waiting, false); + if (unlikely(qemu_atomic_read(&p_rcu_reader->waiting))) { + qemu_atomic_set(&p_rcu_reader->waiting, false); qemu_event_set(&rcu_gp_event); } } diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h index 558961cc27..05c924cddc 100644 --- a/include/qemu/rcu_queue.h +++ b/include/qemu/rcu_queue.h @@ -36,9 +36,10 @@ extern "C" { /* * List access methods. 
*/ -#define QLIST_EMPTY_RCU(head) (atomic_read(&(head)->lh_first) =3D=3D NULL) -#define QLIST_FIRST_RCU(head) (atomic_rcu_read(&(head)->lh_first)) -#define QLIST_NEXT_RCU(elm, field) (atomic_rcu_read(&(elm)->field.le_next)) +#define QLIST_EMPTY_RCU(head) (qemu_atomic_read(&(head)->lh_first) =3D=3D = NULL) +#define QLIST_FIRST_RCU(head) (qemu_atomic_rcu_read(&(head)->lh_first)) +#define QLIST_NEXT_RCU(elm, field) \ + (qemu_atomic_rcu_read(&(elm)->field.le_next)) =20 /* * List functions. @@ -46,13 +47,13 @@ extern "C" { =20 =20 /* - * The difference between atomic_read/set and atomic_rcu_read/set + * The difference between qemu_atomic_read/set and qemu_atomic_rcu_read/s= et * is in the including of a read/write memory barrier to the volatile - * access. atomic_rcu_* macros include the memory barrier, the + * access. qemu_atomic_rcu_* macros include the memory barrier, the * plain atomic macros do not. Therefore, it should be correct to * issue a series of reads or writes to the same element using only - * the atomic_* macro, until the last read or write, which should be - * atomic_rcu_* to introduce a read or write memory barrier as + * the qemu_atomic_* macro, until the last read or write, which should be + * qemu_atomic_rcu_* to introduce a read or write memory barrier as * appropriate. */ =20 @@ -66,7 +67,7 @@ extern "C" { #define QLIST_INSERT_AFTER_RCU(listelm, elm, field) do { \ (elm)->field.le_next =3D (listelm)->field.le_next; \ (elm)->field.le_prev =3D &(listelm)->field.le_next; \ - atomic_rcu_set(&(listelm)->field.le_next, (elm)); \ + qemu_atomic_rcu_set(&(listelm)->field.le_next, (elm)); \ if ((elm)->field.le_next !=3D NULL) { \ (elm)->field.le_next->field.le_prev =3D \ &(elm)->field.le_next; \ @@ -82,7 +83,7 @@ extern "C" { #define QLIST_INSERT_BEFORE_RCU(listelm, elm, field) do { \ (elm)->field.le_prev =3D (listelm)->field.le_prev; \ (elm)->field.le_next =3D (listelm); \ - atomic_rcu_set((listelm)->field.le_prev, (elm)); \ + qemu_atomic_rcu_set((listelm)->field.le_prev, (elm)); \ (listelm)->field.le_prev =3D &(elm)->field.le_next; \ } while (/*CONSTCOND*/0) =20 @@ -95,7 +96,7 @@ extern "C" { #define QLIST_INSERT_HEAD_RCU(head, elm, field) do { \ (elm)->field.le_prev =3D &(head)->lh_first; \ (elm)->field.le_next =3D (head)->lh_first; \ - atomic_rcu_set((&(head)->lh_first), (elm)); \ + qemu_atomic_rcu_set((&(head)->lh_first), (elm)); \ if ((elm)->field.le_next !=3D NULL) { \ (elm)->field.le_next->field.le_prev =3D \ &(elm)->field.le_next; \ @@ -112,20 +113,20 @@ extern "C" { (elm)->field.le_next->field.le_prev =3D \ (elm)->field.le_prev; \ } \ - atomic_set((elm)->field.le_prev, (elm)->field.le_next); \ + qemu_atomic_set((elm)->field.le_prev, (elm)->field.le_next); \ } while (/*CONSTCOND*/0) =20 /* List traversal must occur within an RCU critical section. */ #define QLIST_FOREACH_RCU(var, head, field) \ - for ((var) =3D atomic_rcu_read(&(head)->lh_first); \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->lh_first); \ (var); \ - (var) =3D atomic_rcu_read(&(var)->field.le_next)) + (var) =3D qemu_atomic_rcu_read(&(var)->field.le_next)) =20 /* List traversal must occur within an RCU critical section. 
*/ #define QLIST_FOREACH_SAFE_RCU(var, head, field, next_var) \ - for ((var) =3D (atomic_rcu_read(&(head)->lh_first)); \ + for ((var) =3D (qemu_atomic_rcu_read(&(head)->lh_first)); = \ (var) && \ - ((next_var) =3D atomic_rcu_read(&(var)->field.le_next), 1); \ + ((next_var) =3D qemu_atomic_rcu_read(&(var)->field.le_next), 1);= \ (var) =3D (next_var)) =20 /* @@ -133,9 +134,11 @@ extern "C" { */ =20 /* Simple queue access methods */ -#define QSIMPLEQ_EMPTY_RCU(head) (atomic_read(&(head)->sqh_first) =3D= =3D NULL) -#define QSIMPLEQ_FIRST_RCU(head) atomic_rcu_read(&(head)->sqh_first) -#define QSIMPLEQ_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sqe_n= ext) +#define QSIMPLEQ_EMPTY_RCU(head) \ + (qemu_atomic_read(&(head)->sqh_first) =3D=3D NULL) +#define QSIMPLEQ_FIRST_RCU(head) qemu_atomic_rcu_read(&(head)->sqh_first) +#define QSIMPLEQ_NEXT_RCU(elm, field) \ + qemu_atomic_rcu_read(&(elm)->field.sqe_next) =20 /* Simple queue functions */ #define QSIMPLEQ_INSERT_HEAD_RCU(head, elm, field) do { \ @@ -143,12 +146,12 @@ extern "C" { if ((elm)->field.sqe_next =3D=3D NULL) { \ (head)->sqh_last =3D &(elm)->field.sqe_next; \ } \ - atomic_rcu_set(&(head)->sqh_first, (elm)); \ + qemu_atomic_rcu_set(&(head)->sqh_first, (elm)); \ } while (/*CONSTCOND*/0) =20 #define QSIMPLEQ_INSERT_TAIL_RCU(head, elm, field) do { \ (elm)->field.sqe_next =3D NULL; \ - atomic_rcu_set((head)->sqh_last, (elm)); \ + qemu_atomic_rcu_set((head)->sqh_last, (elm)); \ (head)->sqh_last =3D &(elm)->field.sqe_next; \ } while (/*CONSTCOND*/0) =20 @@ -157,11 +160,11 @@ extern "C" { if ((elm)->field.sqe_next =3D=3D NULL) { = \ (head)->sqh_last =3D &(elm)->field.sqe_next; \ } \ - atomic_rcu_set(&(listelm)->field.sqe_next, (elm)); \ + qemu_atomic_rcu_set(&(listelm)->field.sqe_next, (elm)); = \ } while (/*CONSTCOND*/0) =20 #define QSIMPLEQ_REMOVE_HEAD_RCU(head, field) do { \ - atomic_set(&(head)->sqh_first, (head)->sqh_first->field.sqe_next); \ + qemu_atomic_set(&(head)->sqh_first, (head)->sqh_first->field.sqe_next)= ; \ if ((head)->sqh_first =3D=3D NULL) { = \ (head)->sqh_last =3D &(head)->sqh_first; \ } \ @@ -175,7 +178,7 @@ extern "C" { while (curr->field.sqe_next !=3D (elm)) { \ curr =3D curr->field.sqe_next; \ } \ - atomic_set(&curr->field.sqe_next, \ + qemu_atomic_set(&curr->field.sqe_next, \ curr->field.sqe_next->field.sqe_next); \ if (curr->field.sqe_next =3D=3D NULL) { \ (head)->sqh_last =3D &(curr)->field.sqe_next; \ @@ -184,13 +187,13 @@ extern "C" { } while (/*CONSTCOND*/0) =20 #define QSIMPLEQ_FOREACH_RCU(var, head, field) \ - for ((var) =3D atomic_rcu_read(&(head)->sqh_first); \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->sqh_first); = \ (var); \ - (var) =3D atomic_rcu_read(&(var)->field.sqe_next)) + (var) =3D qemu_atomic_rcu_read(&(var)->field.sqe_next)) =20 #define QSIMPLEQ_FOREACH_SAFE_RCU(var, head, field, next) \ - for ((var) =3D atomic_rcu_read(&(head)->sqh_first); = \ - (var) && ((next) =3D atomic_rcu_read(&(var)->field.sqe_next), 1);= \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->sqh_first); = \ + (var) && ((next) =3D qemu_atomic_rcu_read(&(var)->field.sqe_next)= , 1); \ (var) =3D (next)) =20 /* @@ -198,9 +201,11 @@ extern "C" { */ =20 /* Tail queue access methods */ -#define QTAILQ_EMPTY_RCU(head) (atomic_read(&(head)->tqh_first) =3D= =3D NULL) -#define QTAILQ_FIRST_RCU(head) atomic_rcu_read(&(head)->tqh_first) -#define QTAILQ_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.tqe_nex= t) +#define QTAILQ_EMPTY_RCU(head) \ + (qemu_atomic_read(&(head)->tqh_first) =3D=3D NULL) +#define QTAILQ_FIRST_RCU(head) 
qemu_atomic_rcu_read(&(head)->tqh_first) +#define QTAILQ_NEXT_RCU(elm, field) \ + qemu_atomic_rcu_read(&(elm)->field.tqe_next) =20 /* Tail queue functions */ #define QTAILQ_INSERT_HEAD_RCU(head, elm, field) do { \ @@ -211,14 +216,14 @@ extern "C" { } else { \ (head)->tqh_circ.tql_prev =3D &(elm)->field.tqe_circ; \ } \ - atomic_rcu_set(&(head)->tqh_first, (elm)); \ + qemu_atomic_rcu_set(&(head)->tqh_first, (elm)); = \ (elm)->field.tqe_circ.tql_prev =3D &(head)->tqh_circ; \ } while (/*CONSTCOND*/0) =20 #define QTAILQ_INSERT_TAIL_RCU(head, elm, field) do { \ (elm)->field.tqe_next =3D NULL; \ (elm)->field.tqe_circ.tql_prev =3D (head)->tqh_circ.tql_prev; \ - atomic_rcu_set(&(head)->tqh_circ.tql_prev->tql_next, (elm)); \ + qemu_atomic_rcu_set(&(head)->tqh_circ.tql_prev->tql_next, (elm)); = \ (head)->tqh_circ.tql_prev =3D &(elm)->field.tqe_circ; \ } while (/*CONSTCOND*/0) =20 @@ -230,14 +235,15 @@ extern "C" { } else { \ (head)->tqh_circ.tql_prev =3D &(elm)->field.tqe_circ; \ } \ - atomic_rcu_set(&(listelm)->field.tqe_next, (elm)); \ + qemu_atomic_rcu_set(&(listelm)->field.tqe_next, (elm)); = \ (elm)->field.tqe_circ.tql_prev =3D &(listelm)->field.tqe_circ; \ } while (/*CONSTCOND*/0) =20 #define QTAILQ_INSERT_BEFORE_RCU(listelm, elm, field) do { \ (elm)->field.tqe_circ.tql_prev =3D (listelm)->field.tqe_circ.tql_prev;= \ (elm)->field.tqe_next =3D (listelm); = \ - atomic_rcu_set(&(listelm)->field.tqe_circ.tql_prev->tql_next, (elm)); \ + qemu_atomic_rcu_set(&(listelm)->field.tqe_circ.tql_prev->tql_next, \ + (elm)); \ (listelm)->field.tqe_circ.tql_prev =3D &(elm)->field.tqe_circ; = \ } while (/*CONSTCOND*/0) =20 @@ -248,18 +254,19 @@ extern "C" { } else { \ (head)->tqh_circ.tql_prev =3D (elm)->field.tqe_circ.tql_prev; \ } \ - atomic_set(&(elm)->field.tqe_circ.tql_prev->tql_next, (elm)->field.tqe= _next); \ + qemu_atomic_set(&(elm)->field.tqe_circ.tql_prev->tql_next, \ + (elm)->field.tqe_next); \ (elm)->field.tqe_circ.tql_prev =3D NULL; \ } while (/*CONSTCOND*/0) =20 #define QTAILQ_FOREACH_RCU(var, head, field) \ - for ((var) =3D atomic_rcu_read(&(head)->tqh_first); \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->tqh_first); = \ (var); \ - (var) =3D atomic_rcu_read(&(var)->field.tqe_next)) + (var) =3D qemu_atomic_rcu_read(&(var)->field.tqe_next)) =20 #define QTAILQ_FOREACH_SAFE_RCU(var, head, field, next) \ - for ((var) =3D atomic_rcu_read(&(head)->tqh_first); = \ - (var) && ((next) =3D atomic_rcu_read(&(var)->field.tqe_next), 1);= \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->tqh_first); = \ + (var) && ((next) =3D qemu_atomic_rcu_read(&(var)->field.tqe_next)= , 1); \ (var) =3D (next)) =20 /* @@ -267,23 +274,25 @@ extern "C" { */ =20 /* Singly-linked list access methods */ -#define QSLIST_EMPTY_RCU(head) (atomic_read(&(head)->slh_first) =3D= =3D NULL) -#define QSLIST_FIRST_RCU(head) atomic_rcu_read(&(head)->slh_first) -#define QSLIST_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sle_nex= t) +#define QSLIST_EMPTY_RCU(head) \ + (qemu_atomic_read(&(head)->slh_first) =3D=3D NULL) +#define QSLIST_FIRST_RCU(head) qemu_atomic_rcu_read(&(head)->slh_first) +#define QSLIST_NEXT_RCU(elm, field) \ + qemu_atomic_rcu_read(&(elm)->field.sle_next) =20 /* Singly-linked list functions */ #define QSLIST_INSERT_HEAD_RCU(head, elm, field) do { \ (elm)->field.sle_next =3D (head)->slh_first; \ - atomic_rcu_set(&(head)->slh_first, (elm)); \ + qemu_atomic_rcu_set(&(head)->slh_first, (elm)); \ } while (/*CONSTCOND*/0) =20 #define QSLIST_INSERT_AFTER_RCU(head, listelm, elm, field) do { \ (elm)->field.sle_next =3D 
(listelm)->field.sle_next; \ - atomic_rcu_set(&(listelm)->field.sle_next, (elm)); \ + qemu_atomic_rcu_set(&(listelm)->field.sle_next, (elm)); = \ } while (/*CONSTCOND*/0) =20 #define QSLIST_REMOVE_HEAD_RCU(head, field) do { \ - atomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next); \ + qemu_atomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next)= ; \ } while (/*CONSTCOND*/0) =20 #define QSLIST_REMOVE_RCU(head, elm, type, field) do { \ @@ -294,19 +303,19 @@ extern "C" { while (curr->field.sle_next !=3D (elm)) { \ curr =3D curr->field.sle_next; \ } \ - atomic_set(&curr->field.sle_next, \ + qemu_atomic_set(&curr->field.sle_next, \ curr->field.sle_next->field.sle_next); \ } \ } while (/*CONSTCOND*/0) =20 #define QSLIST_FOREACH_RCU(var, head, field) \ - for ((var) =3D atomic_rcu_read(&(head)->slh_first); \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->slh_first); = \ (var); \ - (var) =3D atomic_rcu_read(&(var)->field.sle_next)) + (var) =3D qemu_atomic_rcu_read(&(var)->field.sle_next)) =20 #define QSLIST_FOREACH_SAFE_RCU(var, head, field, next) \ - for ((var) =3D atomic_rcu_read(&(head)->slh_first); = \ - (var) && ((next) =3D atomic_rcu_read(&(var)->field.sle_next), 1);= \ + for ((var) =3D qemu_atomic_rcu_read(&(head)->slh_first); = \ + (var) && ((next) =3D qemu_atomic_rcu_read(&(var)->field.sle_next)= , 1); \ (var) =3D (next)) =20 #ifdef __cplusplus diff --git a/include/qemu/seqlock.h b/include/qemu/seqlock.h index 8b6b4ee4bb..b282aef078 100644 --- a/include/qemu/seqlock.h +++ b/include/qemu/seqlock.h @@ -32,7 +32,7 @@ static inline void seqlock_init(QemuSeqLock *sl) /* Lock out other writers and update the count. */ static inline void seqlock_write_begin(QemuSeqLock *sl) { - atomic_set(&sl->sequence, sl->sequence + 1); + qemu_atomic_set(&sl->sequence, sl->sequence + 1); =20 /* Write sequence before updating other fields. */ smp_wmb(); @@ -43,7 +43,7 @@ static inline void seqlock_write_end(QemuSeqLock *sl) /* Write other fields before finalizing sequence. */ smp_wmb(); =20 - atomic_set(&sl->sequence, sl->sequence + 1); + qemu_atomic_set(&sl->sequence, sl->sequence + 1); } =20 /* Lock out other writers and update the count. */ @@ -68,7 +68,7 @@ static inline void seqlock_write_unlock_impl(QemuSeqLock = *sl, QemuLockable *lock static inline unsigned seqlock_read_begin(const QemuSeqLock *sl) { /* Always fail if a write is in progress. */ - unsigned ret =3D atomic_read(&sl->sequence); + unsigned ret =3D qemu_atomic_read(&sl->sequence); =20 /* Read sequence before reading other fields. */ smp_rmb(); @@ -79,7 +79,7 @@ static inline int seqlock_read_retry(const QemuSeqLock *s= l, unsigned start) { /* Read other fields before reading final sequence. 
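The reader side of QemuSeqLock pairs seqlock_read_begin() with seqlock_read_retry() in a retry loop. A minimal usage sketch (the protected fields a and b are hypothetical; seqlock_init(&sl) is assumed to have run at startup):

    #include "qemu/osdep.h"
    #include "qemu/seqlock.h"

    static QemuSeqLock sl;
    static uint64_t a, b;              /* data protected by the seqlock */

    static uint64_t read_pair(void)
    {
        unsigned start;
        uint64_t va, vb;

        do {
            start = seqlock_read_begin(&sl);   /* snapshot the sequence */
            va = a;                            /* plain reads of the data */
            vb = b;
        } while (seqlock_read_retry(&sl, start)); /* writer active or raced: retry */

        return va + vb;
    }

Writers bracket their updates with seqlock_write_begin()/seqlock_write_end() while holding whatever lock serialises them, so readers never block a writer.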
*/ smp_rmb(); - return unlikely(atomic_read(&sl->sequence) !=3D start); + return unlikely(qemu_atomic_read(&sl->sequence) !=3D start); } =20 #endif diff --git a/include/qemu/stats64.h b/include/qemu/stats64.h index 19a5ac4c56..d43eca1dd2 100644 --- a/include/qemu/stats64.h +++ b/include/qemu/stats64.h @@ -37,27 +37,27 @@ static inline void stat64_init(Stat64 *s, uint64_t valu= e) =20 static inline uint64_t stat64_get(const Stat64 *s) { - return atomic_read__nocheck(&s->value); + return qemu_atomic_read__nocheck(&s->value); } =20 static inline void stat64_add(Stat64 *s, uint64_t value) { - atomic_add(&s->value, value); + qemu_atomic_add(&s->value, value); } =20 static inline void stat64_min(Stat64 *s, uint64_t value) { - uint64_t orig =3D atomic_read__nocheck(&s->value); + uint64_t orig =3D qemu_atomic_read__nocheck(&s->value); while (orig > value) { - orig =3D atomic_cmpxchg__nocheck(&s->value, orig, value); + orig =3D qemu_atomic_cmpxchg__nocheck(&s->value, orig, value); } } =20 static inline void stat64_max(Stat64 *s, uint64_t value) { - uint64_t orig =3D atomic_read__nocheck(&s->value); + uint64_t orig =3D qemu_atomic_read__nocheck(&s->value); while (orig < value) { - orig =3D atomic_cmpxchg__nocheck(&s->value, orig, value); + orig =3D qemu_atomic_cmpxchg__nocheck(&s->value, orig, value); } } #else @@ -79,7 +79,7 @@ static inline void stat64_add(Stat64 *s, uint64_t value) low =3D (uint32_t) value; if (!low) { if (high) { - atomic_add(&s->high, high); + qemu_atomic_add(&s->high, high); } return; } @@ -101,7 +101,7 @@ static inline void stat64_add(Stat64 *s, uint64_t value) * the high 32 bits, so it can race just fine with stat64_add32_ca= rry * and even stat64_get! */ - old =3D atomic_cmpxchg(&s->low, orig, result); + old =3D qemu_atomic_cmpxchg(&s->low, orig, result); if (orig =3D=3D old) { return; } @@ -116,7 +116,7 @@ static inline void stat64_min(Stat64 *s, uint64_t value) high =3D value >> 32; low =3D (uint32_t) value; do { - orig_high =3D atomic_read(&s->high); + orig_high =3D qemu_atomic_read(&s->high); if (orig_high < high) { return; } @@ -128,7 +128,7 @@ static inline void stat64_min(Stat64 *s, uint64_t value) * the write barrier in stat64_min_slow. */ smp_rmb(); - orig_low =3D atomic_read(&s->low); + orig_low =3D qemu_atomic_read(&s->low); if (orig_low <=3D low) { return; } @@ -138,7 +138,7 @@ static inline void stat64_min(Stat64 *s, uint64_t value) * we may miss being lucky. */ smp_rmb(); - orig_high =3D atomic_read(&s->high); + orig_high =3D qemu_atomic_read(&s->high); if (orig_high < high) { return; } @@ -156,7 +156,7 @@ static inline void stat64_max(Stat64 *s, uint64_t value) high =3D value >> 32; low =3D (uint32_t) value; do { - orig_high =3D atomic_read(&s->high); + orig_high =3D qemu_atomic_read(&s->high); if (orig_high > high) { return; } @@ -168,7 +168,7 @@ static inline void stat64_max(Stat64 *s, uint64_t value) * the write barrier in stat64_max_slow. */ smp_rmb(); - orig_low =3D atomic_read(&s->low); + orig_low =3D qemu_atomic_read(&s->low); if (orig_low >=3D low) { return; } @@ -178,7 +178,7 @@ static inline void stat64_max(Stat64 *s, uint64_t value) * we may miss being lucky. 
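The 64-bit stat64_min()/stat64_max() fast paths above are the standard compare-and-swap retry loop. Written out with the renamed helpers and a few explanatory comments (a sketch of the same logic, not a new API):

    static inline void stat64_max_sketch(Stat64 *s, uint64_t value)
    {
        uint64_t orig = qemu_atomic_read__nocheck(&s->value);

        while (orig < value) {
            /* cmpxchg returns the value that was actually in memory.  On
             * success that equals our 'orig', so one more (failing) pass
             * reads back 'value' and the loop exits; on a lost race we
             * simply retry with the fresh, possibly larger, value. */
            orig = qemu_atomic_cmpxchg__nocheck(&s->value, orig, value);
        }
    }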
*/ smp_rmb(); - orig_high =3D atomic_read(&s->high); + orig_high =3D qemu_atomic_read(&s->high); if (orig_high > high) { return; } diff --git a/include/qemu/thread.h b/include/qemu/thread.h index 4baf4d1715..104e811236 100644 --- a/include/qemu/thread.h +++ b/include/qemu/thread.h @@ -69,35 +69,38 @@ extern QemuCondTimedWaitFunc qemu_cond_timedwait_func; #define qemu_cond_timedwait(c, m, ms) \ qemu_cond_timedwait_impl(c, m, ms, __FILE__, __LINE__) #else -#define qemu_mutex_lock(m) ({ \ - QemuMutexLockFunc _f =3D atomic_read(&qemu_mutex_lock_func); \ - _f(m, __FILE__, __LINE__); \ +#define qemu_mutex_lock(m) ({ = \ + QemuMutexLockFunc _f =3D qemu_atomic_read(&qemu_mutex_lock_fun= c); \ + _f(m, __FILE__, __LINE__); = \ }) =20 #define qemu_mutex_trylock(m) ({ \ - QemuMutexTrylockFunc _f =3D atomic_read(&qemu_mutex_trylock_fu= nc); \ + QemuMutexTrylockFunc _f =3D \ + qemu_atomic_read(&qemu_mutex_trylock_func); \ _f(m, __FILE__, __LINE__); \ }) =20 #define qemu_rec_mutex_lock(m) ({ \ - QemuRecMutexLockFunc _f =3D atomic_read(&qemu_rec_mutex_lock_f= unc); \ + QemuRecMutexLockFunc _f =3D \ + qemu_atomic_read(&qemu_rec_mutex_lock_func); \ _f(m, __FILE__, __LINE__); \ }) =20 -#define qemu_rec_mutex_trylock(m) ({ \ - QemuRecMutexTrylockFunc _f; \ - _f =3D atomic_read(&qemu_rec_mutex_trylock_func); \ - _f(m, __FILE__, __LINE__); \ +#define qemu_rec_mutex_trylock(m) ({ \ + QemuRecMutexTrylockFunc _f; \ + _f =3D qemu_atomic_read(&qemu_rec_mutex_trylock_func); \ + _f(m, __FILE__, __LINE__); \ }) =20 -#define qemu_cond_wait(c, m) ({ \ - QemuCondWaitFunc _f =3D atomic_read(&qemu_cond_wait_func); \ - _f(c, m, __FILE__, __LINE__); \ +#define qemu_cond_wait(c, m) ({ = \ + QemuCondWaitFunc _f =3D qemu_atomic_read(&qemu_cond_wait_func)= ; \ + _f(c, m, __FILE__, __LINE__); = \ }) =20 -#define qemu_cond_timedwait(c, m, ms) ({ = \ - QemuCondTimedWaitFunc _f =3D atomic_read(&qemu_cond_timedwait_= func); \ - _f(c, m, ms, __FILE__, __LINE__); = \ +#define qemu_cond_timedwait(c, m, ms) ({ = \ + QemuCondTimedWaitFunc _f =3D = \ + qemu_atomic_read(&qemu_cond_timedwait_func); = \ + _f(c, m, ms, __FILE__, __LINE__); = \ }) #endif =20 @@ -236,7 +239,7 @@ static inline void qemu_spin_lock(QemuSpin *spin) __tsan_mutex_pre_lock(spin, 0); #endif while (unlikely(__sync_lock_test_and_set(&spin->value, true))) { - while (atomic_read(&spin->value)) { + while (qemu_atomic_read(&spin->value)) { cpu_relax(); } } @@ -261,7 +264,7 @@ static inline bool qemu_spin_trylock(QemuSpin *spin) =20 static inline bool qemu_spin_locked(QemuSpin *spin) { - return atomic_read(&spin->value); + return qemu_atomic_read(&spin->value); } =20 static inline void qemu_spin_unlock(QemuSpin *spin) diff --git a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrd= ma_ring.h b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdm= a_ring.h index acd4c8346d..8e712904e9 100644 --- a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring= .h +++ b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring= .h @@ -68,7 +68,7 @@ static inline int pvrdma_idx_valid(uint32_t idx, uint32_t= max_elems) =20 static inline int32_t pvrdma_idx(int *var, uint32_t max_elems) { - const unsigned int idx =3D atomic_read(var); + const unsigned int idx =3D qemu_atomic_read(var); =20 if (pvrdma_idx_valid(idx, max_elems)) return idx & (max_elems - 1); @@ -77,17 +77,17 @@ static inline int32_t pvrdma_idx(int *var, uint32_t max= _elems) =20 static inline void pvrdma_idx_ring_inc(int *var, uint32_t max_elems) { - uint32_t idx =3D 
atomic_read(var) + 1; /* Increment. */ + uint32_t idx =3D qemu_atomic_read(var) + 1; /* Increment. */ =20 idx &=3D (max_elems << 1) - 1; /* Modulo size, flip gen. */ - atomic_set(var, idx); + qemu_atomic_set(var, idx); } =20 static inline int32_t pvrdma_idx_ring_has_space(const struct pvrdma_ring *= r, uint32_t max_elems, uint32_t *out_tail) { - const uint32_t tail =3D atomic_read(&r->prod_tail); - const uint32_t head =3D atomic_read(&r->cons_head); + const uint32_t tail =3D qemu_atomic_read(&r->prod_tail); + const uint32_t head =3D qemu_atomic_read(&r->cons_head); =20 if (pvrdma_idx_valid(tail, max_elems) && pvrdma_idx_valid(head, max_elems)) { @@ -100,8 +100,8 @@ static inline int32_t pvrdma_idx_ring_has_space(const s= truct pvrdma_ring *r, static inline int32_t pvrdma_idx_ring_has_data(const struct pvrdma_ring *r, uint32_t max_elems, uint32_t *out_head) { - const uint32_t tail =3D atomic_read(&r->prod_tail); - const uint32_t head =3D atomic_read(&r->cons_head); + const uint32_t tail =3D qemu_atomic_read(&r->prod_tail); + const uint32_t head =3D qemu_atomic_read(&r->cons_head); =20 if (pvrdma_idx_valid(tail, max_elems) && pvrdma_idx_valid(head, max_elems)) { diff --git a/linux-user/qemu.h b/linux-user/qemu.h index a69a0bd347..f9e835de80 100644 --- a/linux-user/qemu.h +++ b/linux-user/qemu.h @@ -146,8 +146,8 @@ typedef struct TaskState { /* Nonzero if process_pending_signals() needs to do something (either * handle a pending signal or unblock signals). * This flag is written from a signal handler so should be accessed via - * the atomic_read() and atomic_set() functions. (It is not accessed - * from multiple threads.) + * the qemu_atomic_read() and qemu_atomic_set() functions. (It is not + * accessed from multiple threads.) */ int signal_pending; =20 diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 99ac1e3958..8eede29e32 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -215,7 +215,7 @@ static inline void tb_target_set_jmp_target(uintptr_t t= c_ptr, uintptr_t jmp_addr, uintptr_t = addr) { /* patch the branch destination */ - atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4)); + qemu_atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4)); /* no need to flush icache explicitly */ } =20 diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h index 07accabbd1..b8a6b51556 100644 --- a/tcg/s390/tcg-target.h +++ b/tcg/s390/tcg-target.h @@ -154,7 +154,7 @@ static inline void tb_target_set_jmp_target(uintptr_t t= c_ptr, { /* patch the branch destination */ intptr_t disp =3D addr - (jmp_addr - 2); - atomic_set((int32_t *)jmp_addr, disp / 2); + qemu_atomic_set((int32_t *)jmp_addr, disp / 2); /* no need to flush icache explicitly */ } =20 diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h index 8b90ab71cb..2fac115db6 100644 --- a/tcg/tci/tcg-target.h +++ b/tcg/tci/tcg-target.h @@ -206,7 +206,7 @@ static inline void tb_target_set_jmp_target(uintptr_t t= c_ptr, uintptr_t jmp_addr, uintptr_t = addr) { /* patch the branch destination */ - atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4)); + qemu_atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4)); /* no need to flush icache explicitly */ } =20 diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c index ad8b315b35..06e50b8ac6 100644 --- a/accel/kvm/kvm-all.c +++ b/accel/kvm/kvm-all.c @@ -2379,7 +2379,7 @@ static __thread bool have_sigbus_pending; =20 static void kvm_cpu_kick(CPUState *cpu) { - atomic_set(&cpu->kvm_run->immediate_exit, 1); + qemu_atomic_set(&cpu->kvm_run->immediate_exit, 1); } =20 
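Earlier in this patch (include/qemu/thread.h) qemu_spin_lock() keeps its test-and-test-and-set shape after the rename: the atomic exchange is tried once, and on contention the lock word is only re-read until it looks free. The idiom as a standalone sketch:

    static inline void spin_lock_sketch(QemuSpin *spin)
    {
        while (unlikely(__sync_lock_test_and_set(&spin->value, true))) {
            /* Contended: spin on plain reads so we do not keep bouncing
             * the cache line with exclusive accesses. */
            while (qemu_atomic_read(&spin->value)) {
                cpu_relax();
            }
        }
    }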
static void kvm_cpu_kick_self(void) @@ -2400,7 +2400,7 @@ static void kvm_eat_signals(CPUState *cpu) int r; =20 if (kvm_immediate_exit) { - atomic_set(&cpu->kvm_run->immediate_exit, 0); + qemu_atomic_set(&cpu->kvm_run->immediate_exit, 0); /* Write kvm_run->immediate_exit before the cpu->exit_request * write in kvm_cpu_exec. */ @@ -2434,7 +2434,7 @@ int kvm_cpu_exec(CPUState *cpu) DPRINTF("kvm_cpu_exec()\n"); =20 if (kvm_arch_process_async_events(cpu)) { - atomic_set(&cpu->exit_request, 0); + qemu_atomic_set(&cpu->exit_request, 0); return EXCP_HLT; } =20 @@ -2450,7 +2450,7 @@ int kvm_cpu_exec(CPUState *cpu) } =20 kvm_arch_pre_run(cpu, run); - if (atomic_read(&cpu->exit_request)) { + if (qemu_atomic_read(&cpu->exit_request)) { DPRINTF("interrupt exit requested\n"); /* * KVM requires us to reenter the kernel after IO exits to com= plete @@ -2577,7 +2577,7 @@ int kvm_cpu_exec(CPUState *cpu) vm_stop(RUN_STATE_INTERNAL_ERROR); } =20 - atomic_set(&cpu->exit_request, 0); + qemu_atomic_set(&cpu->exit_request, 0); return ret; } =20 @@ -2994,7 +2994,7 @@ int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void = *addr) have_sigbus_pending =3D true; pending_sigbus_addr =3D addr; pending_sigbus_code =3D code; - atomic_set(&cpu->exit_request, 1); + qemu_atomic_set(&cpu->exit_request, 1); return 0; #else return 1; diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c index 66d38f9d85..abbd5e9588 100644 --- a/accel/tcg/cpu-exec.c +++ b/accel/tcg/cpu-exec.c @@ -367,7 +367,9 @@ static inline void tb_add_jump(TranslationBlock *tb, in= t n, goto out_unlock_next; } /* Atomically claim the jump destination slot only if it was NULL */ - old =3D atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)t= b_next); + old =3D qemu_atomic_cmpxchg(&tb->jmp_dest[n], + (uintptr_t)NULL, + (uintptr_t)tb_next); if (old) { goto out_unlock_next; } @@ -407,7 +409,7 @@ static inline TranslationBlock *tb_find(CPUState *cpu, tb =3D tb_gen_code(cpu, pc, cs_base, flags, cf_mask); mmap_unlock(); /* We add the TB in the virtual pc hash table for the fast lookup = */ - atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb); + qemu_atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb= ); } #ifndef CONFIG_USER_ONLY /* We don't take care of direct jumps when address mapping changes in @@ -536,9 +538,9 @@ static inline bool cpu_handle_interrupt(CPUState *cpu, * Ensure zeroing happens before reading cpu->exit_request or * cpu->interrupt_request (see also smp_wmb in cpu_exit()) */ - atomic_mb_set(&cpu_neg(cpu)->icount_decr.u16.high, 0); + qemu_atomic_mb_set(&cpu_neg(cpu)->icount_decr.u16.high, 0); =20 - if (unlikely(atomic_read(&cpu->interrupt_request))) { + if (unlikely(qemu_atomic_read(&cpu->interrupt_request))) { int interrupt_request; qemu_mutex_lock_iothread(); interrupt_request =3D cpu->interrupt_request; @@ -613,10 +615,10 @@ static inline bool cpu_handle_interrupt(CPUState *cpu, } =20 /* Finally, check if we need to exit to the main loop. 
*/ - if (unlikely(atomic_read(&cpu->exit_request)) + if (unlikely(qemu_atomic_read(&cpu->exit_request)) || (use_icount && cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra =3D= =3D 0)) { - atomic_set(&cpu->exit_request, 0); + qemu_atomic_set(&cpu->exit_request, 0); if (cpu->exception_index =3D=3D -1) { cpu->exception_index =3D EXCP_INTERRUPT; } @@ -642,7 +644,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, Tran= slationBlock *tb, } =20 *last_tb =3D NULL; - insns_left =3D atomic_read(&cpu_neg(cpu)->icount_decr.u32); + insns_left =3D qemu_atomic_read(&cpu_neg(cpu)->icount_decr.u32); if (insns_left < 0) { /* Something asked us to stop executing chained TBs; just * continue round the main loop. Whatever requested the exit diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 6489abbf8c..eaaf0af574 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -312,9 +312,9 @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, siz= e_t *pelide) CPU_FOREACH(cpu) { CPUArchState *env =3D cpu->env_ptr; =20 - full +=3D atomic_read(&env_tlb(env)->c.full_flush_count); - part +=3D atomic_read(&env_tlb(env)->c.part_flush_count); - elide +=3D atomic_read(&env_tlb(env)->c.elide_flush_count); + full +=3D qemu_atomic_read(&env_tlb(env)->c.full_flush_count); + part +=3D qemu_atomic_read(&env_tlb(env)->c.part_flush_count); + elide +=3D qemu_atomic_read(&env_tlb(env)->c.elide_flush_count); } *pfull =3D full; *ppart =3D part; @@ -349,13 +349,13 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *= cpu, run_on_cpu_data data) cpu_tb_jmp_cache_clear(cpu); =20 if (to_clean =3D=3D ALL_MMUIDX_BITS) { - atomic_set(&env_tlb(env)->c.full_flush_count, + qemu_atomic_set(&env_tlb(env)->c.full_flush_count, env_tlb(env)->c.full_flush_count + 1); } else { - atomic_set(&env_tlb(env)->c.part_flush_count, + qemu_atomic_set(&env_tlb(env)->c.part_flush_count, env_tlb(env)->c.part_flush_count + ctpop16(to_clean)); if (to_clean !=3D asked) { - atomic_set(&env_tlb(env)->c.elide_flush_count, + qemu_atomic_set(&env_tlb(env)->c.elide_flush_count, env_tlb(env)->c.elide_flush_count + ctpop16(asked & ~to_clean)); } @@ -693,7 +693,7 @@ void tlb_unprotect_code(ram_addr_t ram_addr) * generated code. * * Other vCPUs might be reading their TLBs during guest execution, so we u= pdate - * te->addr_write with atomic_set. We don't need to worry about this for + * te->addr_write with qemu_atomic_set. We don't need to worry about this = for * oversized guests as MTTCG is disabled for them. * * Called with tlb_c.lock held. 
@@ -711,7 +711,7 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *t= lb_entry, #if TCG_OVERSIZED_GUEST tlb_entry->addr_write |=3D TLB_NOTDIRTY; #else - atomic_set(&tlb_entry->addr_write, + qemu_atomic_set(&tlb_entry->addr_write, tlb_entry->addr_write | TLB_NOTDIRTY); #endif } @@ -1138,8 +1138,8 @@ static inline target_ulong tlb_read_ofs(CPUTLBEntry *= entry, size_t ofs) #if TCG_OVERSIZED_GUEST return *(target_ulong *)((uintptr_t)entry + ofs); #else - /* ofs might correspond to .addr_write, so use atomic_read */ - return atomic_read((target_ulong *)((uintptr_t)entry + ofs)); + /* ofs might correspond to .addr_write, so use qemu_atomic_read */ + return qemu_atomic_read((target_ulong *)((uintptr_t)entry + ofs)); #endif } =20 @@ -1155,11 +1155,11 @@ static bool victim_tlb_hit(CPUArchState *env, size_= t mmu_idx, size_t index, CPUTLBEntry *vtlb =3D &env_tlb(env)->d[mmu_idx].vtable[vidx]; target_ulong cmp; =20 - /* elt_ofs might correspond to .addr_write, so use atomic_read */ + /* elt_ofs might correspond to .addr_write, so use qemu_atomic_rea= d */ #if TCG_OVERSIZED_GUEST cmp =3D *(target_ulong *)((uintptr_t)vtlb + elt_ofs); #else - cmp =3D atomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs)); + cmp =3D qemu_atomic_read((target_ulong *)((uintptr_t)vtlb + elt_of= s)); #endif =20 if (cmp =3D=3D page) { diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c index 7098ad96c3..2d2ea21b78 100644 --- a/accel/tcg/tcg-all.c +++ b/accel/tcg/tcg-all.c @@ -65,7 +65,7 @@ static void tcg_handle_interrupt(CPUState *cpu, int mask) if (!qemu_cpu_is_self(cpu)) { qemu_cpu_kick(cpu); } else { - atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); + qemu_atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); if (use_icount && !cpu->can_do_io && (mask & ~old_mask) !=3D 0) { diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 2d83013633..2ccc44694b 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -377,9 +377,9 @@ static int cpu_restore_state_from_tb(CPUState *cpu, Tra= nslationBlock *tb, restore_state_to_opc(env, tb, data); =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->restore_time, + qemu_atomic_set(&prof->restore_time, prof->restore_time + profile_getclock() - ti); - atomic_set(&prof->restore_count, prof->restore_count + 1); + qemu_atomic_set(&prof->restore_count, prof->restore_count + 1); #endif return 0; } @@ -509,7 +509,7 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, = int alloc) =20 /* Level 2..N-1. 
*/ for (i =3D v_l2_levels; i > 0; i--) { - void **p =3D atomic_rcu_read(lp); + void **p =3D qemu_atomic_rcu_read(lp); =20 if (p =3D=3D NULL) { void *existing; @@ -518,7 +518,7 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, = int alloc) return NULL; } p =3D g_new0(void *, V_L2_SIZE); - existing =3D atomic_cmpxchg(lp, NULL, p); + existing =3D qemu_atomic_cmpxchg(lp, NULL, p); if (unlikely(existing)) { g_free(p); p =3D existing; @@ -528,7 +528,7 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, = int alloc) lp =3D p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1)); } =20 - pd =3D atomic_rcu_read(lp); + pd =3D qemu_atomic_rcu_read(lp); if (pd =3D=3D NULL) { void *existing; =20 @@ -545,7 +545,7 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, = int alloc) } } #endif - existing =3D atomic_cmpxchg(lp, NULL, pd); + existing =3D qemu_atomic_cmpxchg(lp, NULL, pd); if (unlikely(existing)) { #ifndef CONFIG_USER_ONLY { @@ -1253,7 +1253,7 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_dat= a tb_flush_count) tcg_region_reset_all(); /* XXX: flush processor icache at this point if cache flush is expensive */ - atomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1); + qemu_atomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1); =20 done: mmap_unlock(); @@ -1265,7 +1265,7 @@ done: void tb_flush(CPUState *cpu) { if (tcg_enabled()) { - unsigned tb_flush_count =3D atomic_mb_read(&tb_ctx.tb_flush_count); + unsigned tb_flush_count =3D qemu_atomic_mb_read(&tb_ctx.tb_flush_c= ount); =20 if (cpu_in_exclusive_context(cpu)) { do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count)); @@ -1358,7 +1358,7 @@ static inline void tb_remove_from_jmp_list(Translatio= nBlock *orig, int n_orig) int n; =20 /* mark the LSB of jmp_dest[] so that no further jumps can be inserted= */ - ptr =3D atomic_or_fetch(&orig->jmp_dest[n_orig], 1); + ptr =3D qemu_atomic_or_fetch(&orig->jmp_dest[n_orig], 1); dest =3D (TranslationBlock *)(ptr & ~1); if (dest =3D=3D NULL) { return; @@ -1369,7 +1369,7 @@ static inline void tb_remove_from_jmp_list(Translatio= nBlock *orig, int n_orig) * While acquiring the lock, the jump might have been removed if the * destination TB was invalidated; check again. 
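The page_find_alloc() hunks above use a common lock-free publication idiom: allocate a candidate, try to install it with a NULL -> new cmpxchg, and if another thread won the race, free the local copy and adopt the winner. A generic sketch of that pattern (install_once() is a hypothetical helper):

    /* Publish *slot exactly once without a lock; readers use
     * qemu_atomic_rcu_read(), as in the patch. */
    static void *install_once(void **slot, size_t size)
    {
        void *cur = qemu_atomic_rcu_read(slot);

        if (cur == NULL) {
            void *fresh = g_malloc0(size);
            void *existing = qemu_atomic_cmpxchg(slot, NULL, fresh);

            if (existing != NULL) {
                g_free(fresh);      /* lost the race, use the winner's copy */
                cur = existing;
            } else {
                cur = fresh;
            }
        }
        return cur;
    }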
*/ - ptr_locked =3D atomic_read(&orig->jmp_dest[n_orig]); + ptr_locked =3D qemu_atomic_read(&orig->jmp_dest[n_orig]); if (ptr_locked !=3D ptr) { qemu_spin_unlock(&dest->jmp_lock); /* @@ -1415,7 +1415,7 @@ static inline void tb_jmp_unlink(TranslationBlock *de= st) =20 TB_FOR_EACH_JMP(dest, tb, n) { tb_reset_jump(tb, n); - atomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1); + qemu_atomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1); /* No need to clear the list entry; setting the dest ptr is enough= */ } dest->jmp_list_head =3D (uintptr_t)NULL; @@ -1439,7 +1439,7 @@ static void do_tb_phys_invalidate(TranslationBlock *t= b, bool rm_from_page_list) =20 /* make sure no further incoming jumps will be chained to this TB */ qemu_spin_lock(&tb->jmp_lock); - atomic_set(&tb->cflags, tb->cflags | CF_INVALID); + qemu_atomic_set(&tb->cflags, tb->cflags | CF_INVALID); qemu_spin_unlock(&tb->jmp_lock); =20 /* remove the TB from the hash list */ @@ -1466,8 +1466,8 @@ static void do_tb_phys_invalidate(TranslationBlock *t= b, bool rm_from_page_list) /* remove the TB from the hash list */ h =3D tb_jmp_cache_hash_func(tb->pc); CPU_FOREACH(cpu) { - if (atomic_read(&cpu->tb_jmp_cache[h]) =3D=3D tb) { - atomic_set(&cpu->tb_jmp_cache[h], NULL); + if (qemu_atomic_read(&cpu->tb_jmp_cache[h]) =3D=3D tb) { + qemu_atomic_set(&cpu->tb_jmp_cache[h], NULL); } } =20 @@ -1478,7 +1478,7 @@ static void do_tb_phys_invalidate(TranslationBlock *t= b, bool rm_from_page_list) /* suppress any remaining jumps to this TB */ tb_jmp_unlink(tb); =20 - atomic_set(&tcg_ctx->tb_phys_invalidate_count, + qemu_atomic_set(&tcg_ctx->tb_phys_invalidate_count, tcg_ctx->tb_phys_invalidate_count + 1); } =20 @@ -1733,7 +1733,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu, =20 #ifdef CONFIG_PROFILER /* includes aborted translations because of exceptions */ - atomic_set(&prof->tb_count1, prof->tb_count1 + 1); + qemu_atomic_set(&prof->tb_count1, prof->tb_count1 + 1); ti =3D profile_getclock(); #endif =20 @@ -1758,8 +1758,9 @@ TranslationBlock *tb_gen_code(CPUState *cpu, } =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->tb_count, prof->tb_count + 1); - atomic_set(&prof->interm_time, prof->interm_time + profile_getclock() = - ti); + qemu_atomic_set(&prof->tb_count, prof->tb_count + 1); + qemu_atomic_set(&prof->interm_time, + prof->interm_time + profile_getclock() - ti); ti =3D profile_getclock(); #endif =20 @@ -1804,10 +1805,11 @@ TranslationBlock *tb_gen_code(CPUState *cpu, tb->tc.size =3D gen_code_size; =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti= ); - atomic_set(&prof->code_in_len, prof->code_in_len + tb->size); - atomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size); - atomic_set(&prof->search_out_len, prof->search_out_len + search_size); + qemu_atomic_set(&prof->code_time, + prof->code_time + profile_getclock() - ti); + qemu_atomic_set(&prof->code_in_len, prof->code_in_len + tb->size); + qemu_atomic_set(&prof->code_out_len, prof->code_out_len + gen_code_siz= e); + qemu_atomic_set(&prof->search_out_len, prof->search_out_len + search_s= ize); #endif =20 #ifdef DEBUG_DISAS @@ -1869,7 +1871,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu, } #endif =20 - atomic_set(&tcg_ctx->code_gen_ptr, (void *) + qemu_atomic_set(&tcg_ctx->code_gen_ptr, (void *) ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size, CODE_GEN_ALIGN)); =20 @@ -1905,7 +1907,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu, uintptr_t orig_aligned =3D (uintptr_t)gen_code_buf; =20 orig_aligned -=3D 
ROUND_UP(sizeof(*tb), qemu_icache_linesize); - atomic_set(&tcg_ctx->code_gen_ptr, (void *)orig_aligned); + qemu_atomic_set(&tcg_ctx->code_gen_ptr, (void *)orig_aligned); tb_destroy(tb); return existing_tb; } @@ -2273,7 +2275,7 @@ static void tb_jmp_cache_clear_page(CPUState *cpu, ta= rget_ulong page_addr) unsigned int i, i0 =3D tb_jmp_cache_hash_page(page_addr); =20 for (i =3D 0; i < TB_JMP_PAGE_SIZE; i++) { - atomic_set(&cpu->tb_jmp_cache[i0 + i], NULL); + qemu_atomic_set(&cpu->tb_jmp_cache[i0 + i], NULL); } } =20 @@ -2393,7 +2395,7 @@ void dump_exec_info(void) =20 qemu_printf("\nStatistics:\n"); qemu_printf("TB flush count %u\n", - atomic_read(&tb_ctx.tb_flush_count)); + qemu_atomic_read(&tb_ctx.tb_flush_count)); qemu_printf("TB invalidate count %zu\n", tcg_tb_phys_invalidate_count()); =20 @@ -2415,7 +2417,7 @@ void cpu_interrupt(CPUState *cpu, int mask) { g_assert(qemu_mutex_iothread_locked()); cpu->interrupt_request |=3D mask; - atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); + qemu_atomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); } =20 /* diff --git a/audio/jackaudio.c b/audio/jackaudio.c index 72ed7c4929..e0e9e005b7 100644 --- a/audio/jackaudio.c +++ b/audio/jackaudio.c @@ -104,7 +104,7 @@ static void qjack_buffer_create(QJackBuffer *buffer, in= t channels, int frames) static void qjack_buffer_clear(QJackBuffer *buffer) { assert(buffer->data); - atomic_store_release(&buffer->used, 0); + qemu_atomic_store_release(&buffer->used, 0); buffer->rptr =3D 0; buffer->wptr =3D 0; } @@ -129,7 +129,8 @@ static int qjack_buffer_write(QJackBuffer *buffer, floa= t *data, int size) assert(buffer->data); const int samples =3D size / sizeof(float); int frames =3D samples / buffer->channels; - const int avail =3D buffer->frames - atomic_load_acquire(&buffer->us= ed); + const int avail =3D buffer->frames - + qemu_atomic_load_acquire(&buffer->used); =20 if (frames > avail) { frames =3D avail; @@ -153,7 +154,7 @@ static int qjack_buffer_write(QJackBuffer *buffer, floa= t *data, int size) =20 buffer->wptr =3D wptr; =20 - atomic_add(&buffer->used, frames); + qemu_atomic_add(&buffer->used, frames); return frames * buffer->channels * sizeof(float); }; =20 @@ -161,7 +162,8 @@ static int qjack_buffer_write(QJackBuffer *buffer, floa= t *data, int size) static int qjack_buffer_write_l(QJackBuffer *buffer, float **dest, int fra= mes) { assert(buffer->data); - const int avail =3D buffer->frames - atomic_load_acquire(&buffer->us= ed); + const int avail =3D buffer->frames - + qemu_atomic_load_acquire(&buffer->used); int wptr =3D buffer->wptr; =20 if (frames > avail) { @@ -185,7 +187,7 @@ static int qjack_buffer_write_l(QJackBuffer *buffer, fl= oat **dest, int frames) } buffer->wptr =3D wptr; =20 - atomic_add(&buffer->used, frames); + qemu_atomic_add(&buffer->used, frames); return frames; } =20 @@ -195,7 +197,7 @@ static int qjack_buffer_read(QJackBuffer *buffer, float= *dest, int size) assert(buffer->data); const int samples =3D size / sizeof(float); int frames =3D samples / buffer->channels; - const int avail =3D atomic_load_acquire(&buffer->used); + const int avail =3D qemu_atomic_load_acquire(&buffer->used); =20 if (frames > avail) { frames =3D avail; @@ -219,7 +221,7 @@ static int qjack_buffer_read(QJackBuffer *buffer, float= *dest, int size) =20 buffer->rptr =3D rptr; =20 - atomic_sub(&buffer->used, frames); + qemu_atomic_sub(&buffer->used, frames); return frames * buffer->channels * sizeof(float); } =20 @@ -228,7 +230,7 @@ static int qjack_buffer_read_l(QJackBuffer *buffer, flo= at **dest, int frames) 
{ assert(buffer->data); int copy =3D frames; - const int used =3D atomic_load_acquire(&buffer->used); + const int used =3D qemu_atomic_load_acquire(&buffer->used); int rptr =3D buffer->rptr; =20 if (copy > used) { @@ -252,7 +254,7 @@ static int qjack_buffer_read_l(QJackBuffer *buffer, flo= at **dest, int frames) } buffer->rptr =3D rptr; =20 - atomic_sub(&buffer->used, copy); + qemu_atomic_sub(&buffer->used, copy); return copy; } =20 diff --git a/block.c b/block.c index 9538af4884..54987b3ad5 100644 --- a/block.c +++ b/block.c @@ -1694,7 +1694,7 @@ static int bdrv_open_common(BlockDriverState *bs, Blo= ckBackend *file, } =20 /* bdrv_new() and bdrv_close() make it so */ - assert(atomic_read(&bs->copy_on_read) =3D=3D 0); + assert(qemu_atomic_read(&bs->copy_on_read) =3D=3D 0); =20 if (bs->open_flags & BDRV_O_COPY_ON_READ) { if (!bs->read_only) { @@ -4436,7 +4436,7 @@ static void bdrv_close(BlockDriverState *bs) bs->file =3D NULL; g_free(bs->opaque); bs->opaque =3D NULL; - atomic_set(&bs->copy_on_read, 0); + qemu_atomic_set(&bs->copy_on_read, 0); bs->backing_file[0] =3D '\0'; bs->backing_format[0] =3D '\0'; bs->total_sectors =3D 0; diff --git a/block/block-backend.c b/block/block-backend.c index 24dd0670d1..05390209a6 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -1353,12 +1353,12 @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFla= gs flags) =20 void blk_inc_in_flight(BlockBackend *blk) { - atomic_inc(&blk->in_flight); + qemu_atomic_inc(&blk->in_flight); } =20 void blk_dec_in_flight(BlockBackend *blk) { - atomic_dec(&blk->in_flight); + qemu_atomic_dec(&blk->in_flight); aio_wait_kick(); } =20 @@ -1720,7 +1720,7 @@ void blk_drain(BlockBackend *blk) =20 /* We may have -ENOMEDIUM completions in flight */ AIO_WAIT_WHILE(blk_get_aio_context(blk), - atomic_mb_read(&blk->in_flight) > 0); + qemu_atomic_mb_read(&blk->in_flight) > 0); =20 if (bs) { bdrv_drained_end(bs); @@ -1739,7 +1739,7 @@ void blk_drain_all(void) aio_context_acquire(ctx); =20 /* We may have -ENOMEDIUM completions in flight */ - AIO_WAIT_WHILE(ctx, atomic_mb_read(&blk->in_flight) > 0); + AIO_WAIT_WHILE(ctx, qemu_atomic_mb_read(&blk->in_flight) > 0); =20 aio_context_release(ctx); } @@ -2346,6 +2346,7 @@ void blk_io_limits_update_group(BlockBackend *blk, co= nst char *group) static void blk_root_drained_begin(BdrvChild *child) { BlockBackend *blk =3D child->opaque; + ThrottleGroupMember *tgm =3D &blk->public.throttle_group_member; =20 if (++blk->quiesce_counter =3D=3D 1) { if (blk->dev_ops && blk->dev_ops->drained_begin) { @@ -2356,8 +2357,8 @@ static void blk_root_drained_begin(BdrvChild *child) /* Note that blk->root may not be accessible here yet if we are just * attaching to a BlockDriverState that is drained. 
Use child instead.= */ =20 - if (atomic_fetch_inc(&blk->public.throttle_group_member.io_limits_disa= bled) =3D=3D 0) { - throttle_group_restart_tgm(&blk->public.throttle_group_member); + if (qemu_atomic_fetch_inc(&tgm->io_limits_disabled) =3D=3D 0) { + throttle_group_restart_tgm(tgm); } } =20 @@ -2374,7 +2375,7 @@ static void blk_root_drained_end(BdrvChild *child, in= t *drained_end_counter) assert(blk->quiesce_counter); =20 assert(blk->public.throttle_group_member.io_limits_disabled); - atomic_dec(&blk->public.throttle_group_member.io_limits_disabled); + qemu_atomic_dec(&blk->public.throttle_group_member.io_limits_disabled); =20 if (--blk->quiesce_counter =3D=3D 0) { if (blk->dev_ops && blk->dev_ops->drained_end) { diff --git a/block/io.c b/block/io.c index a2389bb38c..58a152ee59 100644 --- a/block/io.c +++ b/block/io.c @@ -69,7 +69,7 @@ void bdrv_parent_drained_end_single(BdrvChild *c) { int drained_end_counter =3D 0; bdrv_parent_drained_end_single_no_poll(c, &drained_end_counter); - BDRV_POLL_WHILE(c->bs, atomic_read(&drained_end_counter) > 0); + BDRV_POLL_WHILE(c->bs, qemu_atomic_read(&drained_end_counter) > 0); } =20 static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignor= e, @@ -186,12 +186,12 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error = **errp) */ void bdrv_enable_copy_on_read(BlockDriverState *bs) { - atomic_inc(&bs->copy_on_read); + qemu_atomic_inc(&bs->copy_on_read); } =20 void bdrv_disable_copy_on_read(BlockDriverState *bs) { - int old =3D atomic_fetch_dec(&bs->copy_on_read); + int old =3D qemu_atomic_fetch_dec(&bs->copy_on_read); assert(old >=3D 1); } =20 @@ -219,9 +219,9 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *= opaque) } =20 /* Set data->done and decrement drained_end_counter before bdrv_wakeup= () */ - atomic_mb_set(&data->done, true); + qemu_atomic_mb_set(&data->done, true); if (!data->begin) { - atomic_dec(data->drained_end_counter); + qemu_atomic_dec(data->drained_end_counter); } bdrv_dec_in_flight(bs); =20 @@ -248,7 +248,7 @@ static void bdrv_drain_invoke(BlockDriverState *bs, boo= l begin, }; =20 if (!begin) { - atomic_inc(drained_end_counter); + qemu_atomic_inc(drained_end_counter); } =20 /* Make sure the driver callback completes during the polling phase for @@ -268,7 +268,7 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursi= ve, return true; } =20 - if (atomic_read(&bs->in_flight)) { + if (qemu_atomic_read(&bs->in_flight)) { return true; } =20 @@ -382,7 +382,7 @@ void bdrv_do_drained_begin_quiesce(BlockDriverState *bs, assert(!qemu_in_coroutine()); =20 /* Stop things in parent-to-child order */ - if (atomic_fetch_inc(&bs->quiesce_counter) =3D=3D 0) { + if (qemu_atomic_fetch_inc(&bs->quiesce_counter) =3D=3D 0) { aio_disable_external(bdrv_get_aio_context(bs)); } =20 @@ -473,7 +473,7 @@ static void bdrv_do_drained_end(BlockDriverState *bs, b= ool recursive, bdrv_parent_drained_end(bs, parent, ignore_bds_parents, drained_end_counter); =20 - old_quiesce_counter =3D atomic_fetch_dec(&bs->quiesce_counter); + old_quiesce_counter =3D qemu_atomic_fetch_dec(&bs->quiesce_counter); if (old_quiesce_counter =3D=3D 1) { aio_enable_external(bdrv_get_aio_context(bs)); } @@ -492,7 +492,7 @@ void bdrv_drained_end(BlockDriverState *bs) { int drained_end_counter =3D 0; bdrv_do_drained_end(bs, false, NULL, false, &drained_end_counter); - BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0); + BDRV_POLL_WHILE(bs, qemu_atomic_read(&drained_end_counter) > 0); } =20 void bdrv_drained_end_no_poll(BlockDriverState *bs, int 
*drained_end_count= er) @@ -504,7 +504,7 @@ void bdrv_subtree_drained_end(BlockDriverState *bs) { int drained_end_counter =3D 0; bdrv_do_drained_end(bs, true, NULL, false, &drained_end_counter); - BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0); + BDRV_POLL_WHILE(bs, qemu_atomic_read(&drained_end_counter) > 0); } =20 void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_pare= nt) @@ -526,7 +526,7 @@ void bdrv_unapply_subtree_drain(BdrvChild *child, Block= DriverState *old_parent) &drained_end_counter); } =20 - BDRV_POLL_WHILE(child->bs, atomic_read(&drained_end_counter) > 0); + BDRV_POLL_WHILE(child->bs, qemu_atomic_read(&drained_end_counter) > 0); } =20 /* @@ -553,7 +553,7 @@ static void bdrv_drain_assert_idle(BlockDriverState *bs) { BdrvChild *child, *next; =20 - assert(atomic_read(&bs->in_flight) =3D=3D 0); + assert(qemu_atomic_read(&bs->in_flight) =3D=3D 0); QLIST_FOREACH_SAFE(child, &bs->children, next, next) { bdrv_drain_assert_idle(child->bs); } @@ -655,7 +655,7 @@ void bdrv_drain_all_end(void) } =20 assert(qemu_get_current_aio_context() =3D=3D qemu_get_aio_context()); - AIO_WAIT_WHILE(NULL, atomic_read(&drained_end_counter) > 0); + AIO_WAIT_WHILE(NULL, qemu_atomic_read(&drained_end_counter) > 0); =20 assert(bdrv_drain_all_count > 0); bdrv_drain_all_count--; @@ -675,7 +675,7 @@ void bdrv_drain_all(void) static void tracked_request_end(BdrvTrackedRequest *req) { if (req->serialising) { - atomic_dec(&req->bs->serialising_in_flight); + qemu_atomic_dec(&req->bs->serialising_in_flight); } =20 qemu_co_mutex_lock(&req->bs->reqs_lock); @@ -777,7 +777,7 @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *= req, uint64_t align) =20 qemu_co_mutex_lock(&bs->reqs_lock); if (!req->serialising) { - atomic_inc(&req->bs->serialising_in_flight); + qemu_atomic_inc(&req->bs->serialising_in_flight); req->serialising =3D true; } =20 @@ -841,7 +841,7 @@ static int bdrv_get_cluster_size(BlockDriverState *bs) =20 void bdrv_inc_in_flight(BlockDriverState *bs) { - atomic_inc(&bs->in_flight); + qemu_atomic_inc(&bs->in_flight); } =20 void bdrv_wakeup(BlockDriverState *bs) @@ -851,7 +851,7 @@ void bdrv_wakeup(BlockDriverState *bs) =20 void bdrv_dec_in_flight(BlockDriverState *bs) { - atomic_dec(&bs->in_flight); + qemu_atomic_dec(&bs->in_flight); bdrv_wakeup(bs); } =20 @@ -860,7 +860,7 @@ static bool coroutine_fn bdrv_wait_serialising_requests= (BdrvTrackedRequest *self BlockDriverState *bs =3D self->bs; bool waited =3D false; =20 - if (!atomic_read(&bs->serialising_in_flight)) { + if (!qemu_atomic_read(&bs->serialising_in_flight)) { return false; } =20 @@ -1747,7 +1747,7 @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child, bdrv_inc_in_flight(bs); =20 /* Don't do copy-on-read if we read data before write operation */ - if (atomic_read(&bs->copy_on_read)) { + if (qemu_atomic_read(&bs->copy_on_read)) { flags |=3D BDRV_REQ_COPY_ON_READ; } =20 @@ -1935,7 +1935,7 @@ bdrv_co_write_req_finish(BdrvChild *child, int64_t of= fset, uint64_t bytes, int64_t end_sector =3D DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE); BlockDriverState *bs =3D child->bs; =20 - atomic_inc(&bs->write_gen); + qemu_atomic_inc(&bs->write_gen); =20 /* * Discard cannot extend the image, but in error handling cases, such = as @@ -2768,7 +2768,7 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs) } =20 qemu_co_mutex_lock(&bs->reqs_lock); - current_gen =3D atomic_read(&bs->write_gen); + current_gen =3D qemu_atomic_read(&bs->write_gen); =20 /* Wait until any previous flushes are completed */ while 
(bs->active_flush_req) { @@ -3116,7 +3116,7 @@ void bdrv_io_plug(BlockDriverState *bs) bdrv_io_plug(child->bs); } =20 - if (atomic_fetch_inc(&bs->io_plugged) =3D=3D 0) { + if (qemu_atomic_fetch_inc(&bs->io_plugged) =3D=3D 0) { BlockDriver *drv =3D bs->drv; if (drv && drv->bdrv_io_plug) { drv->bdrv_io_plug(bs); @@ -3129,7 +3129,7 @@ void bdrv_io_unplug(BlockDriverState *bs) BdrvChild *child; =20 assert(bs->io_plugged); - if (atomic_fetch_dec(&bs->io_plugged) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&bs->io_plugged) =3D=3D 1) { BlockDriver *drv =3D bs->drv; if (drv && drv->bdrv_io_unplug) { drv->bdrv_io_unplug(bs); diff --git a/block/nfs.c b/block/nfs.c index 61a249a9fc..d266f93d8c 100644 --- a/block/nfs.c +++ b/block/nfs.c @@ -721,7 +721,7 @@ nfs_get_allocated_file_size_cb(int ret, struct nfs_cont= ext *nfs, void *data, } =20 /* Set task->complete before reading bs->wakeup. */ - atomic_mb_set(&task->complete, 1); + qemu_atomic_mb_set(&task->complete, 1); bdrv_wakeup(task->bs); } =20 diff --git a/block/sheepdog.c b/block/sheepdog.c index cbbebc1aaf..b535830799 100644 --- a/block/sheepdog.c +++ b/block/sheepdog.c @@ -665,7 +665,7 @@ out: srco->co =3D NULL; srco->ret =3D ret; /* Set srco->finished before reading bs->wakeup. */ - atomic_mb_set(&srco->finished, true); + qemu_atomic_mb_set(&srco->finished, true); if (srco->bs) { bdrv_wakeup(srco->bs); } diff --git a/block/throttle-groups.c b/block/throttle-groups.c index 4e28365d8d..8d84d7cf61 100644 --- a/block/throttle-groups.c +++ b/block/throttle-groups.c @@ -228,7 +228,7 @@ static ThrottleGroupMember *next_throttle_token(Throttl= eGroupMember *tgm, * immediately if it has pending requests. Otherwise we could be * forcing it to wait for other member's throttled requests. */ if (tgm_has_pending_reqs(tgm, is_write) && - atomic_read(&tgm->io_limits_disabled)) { + qemu_atomic_read(&tgm->io_limits_disabled)) { return tgm; } =20 @@ -272,7 +272,7 @@ static bool throttle_group_schedule_timer(ThrottleGroup= Member *tgm, ThrottleTimers *tt =3D &tgm->throttle_timers; bool must_wait; =20 - if (atomic_read(&tgm->io_limits_disabled)) { + if (qemu_atomic_read(&tgm->io_limits_disabled)) { return false; } =20 @@ -417,7 +417,7 @@ static void coroutine_fn throttle_group_restart_queue_e= ntry(void *opaque) =20 g_free(data); =20 - atomic_dec(&tgm->restart_pending); + qemu_atomic_dec(&tgm->restart_pending); aio_wait_kick(); } =20 @@ -434,7 +434,7 @@ static void throttle_group_restart_queue(ThrottleGroupM= ember *tgm, bool is_write * be no timer pending on this tgm at this point */ assert(!timer_pending(tgm->throttle_timers.timers[is_write])); =20 - atomic_inc(&tgm->restart_pending); + qemu_atomic_inc(&tgm->restart_pending); =20 co =3D qemu_coroutine_create(throttle_group_restart_queue_entry, rd); aio_co_enter(tgm->aio_context, co); @@ -544,7 +544,7 @@ void throttle_group_register_tgm(ThrottleGroupMember *t= gm, =20 tgm->throttle_state =3D ts; tgm->aio_context =3D ctx; - atomic_set(&tgm->restart_pending, 0); + qemu_atomic_set(&tgm->restart_pending, 0); =20 qemu_mutex_lock(&tg->lock); /* If the ThrottleGroup is new set this ThrottleGroupMember as the tok= en */ @@ -592,7 +592,8 @@ void throttle_group_unregister_tgm(ThrottleGroupMember = *tgm) } =20 /* Wait for throttle_group_restart_queue_entry() coroutines to finish = */ - AIO_WAIT_WHILE(tgm->aio_context, atomic_read(&tgm->restart_pending) > = 0); + AIO_WAIT_WHILE(tgm->aio_context, + qemu_atomic_read(&tgm->restart_pending) > 0); =20 qemu_mutex_lock(&tg->lock); for (i =3D 0; i < 2; i++) { diff --git 
a/block/throttle.c b/block/throttle.c index 9a0f38149a..879cde65d7 100644 --- a/block/throttle.c +++ b/block/throttle.c @@ -217,7 +217,7 @@ static void throttle_reopen_abort(BDRVReopenState *reop= en_state) static void coroutine_fn throttle_co_drain_begin(BlockDriverState *bs) { ThrottleGroupMember *tgm =3D bs->opaque; - if (atomic_fetch_inc(&tgm->io_limits_disabled) =3D=3D 0) { + if (qemu_atomic_fetch_inc(&tgm->io_limits_disabled) =3D=3D 0) { throttle_group_restart_tgm(tgm); } } @@ -226,7 +226,7 @@ static void coroutine_fn throttle_co_drain_end(BlockDri= verState *bs) { ThrottleGroupMember *tgm =3D bs->opaque; assert(tgm->io_limits_disabled); - atomic_dec(&tgm->io_limits_disabled); + qemu_atomic_dec(&tgm->io_limits_disabled); } =20 static const char *const throttle_strong_runtime_opts[] =3D { diff --git a/blockdev.c b/blockdev.c index 7f2561081e..6c72ac46f4 100644 --- a/blockdev.c +++ b/blockdev.c @@ -1604,7 +1604,7 @@ static void external_snapshot_commit(BlkActionState *= common) /* We don't need (or want) to use the transactional * bdrv_reopen_multiple() across all the entries at once, because we * don't want to abort all of them if one of them fails the reopen */ - if (!atomic_read(&state->old_bs->copy_on_read)) { + if (!qemu_atomic_read(&state->old_bs->copy_on_read)) { bdrv_reopen_set_read_only(state->old_bs, true, NULL); } =20 diff --git a/blockjob.c b/blockjob.c index 470facfd47..e21e2ce77c 100644 --- a/blockjob.c +++ b/blockjob.c @@ -298,7 +298,7 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **er= rp) info =3D g_new0(BlockJobInfo, 1); info->type =3D g_strdup(job_type_str(&job->job)); info->device =3D g_strdup(job->job.id); - info->busy =3D atomic_read(&job->job.busy); + info->busy =3D qemu_atomic_read(&job->job.busy); info->paused =3D job->job.pause_count > 0; info->offset =3D job->job.progress.current; info->len =3D job->job.progress.total; diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/= libvhost-user.c index 53f16bdf08..8d9623bcbc 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -448,7 +448,7 @@ static void vu_log_page(uint8_t *log_table, uint64_t page) { DPRINT("Logged dirty guest page: %"PRId64"\n", page); - atomic_or(&log_table[page / 8], 1 << (page % 8)); + qemu_atomic_or(&log_table[page / 8], 1 << (page % 8)); } =20 static void diff --git a/cpus-common.c b/cpus-common.c index 34044f4e4c..6a6b82de52 100644 --- a/cpus-common.c +++ b/cpus-common.c @@ -148,7 +148,7 @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func,= run_on_cpu_data data, wi.exclusive =3D false; =20 queue_work_on_cpu(cpu, &wi); - while (!atomic_mb_read(&wi.done)) { + while (!qemu_atomic_mb_read(&wi.done)) { CPUState *self_cpu =3D current_cpu; =20 qemu_cond_wait(&qemu_work_cond, mutex); @@ -188,20 +188,20 @@ void start_exclusive(void) exclusive_idle(); =20 /* Make all other cpus stop executing. */ - atomic_set(&pending_cpus, 1); + qemu_atomic_set(&pending_cpus, 1); =20 /* Write pending_cpus before reading other_cpu->running. 
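The cpus-common.c changes in this region lean on the classic store-then-full-barrier-then-load pairing: each side publishes its own flag, issues smp_mb(), and only then inspects the other side's flag, which guarantees that at least one of the two threads observes the other. The same pattern reduced to a self-contained sketch (flag_a/flag_b stand in for pending_cpus and cpu->running; qemu/osdep.h and qemu/atomic.h are assumed):

    /* Store-buffering (Dekker) pattern: after both threads have run,
     * at least one of them sees the other's flag set. */
    static int flag_a, flag_b;

    static bool thread_a_saw_b(void)
    {
        qemu_atomic_set(&flag_a, 1);
        smp_mb();                      /* order the store before the load */
        return qemu_atomic_read(&flag_b) != 0;
    }

    static bool thread_b_saw_a(void)
    {
        qemu_atomic_set(&flag_b, 1);
        smp_mb();
        return qemu_atomic_read(&flag_a) != 0;
    }

It is impossible for both functions to return false once both have run; start_exclusive() and cpu_exec_start() rely on exactly this to agree on who waits for whom.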
*/ smp_mb(); running_cpus =3D 0; CPU_FOREACH(other_cpu) { - if (atomic_read(&other_cpu->running)) { + if (qemu_atomic_read(&other_cpu->running)) { other_cpu->has_waiter =3D true; running_cpus++; qemu_cpu_kick(other_cpu); } } =20 - atomic_set(&pending_cpus, running_cpus + 1); + qemu_atomic_set(&pending_cpus, running_cpus + 1); while (pending_cpus > 1) { qemu_cond_wait(&exclusive_cond, &qemu_cpu_list_lock); } @@ -220,7 +220,7 @@ void end_exclusive(void) current_cpu->in_exclusive_context =3D false; =20 qemu_mutex_lock(&qemu_cpu_list_lock); - atomic_set(&pending_cpus, 0); + qemu_atomic_set(&pending_cpus, 0); qemu_cond_broadcast(&exclusive_resume); qemu_mutex_unlock(&qemu_cpu_list_lock); } @@ -228,7 +228,7 @@ void end_exclusive(void) /* Wait for exclusive ops to finish, and begin cpu execution. */ void cpu_exec_start(CPUState *cpu) { - atomic_set(&cpu->running, true); + qemu_atomic_set(&cpu->running, true); =20 /* Write cpu->running before reading pending_cpus. */ smp_mb(); @@ -246,17 +246,17 @@ void cpu_exec_start(CPUState *cpu) * 3. pending_cpus =3D=3D 0. Then start_exclusive is definitely going= to * see cpu->running =3D=3D true, and it will kick the CPU. */ - if (unlikely(atomic_read(&pending_cpus))) { + if (unlikely(qemu_atomic_read(&pending_cpus))) { QEMU_LOCK_GUARD(&qemu_cpu_list_lock); if (!cpu->has_waiter) { /* Not counted in pending_cpus, let the exclusive item * run. Since we have the lock, just set cpu->running to true * while holding it; no need to check pending_cpus again. */ - atomic_set(&cpu->running, false); + qemu_atomic_set(&cpu->running, false); exclusive_idle(); /* Now pending_cpus is zero. */ - atomic_set(&cpu->running, true); + qemu_atomic_set(&cpu->running, true); } else { /* Counted in pending_cpus, go ahead and release the * waiter at cpu_exec_end. @@ -268,7 +268,7 @@ void cpu_exec_start(CPUState *cpu) /* Mark cpu as not executing, and release pending exclusive ops. */ void cpu_exec_end(CPUState *cpu) { - atomic_set(&cpu->running, false); + qemu_atomic_set(&cpu->running, false); =20 /* Write cpu->running before reading pending_cpus. */ smp_mb(); @@ -288,11 +288,11 @@ void cpu_exec_end(CPUState *cpu) * see cpu->running =3D=3D false, and it can ignore this CPU until the * next cpu_exec_start. */ - if (unlikely(atomic_read(&pending_cpus))) { + if (unlikely(qemu_atomic_read(&pending_cpus))) { QEMU_LOCK_GUARD(&qemu_cpu_list_lock); if (cpu->has_waiter) { cpu->has_waiter =3D false; - atomic_set(&pending_cpus, pending_cpus - 1); + qemu_atomic_set(&pending_cpus, pending_cpus - 1); if (pending_cpus =3D=3D 1) { qemu_cond_signal(&exclusive_cond); } @@ -346,7 +346,7 @@ void process_queued_cpu_work(CPUState *cpu) if (wi->free) { g_free(wi); } else { - atomic_mb_set(&wi->done, true); + qemu_atomic_mb_set(&wi->done, true); } } qemu_mutex_unlock(&cpu->work_mutex); diff --git a/dump/dump.c b/dump/dump.c index 13fda440a4..1b2fbb6442 100644 --- a/dump/dump.c +++ b/dump/dump.c @@ -1572,7 +1572,7 @@ static void dump_state_prepare(DumpState *s) bool dump_in_progress(void) { DumpState *state =3D &dump_state_global; - return (atomic_read(&state->status) =3D=3D DUMP_STATUS_ACTIVE); + return (qemu_atomic_read(&state->status) =3D=3D DUMP_STATUS_ACTIVE); } =20 /* calculate total size of memory to be dumped (taking filter into @@ -1882,7 +1882,7 @@ static void dump_process(DumpState *s, Error **errp) =20 /* make sure status is written after written_size updates */ smp_wmb(); - atomic_set(&s->status, + qemu_atomic_set(&s->status, (local_err ? 
DUMP_STATUS_FAILED : DUMP_STATUS_COMPLETED)); =20 /* send DUMP_COMPLETED message (unconditionally) */ @@ -1908,7 +1908,7 @@ DumpQueryResult *qmp_query_dump(Error **errp) { DumpQueryResult *result =3D g_new(DumpQueryResult, 1); DumpState *state =3D &dump_state_global; - result->status =3D atomic_read(&state->status); + result->status =3D qemu_atomic_read(&state->status); /* make sure we are reading status and written_size in order */ smp_rmb(); result->completed =3D state->written_size; @@ -2013,7 +2013,7 @@ void qmp_dump_guest_memory(bool paging, const char *f= ile, begin, length, &local_err); if (local_err) { error_propagate(errp, local_err); - atomic_set(&s->status, DUMP_STATUS_FAILED); + qemu_atomic_set(&s->status, DUMP_STATUS_FAILED); return; } =20 diff --git a/exec.c b/exec.c index e34b602bdf..236e9eca1a 100644 --- a/exec.c +++ b/exec.c @@ -353,13 +353,13 @@ static MemoryRegionSection *address_space_lookup_regi= on(AddressSpaceDispatch *d, hwaddr addr, bool resolve_subpa= ge) { - MemoryRegionSection *section =3D atomic_read(&d->mru_section); + MemoryRegionSection *section =3D qemu_atomic_read(&d->mru_section); subpage_t *subpage; =20 if (!section || section =3D=3D &d->map.sections[PHYS_SECTION_UNASSIGNE= D] || !section_covers_addr(section, addr)) { section =3D phys_page_find(d, addr); - atomic_set(&d->mru_section, section); + qemu_atomic_set(&d->mru_section, section); } if (resolve_subpage && section->mr->subpage) { subpage =3D container_of(section->mr, subpage_t, iomem); @@ -695,7 +695,8 @@ address_space_translate_for_iotlb(CPUState *cpu, int as= idx, hwaddr addr, IOMMUMemoryRegionClass *imrc; IOMMUTLBEntry iotlb; int iommu_idx; - AddressSpaceDispatch *d =3D atomic_rcu_read(&cpu->cpu_ases[asidx].memo= ry_dispatch); + AddressSpaceDispatch *d =3D + qemu_atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch); =20 for (;;) { section =3D address_space_translate_internal(d, addr, &addr, plen,= false); @@ -1247,7 +1248,7 @@ static RAMBlock *qemu_get_ram_block(ram_addr_t addr) { RAMBlock *block; =20 - block =3D atomic_rcu_read(&ram_list.mru_block); + block =3D qemu_atomic_rcu_read(&ram_list.mru_block); if (block && addr - block->offset < block->max_length) { return block; } @@ -1273,7 +1274,7 @@ found: * call_rcu(reclaim_ramblock, x= xx); * rcu_read_unlock() * - * atomic_rcu_set is not needed here. The block was already published + * qemu_atomic_rcu_set is not needed here. The block was already publ= ished * when it was placed into the list. Here we're just making an extra * copy of the pointer. 
*/ @@ -1321,7 +1322,7 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_add= r_t start, page =3D start_page; =20 WITH_RCU_READ_LOCK_GUARD() { - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[client]); ramblock =3D qemu_get_ram_block(start); /* Range sanity check on the ramblock */ assert(start >=3D ramblock->offset && @@ -1371,7 +1372,7 @@ DirtyBitmapSnapshot *cpu_physical_memory_snapshot_and= _clear_dirty dest =3D 0; =20 WITH_RCU_READ_LOCK_GUARD() { - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[client]); =20 while (page < end) { unsigned long idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; @@ -2207,7 +2208,7 @@ static void dirty_memory_extend(ram_addr_t old_ram_si= ze, DirtyMemoryBlocks *new_blocks; int j; =20 - old_blocks =3D atomic_rcu_read(&ram_list.dirty_memory[i]); + old_blocks =3D qemu_atomic_rcu_read(&ram_list.dirty_memory[i]); new_blocks =3D g_malloc(sizeof(*new_blocks) + sizeof(new_blocks->blocks[0]) * new_num_bloc= ks); =20 @@ -2220,7 +2221,7 @@ static void dirty_memory_extend(ram_addr_t old_ram_si= ze, new_blocks->blocks[j] =3D bitmap_new(DIRTY_MEMORY_BLOCK_SIZE); } =20 - atomic_rcu_set(&ram_list.dirty_memory[i], new_blocks); + qemu_atomic_rcu_set(&ram_list.dirty_memory[i], new_blocks); =20 if (old_blocks) { g_free_rcu(old_blocks, rcu); @@ -2667,7 +2668,7 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool ro= und_offset, } =20 RCU_READ_LOCK_GUARD(); - block =3D atomic_rcu_read(&ram_list.mru_block); + block =3D qemu_atomic_rcu_read(&ram_list.mru_block); if (block && block->host && host - block->host < block->max_length) { goto found; } @@ -2912,7 +2913,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu, { int asidx =3D cpu_asidx_from_attrs(cpu, attrs); CPUAddressSpace *cpuas =3D &cpu->cpu_ases[asidx]; - AddressSpaceDispatch *d =3D atomic_rcu_read(&cpuas->memory_dispatch); + AddressSpaceDispatch *d =3D qemu_atomic_rcu_read(&cpuas->memory_dispat= ch); MemoryRegionSection *sections =3D d->map.sections; =20 return §ions[index & ~TARGET_PAGE_MASK]; @@ -2996,7 +2997,7 @@ static void tcg_commit(MemoryListener *listener) * may have split the RCU critical section. 
*/ d =3D address_space_to_dispatch(cpuas->as); - atomic_rcu_set(&cpuas->memory_dispatch, d); + qemu_atomic_rcu_set(&cpuas->memory_dispatch, d); tlb_flush(cpuas->cpu); } =20 @@ -3443,7 +3444,7 @@ void cpu_register_map_client(QEMUBH *bh) qemu_mutex_lock(&map_client_list_lock); client->bh =3D bh; QLIST_INSERT_HEAD(&map_client_list, client, link); - if (!atomic_read(&bounce.in_use)) { + if (!qemu_atomic_read(&bounce.in_use)) { cpu_notify_map_clients_locked(); } qemu_mutex_unlock(&map_client_list_lock); @@ -3577,7 +3578,7 @@ void *address_space_map(AddressSpace *as, mr =3D flatview_translate(fv, addr, &xlat, &l, is_write, attrs); =20 if (!memory_access_is_direct(mr, is_write)) { - if (atomic_xchg(&bounce.in_use, true)) { + if (qemu_atomic_xchg(&bounce.in_use, true)) { *plen =3D 0; return NULL; } @@ -3636,7 +3637,7 @@ void address_space_unmap(AddressSpace *as, void *buff= er, hwaddr len, qemu_vfree(bounce.buffer); bounce.buffer =3D NULL; memory_region_unref(bounce.mr); - atomic_mb_set(&bounce.in_use, false); + qemu_atomic_mb_set(&bounce.in_use, false); cpu_notify_map_clients(); } =20 @@ -4105,16 +4106,17 @@ int ram_block_discard_disable(bool state) int old; =20 if (!state) { - atomic_dec(&ram_block_discard_disabled); + qemu_atomic_dec(&ram_block_discard_disabled); return 0; } =20 do { - old =3D atomic_read(&ram_block_discard_disabled); + old =3D qemu_atomic_read(&ram_block_discard_disabled); if (old < 0) { return -EBUSY; } - } while (atomic_cmpxchg(&ram_block_discard_disabled, old, old + 1) != =3D old); + } while (qemu_atomic_cmpxchg(&ram_block_discard_disabled, + old, old + 1) !=3D old); return 0; } =20 @@ -4123,27 +4125,28 @@ int ram_block_discard_require(bool state) int old; =20 if (!state) { - atomic_inc(&ram_block_discard_disabled); + qemu_atomic_inc(&ram_block_discard_disabled); return 0; } =20 do { - old =3D atomic_read(&ram_block_discard_disabled); + old =3D qemu_atomic_read(&ram_block_discard_disabled); if (old > 0) { return -EBUSY; } - } while (atomic_cmpxchg(&ram_block_discard_disabled, old, old - 1) != =3D old); + } while (qemu_atomic_cmpxchg(&ram_block_discard_disabled, + old, old - 1) !=3D old); return 0; } =20 bool ram_block_discard_is_disabled(void) { - return atomic_read(&ram_block_discard_disabled) > 0; + return qemu_atomic_read(&ram_block_discard_disabled) > 0; } =20 bool ram_block_discard_is_required(void) { - return atomic_read(&ram_block_discard_disabled) < 0; + return qemu_atomic_read(&ram_block_discard_disabled) < 0; } =20 #endif diff --git a/hw/core/cpu.c b/hw/core/cpu.c index 8f65383ffb..dda45764f5 100644 --- a/hw/core/cpu.c +++ b/hw/core/cpu.c @@ -111,10 +111,10 @@ void cpu_reset_interrupt(CPUState *cpu, int mask) =20 void cpu_exit(CPUState *cpu) { - atomic_set(&cpu->exit_request, 1); + qemu_atomic_set(&cpu->exit_request, 1); /* Ensure cpu_exec will see the exit request after TCG has exited. 
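ram_block_discard_disable() and ram_block_discard_require() above share a single counter: positive values count callers that forbid discarding, negative values count callers that depend on it, and the compare-and-swap loop returns -EBUSY instead of letting the counter cross zero in the conflicting direction. A standalone C11 sketch of that conflict counter, with illustrative names and plain <stdatomic.h> in place of the qemu_atomic_*() helpers; note that C11's compare_exchange updates the expected value on failure, so the retry loop does not need to reload it.

    #include <errno.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int discard_disabled;   /* > 0: disabled, < 0: required */

    static int discard_disable(bool state)
    {
        int old;

        if (!state) {
            atomic_fetch_sub(&discard_disabled, 1);
            return 0;
        }
        old = atomic_load(&discard_disabled);
        do {
            if (old < 0) {
                return -EBUSY;         /* somebody requires discarding */
            }
        } while (!atomic_compare_exchange_weak(&discard_disabled, &old, old + 1));
        return 0;
    }

    static int discard_require(bool state)
    {
        int old;

        if (!state) {
            atomic_fetch_add(&discard_disabled, 1);
            return 0;
        }
        old = atomic_load(&discard_disabled);
        do {
            if (old > 0) {
                return -EBUSY;         /* somebody disabled discarding */
            }
        } while (!atomic_compare_exchange_weak(&discard_disabled, &old, old - 1));
        return 0;
    }

    int main(void)
    {
        int a = discard_disable(true);   /* 0 */
        int b = discard_require(true);   /* -EBUSY, conflicts with 'a' */
        return (a == 0 && b == -EBUSY) ? 0 : 1;
    }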
*/ smp_wmb(); - atomic_set(&cpu->icount_decr_ptr->u16.high, -1); + qemu_atomic_set(&cpu->icount_decr_ptr->u16.high, -1); } =20 int cpu_write_elf32_qemunote(WriteCoreDumpFunction f, CPUState *cpu, @@ -261,7 +261,7 @@ static void cpu_common_reset(DeviceState *dev) cpu->halted =3D cpu->start_powered_off; cpu->mem_io_pc =3D 0; cpu->icount_extra =3D 0; - atomic_set(&cpu->icount_decr_ptr->u32, 0); + qemu_atomic_set(&cpu->icount_decr_ptr->u32, 0); cpu->can_do_io =3D 1; cpu->exception_index =3D -1; cpu->crash_occurred =3D false; diff --git a/hw/display/qxl.c b/hw/display/qxl.c index 11871340e7..0a33027685 100644 --- a/hw/display/qxl.c +++ b/hw/display/qxl.c @@ -1908,7 +1908,7 @@ static void qxl_send_events(PCIQXLDevice *d, uint32_t= events) /* * Older versions of Spice forgot to define the QXLRam struct * with the '__aligned__(4)' attribute. clang 7 and newer will - * thus warn that atomic_fetch_or(&d->ram->int_pending, ...) + * thus warn that qemu_atomic_fetch_or(&d->ram->int_pending, ...) * might be a misaligned atomic access, and will generate an * out-of-line call for it, which results in a link error since * we don't currently link against libatomic. @@ -1928,8 +1928,9 @@ static void qxl_send_events(PCIQXLDevice *d, uint32_t= events) #define ALIGNED_UINT32_PTR(P) ((uint32_t *)P) #endif =20 - old_pending =3D atomic_fetch_or(ALIGNED_UINT32_PTR(&d->ram->int_pendin= g), - le_events); + old_pending =3D + qemu_atomic_fetch_or(ALIGNED_UINT32_PTR(&d->ram->int_pending), + le_events); if ((old_pending & le_events) =3D=3D le_events) { return; } diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c index aa5a2a9bd8..a832b85ad0 100644 --- a/hw/hyperv/hyperv.c +++ b/hw/hyperv/hyperv.c @@ -233,7 +233,7 @@ static void sint_msg_bh(void *opaque) HvSintRoute *sint_route =3D opaque; HvSintStagedMessage *staged_msg =3D sint_route->staged_msg; =20 - if (atomic_read(&staged_msg->state) !=3D HV_STAGED_MSG_POSTED) { + if (qemu_atomic_read(&staged_msg->state) !=3D HV_STAGED_MSG_POSTED) { /* status nor ready yet (spurious ack from guest?), ignore */ return; } @@ -242,7 +242,7 @@ static void sint_msg_bh(void *opaque) staged_msg->status =3D 0; =20 /* staged message processing finished, ready to start over */ - atomic_set(&staged_msg->state, HV_STAGED_MSG_FREE); + qemu_atomic_set(&staged_msg->state, HV_STAGED_MSG_FREE); /* drop the reference taken in hyperv_post_msg */ hyperv_sint_route_unref(sint_route); } @@ -280,7 +280,7 @@ static void cpu_post_msg(CPUState *cs, run_on_cpu_data = data) memory_region_set_dirty(&synic->msg_page_mr, 0, sizeof(*synic->msg_pag= e)); =20 posted: - atomic_set(&staged_msg->state, HV_STAGED_MSG_POSTED); + qemu_atomic_set(&staged_msg->state, HV_STAGED_MSG_POSTED); /* * Notify the msg originator of the progress made; if the slot was bus= y we * set msg_pending flag in it so it will be the guest who will do EOM = and @@ -303,7 +303,7 @@ int hyperv_post_msg(HvSintRoute *sint_route, struct hyp= erv_message *src_msg) assert(staged_msg); =20 /* grab the staging area */ - if (atomic_cmpxchg(&staged_msg->state, HV_STAGED_MSG_FREE, + if (qemu_atomic_cmpxchg(&staged_msg->state, HV_STAGED_MSG_FREE, HV_STAGED_MSG_BUSY) !=3D HV_STAGED_MSG_FREE) { return -EAGAIN; } @@ -353,7 +353,8 @@ int hyperv_set_event_flag(HvSintRoute *sint_route, unsi= gned eventno) set_mask =3D BIT_MASK(eventno); flags =3D synic->event_page->slot[sint_route->sint].flags; =20 - if ((atomic_fetch_or(&flags[set_idx], set_mask) & set_mask) !=3D set_m= ask) { + if ((qemu_atomic_fetch_or(&flags[set_idx], set_mask) & set_mask) !=3D + set_mask) 
{ memory_region_set_dirty(&synic->event_page_mr, 0, sizeof(*synic->event_page)); ret =3D hyperv_sint_route_set_sint(sint_route); diff --git a/hw/hyperv/vmbus.c b/hw/hyperv/vmbus.c index 6ef895bc35..c5ad261600 100644 --- a/hw/hyperv/vmbus.c +++ b/hw/hyperv/vmbus.c @@ -747,7 +747,7 @@ static int vmbus_channel_notify_guest(VMBusChannel *cha= n) =20 idx =3D BIT_WORD(chan->id); mask =3D BIT_MASK(chan->id); - if ((atomic_fetch_or(&int_map[idx], mask) & mask) !=3D mask) { + if ((qemu_atomic_fetch_or(&int_map[idx], mask) & mask) !=3D mask) { res =3D hyperv_sint_route_set_sint(chan->notify_route); dirty =3D len; } diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c index cde981bad6..ca6b3e2408 100644 --- a/hw/i386/xen/xen-hvm.c +++ b/hw/i386/xen/xen-hvm.c @@ -1140,7 +1140,7 @@ static int handle_buffered_iopage(XenIOState *state) assert(req.dir =3D=3D IOREQ_WRITE); assert(!req.data_is_ptr); =20 - atomic_add(&buf_page->read_pointer, qw + 1); + qemu_atomic_add(&buf_page->read_pointer, qw + 1); } =20 return req.count; diff --git a/hw/intc/rx_icu.c b/hw/intc/rx_icu.c index df4b6a8d22..4220120739 100644 --- a/hw/intc/rx_icu.c +++ b/hw/intc/rx_icu.c @@ -81,8 +81,8 @@ static void rxicu_request(RXICUState *icu, int n_IRQ) int enable; =20 enable =3D icu->ier[n_IRQ / 8] & (1 << (n_IRQ & 7)); - if (n_IRQ > 0 && enable !=3D 0 && atomic_read(&icu->req_irq) < 0) { - atomic_set(&icu->req_irq, n_IRQ); + if (n_IRQ > 0 && enable !=3D 0 && qemu_atomic_read(&icu->req_irq) < 0)= { + qemu_atomic_set(&icu->req_irq, n_IRQ); set_irq(icu, n_IRQ, rxicu_level(icu, n_IRQ)); } } @@ -124,10 +124,10 @@ static void rxicu_set_irq(void *opaque, int n_IRQ, in= t level) } if (issue =3D=3D 0 && src->sense =3D=3D TRG_LEVEL) { icu->ir[n_IRQ] =3D 0; - if (atomic_read(&icu->req_irq) =3D=3D n_IRQ) { + if (qemu_atomic_read(&icu->req_irq) =3D=3D n_IRQ) { /* clear request */ set_irq(icu, n_IRQ, 0); - atomic_set(&icu->req_irq, -1); + qemu_atomic_set(&icu->req_irq, -1); } return; } @@ -144,11 +144,11 @@ static void rxicu_ack_irq(void *opaque, int no, int l= evel) int n_IRQ; int max_pri; =20 - n_IRQ =3D atomic_read(&icu->req_irq); + n_IRQ =3D qemu_atomic_read(&icu->req_irq); if (n_IRQ < 0) { return; } - atomic_set(&icu->req_irq, -1); + qemu_atomic_set(&icu->req_irq, -1); if (icu->src[n_IRQ].sense !=3D TRG_LEVEL) { icu->ir[n_IRQ] =3D 0; } diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c index af611f8db8..4b9e401b79 100644 --- a/hw/intc/sifive_plic.c +++ b/hw/intc/sifive_plic.c @@ -89,12 +89,12 @@ static void sifive_plic_print_state(SiFivePLICState *pl= ic) =20 static uint32_t atomic_set_masked(uint32_t *a, uint32_t mask, uint32_t val= ue) { - uint32_t old, new, cmp =3D atomic_read(a); + uint32_t old, new, cmp =3D qemu_atomic_read(a); =20 do { old =3D cmp; new =3D (old & ~mask) | (value & mask); - cmp =3D atomic_cmpxchg(a, old, new); + cmp =3D qemu_atomic_cmpxchg(a, old, new); } while (old !=3D cmp); =20 return old; diff --git a/hw/misc/edu.c b/hw/misc/edu.c index 0ff9d1ac78..c2196dafb5 100644 --- a/hw/misc/edu.c +++ b/hw/misc/edu.c @@ -212,7 +212,7 @@ static uint64_t edu_mmio_read(void *opaque, hwaddr addr= , unsigned size) qemu_mutex_unlock(&edu->thr_mutex); break; case 0x20: - val =3D atomic_read(&edu->status); + val =3D qemu_atomic_read(&edu->status); break; case 0x24: val =3D edu->irq_status; @@ -252,7 +252,7 @@ static void edu_mmio_write(void *opaque, hwaddr addr, u= int64_t val, edu->addr4 =3D ~val; break; case 0x08: - if (atomic_read(&edu->status) & EDU_STATUS_COMPUTING) { + if (qemu_atomic_read(&edu->status) & 
EDU_STATUS_COMPUTING) { break; } /* EDU_STATUS_COMPUTING cannot go 0->1 concurrently, because it is= only @@ -260,15 +260,15 @@ static void edu_mmio_write(void *opaque, hwaddr addr,= uint64_t val, */ qemu_mutex_lock(&edu->thr_mutex); edu->fact =3D val; - atomic_or(&edu->status, EDU_STATUS_COMPUTING); + qemu_atomic_or(&edu->status, EDU_STATUS_COMPUTING); qemu_cond_signal(&edu->thr_cond); qemu_mutex_unlock(&edu->thr_mutex); break; case 0x20: if (val & EDU_STATUS_IRQFACT) { - atomic_or(&edu->status, EDU_STATUS_IRQFACT); + qemu_atomic_or(&edu->status, EDU_STATUS_IRQFACT); } else { - atomic_and(&edu->status, ~EDU_STATUS_IRQFACT); + qemu_atomic_and(&edu->status, ~EDU_STATUS_IRQFACT); } break; case 0x60: @@ -322,7 +322,7 @@ static void *edu_fact_thread(void *opaque) uint32_t val, ret =3D 1; =20 qemu_mutex_lock(&edu->thr_mutex); - while ((atomic_read(&edu->status) & EDU_STATUS_COMPUTING) =3D=3D 0= && + while ((qemu_atomic_read(&edu->status) & EDU_STATUS_COMPUTING) =3D= =3D 0 && !edu->stopping) { qemu_cond_wait(&edu->thr_cond, &edu->thr_mutex); } @@ -347,9 +347,9 @@ static void *edu_fact_thread(void *opaque) qemu_mutex_lock(&edu->thr_mutex); edu->fact =3D ret; qemu_mutex_unlock(&edu->thr_mutex); - atomic_and(&edu->status, ~EDU_STATUS_COMPUTING); + qemu_atomic_and(&edu->status, ~EDU_STATUS_COMPUTING); =20 - if (atomic_read(&edu->status) & EDU_STATUS_IRQFACT) { + if (qemu_atomic_read(&edu->status) & EDU_STATUS_IRQFACT) { qemu_mutex_lock_iothread(); edu_raise_irq(edu, FACT_IRQ); qemu_mutex_unlock_iothread(); diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c index cb0d27084c..0c9c253367 100644 --- a/hw/net/virtio-net.c +++ b/hw/net/virtio-net.c @@ -933,7 +933,7 @@ static void virtio_net_set_features(VirtIODevice *vdev,= uint64_t features) =20 if (virtio_has_feature(features, VIRTIO_NET_F_STANDBY)) { qapi_event_send_failover_negotiated(n->netclient_name); - atomic_set(&n->primary_should_be_hidden, false); + qemu_atomic_set(&n->primary_should_be_hidden, false); failover_add_primary(n, &err); if (err) { n->primary_dev =3D virtio_connect_failover_devices(n, n->qdev,= &err); @@ -3168,7 +3168,7 @@ static void virtio_net_handle_migration_primary(VirtI= ONet *n, bool should_be_hidden; Error *err =3D NULL; =20 - should_be_hidden =3D atomic_read(&n->primary_should_be_hidden); + should_be_hidden =3D qemu_atomic_read(&n->primary_should_be_hidden); =20 if (!n->primary_dev) { n->primary_dev =3D virtio_connect_failover_devices(n, n->qdev, &er= r); @@ -3183,7 +3183,7 @@ static void virtio_net_handle_migration_primary(VirtI= ONet *n, qdev_get_vmsd(n->primary_dev), n->primary_dev); qapi_event_send_unplug_primary(n->primary_device_id); - atomic_set(&n->primary_should_be_hidden, true); + qemu_atomic_set(&n->primary_should_be_hidden, true); } else { warn_report("couldn't unplug primary device"); } @@ -3234,7 +3234,7 @@ static int virtio_net_primary_should_be_hidden(Device= Listener *listener, n->primary_device_opts =3D device_opts; =20 /* primary_should_be_hidden is set during feature negotiation */ - hide =3D atomic_read(&n->primary_should_be_hidden); + hide =3D qemu_atomic_read(&n->primary_should_be_hidden); =20 if (n->primary_device_dict) { g_free(n->primary_device_id); @@ -3291,7 +3291,7 @@ static void virtio_net_device_realize(DeviceState *de= v, Error **errp) if (n->failover) { n->primary_listener.should_be_hidden =3D virtio_net_primary_should_be_hidden; - atomic_set(&n->primary_should_be_hidden, true); + qemu_atomic_set(&n->primary_should_be_hidden, true); device_listener_register(&n->primary_listener); 
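The edu device changes above treat ->status as a word of independent flag bits: the MMIO write path sets EDU_STATUS_COMPUTING with an atomic OR and signals the worker, and the worker clears it with an atomic AND and then checks whether the guest left EDU_STATUS_IRQFACT set. A minimal standalone C11 sketch of that flag-word idiom (illustrative names, no threads, the interrupt is just a printf):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    enum {
        STATUS_COMPUTING = 0x01,
        STATUS_IRQFACT   = 0x80,
    };

    static atomic_uint status;

    static void start_job(void)
    {
        atomic_fetch_or(&status, STATUS_COMPUTING);    /* set one bit */
    }

    static void finish_job(void)
    {
        atomic_fetch_and(&status, ~STATUS_COMPUTING);  /* clear one bit */
        if (atomic_load(&status) & STATUS_IRQFACT) {
            puts("raise completion interrupt");
        }
    }

    static bool busy(void)
    {
        return atomic_load(&status) & STATUS_COMPUTING;
    }

    int main(void)
    {
        atomic_fetch_or(&status, STATUS_IRQFACT);      /* guest enabled the IRQ */
        start_job();
        printf("busy: %d\n", busy());
        finish_job();
        printf("busy: %d\n", busy());
        return 0;
    }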
n->migration_state.notify =3D virtio_net_migration_state_notifier; add_migration_state_change_notifier(&n->migration_state); diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c index db7e5c8be5..886ce8758e 100644 --- a/hw/rdma/rdma_backend.c +++ b/hw/rdma/rdma_backend.c @@ -68,7 +68,7 @@ static void free_cqe_ctx(gpointer data, gpointer user_dat= a) bctx =3D rdma_rm_get_cqe_ctx(rdma_dev_res, cqe_ctx_id); if (bctx) { rdma_rm_dealloc_cqe_ctx(rdma_dev_res, cqe_ctx_id); - atomic_dec(&rdma_dev_res->stats.missing_cqe); + qemu_atomic_dec(&rdma_dev_res->stats.missing_cqe); } g_free(bctx); } @@ -81,7 +81,7 @@ static void clean_recv_mads(RdmaBackendDev *backend_dev) cqe_ctx_id =3D rdma_protected_qlist_pop_int64(&backend_dev-> recv_mads_list); if (cqe_ctx_id !=3D -ENOENT) { - atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); + qemu_atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); free_cqe_ctx(GINT_TO_POINTER(cqe_ctx_id), backend_dev->rdma_dev_res); } @@ -123,7 +123,7 @@ static int rdma_poll_cq(RdmaDeviceResources *rdma_dev_r= es, struct ibv_cq *ibcq) } total_ne +=3D ne; } while (ne > 0); - atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne); + qemu_atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne); } =20 if (ne < 0) { @@ -195,17 +195,18 @@ static void *comp_handler_thread(void *arg) =20 static inline void disable_rdmacm_mux_async(RdmaBackendDev *backend_dev) { - atomic_set(&backend_dev->rdmacm_mux.can_receive, 0); + qemu_atomic_set(&backend_dev->rdmacm_mux.can_receive, 0); } =20 static inline void enable_rdmacm_mux_async(RdmaBackendDev *backend_dev) { - atomic_set(&backend_dev->rdmacm_mux.can_receive, sizeof(RdmaCmMuxMsg)); + qemu_atomic_set(&backend_dev->rdmacm_mux.can_receive, + sizeof(RdmaCmMuxMsg)); } =20 static inline int rdmacm_mux_can_process_async(RdmaBackendDev *backend_dev) { - return atomic_read(&backend_dev->rdmacm_mux.can_receive); + return qemu_atomic_read(&backend_dev->rdmacm_mux.can_receive); } =20 static int rdmacm_mux_check_op_status(CharBackend *mad_chr_be) @@ -555,7 +556,7 @@ void rdma_backend_post_send(RdmaBackendDev *backend_dev, goto err_dealloc_cqe_ctx; } =20 - atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); + qemu_atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); backend_dev->rdma_dev_res->stats.tx++; =20 return; @@ -658,7 +659,7 @@ void rdma_backend_post_recv(RdmaBackendDev *backend_dev, goto err_dealloc_cqe_ctx; } =20 - atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); + qemu_atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); backend_dev->rdma_dev_res->stats.rx_bufs++; =20 return; @@ -710,7 +711,7 @@ void rdma_backend_post_srq_recv(RdmaBackendDev *backend= _dev, goto err_dealloc_cqe_ctx; } =20 - atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); + qemu_atomic_inc(&backend_dev->rdma_dev_res->stats.missing_cqe); backend_dev->rdma_dev_res->stats.rx_bufs++; backend_dev->rdma_dev_res->stats.rx_srq++; =20 diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c index 60957f88db..d6955d1d1b 100644 --- a/hw/rdma/rdma_rm.c +++ b/hw/rdma/rdma_rm.c @@ -790,7 +790,7 @@ int rdma_rm_init(RdmaDeviceResources *dev_res, struct i= bv_device_attr *dev_attr) qemu_mutex_init(&dev_res->lock); =20 memset(&dev_res->stats, 0, sizeof(dev_res->stats)); - atomic_set(&dev_res->stats.missing_cqe, 0); + qemu_atomic_set(&dev_res->stats.missing_cqe, 0); =20 return 0; } diff --git a/hw/rdma/vmw/pvrdma_dev_ring.c b/hw/rdma/vmw/pvrdma_dev_ring.c index c122fe7035..43c7dfbd52 100644 --- a/hw/rdma/vmw/pvrdma_dev_ring.c +++ 
b/hw/rdma/vmw/pvrdma_dev_ring.c @@ -38,8 +38,8 @@ int pvrdma_ring_init(PvrdmaRing *ring, const char *name, = PCIDevice *dev, ring->max_elems =3D max_elems; ring->elem_sz =3D elem_sz; /* TODO: Give a moment to think if we want to redo driver settings - atomic_set(&ring->ring_state->prod_tail, 0); - atomic_set(&ring->ring_state->cons_head, 0); + qemu_atomic_set(&ring->ring_state->prod_tail, 0); + qemu_atomic_set(&ring->ring_state->cons_head, 0); */ ring->npages =3D npages; ring->pages =3D g_malloc(npages * sizeof(void *)); diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c index 92146a2119..84e40b4395 100644 --- a/hw/s390x/s390-pci-bus.c +++ b/hw/s390x/s390-pci-bus.c @@ -650,7 +650,7 @@ static uint8_t set_ind_atomic(uint64_t ind_loc, uint8_t= to_be_set) actual =3D *ind_addr; do { expected =3D actual; - actual =3D atomic_cmpxchg(ind_addr, expected, expected | to_be_set= ); + actual =3D qemu_atomic_cmpxchg(ind_addr, expected, expected | to_b= e_set); } while (actual !=3D expected); cpu_physical_memory_unmap((void *)ind_addr, len, 1, len); =20 diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c index 8feb3451a0..8aef2cc8a1 100644 --- a/hw/s390x/virtio-ccw.c +++ b/hw/s390x/virtio-ccw.c @@ -800,7 +800,7 @@ static uint8_t virtio_set_ind_atomic(SubchDev *sch, uin= t64_t ind_loc, actual =3D *ind_addr; do { expected =3D actual; - actual =3D atomic_cmpxchg(ind_addr, expected, expected | to_be_set= ); + actual =3D qemu_atomic_cmpxchg(ind_addr, expected, expected | to_b= e_set); } while (actual !=3D expected); trace_virtio_ccw_set_ind(ind_loc, actual, actual | to_be_set); cpu_physical_memory_unmap((void *)ind_addr, len, 1, len); diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index 1a1384e7a6..678c4c35e5 100644 --- a/hw/virtio/vhost.c +++ b/hw/virtio/vhost.c @@ -89,8 +89,8 @@ static void vhost_dev_sync_region(struct vhost_dev *dev, continue; } /* Data must be read atomically. We don't really need barrier sema= ntics - * but it's easier to use atomic_* than roll our own. */ - log =3D atomic_xchg(from, 0); + * but it's easier to use qemu_atomic_* than roll our own. 
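The vhost_dev_sync_region() hunk here harvests a word of the dirty log with qemu_atomic_xchg(from, 0): the exchange takes every bit that was set and clears the word in one step, so nothing is lost even if the producer keeps OR-ing new bits in, and the bits are then walked with ctzl(). A standalone C11 sketch of that take-and-clear idea, with illustrative names and a GCC/Clang builtin standing in for ctzl():

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic unsigned long dirty_log;

    /* Producer: mark page 'bit' dirty. */
    static void log_set(int bit)
    {
        atomic_fetch_or(&dirty_log, 1UL << bit);
    }

    /* Consumer: atomically take the whole word and clear it, then walk
     * whichever bits were set at that instant. */
    static void log_sync(void)
    {
        unsigned long log = atomic_exchange(&dirty_log, 0);

        while (log) {
            int bit = __builtin_ctzl(log);   /* lowest set bit */
            printf("page %d was dirtied\n", bit);
            log &= log - 1;                  /* clear that bit */
        }
    }

    int main(void)
    {
        log_set(3);
        log_set(17);
        log_sync();
        return 0;
    }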
*/ + log =3D qemu_atomic_xchg(from, 0); while (log) { int bit =3D ctzl(log); hwaddr page_addr; diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c index f12d1595aa..d0117a0b6f 100644 --- a/hw/virtio/virtio-mmio.c +++ b/hw/virtio/virtio-mmio.c @@ -179,7 +179,7 @@ static uint64_t virtio_mmio_read(void *opaque, hwaddr o= ffset, unsigned size) } return proxy->vqs[vdev->queue_sel].enabled; case VIRTIO_MMIO_INTERRUPT_STATUS: - return atomic_read(&vdev->isr); + return qemu_atomic_read(&vdev->isr); case VIRTIO_MMIO_STATUS: return vdev->status; case VIRTIO_MMIO_CONFIG_GENERATION: @@ -370,7 +370,7 @@ static void virtio_mmio_write(void *opaque, hwaddr offs= et, uint64_t value, } break; case VIRTIO_MMIO_INTERRUPT_ACK: - atomic_and(&vdev->isr, ~value); + qemu_atomic_and(&vdev->isr, ~value); virtio_update_irq(vdev); break; case VIRTIO_MMIO_STATUS: @@ -496,7 +496,7 @@ static void virtio_mmio_update_irq(DeviceState *opaque,= uint16_t vector) if (!vdev) { return; } - level =3D (atomic_read(&vdev->isr) !=3D 0); + level =3D (qemu_atomic_read(&vdev->isr) !=3D 0); trace_virtio_mmio_setting_irq(level); qemu_set_irq(proxy->irq, level); } diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index 5bc769f685..8dd6a00ee5 100644 --- a/hw/virtio/virtio-pci.c +++ b/hw/virtio/virtio-pci.c @@ -72,7 +72,7 @@ static void virtio_pci_notify(DeviceState *d, uint16_t ve= ctor) msix_notify(&proxy->pci_dev, vector); else { VirtIODevice *vdev =3D virtio_bus_get_device(&proxy->bus); - pci_set_irq(&proxy->pci_dev, atomic_read(&vdev->isr) & 1); + pci_set_irq(&proxy->pci_dev, qemu_atomic_read(&vdev->isr) & 1); } } =20 @@ -398,7 +398,7 @@ static uint32_t virtio_ioport_read(VirtIOPCIProxy *prox= y, uint32_t addr) break; case VIRTIO_PCI_ISR: /* reading from the ISR also clears it. */ - ret =3D atomic_xchg(&vdev->isr, 0); + ret =3D qemu_atomic_xchg(&vdev->isr, 0); pci_irq_deassert(&proxy->pci_dev); break; case VIRTIO_MSI_CONFIG_VECTOR: @@ -1362,7 +1362,7 @@ static uint64_t virtio_pci_isr_read(void *opaque, hwa= ddr addr, { VirtIOPCIProxy *proxy =3D opaque; VirtIODevice *vdev =3D virtio_bus_get_device(&proxy->bus); - uint64_t val =3D atomic_xchg(&vdev->isr, 0); + uint64_t val =3D qemu_atomic_xchg(&vdev->isr, 0); pci_irq_deassert(&proxy->pci_dev); =20 return val; diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index e983025217..08f3c1cdd7 100644 --- a/hw/virtio/virtio.c +++ b/hw/virtio/virtio.c @@ -149,8 +149,8 @@ static void virtio_virtqueue_reset_region_cache(struct = VirtQueue *vq) { VRingMemoryRegionCaches *caches; =20 - caches =3D atomic_read(&vq->vring.caches); - atomic_rcu_set(&vq->vring.caches, NULL); + caches =3D qemu_atomic_read(&vq->vring.caches); + qemu_atomic_rcu_set(&vq->vring.caches, NULL); if (caches) { call_rcu(caches, virtio_free_region_cache, rcu); } @@ -197,7 +197,7 @@ static void virtio_init_region_cache(VirtIODevice *vdev= , int n) goto err_avail; } =20 - atomic_rcu_set(&vq->vring.caches, new); + qemu_atomic_rcu_set(&vq->vring.caches, new); if (old) { call_rcu(old, virtio_free_region_cache, rcu); } @@ -283,7 +283,7 @@ static void vring_packed_flags_write(VirtIODevice *vdev, /* Called within rcu_read_lock(). */ static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *= vq) { - return atomic_rcu_read(&vq->vring.caches); + return qemu_atomic_rcu_read(&vq->vring.caches); } =20 /* Called within rcu_read_lock(). 
*/ @@ -2007,7 +2007,7 @@ void virtio_reset(void *opaque) vdev->queue_sel =3D 0; vdev->status =3D 0; vdev->disabled =3D false; - atomic_set(&vdev->isr, 0); + qemu_atomic_set(&vdev->isr, 0); vdev->config_vector =3D VIRTIO_NO_VECTOR; virtio_notify_vector(vdev, vdev->config_vector); =20 @@ -2439,13 +2439,13 @@ void virtio_del_queue(VirtIODevice *vdev, int n) =20 static void virtio_set_isr(VirtIODevice *vdev, int value) { - uint8_t old =3D atomic_read(&vdev->isr); + uint8_t old =3D qemu_atomic_read(&vdev->isr); =20 /* Do not write ISR if it does not change, so that its cacheline remai= ns * shared in the common case where the guest does not read it. */ if ((old & value) !=3D value) { - atomic_or(&vdev->isr, value); + qemu_atomic_or(&vdev->isr, value); } } =20 @@ -3254,7 +3254,7 @@ void virtio_init(VirtIODevice *vdev, const char *name, vdev->started =3D false; vdev->device_id =3D device_id; vdev->status =3D 0; - atomic_set(&vdev->isr, 0); + qemu_atomic_set(&vdev->isr, 0); vdev->queue_sel =3D 0; vdev->config_vector =3D VIRTIO_NO_VECTOR; vdev->vq =3D g_malloc0(sizeof(VirtQueue) * VIRTIO_QUEUE_MAX); diff --git a/hw/xtensa/pic_cpu.c b/hw/xtensa/pic_cpu.c index 1d5982a9e4..2d28392b7a 100644 --- a/hw/xtensa/pic_cpu.c +++ b/hw/xtensa/pic_cpu.c @@ -72,9 +72,9 @@ static void xtensa_set_irq(void *opaque, int irq, int act= ive) uint32_t irq_bit =3D 1 << irq; =20 if (active) { - atomic_or(&env->sregs[INTSET], irq_bit); + qemu_atomic_or(&env->sregs[INTSET], irq_bit); } else if (env->config->interrupt[irq].inttype =3D=3D INTTYPE_LEVE= L) { - atomic_and(&env->sregs[INTSET], ~irq_bit); + qemu_atomic_and(&env->sregs[INTSET], ~irq_bit); } =20 check_interrupts(env); diff --git a/iothread.c b/iothread.c index 3a3860a09c..54e2d30c3d 100644 --- a/iothread.c +++ b/iothread.c @@ -76,7 +76,7 @@ static void *iothread_run(void *opaque) * We must check the running state again in case it was * changed in previous aio_poll() */ - if (iothread->running && atomic_read(&iothread->run_gcontext)) { + if (iothread->running && qemu_atomic_read(&iothread->run_gcontext)= ) { g_main_loop_run(iothread->main_loop); } } @@ -116,7 +116,7 @@ static void iothread_instance_init(Object *obj) iothread->thread_id =3D -1; qemu_sem_init(&iothread->init_done_sem, 0); /* By default, we don't run gcontext */ - atomic_set(&iothread->run_gcontext, 0); + qemu_atomic_set(&iothread->run_gcontext, 0); } =20 static void iothread_instance_finalize(Object *obj) @@ -348,7 +348,7 @@ IOThreadInfoList *qmp_query_iothreads(Error **errp) =20 GMainContext *iothread_get_g_main_context(IOThread *iothread) { - atomic_set(&iothread->run_gcontext, 1); + qemu_atomic_set(&iothread->run_gcontext, 1); aio_notify(iothread->ctx); return iothread->worker_context; } diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c index 9915456a1d..7ef313c325 100644 --- a/linux-user/hppa/cpu_loop.c +++ b/linux-user/hppa/cpu_loop.c @@ -39,7 +39,7 @@ static abi_ulong hppa_lws(CPUHPPAState *env) } old =3D tswap32(old); new =3D tswap32(new); - ret =3D atomic_cmpxchg((uint32_t *)g2h(addr), old, new); + ret =3D qemu_atomic_cmpxchg((uint32_t *)g2h(addr), old, new); ret =3D tswap32(ret); break; =20 @@ -60,19 +60,19 @@ static abi_ulong hppa_lws(CPUHPPAState *env) case 0: old =3D *(uint8_t *)g2h(old); new =3D *(uint8_t *)g2h(new); - ret =3D atomic_cmpxchg((uint8_t *)g2h(addr), old, new); + ret =3D qemu_atomic_cmpxchg((uint8_t *)g2h(addr), old, new); ret =3D ret !=3D old; break; case 1: old =3D *(uint16_t *)g2h(old); new =3D *(uint16_t *)g2h(new); - ret =3D atomic_cmpxchg((uint16_t 
*)g2h(addr), old, new); + ret =3D qemu_atomic_cmpxchg((uint16_t *)g2h(addr), old, new); ret =3D ret !=3D old; break; case 2: old =3D *(uint32_t *)g2h(old); new =3D *(uint32_t *)g2h(new); - ret =3D atomic_cmpxchg((uint32_t *)g2h(addr), old, new); + ret =3D qemu_atomic_cmpxchg((uint32_t *)g2h(addr), old, new); ret =3D ret !=3D old; break; case 3: @@ -81,7 +81,8 @@ static abi_ulong hppa_lws(CPUHPPAState *env) o64 =3D *(uint64_t *)g2h(old); n64 =3D *(uint64_t *)g2h(new); #ifdef CONFIG_ATOMIC64 - r64 =3D atomic_cmpxchg__nocheck((uint64_t *)g2h(addr), o64= , n64); + r64 =3D qemu_atomic_cmpxchg__nocheck((uint64_t *)g2h(addr), + o64, n64); ret =3D r64 !=3D o64; #else start_exclusive(); diff --git a/linux-user/signal.c b/linux-user/signal.c index 8cf51ffecd..ac4eee73f0 100644 --- a/linux-user/signal.c +++ b/linux-user/signal.c @@ -195,7 +195,7 @@ int block_signals(void) sigfillset(&set); sigprocmask(SIG_SETMASK, &set, 0); =20 - return atomic_xchg(&ts->signal_pending, 1); + return qemu_atomic_xchg(&ts->signal_pending, 1); } =20 /* Wrapper for sigprocmask function @@ -688,7 +688,7 @@ int queue_signal(CPUArchState *env, int sig, int si_typ= e, ts->sync_signal.info =3D *info; ts->sync_signal.pending =3D sig; /* signal that a new signal is pending */ - atomic_set(&ts->signal_pending, 1); + qemu_atomic_set(&ts->signal_pending, 1); return 1; /* indicates that the signal was queued */ } =20 @@ -1005,7 +1005,7 @@ void process_pending_signals(CPUArchState *cpu_env) sigset_t set; sigset_t *blocked_set; =20 - while (atomic_read(&ts->signal_pending)) { + while (qemu_atomic_read(&ts->signal_pending)) { /* FIXME: This is not threadsafe. */ sigfillset(&set); sigprocmask(SIG_SETMASK, &set, 0); @@ -1049,7 +1049,7 @@ void process_pending_signals(CPUArchState *cpu_env) * of unblocking might cause us to take another host signal which * will set signal_pending again). */ - atomic_set(&ts->signal_pending, 0); + qemu_atomic_set(&ts->signal_pending, 0); ts->in_sigsuspend =3D 0; set =3D ts->signal_mask; sigdelset(&set, SIGSEGV); diff --git a/migration/colo-failover.c b/migration/colo-failover.c index e9ca0b4774..b961d251fa 100644 --- a/migration/colo-failover.c +++ b/migration/colo-failover.c @@ -63,7 +63,7 @@ FailoverStatus failover_set_state(FailoverStatus old_stat= e, { FailoverStatus old; =20 - old =3D atomic_cmpxchg(&failover_state, old_state, new_state); + old =3D qemu_atomic_cmpxchg(&failover_state, old_state, new_state); if (old =3D=3D old_state) { trace_colo_failover_set_state(FailoverStatus_str(new_state)); } @@ -72,7 +72,7 @@ FailoverStatus failover_set_state(FailoverStatus old_stat= e, =20 FailoverStatus failover_get_state(void) { - return atomic_read(&failover_state); + return qemu_atomic_read(&failover_state); } =20 void qmp_x_colo_lost_heartbeat(Error **errp) diff --git a/migration/migration.c b/migration/migration.c index 58a5452471..044e7c1fe7 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1595,7 +1595,7 @@ void qmp_migrate_start_postcopy(Error **errp) * we don't error if migration has finished since that would be racy * with issuing this command. 
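failover_set_state() above, and the migrate_set_state() hunk that follows, both gate a state change on a compare-and-swap: the transition happens only if the state still holds the value the caller expected, so duplicate or racing requests simply lose the race and change nothing. A hedged standalone C11 sketch with made-up state names:

    #include <stdatomic.h>
    #include <stdio.h>

    enum { ST_NONE, ST_ACTIVE, ST_COMPLETED, ST_FAILED };

    static atomic_int mig_state = ST_NONE;

    /* Move to new_state only if we are still in old_state. */
    static int set_state(int old_state, int new_state)
    {
        int expected = old_state;

        if (atomic_compare_exchange_strong(&mig_state, &expected, new_state)) {
            printf("state %d -> %d\n", old_state, new_state);
            return 1;
        }
        return 0;   /* somebody else changed the state first */
    }

    int main(void)
    {
        set_state(ST_NONE, ST_ACTIVE);       /* succeeds */
        set_state(ST_NONE, ST_FAILED);       /* lost the race, no effect */
        set_state(ST_ACTIVE, ST_COMPLETED);  /* succeeds */
        return 0;
    }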
*/ - atomic_set(&s->start_postcopy, true); + qemu_atomic_set(&s->start_postcopy, true); } =20 /* shared migration helpers */ @@ -1603,7 +1603,7 @@ void qmp_migrate_start_postcopy(Error **errp) void migrate_set_state(int *state, int old_state, int new_state) { assert(new_state < MIGRATION_STATUS__MAX); - if (atomic_cmpxchg(state, old_state, new_state) =3D=3D old_state) { + if (qemu_atomic_cmpxchg(state, old_state, new_state) =3D=3D old_state)= { trace_migrate_set_state(MigrationStatus_str(new_state)); migrate_generate_event(new_state); } @@ -1954,7 +1954,7 @@ void qmp_migrate_recover(const char *uri, Error **err= p) return; } =20 - if (atomic_cmpxchg(&mis->postcopy_recover_triggered, + if (qemu_atomic_cmpxchg(&mis->postcopy_recover_triggered, false, true) =3D=3D true) { error_setg(errp, "Migrate recovery is triggered already"); return; @@ -3329,7 +3329,7 @@ static MigIterateState migration_iteration_run(Migrat= ionState *s) if (pending_size && pending_size >=3D s->threshold_size) { /* Still a significant amount to transfer */ if (!in_postcopy && pend_pre <=3D s->threshold_size && - atomic_read(&s->start_postcopy)) { + qemu_atomic_read(&s->start_postcopy)) { if (postcopy_start(s)) { error_report("%s: postcopy failed to start", __func__); } diff --git a/migration/multifd.c b/migration/multifd.c index d0441202aa..94c9679262 100644 --- a/migration/multifd.c +++ b/migration/multifd.c @@ -410,7 +410,7 @@ static int multifd_send_pages(QEMUFile *f) MultiFDPages_t *pages =3D multifd_send_state->pages; uint64_t transferred; =20 - if (atomic_read(&multifd_send_state->exiting)) { + if (qemu_atomic_read(&multifd_send_state->exiting)) { return -1; } =20 @@ -508,7 +508,7 @@ static void multifd_send_terminate_threads(Error *err) * threads at the same time, we can end calling this function * twice. 
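The multifd_send_terminate_threads() hunk right after this comment guards its teardown with an exchange: whichever caller flips exiting from 0 to 1 does the work, and every later caller sees 1 and returns immediately. A minimal standalone C11 sketch of that run-once idiom (illustrative names; the real work is just a printf):

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int exiting;

    static void terminate_threads(void)
    {
        if (atomic_exchange(&exiting, 1)) {
            return;             /* somebody already started termination */
        }
        puts("signalling worker threads to exit");
    }

    int main(void)
    {
        terminate_threads();    /* does the work */
        terminate_threads();    /* no-op */
        return 0;
    }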
*/ - if (atomic_xchg(&multifd_send_state->exiting, 1)) { + if (qemu_atomic_xchg(&multifd_send_state->exiting, 1)) { return; } =20 @@ -632,7 +632,7 @@ static void *multifd_send_thread(void *opaque) while (true) { qemu_sem_wait(&p->sem); =20 - if (atomic_read(&multifd_send_state->exiting)) { + if (qemu_atomic_read(&multifd_send_state->exiting)) { break; } qemu_mutex_lock(&p->mutex); @@ -760,7 +760,7 @@ int multifd_save_setup(Error **errp) multifd_send_state->params =3D g_new0(MultiFDSendParams, thread_count); multifd_send_state->pages =3D multifd_pages_init(page_count); qemu_sem_init(&multifd_send_state->channels_ready, 0); - atomic_set(&multifd_send_state->exiting, 0); + qemu_atomic_set(&multifd_send_state->exiting, 0); multifd_send_state->ops =3D multifd_ops[migrate_multifd_compression()]; =20 for (i =3D 0; i < thread_count; i++) { @@ -997,7 +997,7 @@ int multifd_load_setup(Error **errp) thread_count =3D migrate_multifd_channels(); multifd_recv_state =3D g_malloc0(sizeof(*multifd_recv_state)); multifd_recv_state->params =3D g_new0(MultiFDRecvParams, thread_count); - atomic_set(&multifd_recv_state->count, 0); + qemu_atomic_set(&multifd_recv_state->count, 0); qemu_sem_init(&multifd_recv_state->sem_sync, 0); multifd_recv_state->ops =3D multifd_ops[migrate_multifd_compression()]; =20 @@ -1037,7 +1037,7 @@ bool multifd_recv_all_channels_created(void) return true; } =20 - return thread_count =3D=3D atomic_read(&multifd_recv_state->count); + return thread_count =3D=3D qemu_atomic_read(&multifd_recv_state->count= ); } =20 /* @@ -1058,7 +1058,7 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error = **errp) error_propagate_prepend(errp, local_err, "failed to receive packet" " via multifd channel %d: ", - atomic_read(&multifd_recv_state->count)); + qemu_atomic_read(&multifd_recv_state->coun= t)); return false; } trace_multifd_recv_new_channel(id); @@ -1079,7 +1079,7 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error = **errp) p->running =3D true; qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p, QEMU_THREAD_JOINABLE); - atomic_inc(&multifd_recv_state->count); - return atomic_read(&multifd_recv_state->count) =3D=3D + qemu_atomic_inc(&multifd_recv_state->count); + return qemu_atomic_read(&multifd_recv_state->count) =3D=3D migrate_multifd_channels(); } diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c index 1bb22f2b6c..08ca265d5a 100644 --- a/migration/postcopy-ram.c +++ b/migration/postcopy-ram.c @@ -530,7 +530,7 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingStat= e *mis) Error *local_err =3D NULL; =20 /* Let the fault thread quit */ - atomic_set(&mis->fault_thread_quit, 1); + qemu_atomic_set(&mis->fault_thread_quit, 1); postcopy_fault_thread_notify(mis); trace_postcopy_ram_incoming_cleanup_join(); qemu_thread_join(&mis->fault_thread); @@ -742,12 +742,12 @@ static void mark_postcopy_blocktime_begin(uintptr_t a= ddr, uint32_t ptid, =20 low_time_offset =3D get_low_time_offset(dc); if (dc->vcpu_addr[cpu] =3D=3D 0) { - atomic_inc(&dc->smp_cpus_down); + qemu_atomic_inc(&dc->smp_cpus_down); } =20 - atomic_xchg(&dc->last_begin, low_time_offset); - atomic_xchg(&dc->page_fault_vcpu_time[cpu], low_time_offset); - atomic_xchg(&dc->vcpu_addr[cpu], addr); + qemu_atomic_xchg(&dc->last_begin, low_time_offset); + qemu_atomic_xchg(&dc->page_fault_vcpu_time[cpu], low_time_offset); + qemu_atomic_xchg(&dc->vcpu_addr[cpu], addr); =20 /* * check it here, not at the beginning of the function, @@ -756,9 +756,9 @@ static void mark_postcopy_blocktime_begin(uintptr_t add= r, uint32_t 
ptid, */ already_received =3D ramblock_recv_bitmap_test(rb, (void *)addr); if (already_received) { - atomic_xchg(&dc->vcpu_addr[cpu], 0); - atomic_xchg(&dc->page_fault_vcpu_time[cpu], 0); - atomic_dec(&dc->smp_cpus_down); + qemu_atomic_xchg(&dc->vcpu_addr[cpu], 0); + qemu_atomic_xchg(&dc->page_fault_vcpu_time[cpu], 0); + qemu_atomic_dec(&dc->smp_cpus_down); } trace_mark_postcopy_blocktime_begin(addr, dc, dc->page_fault_vcpu_time= [cpu], cpu, already_received); @@ -813,28 +813,29 @@ static void mark_postcopy_blocktime_end(uintptr_t add= r) for (i =3D 0; i < smp_cpus; i++) { uint32_t vcpu_blocktime =3D 0; =20 - read_vcpu_time =3D atomic_fetch_add(&dc->page_fault_vcpu_time[i], = 0); - if (atomic_fetch_add(&dc->vcpu_addr[i], 0) !=3D addr || + read_vcpu_time =3D + qemu_atomic_fetch_add(&dc->page_fault_vcpu_time[i], 0); + if (qemu_atomic_fetch_add(&dc->vcpu_addr[i], 0) !=3D addr || read_vcpu_time =3D=3D 0) { continue; } - atomic_xchg(&dc->vcpu_addr[i], 0); + qemu_atomic_xchg(&dc->vcpu_addr[i], 0); vcpu_blocktime =3D low_time_offset - read_vcpu_time; affected_cpu +=3D 1; /* we need to know is that mark_postcopy_end was due to * faulted page, another possible case it's prefetched * page and in that case we shouldn't be here */ if (!vcpu_total_blocktime && - atomic_fetch_add(&dc->smp_cpus_down, 0) =3D=3D smp_cpus) { + qemu_atomic_fetch_add(&dc->smp_cpus_down, 0) =3D=3D smp_cpus) { vcpu_total_blocktime =3D true; } /* continue cycle, due to one page could affect several vCPUs */ dc->vcpu_blocktime[i] +=3D vcpu_blocktime; } =20 - atomic_sub(&dc->smp_cpus_down, affected_cpu); + qemu_atomic_sub(&dc->smp_cpus_down, affected_cpu); if (vcpu_total_blocktime) { - dc->total_blocktime +=3D low_time_offset - atomic_fetch_add( + dc->total_blocktime +=3D low_time_offset - qemu_atomic_fetch_add( &dc->last_begin, 0); } trace_mark_postcopy_blocktime_end(addr, dc, dc->total_blocktime, @@ -928,7 +929,7 @@ static void *postcopy_ram_fault_thread(void *opaque) error_report("%s: read() failed", __func__); } =20 - if (atomic_read(&mis->fault_thread_quit)) { + if (qemu_atomic_read(&mis->fault_thread_quit)) { trace_postcopy_ram_fault_thread_quit(); break; } @@ -1410,13 +1411,13 @@ static PostcopyState incoming_postcopy_state; =20 PostcopyState postcopy_state_get(void) { - return atomic_mb_read(&incoming_postcopy_state); + return qemu_atomic_mb_read(&incoming_postcopy_state); } =20 /* Set the state and return the old state */ PostcopyState postcopy_state_set(PostcopyState new_state) { - return atomic_xchg(&incoming_postcopy_state, new_state); + return qemu_atomic_xchg(&incoming_postcopy_state, new_state); } =20 /* Register a handler for external shared memory postcopy diff --git a/migration/rdma.c b/migration/rdma.c index 1dc563ec3f..c4a380348d 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -2680,7 +2680,7 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *io= c, size_t len =3D 0; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmaout); =20 if (!rdma) { return -EIO; @@ -2762,7 +2762,7 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc, size_t done =3D 0; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { return -EIO; @@ -2877,9 +2877,9 @@ qio_channel_rdma_source_prepare(GSource *source, =20 RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmain); } 
else { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmaout); } =20 if (!rdma) { @@ -2904,9 +2904,9 @@ qio_channel_rdma_source_check(GSource *source) =20 RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmain); } else { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmaout); } =20 if (!rdma) { @@ -2934,9 +2934,9 @@ qio_channel_rdma_source_dispatch(GSource *source, =20 RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmain); } else { - rdma =3D atomic_rcu_read(&rsource->rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rsource->rioc->rdmaout); } =20 if (!rdma) { @@ -3037,12 +3037,12 @@ static int qio_channel_rdma_close(QIOChannel *ioc, =20 rdmain =3D rioc->rdmain; if (rdmain) { - atomic_rcu_set(&rioc->rdmain, NULL); + qemu_atomic_rcu_set(&rioc->rdmain, NULL); } =20 rdmaout =3D rioc->rdmaout; if (rdmaout) { - atomic_rcu_set(&rioc->rdmaout, NULL); + qemu_atomic_rcu_set(&rioc->rdmaout, NULL); } =20 rcu->rdmain =3D rdmain; @@ -3062,8 +3062,8 @@ qio_channel_rdma_shutdown(QIOChannel *ioc, =20 RCU_READ_LOCK_GUARD(); =20 - rdmain =3D atomic_rcu_read(&rioc->rdmain); - rdmaout =3D atomic_rcu_read(&rioc->rdmain); + rdmain =3D qemu_atomic_rcu_read(&rioc->rdmain); + rdmaout =3D qemu_atomic_rcu_read(&rioc->rdmain); =20 switch (how) { case QIO_CHANNEL_SHUTDOWN_READ: @@ -3133,7 +3133,7 @@ static size_t qemu_rdma_save_page(QEMUFile *f, void *= opaque, int ret; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmaout); =20 if (!rdma) { return -EIO; @@ -3453,7 +3453,7 @@ static int qemu_rdma_registration_handle(QEMUFile *f,= void *opaque) int i =3D 0; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { return -EIO; @@ -3716,7 +3716,7 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, = const char *name) int found =3D -1; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmain); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { return -EIO; @@ -3764,7 +3764,7 @@ static int qemu_rdma_registration_start(QEMUFile *f, = void *opaque, RDMAContext *rdma; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmaout); if (!rdma) { return -EIO; } @@ -3795,7 +3795,7 @@ static int qemu_rdma_registration_stop(QEMUFile *f, v= oid *opaque, int ret =3D 0; =20 RCU_READ_LOCK_GUARD(); - rdma =3D atomic_rcu_read(&rioc->rdmaout); + rdma =3D qemu_atomic_rcu_read(&rioc->rdmaout); if (!rdma) { return -EIO; } diff --git a/monitor/hmp.c b/monitor/hmp.c index d598dd02bb..a42653e573 100644 --- a/monitor/hmp.c +++ b/monitor/hmp.c @@ -1337,19 +1337,19 @@ static void monitor_event(void *opaque, QEMUChrEven= t event) monitor_resume(mon); monitor_flush(mon); } else { - atomic_mb_set(&mon->suspend_cnt, 0); + qemu_atomic_mb_set(&mon->suspend_cnt, 0); } break; =20 case CHR_EVENT_MUX_OUT: if (mon->reset_seen) { - if (atomic_mb_read(&mon->suspend_cnt) =3D=3D 0) { + if (qemu_atomic_mb_read(&mon->suspend_cnt) =3D=3D 0) { monitor_printf(mon, "\n"); } monitor_flush(mon); monitor_suspend(mon); } else { - atomic_inc(&mon->suspend_cnt); + qemu_atomic_inc(&mon->suspend_cnt); } 
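The suspend_cnt changes in monitor/hmp.c above and monitor/monitor.c just below form a nesting counter: every monitor_suspend() increments it, monitor_resume() re-enables input only when its decrement brings the count back to zero, and monitor_can_read() merely tests for zero. One wrinkle: qemu_atomic_dec_fetch() evidently returns the new value, while plain C11 atomic_fetch_sub() returns the old one, hence the == 1 test in this illustrative sketch:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int suspend_cnt;

    static void suspend(void)
    {
        atomic_fetch_add(&suspend_cnt, 1);
    }

    static void resume(void)
    {
        /* Only the call that drops the count to zero re-enables input. */
        if (atomic_fetch_sub(&suspend_cnt, 1) == 1) {
            puts("input re-enabled");
        }
    }

    static bool can_read(void)
    {
        return atomic_load(&suspend_cnt) == 0;
    }

    int main(void)
    {
        suspend();
        suspend();
        resume();                               /* still suspended */
        printf("can_read: %d\n", can_read());
        resume();                               /* re-enables input */
        printf("can_read: %d\n", can_read());
        return 0;
    }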
qemu_mutex_lock(&mon->mon_lock); mon->mux_out =3D 1; diff --git a/monitor/misc.c b/monitor/misc.c index 0b1b9b196c..c052b06f12 100644 --- a/monitor/misc.c +++ b/monitor/misc.c @@ -751,7 +751,7 @@ static uint64_t vtop(void *ptr, Error **errp) } =20 /* Force copy-on-write if necessary. */ - atomic_add((uint8_t *)ptr, 0); + qemu_atomic_add((uint8_t *)ptr, 0); =20 if (pread(fd, &pinfo, sizeof(pinfo), offset) !=3D sizeof(pinfo)) { error_setg_errno(errp, errno, "Cannot read pagemap"); diff --git a/monitor/monitor.c b/monitor/monitor.c index b385a3d569..71fa9bea6e 100644 --- a/monitor/monitor.c +++ b/monitor/monitor.c @@ -449,7 +449,7 @@ int monitor_suspend(Monitor *mon) return -ENOTTY; } =20 - atomic_inc(&mon->suspend_cnt); + qemu_atomic_inc(&mon->suspend_cnt); =20 if (mon->use_io_thread) { /* @@ -476,7 +476,7 @@ void monitor_resume(Monitor *mon) return; } =20 - if (atomic_dec_fetch(&mon->suspend_cnt) =3D=3D 0) { + if (qemu_atomic_dec_fetch(&mon->suspend_cnt) =3D=3D 0) { AioContext *ctx; =20 if (mon->use_io_thread) { @@ -501,7 +501,7 @@ int monitor_can_read(void *opaque) { Monitor *mon =3D opaque; =20 - return !atomic_mb_read(&mon->suspend_cnt); + return !qemu_atomic_mb_read(&mon->suspend_cnt); } =20 void monitor_list_append(Monitor *mon) diff --git a/qemu-nbd.c b/qemu-nbd.c index 33476a1000..b63a75c3f8 100644 --- a/qemu-nbd.c +++ b/qemu-nbd.c @@ -158,7 +158,7 @@ QEMU_COPYRIGHT "\n" #if HAVE_NBD_DEVICE static void termsig_handler(int signum) { - atomic_cmpxchg(&state, RUNNING, TERMINATE); + qemu_atomic_cmpxchg(&state, RUNNING, TERMINATE); qemu_notify_event(); } #endif /* HAVE_NBD_DEVICE */ diff --git a/qga/commands.c b/qga/commands.c index d3fec807c1..83f3a54d15 100644 --- a/qga/commands.c +++ b/qga/commands.c @@ -166,13 +166,13 @@ GuestExecStatus *qmp_guest_exec_status(int64_t pid, E= rror **errp) =20 ges =3D g_new0(GuestExecStatus, 1); =20 - bool finished =3D atomic_mb_read(&gei->finished); + bool finished =3D qemu_atomic_mb_read(&gei->finished); =20 /* need to wait till output channels are closed * to be sure we captured all output at this point */ if (gei->has_output) { - finished =3D finished && atomic_mb_read(&gei->out.closed); - finished =3D finished && atomic_mb_read(&gei->err.closed); + finished =3D finished && qemu_atomic_mb_read(&gei->out.closed); + finished =3D finished && qemu_atomic_mb_read(&gei->err.closed); } =20 ges->exited =3D finished; @@ -274,7 +274,7 @@ static void guest_exec_child_watch(GPid pid, gint statu= s, gpointer data) (int32_t)gpid_to_int64(pid), (uint32_t)status); =20 gei->status =3D status; - atomic_mb_set(&gei->finished, true); + qemu_atomic_mb_set(&gei->finished, true); =20 g_spawn_close_pid(pid); } @@ -330,7 +330,7 @@ static gboolean guest_exec_input_watch(GIOChannel *ch, done: g_io_channel_shutdown(ch, true, NULL); g_io_channel_unref(ch); - atomic_mb_set(&p->closed, true); + qemu_atomic_mb_set(&p->closed, true); g_free(p->data); =20 return false; @@ -384,7 +384,7 @@ static gboolean guest_exec_output_watch(GIOChannel *ch, close: g_io_channel_shutdown(ch, true, NULL); g_io_channel_unref(ch); - atomic_mb_set(&p->closed, true); + qemu_atomic_mb_set(&p->closed, true); return false; } =20 diff --git a/qom/object.c b/qom/object.c index 387efb25eb..36c1ff14c8 100644 --- a/qom/object.c +++ b/qom/object.c @@ -837,7 +837,7 @@ Object *object_dynamic_cast_assert(Object *obj, const c= har *typename, Object *inst; =20 for (i =3D 0; obj && i < OBJECT_CLASS_CAST_CACHE; i++) { - if (atomic_read(&obj->class->object_cast_cache[i]) =3D=3D typename= ) { + if 
(qemu_atomic_read(&obj->class->object_cast_cache[i]) =3D=3D typ= ename) { goto out; } } @@ -854,10 +854,10 @@ Object *object_dynamic_cast_assert(Object *obj, const= char *typename, =20 if (obj && obj =3D=3D inst) { for (i =3D 1; i < OBJECT_CLASS_CAST_CACHE; i++) { - atomic_set(&obj->class->object_cast_cache[i - 1], - atomic_read(&obj->class->object_cast_cache[i])); + qemu_atomic_set(&obj->class->object_cast_cache[i - 1], + qemu_atomic_read(&obj->class->object_cast_cache[i])= ); } - atomic_set(&obj->class->object_cast_cache[i - 1], typename); + qemu_atomic_set(&obj->class->object_cast_cache[i - 1], typename); } =20 out: @@ -927,7 +927,7 @@ ObjectClass *object_class_dynamic_cast_assert(ObjectCla= ss *class, int i; =20 for (i =3D 0; class && i < OBJECT_CLASS_CAST_CACHE; i++) { - if (atomic_read(&class->class_cast_cache[i]) =3D=3D typename) { + if (qemu_atomic_read(&class->class_cast_cache[i]) =3D=3D typename)= { ret =3D class; goto out; } @@ -948,10 +948,10 @@ ObjectClass *object_class_dynamic_cast_assert(ObjectC= lass *class, #ifdef CONFIG_QOM_CAST_DEBUG if (class && ret =3D=3D class) { for (i =3D 1; i < OBJECT_CLASS_CAST_CACHE; i++) { - atomic_set(&class->class_cast_cache[i - 1], - atomic_read(&class->class_cast_cache[i])); + qemu_atomic_set(&class->class_cast_cache[i - 1], + qemu_atomic_read(&class->class_cast_cache[i])); } - atomic_set(&class->class_cast_cache[i - 1], typename); + qemu_atomic_set(&class->class_cast_cache[i - 1], typename); } out: #endif @@ -1136,7 +1136,7 @@ Object *object_ref(void *objptr) if (!obj) { return NULL; } - atomic_inc(&obj->ref); + qemu_atomic_inc(&obj->ref); return obj; } =20 @@ -1149,7 +1149,7 @@ void object_unref(void *objptr) g_assert(obj->ref > 0); =20 /* parent always holds a reference to its children */ - if (atomic_fetch_dec(&obj->ref) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&obj->ref) =3D=3D 1) { object_finalize(obj); } } diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c index 57ad830d54..95ebe892ff 100644 --- a/scsi/qemu-pr-helper.c +++ b/scsi/qemu-pr-helper.c @@ -747,7 +747,7 @@ static void coroutine_fn prh_co_entry(void *opaque) goto out; } =20 - while (atomic_read(&state) =3D=3D RUNNING) { + while (qemu_atomic_read(&state) =3D=3D RUNNING) { PRHelperRequest req; PRHelperResponse resp; int sz; @@ -816,7 +816,7 @@ static gboolean accept_client(QIOChannel *ioc, GIOCondi= tion cond, gpointer opaqu =20 static void termsig_handler(int signum) { - atomic_cmpxchg(&state, RUNNING, TERMINATE); + qemu_atomic_cmpxchg(&state, RUNNING, TERMINATE); qemu_notify_event(); } =20 diff --git a/softmmu/cpu-throttle.c b/softmmu/cpu-throttle.c index 4e6b2818ca..7b7dbcc9dc 100644 --- a/softmmu/cpu-throttle.c +++ b/softmmu/cpu-throttle.c @@ -64,7 +64,7 @@ static void cpu_throttle_thread(CPUState *cpu, run_on_cpu= _data opaque) } sleeptime_ns =3D endtime_ns - qemu_clock_get_ns(QEMU_CLOCK_REALTIM= E); } - atomic_set(&cpu->throttle_thread_scheduled, 0); + qemu_atomic_set(&cpu->throttle_thread_scheduled, 0); } =20 static void cpu_throttle_timer_tick(void *opaque) @@ -77,7 +77,7 @@ static void cpu_throttle_timer_tick(void *opaque) return; } CPU_FOREACH(cpu) { - if (!atomic_xchg(&cpu->throttle_thread_scheduled, 1)) { + if (!qemu_atomic_xchg(&cpu->throttle_thread_scheduled, 1)) { async_run_on_cpu(cpu, cpu_throttle_thread, RUN_ON_CPU_NULL); } @@ -94,7 +94,7 @@ void cpu_throttle_set(int new_throttle_pct) new_throttle_pct =3D MIN(new_throttle_pct, CPU_THROTTLE_PCT_MAX); new_throttle_pct =3D MAX(new_throttle_pct, CPU_THROTTLE_PCT_MIN); =20 - atomic_set(&throttle_percentage, 
new_throttle_pct); + qemu_atomic_set(&throttle_percentage, new_throttle_pct); =20 timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) + CPU_THROTTLE_TIMESLICE_NS); @@ -102,7 +102,7 @@ void cpu_throttle_set(int new_throttle_pct) =20 void cpu_throttle_stop(void) { - atomic_set(&throttle_percentage, 0); + qemu_atomic_set(&throttle_percentage, 0); } =20 bool cpu_throttle_active(void) @@ -112,7 +112,7 @@ bool cpu_throttle_active(void) =20 int cpu_throttle_get_percentage(void) { - return atomic_read(&throttle_percentage); + return qemu_atomic_read(&throttle_percentage); } =20 void cpu_throttle_init(void) diff --git a/softmmu/cpus.c b/softmmu/cpus.c index e3b98065c9..94fcadea4a 100644 --- a/softmmu/cpus.c +++ b/softmmu/cpus.c @@ -192,7 +192,7 @@ static void cpu_update_icount_locked(CPUState *cpu) int64_t executed =3D cpu_get_icount_executed(cpu); cpu->icount_budget -=3D executed; =20 - atomic_set_i64(&timers_state.qemu_icount, + qemu_atomic_set_i64(&timers_state.qemu_icount, timers_state.qemu_icount + executed); } =20 @@ -223,13 +223,13 @@ static int64_t cpu_get_icount_raw_locked(void) cpu_update_icount_locked(cpu); } /* The read is protected by the seqlock, but needs atomic64 to avoid U= B */ - return atomic_read_i64(&timers_state.qemu_icount); + return qemu_atomic_read_i64(&timers_state.qemu_icount); } =20 static int64_t cpu_get_icount_locked(void) { int64_t icount =3D cpu_get_icount_raw_locked(); - return atomic_read_i64(&timers_state.qemu_icount_bias) + + return qemu_atomic_read_i64(&timers_state.qemu_icount_bias) + cpu_icount_to_ns(icount); } =20 @@ -262,7 +262,7 @@ int64_t cpu_get_icount(void) =20 int64_t cpu_icount_to_ns(int64_t icount) { - return icount << atomic_read(&timers_state.icount_time_shift); + return icount << qemu_atomic_read(&timers_state.icount_time_shift); } =20 static int64_t cpu_get_ticks_locked(void) @@ -393,18 +393,18 @@ static void icount_adjust(void) && last_delta + ICOUNT_WOBBLE < delta * 2 && timers_state.icount_time_shift > 0) { /* The guest is getting too far ahead. Slow time down. */ - atomic_set(&timers_state.icount_time_shift, + qemu_atomic_set(&timers_state.icount_time_shift, timers_state.icount_time_shift - 1); } if (delta < 0 && last_delta - ICOUNT_WOBBLE > delta * 2 && timers_state.icount_time_shift < MAX_ICOUNT_SHIFT) { /* The guest is getting too far behind. Speed time up. 
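The icount hunks above (cpu_update_icount_locked(), cpu_get_icount_raw_locked() and friends) use the _i64 helpers because these 64-bit counters are guarded by the vm_clock seqlock rather than a mutex; as the in-tree comment says, a seqlock reader can overlap a concurrent write, so the access must still be atomic to avoid undefined behaviour and, on 32-bit hosts, a torn value. A standalone C11 sketch of the same idea using relaxed atomics (illustrative names; the seqlock itself is not modelled):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic int64_t icount_bias;

    /* Writer, assumed to hold the lock that orders updates. */
    static void add_bias(int64_t delta)
    {
        int64_t cur = atomic_load_explicit(&icount_bias, memory_order_relaxed);
        atomic_store_explicit(&icount_bias, cur + delta, memory_order_relaxed);
    }

    /* Reader: an atomic load cannot tear and is not a data race. */
    static int64_t get_bias(void)
    {
        return atomic_load_explicit(&icount_bias, memory_order_relaxed);
    }

    int main(void)
    {
        add_bias(1000);
        printf("%lld\n", (long long)get_bias());
        return 0;
    }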
*/ - atomic_set(&timers_state.icount_time_shift, + qemu_atomic_set(&timers_state.icount_time_shift, timers_state.icount_time_shift + 1); } last_delta =3D delta; - atomic_set_i64(&timers_state.qemu_icount_bias, + qemu_atomic_set_i64(&timers_state.qemu_icount_bias, cur_icount - (timers_state.qemu_icount << timers_state.icount_time_shift)); seqlock_write_unlock(&timers_state.vm_clock_seqlock, @@ -428,7 +428,7 @@ static void icount_adjust_vm(void *opaque) =20 static int64_t qemu_icount_round(int64_t count) { - int shift =3D atomic_read(&timers_state.icount_time_shift); + int shift =3D qemu_atomic_read(&timers_state.icount_time_shift); return (count + (1 << shift) - 1) >> shift; } =20 @@ -466,7 +466,7 @@ static void icount_warp_rt(void) int64_t delta =3D clock - cur_icount; warp_delta =3D MIN(warp_delta, delta); } - atomic_set_i64(&timers_state.qemu_icount_bias, + qemu_atomic_set_i64(&timers_state.qemu_icount_bias, timers_state.qemu_icount_bias + warp_delta); } timers_state.vm_clock_warp_start =3D -1; @@ -499,7 +499,7 @@ void qtest_clock_warp(int64_t dest) =20 seqlock_write_lock(&timers_state.vm_clock_seqlock, &timers_state.vm_clock_lock); - atomic_set_i64(&timers_state.qemu_icount_bias, + qemu_atomic_set_i64(&timers_state.qemu_icount_bias, timers_state.qemu_icount_bias + warp); seqlock_write_unlock(&timers_state.vm_clock_seqlock, &timers_state.vm_clock_lock); @@ -583,7 +583,7 @@ void qemu_start_warp_timer(void) */ seqlock_write_lock(&timers_state.vm_clock_seqlock, &timers_state.vm_clock_lock); - atomic_set_i64(&timers_state.qemu_icount_bias, + qemu_atomic_set_i64(&timers_state.qemu_icount_bias, timers_state.qemu_icount_bias + deadline); seqlock_write_unlock(&timers_state.vm_clock_seqlock, &timers_state.vm_clock_lock); @@ -837,11 +837,11 @@ static void qemu_cpu_kick_rr_next_cpu(void) { CPUState *cpu; do { - cpu =3D atomic_mb_read(&tcg_current_rr_cpu); + cpu =3D qemu_atomic_mb_read(&tcg_current_rr_cpu); if (cpu) { cpu_exit(cpu); } - } while (cpu !=3D atomic_mb_read(&tcg_current_rr_cpu)); + } while (cpu !=3D qemu_atomic_mb_read(&tcg_current_rr_cpu)); } =20 /* Kick all RR vCPUs */ @@ -1110,7 +1110,7 @@ static void qemu_cpu_stop(CPUState *cpu, bool exit) =20 static void qemu_wait_io_event_common(CPUState *cpu) { - atomic_mb_set(&cpu->thread_kicked, false); + qemu_atomic_mb_set(&cpu->thread_kicked, false); if (cpu->stop) { qemu_cpu_stop(cpu, false); } @@ -1356,7 +1356,7 @@ static int tcg_cpu_exec(CPUState *cpu) ret =3D cpu_exec(cpu); cpu_exec_end(cpu); #ifdef CONFIG_PROFILER - atomic_set(&tcg_ctx->prof.cpu_exec_time, + qemu_atomic_set(&tcg_ctx->prof.cpu_exec_time, tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti); #endif return ret; @@ -1443,7 +1443,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg) =20 while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) { =20 - atomic_mb_set(&tcg_current_rr_cpu, cpu); + qemu_atomic_mb_set(&tcg_current_rr_cpu, cpu); current_cpu =3D cpu; =20 qemu_clock_enable(QEMU_CLOCK_VIRTUAL, @@ -1479,11 +1479,11 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg) cpu =3D CPU_NEXT(cpu); } /* while (cpu && !cpu->exit_request).. */ =20 - /* Does not need atomic_mb_set because a spurious wakeup is okay. 
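qemu_cpu_kick_rr_next_cpu() above re-reads the published pointer after kicking: if another vCPU took over the round-robin slot while the kick was in flight, the loop runs again so the new owner is not missed. A standalone C11 sketch of that retry loop, with an illustrative VCpu type and sequentially consistent atomics standing in for the mb_read/mb_set barriers:

    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct VCpu { int index; } VCpu;

    static _Atomic(VCpu *) current_rr_cpu;

    static void kick(VCpu *cpu)
    {
        printf("kick vcpu %d\n", cpu->index);
    }

    /* Kick whoever currently owns the round-robin slot; retry if the
     * owner changed while we were kicking. */
    static void kick_rr_next_cpu(void)
    {
        VCpu *cpu;

        do {
            cpu = atomic_load(&current_rr_cpu);
            if (cpu) {
                kick(cpu);
            }
        } while (cpu != atomic_load(&current_rr_cpu));
    }

    int main(void)
    {
        static VCpu v0 = { 0 };

        atomic_store(&current_rr_cpu, &v0);
        kick_rr_next_cpu();
        return 0;
    }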
= */ - atomic_set(&tcg_current_rr_cpu, NULL); + /* Doesn't need qemu_atomic_mb_set because a spurious wakeup is ok= ay */ + qemu_atomic_set(&tcg_current_rr_cpu, NULL); =20 if (cpu && cpu->exit_request) { - atomic_mb_set(&cpu->exit_request, 0); + qemu_atomic_mb_set(&cpu->exit_request, 0); } =20 if (use_icount && all_cpu_threads_idle()) { @@ -1687,7 +1687,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg) } } =20 - atomic_mb_set(&cpu->exit_request, 0); + qemu_atomic_mb_set(&cpu->exit_request, 0); qemu_wait_io_event(cpu); } while (!cpu->unplug || cpu_can_run(cpu)); =20 @@ -1776,7 +1776,7 @@ bool qemu_mutex_iothread_locked(void) */ void qemu_mutex_lock_iothread_impl(const char *file, int line) { - QemuMutexLockFunc bql_lock =3D atomic_read(&qemu_bql_mutex_lock_func); + QemuMutexLockFunc bql_lock =3D qemu_atomic_read(&qemu_bql_mutex_lock_f= unc); =20 g_assert(!qemu_mutex_iothread_locked()); bql_lock(&qemu_global_mutex, file, line); diff --git a/softmmu/memory.c b/softmmu/memory.c index d030eb6f7c..cabbd4ea3a 100644 --- a/softmmu/memory.c +++ b/softmmu/memory.c @@ -294,12 +294,12 @@ static void flatview_destroy(FlatView *view) =20 static bool flatview_ref(FlatView *view) { - return atomic_fetch_inc_nonzero(&view->ref) > 0; + return qemu_atomic_fetch_inc_nonzero(&view->ref) > 0; } =20 void flatview_unref(FlatView *view) { - if (atomic_fetch_dec(&view->ref) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&view->ref) =3D=3D 1) { trace_flatview_destroy_rcu(view, view->root); assert(view->root); call_rcu(view, flatview_destroy, rcu); @@ -1027,7 +1027,7 @@ static void address_space_set_flatview(AddressSpace *= as) } =20 /* Writes are protected by the BQL. */ - atomic_rcu_set(&as->current_map, new_view); + qemu_atomic_rcu_set(&as->current_map, new_view); if (old_view) { flatview_unref(old_view); } diff --git a/softmmu/vl.c b/softmmu/vl.c index f7b103467c..36ac55cb91 100644 --- a/softmmu/vl.c +++ b/softmmu/vl.c @@ -1320,7 +1320,7 @@ ShutdownCause qemu_reset_requested_get(void) =20 static int qemu_shutdown_requested(void) { - return atomic_xchg(&shutdown_requested, SHUTDOWN_CAUSE_NONE); + return qemu_atomic_xchg(&shutdown_requested, SHUTDOWN_CAUSE_NONE); } =20 static void qemu_kill_report(void) diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c index 891306f5b0..e1adf5ae33 100644 --- a/target/arm/mte_helper.c +++ b/target/arm/mte_helper.c @@ -313,11 +313,11 @@ static void store_tag1(uint64_t ptr, uint8_t *mem, in= t tag) static void store_tag1_parallel(uint64_t ptr, uint8_t *mem, int tag) { int ofs =3D extract32(ptr, LOG2_TAG_GRANULE, 1) * 4; - uint8_t old =3D atomic_read(mem); + uint8_t old =3D qemu_atomic_read(mem); =20 while (1) { uint8_t new =3D deposit32(old, ofs, 4, tag); - uint8_t cmp =3D atomic_cmpxchg(mem, old, new); + uint8_t cmp =3D qemu_atomic_cmpxchg(mem, old, new); if (likely(cmp =3D=3D old)) { return; } @@ -398,7 +398,7 @@ static inline void do_st2g(CPUARMState *env, uint64_t p= tr, uint64_t xt, 2 * TAG_GRANULE, MMU_DATA_STORE, 1, ra); if (mem1) { tag |=3D tag << 4; - atomic_set(mem1, tag); + qemu_atomic_set(mem1, tag); } } } diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c index 5685e303ab..ba33cba27c 100644 --- a/target/hppa/op_helper.c +++ b/target/hppa/op_helper.c @@ -67,7 +67,7 @@ static void atomic_store_3(CPUHPPAState *env, target_ulon= g addr, uint32_t val, old =3D *haddr; while (1) { new =3D (old & ~mask) | (val & mask); - cmp =3D atomic_cmpxchg(haddr, old, new); + cmp =3D qemu_atomic_cmpxchg(haddr, old, new); if (cmp =3D=3D old) { return; } diff --git 
a/target/i386/mem_helper.c b/target/i386/mem_helper.c index acf41f8885..da7b4d6d67 100644 --- a/target/i386/mem_helper.c +++ b/target/i386/mem_helper.c @@ -68,7 +68,7 @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0) uint64_t *haddr =3D g2h(a0); cmpv =3D cpu_to_le64(cmpv); newv =3D cpu_to_le64(newv); - oldv =3D atomic_cmpxchg__nocheck(haddr, cmpv, newv); + oldv =3D qemu_atomic_cmpxchg__nocheck(haddr, cmpv, newv); oldv =3D le64_to_cpu(oldv); } #else diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c index c78baac6df..57d56d3831 100644 --- a/target/i386/whpx-all.c +++ b/target/i386/whpx-all.c @@ -946,7 +946,7 @@ static int whpx_vcpu_run(CPUState *cpu) whpx_vcpu_process_async_events(cpu); if (cpu->halted) { cpu->exception_index =3D EXCP_HLT; - atomic_set(&cpu->exit_request, false); + qemu_atomic_set(&cpu->exit_request, false); return 0; } =20 @@ -961,7 +961,7 @@ static int whpx_vcpu_run(CPUState *cpu) =20 whpx_vcpu_pre_run(cpu); =20 - if (atomic_read(&cpu->exit_request)) { + if (qemu_atomic_read(&cpu->exit_request)) { whpx_vcpu_kick(cpu); } =20 @@ -1113,7 +1113,7 @@ static int whpx_vcpu_run(CPUState *cpu) qemu_mutex_lock_iothread(); current_cpu =3D cpu; =20 - atomic_set(&cpu->exit_request, false); + qemu_atomic_set(&cpu->exit_request, false); =20 return ret < 0; } diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c index f4c4111536..41779e3068 100644 --- a/target/riscv/cpu_helper.c +++ b/target/riscv/cpu_helper.c @@ -537,7 +537,7 @@ restart: *pte_pa =3D pte =3D updated_pte; #else target_ulong old_pte =3D - atomic_cmpxchg(pte_pa, pte, updated_pte); + qemu_atomic_cmpxchg(pte_pa, pte, updated_pte); if (old_pte !=3D pte) { goto restart; } else { diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c index a237dec757..26d97dd0c5 100644 --- a/target/s390x/mem_helper.c +++ b/target/s390x/mem_helper.c @@ -1780,7 +1780,7 @@ static uint32_t do_csst(CPUS390XState *env, uint32_t = r3, uint64_t a1, if (parallel) { #ifdef CONFIG_USER_ONLY uint32_t *haddr =3D g2h(a1); - ov =3D atomic_cmpxchg__nocheck(haddr, cv, nv); + ov =3D qemu_atomic_cmpxchg__nocheck(haddr, cv, nv); #else TCGMemOpIdx oi =3D make_memop_idx(MO_TEUL | MO_ALIGN, mem_= idx); ov =3D helper_atomic_cmpxchgl_be_mmu(env, a1, cv, nv, oi, = ra); @@ -1804,7 +1804,7 @@ static uint32_t do_csst(CPUS390XState *env, uint32_t = r3, uint64_t a1, #ifdef CONFIG_ATOMIC64 # ifdef CONFIG_USER_ONLY uint64_t *haddr =3D g2h(a1); - ov =3D atomic_cmpxchg__nocheck(haddr, cv, nv); + ov =3D qemu_atomic_cmpxchg__nocheck(haddr, cv, nv); # else TCGMemOpIdx oi =3D make_memop_idx(MO_TEQ | MO_ALIGN, mem_i= dx); ov =3D helper_atomic_cmpxchgq_be_mmu(env, a1, cv, nv, oi, = ra); diff --git a/target/xtensa/exc_helper.c b/target/xtensa/exc_helper.c index 58a64e6d62..dd3f8226b9 100644 --- a/target/xtensa/exc_helper.c +++ b/target/xtensa/exc_helper.c @@ -128,13 +128,13 @@ void HELPER(check_interrupts)(CPUXtensaState *env) =20 void HELPER(intset)(CPUXtensaState *env, uint32_t v) { - atomic_or(&env->sregs[INTSET], + qemu_atomic_or(&env->sregs[INTSET], v & env->config->inttype_mask[INTTYPE_SOFTWARE]); } =20 static void intclear(CPUXtensaState *env, uint32_t v) { - atomic_and(&env->sregs[INTSET], ~v); + qemu_atomic_and(&env->sregs[INTSET], ~v); } =20 void HELPER(intclear)(CPUXtensaState *env, uint32_t v) diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c index 09f4962d00..8b2b26c622 100644 --- a/target/xtensa/op_helper.c +++ b/target/xtensa/op_helper.c @@ -62,7 +62,7 @@ void HELPER(update_ccompare)(CPUXtensaState *env, 
uint32_= t i) { uint64_t dcc; =20 - atomic_and(&env->sregs[INTSET], + qemu_atomic_and(&env->sregs[INTSET], ~(1u << env->config->timerint[i])); HELPER(update_ccount)(env); dcc =3D (uint64_t)(env->sregs[CCOMPARE + i] - env->sregs[CCOUNT] - 1) = + 1; diff --git a/tcg/tcg.c b/tcg/tcg.c index 62f299e36e..88152ae4df 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -597,7 +597,7 @@ static inline bool tcg_region_initial_alloc__locked(TCG= Context *s) /* Call from a safe-work context */ void tcg_region_reset_all(void) { - unsigned int n_ctxs =3D atomic_read(&n_tcg_ctxs); + unsigned int n_ctxs =3D qemu_atomic_read(&n_tcg_ctxs); unsigned int i; =20 qemu_mutex_lock(®ion.lock); @@ -605,7 +605,7 @@ void tcg_region_reset_all(void) region.agg_size_full =3D 0; =20 for (i =3D 0; i < n_ctxs; i++) { - TCGContext *s =3D atomic_read(&tcg_ctxs[i]); + TCGContext *s =3D qemu_atomic_read(&tcg_ctxs[i]); bool err =3D tcg_region_initial_alloc__locked(s); =20 g_assert(!err); @@ -794,9 +794,9 @@ void tcg_register_thread(void) } =20 /* Claim an entry in tcg_ctxs */ - n =3D atomic_fetch_inc(&n_tcg_ctxs); + n =3D qemu_atomic_fetch_inc(&n_tcg_ctxs); g_assert(n < ms->smp.max_cpus); - atomic_set(&tcg_ctxs[n], s); + qemu_atomic_set(&tcg_ctxs[n], s); =20 if (n > 0) { alloc_tcg_plugin_context(s); @@ -819,17 +819,17 @@ void tcg_register_thread(void) */ size_t tcg_code_size(void) { - unsigned int n_ctxs =3D atomic_read(&n_tcg_ctxs); + unsigned int n_ctxs =3D qemu_atomic_read(&n_tcg_ctxs); unsigned int i; size_t total; =20 qemu_mutex_lock(®ion.lock); total =3D region.agg_size_full; for (i =3D 0; i < n_ctxs; i++) { - const TCGContext *s =3D atomic_read(&tcg_ctxs[i]); + const TCGContext *s =3D qemu_atomic_read(&tcg_ctxs[i]); size_t size; =20 - size =3D atomic_read(&s->code_gen_ptr) - s->code_gen_buffer; + size =3D qemu_atomic_read(&s->code_gen_ptr) - s->code_gen_buffer; g_assert(size <=3D s->code_gen_buffer_size); total +=3D size; } @@ -855,14 +855,14 @@ size_t tcg_code_capacity(void) =20 size_t tcg_tb_phys_invalidate_count(void) { - unsigned int n_ctxs =3D atomic_read(&n_tcg_ctxs); + unsigned int n_ctxs =3D qemu_atomic_read(&n_tcg_ctxs); unsigned int i; size_t total =3D 0; =20 for (i =3D 0; i < n_ctxs; i++) { - const TCGContext *s =3D atomic_read(&tcg_ctxs[i]); + const TCGContext *s =3D qemu_atomic_read(&tcg_ctxs[i]); =20 - total +=3D atomic_read(&s->tb_phys_invalidate_count); + total +=3D qemu_atomic_read(&s->tb_phys_invalidate_count); } return total; } @@ -1041,7 +1041,7 @@ TranslationBlock *tcg_tb_alloc(TCGContext *s) } goto retry; } - atomic_set(&s->code_gen_ptr, next); + qemu_atomic_set(&s->code_gen_ptr, next); s->data_gen_ptr =3D NULL; return tb; } @@ -2134,7 +2134,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_pre= fs) QemuLogFile *logfile; =20 rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { for (; col < 40; ++col) { putc(' ', logfile->fd); @@ -2341,7 +2341,7 @@ void tcg_op_remove(TCGContext *s, TCGOp *op) s->nb_ops--; =20 #ifdef CONFIG_PROFILER - atomic_set(&s->prof.del_op_count, s->prof.del_op_count + 1); + qemu_atomic_set(&s->prof.del_op_count, s->prof.del_op_count + 1); #endif } =20 @@ -3964,12 +3964,12 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp= *op) /* avoid copy/paste errors */ #define PROF_ADD(to, from, field) \ do { \ - (to)->field +=3D atomic_read(&((from)->field)); \ + (to)->field +=3D qemu_atomic_read(&((from)->field)); \ } while (0) =20 #define PROF_MAX(to, from, field) \ do { \ - typeof((from)->field) val__ =3D 
atomic_read(&((from)->field)); \ + typeof((from)->field) val__ =3D qemu_atomic_read(&((from)->field))= ; \ if (val__ > (to)->field) { \ (to)->field =3D val__; \ } \ @@ -3979,11 +3979,11 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp= *op) static inline void tcg_profile_snapshot(TCGProfile *prof, bool counters, bool table) { - unsigned int n_ctxs =3D atomic_read(&n_tcg_ctxs); + unsigned int n_ctxs =3D qemu_atomic_read(&n_tcg_ctxs); unsigned int i; =20 for (i =3D 0; i < n_ctxs; i++) { - TCGContext *s =3D atomic_read(&tcg_ctxs[i]); + TCGContext *s =3D qemu_atomic_read(&tcg_ctxs[i]); const TCGProfile *orig =3D &s->prof; =20 if (counters) { @@ -4042,15 +4042,15 @@ void tcg_dump_op_count(void) =20 int64_t tcg_cpu_exec_time(void) { - unsigned int n_ctxs =3D atomic_read(&n_tcg_ctxs); + unsigned int n_ctxs =3D qemu_atomic_read(&n_tcg_ctxs); unsigned int i; int64_t ret =3D 0; =20 for (i =3D 0; i < n_ctxs; i++) { - const TCGContext *s =3D atomic_read(&tcg_ctxs[i]); + const TCGContext *s =3D qemu_atomic_read(&tcg_ctxs[i]); const TCGProfile *prof =3D &s->prof; =20 - ret +=3D atomic_read(&prof->cpu_exec_time); + ret +=3D qemu_atomic_read(&prof->cpu_exec_time); } return ret; } @@ -4083,15 +4083,15 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *t= b) QTAILQ_FOREACH(op, &s->ops, link) { n++; } - atomic_set(&prof->op_count, prof->op_count + n); + qemu_atomic_set(&prof->op_count, prof->op_count + n); if (n > prof->op_count_max) { - atomic_set(&prof->op_count_max, n); + qemu_atomic_set(&prof->op_count_max, n); } =20 n =3D s->nb_temps; - atomic_set(&prof->temp_count, prof->temp_count + n); + qemu_atomic_set(&prof->temp_count, prof->temp_count + n); if (n > prof->temp_count_max) { - atomic_set(&prof->temp_count_max, n); + qemu_atomic_set(&prof->temp_count_max, n); } } #endif @@ -4125,7 +4125,7 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb) #endif =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->opt_time, prof->opt_time - profile_getclock()); + qemu_atomic_set(&prof->opt_time, prof->opt_time - profile_getclock()); #endif =20 #ifdef USE_TCG_OPTIMIZATIONS @@ -4133,8 +4133,8 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb) #endif =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->opt_time, prof->opt_time + profile_getclock()); - atomic_set(&prof->la_time, prof->la_time - profile_getclock()); + qemu_atomic_set(&prof->opt_time, prof->opt_time + profile_getclock()); + qemu_atomic_set(&prof->la_time, prof->la_time - profile_getclock()); #endif =20 reachable_code_pass(s); @@ -4159,7 +4159,7 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb) } =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->la_time, prof->la_time + profile_getclock()); + qemu_atomic_set(&prof->la_time, prof->la_time + profile_getclock()); #endif =20 #ifdef DEBUG_DISAS @@ -4190,7 +4190,8 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb) TCGOpcode opc =3D op->opc; =20 #ifdef CONFIG_PROFILER - atomic_set(&prof->table_op_count[opc], prof->table_op_count[opc] += 1); + qemu_atomic_set(&prof->table_op_count[opc], + prof->table_op_count[opc] + 1); #endif =20 switch (opc) { diff --git a/tcg/tci.c b/tcg/tci.c index 46fe9ce63f..e0fed902b2 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -1115,7 +1115,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t= *tb_ptr) case INDEX_op_goto_tb: /* Jump address is aligned */ tb_ptr =3D QEMU_ALIGN_PTR_UP(tb_ptr, 4); - t0 =3D atomic_read((int32_t *)tb_ptr); + t0 =3D qemu_atomic_read((int32_t *)tb_ptr); tb_ptr +=3D sizeof(int32_t); tci_assert(tb_ptr =3D=3D old_code_ptr + op_size); tb_ptr +=3D 
(int32_t)t0; diff --git a/tests/atomic64-bench.c b/tests/atomic64-bench.c index 121a8c14f4..6f56ae2ec5 100644 --- a/tests/atomic64-bench.c +++ b/tests/atomic64-bench.c @@ -56,17 +56,17 @@ static void *thread_func(void *arg) { struct thread_info *info =3D arg; =20 - atomic_inc(&n_ready_threads); - while (!atomic_read(&test_start)) { + qemu_atomic_inc(&n_ready_threads); + while (!qemu_atomic_read(&test_start)) { cpu_relax(); } =20 - while (!atomic_read(&test_stop)) { + while (!qemu_atomic_read(&test_stop)) { unsigned int index; =20 info->r =3D xorshift64star(info->r); index =3D info->r & (range - 1); - atomic_read_i64(&counts[index].i64); + qemu_atomic_read_i64(&counts[index].i64); info->accesses++; } return NULL; @@ -76,13 +76,13 @@ static void run_test(void) { unsigned int i; =20 - while (atomic_read(&n_ready_threads) !=3D n_threads) { + while (qemu_atomic_read(&n_ready_threads) !=3D n_threads) { cpu_relax(); } =20 - atomic_set(&test_start, true); + qemu_atomic_set(&test_start, true); g_usleep(duration * G_USEC_PER_SEC); - atomic_set(&test_stop, true); + qemu_atomic_set(&test_stop, true); =20 for (i =3D 0; i < n_threads; i++) { qemu_thread_join(&threads[i]); diff --git a/tests/atomic_add-bench.c b/tests/atomic_add-bench.c index 5666f6bbff..8d095b8988 100644 --- a/tests/atomic_add-bench.c +++ b/tests/atomic_add-bench.c @@ -53,12 +53,12 @@ static void *thread_func(void *arg) { struct thread_info *info =3D arg; =20 - atomic_inc(&n_ready_threads); - while (!atomic_read(&test_start)) { + qemu_atomic_inc(&n_ready_threads); + while (!qemu_atomic_read(&test_start)) { cpu_relax(); } =20 - while (!atomic_read(&test_stop)) { + while (!qemu_atomic_read(&test_stop)) { unsigned int index; =20 info->r =3D xorshift64star(info->r); @@ -68,7 +68,7 @@ static void *thread_func(void *arg) counts[index].val +=3D 1; qemu_mutex_unlock(&counts[index].lock); } else { - atomic_inc(&counts[index].val); + qemu_atomic_inc(&counts[index].val); } } return NULL; @@ -78,13 +78,13 @@ static void run_test(void) { unsigned int i; =20 - while (atomic_read(&n_ready_threads) !=3D n_threads) { + while (qemu_atomic_read(&n_ready_threads) !=3D n_threads) { cpu_relax(); } =20 - atomic_set(&test_start, true); + qemu_atomic_set(&test_start, true); g_usleep(duration * G_USEC_PER_SEC); - atomic_set(&test_stop, true); + qemu_atomic_set(&test_stop, true); =20 for (i =3D 0; i < n_threads; i++) { qemu_thread_join(&threads[i]); diff --git a/tests/iothread.c b/tests/iothread.c index d3a2ee9a01..6d86d7ca24 100644 --- a/tests/iothread.c +++ b/tests/iothread.c @@ -74,7 +74,7 @@ static void *iothread_run(void *opaque) qemu_cond_signal(&iothread->init_done_cond); qemu_mutex_unlock(&iothread->init_done_lock); =20 - while (!atomic_read(&iothread->stopping)) { + while (!qemu_atomic_read(&iothread->stopping)) { aio_poll(iothread->ctx, true); } =20 diff --git a/tests/qht-bench.c b/tests/qht-bench.c index 362f03cb03..cf2990b353 100644 --- a/tests/qht-bench.c +++ b/tests/qht-bench.c @@ -209,13 +209,13 @@ static void *thread_func(void *p) =20 rcu_register_thread(); =20 - atomic_inc(&n_ready_threads); - while (!atomic_read(&test_start)) { + qemu_atomic_inc(&n_ready_threads); + while (!qemu_atomic_read(&test_start)) { cpu_relax(); } =20 rcu_read_lock(); - while (!atomic_read(&test_stop)) { + while (!qemu_atomic_read(&test_stop)) { info->seed =3D xorshift64star(info->seed); info->func(info); } @@ -423,13 +423,13 @@ static void run_test(void) { int i; =20 - while (atomic_read(&n_ready_threads) !=3D n_rw_threads + n_rz_threads)= { + while 
(qemu_atomic_read(&n_ready_threads) !=3D n_rw_threads + n_rz_thr= eads) { cpu_relax(); } =20 - atomic_set(&test_start, true); + qemu_atomic_set(&test_start, true); g_usleep(duration * G_USEC_PER_SEC); - atomic_set(&test_stop, true); + qemu_atomic_set(&test_stop, true); =20 for (i =3D 0; i < n_rw_threads; i++) { qemu_thread_join(&rw_threads[i]); diff --git a/tests/rcutorture.c b/tests/rcutorture.c index 732f03abda..78148140d7 100644 --- a/tests/rcutorture.c +++ b/tests/rcutorture.c @@ -123,7 +123,7 @@ static void *rcu_read_perf_test(void *arg) rcu_register_thread(); =20 *(struct rcu_reader_data **)arg =3D &rcu_reader; - atomic_inc(&nthreadsrunning); + qemu_atomic_inc(&nthreadsrunning); while (goflag =3D=3D GOFLAG_INIT) { g_usleep(1000); } @@ -149,7 +149,7 @@ static void *rcu_update_perf_test(void *arg) rcu_register_thread(); =20 *(struct rcu_reader_data **)arg =3D &rcu_reader; - atomic_inc(&nthreadsrunning); + qemu_atomic_inc(&nthreadsrunning); while (goflag =3D=3D GOFLAG_INIT) { g_usleep(1000); } @@ -172,7 +172,7 @@ static void perftestinit(void) =20 static void perftestrun(int nthreads, int duration, int nreaders, int nupd= aters) { - while (atomic_read(&nthreadsrunning) < nthreads) { + while (qemu_atomic_read(&nthreadsrunning) < nthreads) { g_usleep(1000); } goflag =3D GOFLAG_RUN; @@ -259,8 +259,8 @@ static void *rcu_read_stress_test(void *arg) } while (goflag =3D=3D GOFLAG_RUN) { rcu_read_lock(); - p =3D atomic_rcu_read(&rcu_stress_current); - if (atomic_read(&p->mbtest) =3D=3D 0) { + p =3D qemu_atomic_rcu_read(&rcu_stress_current); + if (qemu_atomic_read(&p->mbtest) =3D=3D 0) { n_mberror++; } rcu_read_lock(); @@ -268,7 +268,7 @@ static void *rcu_read_stress_test(void *arg) garbage++; } rcu_read_unlock(); - pc =3D atomic_read(&p->age); + pc =3D qemu_atomic_read(&p->age); rcu_read_unlock(); if ((pc > RCU_STRESS_PIPE_LEN) || (pc < 0)) { pc =3D RCU_STRESS_PIPE_LEN; @@ -301,7 +301,7 @@ static void *rcu_read_stress_test(void *arg) static void *rcu_update_stress_test(void *arg) { int i, rcu_stress_idx =3D 0; - struct rcu_stress *cp =3D atomic_read(&rcu_stress_current); + struct rcu_stress *cp =3D qemu_atomic_read(&rcu_stress_current); =20 rcu_register_thread(); *(struct rcu_reader_data **)arg =3D &rcu_reader; @@ -319,11 +319,11 @@ static void *rcu_update_stress_test(void *arg) p =3D &rcu_stress_array[rcu_stress_idx]; /* catching up with ourselves would be a bug */ assert(p !=3D cp); - atomic_set(&p->mbtest, 0); + qemu_atomic_set(&p->mbtest, 0); smp_mb(); - atomic_set(&p->age, 0); - atomic_set(&p->mbtest, 1); - atomic_rcu_set(&rcu_stress_current, p); + qemu_atomic_set(&p->age, 0); + qemu_atomic_set(&p->mbtest, 1); + qemu_atomic_rcu_set(&rcu_stress_current, p); cp =3D p; /* * New RCU structure is now live, update pipe counts on old @@ -331,7 +331,7 @@ static void *rcu_update_stress_test(void *arg) */ for (i =3D 0; i < RCU_STRESS_PIPE_LEN; i++) { if (i !=3D rcu_stress_idx) { - atomic_set(&rcu_stress_array[i].age, + qemu_atomic_set(&rcu_stress_array[i].age, rcu_stress_array[i].age + 1); } } diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c index d3144be7e0..d864ca07cd 100644 --- a/tests/test-aio-multithread.c +++ b/tests/test-aio-multithread.c @@ -118,16 +118,16 @@ static bool schedule_next(int n) { Coroutine *co; =20 - co =3D atomic_xchg(&to_schedule[n], NULL); + co =3D qemu_atomic_xchg(&to_schedule[n], NULL); if (!co) { - atomic_inc(&count_retry); + qemu_atomic_inc(&count_retry); return false; } =20 if (n =3D=3D id) { - atomic_inc(&count_here); + 
qemu_atomic_inc(&count_here); } else { - atomic_inc(&count_other); + qemu_atomic_inc(&count_other); } =20 aio_co_schedule(ctx[n], co); @@ -143,13 +143,13 @@ static coroutine_fn void test_multi_co_schedule_entry= (void *opaque) { g_assert(to_schedule[id] =3D=3D NULL); =20 - while (!atomic_mb_read(&now_stopping)) { + while (!qemu_atomic_mb_read(&now_stopping)) { int n; =20 n =3D g_test_rand_int_range(0, NUM_CONTEXTS); schedule_next(n); =20 - atomic_mb_set(&to_schedule[id], qemu_coroutine_self()); + qemu_atomic_mb_set(&to_schedule[id], qemu_coroutine_self()); qemu_coroutine_yield(); g_assert(to_schedule[id] =3D=3D NULL); } @@ -171,7 +171,7 @@ static void test_multi_co_schedule(int seconds) =20 g_usleep(seconds * 1000000); =20 - atomic_mb_set(&now_stopping, true); + qemu_atomic_mb_set(&now_stopping, true); for (i =3D 0; i < NUM_CONTEXTS; i++) { ctx_run(i, finish_cb, NULL); to_schedule[i] =3D NULL; @@ -202,7 +202,7 @@ static CoMutex comutex; =20 static void coroutine_fn test_multi_co_mutex_entry(void *opaque) { - while (!atomic_mb_read(&now_stopping)) { + while (!qemu_atomic_mb_read(&now_stopping)) { qemu_co_mutex_lock(&comutex); counter++; qemu_co_mutex_unlock(&comutex); @@ -212,9 +212,9 @@ static void coroutine_fn test_multi_co_mutex_entry(void= *opaque) * exits before the coroutine is woken up, causing a spurious * assertion failure. */ - atomic_inc(&atomic_counter); + qemu_atomic_inc(&atomic_counter); } - atomic_dec(&running); + qemu_atomic_dec(&running); } =20 static void test_multi_co_mutex(int threads, int seconds) @@ -236,7 +236,7 @@ static void test_multi_co_mutex(int threads, int second= s) =20 g_usleep(seconds * 1000000); =20 - atomic_mb_set(&now_stopping, true); + qemu_atomic_mb_set(&now_stopping, true); while (running > 0) { g_usleep(100000); } @@ -296,9 +296,9 @@ static void mcs_mutex_lock(void) =20 nodes[id].next =3D -1; nodes[id].locked =3D 1; - prev =3D atomic_xchg(&mutex_head, id); + prev =3D qemu_atomic_xchg(&mutex_head, id); if (prev !=3D -1) { - atomic_set(&nodes[prev].next, id); + qemu_atomic_set(&nodes[prev].next, id); qemu_futex_wait(&nodes[id].locked, 1); } } @@ -306,13 +306,13 @@ static void mcs_mutex_lock(void) static void mcs_mutex_unlock(void) { int next; - if (atomic_read(&nodes[id].next) =3D=3D -1) { - if (atomic_read(&mutex_head) =3D=3D id && - atomic_cmpxchg(&mutex_head, id, -1) =3D=3D id) { + if (qemu_atomic_read(&nodes[id].next) =3D=3D -1) { + if (qemu_atomic_read(&mutex_head) =3D=3D id && + qemu_atomic_cmpxchg(&mutex_head, id, -1) =3D=3D id) { /* Last item in the list, exit. */ return; } - while (atomic_read(&nodes[id].next) =3D=3D -1) { + while (qemu_atomic_read(&nodes[id].next) =3D=3D -1) { /* mcs_mutex_lock did the xchg, but has not updated * nodes[prev].next yet. */ @@ -320,20 +320,20 @@ static void mcs_mutex_unlock(void) } =20 /* Wake up the next in line. 
*/ - next =3D atomic_read(&nodes[id].next); + next =3D qemu_atomic_read(&nodes[id].next); nodes[next].locked =3D 0; qemu_futex_wake(&nodes[next].locked, 1); } =20 static void test_multi_fair_mutex_entry(void *opaque) { - while (!atomic_mb_read(&now_stopping)) { + while (!qemu_atomic_mb_read(&now_stopping)) { mcs_mutex_lock(); counter++; mcs_mutex_unlock(); - atomic_inc(&atomic_counter); + qemu_atomic_inc(&atomic_counter); } - atomic_dec(&running); + qemu_atomic_dec(&running); } =20 static void test_multi_fair_mutex(int threads, int seconds) @@ -355,7 +355,7 @@ static void test_multi_fair_mutex(int threads, int seco= nds) =20 g_usleep(seconds * 1000000); =20 - atomic_mb_set(&now_stopping, true); + qemu_atomic_mb_set(&now_stopping, true); while (running > 0) { g_usleep(100000); } @@ -383,13 +383,13 @@ static QemuMutex mutex; =20 static void test_multi_mutex_entry(void *opaque) { - while (!atomic_mb_read(&now_stopping)) { + while (!qemu_atomic_mb_read(&now_stopping)) { qemu_mutex_lock(&mutex); counter++; qemu_mutex_unlock(&mutex); - atomic_inc(&atomic_counter); + qemu_atomic_inc(&atomic_counter); } - atomic_dec(&running); + qemu_atomic_dec(&running); } =20 static void test_multi_mutex(int threads, int seconds) @@ -411,7 +411,7 @@ static void test_multi_mutex(int threads, int seconds) =20 g_usleep(seconds * 1000000); =20 - atomic_mb_set(&now_stopping, true); + qemu_atomic_mb_set(&now_stopping, true); while (running > 0) { g_usleep(100000); } diff --git a/tests/test-logging.c b/tests/test-logging.c index 8b1522cfed..32d4a270ac 100644 --- a/tests/test-logging.c +++ b/tests/test-logging.c @@ -133,7 +133,7 @@ static void test_logfile_write(gconstpointer data) */ qemu_set_log_filename(file_path, &error_abort); rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); orig_fd =3D logfile->fd; g_assert(logfile && logfile->fd); fprintf(logfile->fd, "%s 1st write to file\n", __func__); @@ -141,7 +141,7 @@ static void test_logfile_write(gconstpointer data) =20 /* Change the logfile and ensure that the handle is still valid. */ qemu_set_log_filename(file_path1, &error_abort); - logfile2 =3D atomic_rcu_read(&qemu_logfile); + logfile2 =3D qemu_atomic_rcu_read(&qemu_logfile); g_assert(logfile->fd =3D=3D orig_fd); g_assert(logfile2->fd !=3D logfile->fd); fprintf(logfile->fd, "%s 2nd write to file\n", __func__); diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c index 92be51ec50..d014e3f21b 100644 --- a/tests/test-rcu-list.c +++ b/tests/test-rcu-list.c @@ -106,7 +106,7 @@ static void reclaim_list_el(struct rcu_head *prcu) struct list_element *el =3D container_of(prcu, struct list_element, rc= u); g_free(el); /* Accessed only from call_rcu thread. 
*/ - atomic_set_i64(&n_reclaims, n_reclaims + 1); + qemu_atomic_set_i64(&n_reclaims, n_reclaims + 1); } =20 #if TEST_LIST_TYPE =3D=3D 1 @@ -172,16 +172,16 @@ static void *rcu_q_reader(void *arg) rcu_register_thread(); =20 *(struct rcu_reader_data **)arg =3D &rcu_reader; - atomic_inc(&nthreadsrunning); - while (atomic_read(&goflag) =3D=3D GOFLAG_INIT) { + qemu_atomic_inc(&nthreadsrunning); + while (qemu_atomic_read(&goflag) =3D=3D GOFLAG_INIT) { g_usleep(1000); } =20 - while (atomic_read(&goflag) =3D=3D GOFLAG_RUN) { + while (qemu_atomic_read(&goflag) =3D=3D GOFLAG_RUN) { rcu_read_lock(); TEST_LIST_FOREACH_RCU(el, &Q_list_head, entry) { n_reads_local++; - if (atomic_read(&goflag) =3D=3D GOFLAG_STOP) { + if (qemu_atomic_read(&goflag) =3D=3D GOFLAG_STOP) { break; } } @@ -207,12 +207,12 @@ static void *rcu_q_updater(void *arg) struct list_element *el, *prev_el; =20 *(struct rcu_reader_data **)arg =3D &rcu_reader; - atomic_inc(&nthreadsrunning); - while (atomic_read(&goflag) =3D=3D GOFLAG_INIT) { + qemu_atomic_inc(&nthreadsrunning); + while (qemu_atomic_read(&goflag) =3D=3D GOFLAG_INIT) { g_usleep(1000); } =20 - while (atomic_read(&goflag) =3D=3D GOFLAG_RUN) { + while (qemu_atomic_read(&goflag) =3D=3D GOFLAG_RUN) { target_el =3D select_random_el(RCU_Q_LEN); j =3D 0; /* FOREACH_RCU could work here but let's use both macros */ @@ -226,7 +226,7 @@ static void *rcu_q_updater(void *arg) break; } } - if (atomic_read(&goflag) =3D=3D GOFLAG_STOP) { + if (qemu_atomic_read(&goflag) =3D=3D GOFLAG_STOP) { break; } target_el =3D select_random_el(RCU_Q_LEN); @@ -248,7 +248,7 @@ static void *rcu_q_updater(void *arg) qemu_mutex_lock(&counts_mutex); n_nodes +=3D n_nodes_local; n_updates +=3D n_updates_local; - atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local); + qemu_atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_loca= l); qemu_mutex_unlock(&counts_mutex); return NULL; } @@ -271,13 +271,13 @@ static void rcu_qtest_init(void) static void rcu_qtest_run(int duration, int nreaders) { int nthreads =3D nreaders + 1; - while (atomic_read(&nthreadsrunning) < nthreads) { + while (qemu_atomic_read(&nthreadsrunning) < nthreads) { g_usleep(1000); } =20 - atomic_set(&goflag, GOFLAG_RUN); + qemu_atomic_set(&goflag, GOFLAG_RUN); sleep(duration); - atomic_set(&goflag, GOFLAG_STOP); + qemu_atomic_set(&goflag, GOFLAG_STOP); wait_all_threads(); } =20 @@ -302,21 +302,23 @@ static void rcu_qtest(const char *test, int duration,= int nreaders) n_removed_local++; } qemu_mutex_lock(&counts_mutex); - atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local); + qemu_atomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_loca= l); qemu_mutex_unlock(&counts_mutex); synchronize_rcu(); - while (atomic_read_i64(&n_nodes_removed) > atomic_read_i64(&n_reclaims= )) { + while (qemu_atomic_read_i64(&n_nodes_removed) > + qemu_atomic_read_i64(&n_reclaims)) { g_usleep(100); synchronize_rcu(); } if (g_test_in_charge) { - g_assert_cmpint(atomic_read_i64(&n_nodes_removed), =3D=3D, - atomic_read_i64(&n_reclaims)); + g_assert_cmpint(qemu_atomic_read_i64(&n_nodes_removed), =3D=3D, + qemu_atomic_read_i64(&n_reclaims)); } else { printf("%s: %d readers; 1 updater; nodes read: " \ "%lld, nodes removed: %"PRIi64"; nodes reclaimed: %"PRIi64"= \n", test, nthreadsrunning - 1, n_reads, - atomic_read_i64(&n_nodes_removed), atomic_read_i64(&n_recla= ims)); + qemu_atomic_read_i64(&n_nodes_removed), + qemu_atomic_read_i64(&n_reclaims)); exit(0); } } diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c index 
0b675923f6..0e53e867d9 100644 --- a/tests/test-thread-pool.c +++ b/tests/test-thread-pool.c @@ -21,15 +21,15 @@ typedef struct { static int worker_cb(void *opaque) { WorkerTestData *data =3D opaque; - return atomic_fetch_inc(&data->n); + return qemu_atomic_fetch_inc(&data->n); } =20 static int long_cb(void *opaque) { WorkerTestData *data =3D opaque; - if (atomic_cmpxchg(&data->n, 0, 1) =3D=3D 0) { + if (qemu_atomic_cmpxchg(&data->n, 0, 1) =3D=3D 0) { g_usleep(2000000); - atomic_or(&data->n, 2); + qemu_atomic_or(&data->n, 2); } return 0; } @@ -172,7 +172,7 @@ static void do_test_cancel(bool sync) /* Cancel the jobs that haven't been started yet. */ num_canceled =3D 0; for (i =3D 0; i < 100; i++) { - if (atomic_cmpxchg(&data[i].n, 0, 4) =3D=3D 0) { + if (qemu_atomic_cmpxchg(&data[i].n, 0, 4) =3D=3D 0) { data[i].ret =3D -ECANCELED; if (sync) { bdrv_aio_cancel(data[i].aiocb); @@ -186,7 +186,7 @@ static void do_test_cancel(bool sync) g_assert_cmpint(num_canceled, <, 100); =20 for (i =3D 0; i < 100; i++) { - if (data[i].aiocb && atomic_read(&data[i].n) < 4) { + if (data[i].aiocb && qemu_atomic_read(&data[i].n) < 4) { if (sync) { /* Canceling the others will be a blocking operation. */ bdrv_aio_cancel(data[i].aiocb); diff --git a/util/aio-posix.c b/util/aio-posix.c index f7f13ebfc2..777eae4d1d 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -27,7 +27,7 @@ =20 bool aio_poll_disabled(AioContext *ctx) { - return atomic_read(&ctx->poll_disable_cnt); + return qemu_atomic_read(&ctx->poll_disable_cnt); } =20 void aio_add_ready_handler(AioHandlerList *ready_list, @@ -148,8 +148,8 @@ void aio_set_fd_handler(AioContext *ctx, * Changing handlers is a rare event, and a little wasted polling until * the aio_notify below is not an issue. */ - atomic_set(&ctx->poll_disable_cnt, - atomic_read(&ctx->poll_disable_cnt) + poll_disable_change); + qemu_atomic_set(&ctx->poll_disable_cnt, + qemu_atomic_read(&ctx->poll_disable_cnt) + poll_disable_cha= nge); =20 ctx->fdmon_ops->update(ctx, node, new_node); if (node) { @@ -581,7 +581,8 @@ bool aio_poll(AioContext *ctx, bool blocking) */ use_notify_me =3D timeout !=3D 0; if (use_notify_me) { - atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2); + qemu_atomic_set(&ctx->notify_me, + qemu_atomic_read(&ctx->notify_me) + 2); /* * Write ctx->notify_me before reading ctx->notified. Pairs with * smp_mb in aio_notify(). @@ -589,7 +590,7 @@ bool aio_poll(AioContext *ctx, bool blocking) smp_mb(); =20 /* Don't block if aio_notify() was called */ - if (atomic_read(&ctx->notified)) { + if (qemu_atomic_read(&ctx->notified)) { timeout =3D 0; } } @@ -603,8 +604,8 @@ bool aio_poll(AioContext *ctx, bool blocking) =20 if (use_notify_me) { /* Finish the poll before clearing the flag. */ - atomic_store_release(&ctx->notify_me, - atomic_read(&ctx->notify_me) - 2); + qemu_atomic_store_release(&ctx->notify_me, + qemu_atomic_read(&ctx->notify_me) - 2); } =20 aio_notify_accept(ctx); diff --git a/util/aio-wait.c b/util/aio-wait.c index b4877493f8..1aea6e7fa0 100644 --- a/util/aio-wait.c +++ b/util/aio-wait.c @@ -36,7 +36,7 @@ static void dummy_bh_cb(void *opaque) void aio_wait_kick(void) { /* The barrier (or an atomic op) is in the caller. 
*/ - if (atomic_read(&global_aio_wait.num_waiters)) { + if (qemu_atomic_read(&global_aio_wait.num_waiters)) { aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL); } } diff --git a/util/aio-win32.c b/util/aio-win32.c index 49bd90e62e..cfa81c6217 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -345,7 +345,8 @@ bool aio_poll(AioContext *ctx, bool blocking) * so disable the optimization now. */ if (blocking) { - atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2); + qemu_atomic_set(&ctx->notify_me, + qemu_atomic_read(&ctx->notify_me) + 2); /* * Write ctx->notify_me before computing the timeout * (reading bottom half flags, etc.). Pairs with @@ -384,7 +385,8 @@ bool aio_poll(AioContext *ctx, bool blocking) ret =3D WaitForMultipleObjects(count, events, FALSE, timeout); if (blocking) { assert(first); - atomic_store_release(&ctx->notify_me, atomic_read(&ctx->notify= _me) - 2); + qemu_atomic_store_release(&ctx->notify_me, + qemu_atomic_read(&ctx->notify_me) - = 2); aio_notify_accept(ctx); } =20 diff --git a/util/async.c b/util/async.c index 4266745dee..9341aacdcd 100644 --- a/util/async.c +++ b/util/async.c @@ -70,13 +70,13 @@ static void aio_bh_enqueue(QEMUBH *bh, unsigned new_fla= gs) unsigned old_flags; =20 /* - * The memory barrier implicit in atomic_fetch_or makes sure that: + * The memory barrier implicit in qemu_atomic_fetch_or makes sure that: * 1. idle & any writes needed by the callback are done before the * locations are read in the aio_bh_poll. * 2. ctx is loaded before the callback has a chance to execute and bh * could be freed. */ - old_flags =3D atomic_fetch_or(&bh->flags, BH_PENDING | new_flags); + old_flags =3D qemu_atomic_fetch_or(&bh->flags, BH_PENDING | new_flags); if (!(old_flags & BH_PENDING)) { QSLIST_INSERT_HEAD_ATOMIC(&ctx->bh_list, bh, next); } @@ -96,13 +96,13 @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *f= lags) QSLIST_REMOVE_HEAD(head, next); =20 /* - * The atomic_and is paired with aio_bh_enqueue(). The implicit memory - * barrier ensures that the callback sees all writes done by the sched= uling - * thread. It also ensures that the scheduling thread sees the cleared - * flag before bh->cb has run, and thus will call aio_notify again if - * necessary. + * The qemu_atomic_and is paired with aio_bh_enqueue(). The implicit + * memory barrier ensures that the callback sees all writes done by the + * scheduling thread. It also ensures that the scheduling thread sees= the + * cleared flag before bh->cb has run, and thus will call aio_notify a= gain + * if necessary. */ - *flags =3D atomic_fetch_and(&bh->flags, + *flags =3D qemu_atomic_fetch_and(&bh->flags, ~(BH_PENDING | BH_SCHEDULED | BH_IDLE)); return bh; } @@ -185,7 +185,7 @@ void qemu_bh_schedule(QEMUBH *bh) */ void qemu_bh_cancel(QEMUBH *bh) { - atomic_and(&bh->flags, ~BH_SCHEDULED); + qemu_atomic_and(&bh->flags, ~BH_SCHEDULED); } =20 /* This func is async.The bottom half will do the delete action at the fin= ial @@ -249,7 +249,7 @@ aio_ctx_prepare(GSource *source, gint *timeout) { AioContext *ctx =3D (AioContext *) source; =20 - atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) | 1); + qemu_atomic_set(&ctx->notify_me, qemu_atomic_read(&ctx->notify_me) | 1= ); =20 /* * Write ctx->notify_me before computing the timeout @@ -276,7 +276,8 @@ aio_ctx_check(GSource *source) BHListSlice *s; =20 /* Finish computing the timeout before clearing the flag. 
*/ - atomic_store_release(&ctx->notify_me, atomic_read(&ctx->notify_me) & ~= 1); + qemu_atomic_store_release(&ctx->notify_me, + qemu_atomic_read(&ctx->notify_me) & ~1); aio_notify_accept(ctx); =20 QSLIST_FOREACH_RCU(bh, &ctx->bh_list, next) { @@ -424,21 +425,21 @@ void aio_notify(AioContext *ctx) * aio_notify_accept. */ smp_wmb(); - atomic_set(&ctx->notified, true); + qemu_atomic_set(&ctx->notified, true); =20 /* * Write ctx->notified before reading ctx->notify_me. Pairs * with smp_mb in aio_ctx_prepare or aio_poll. */ smp_mb(); - if (atomic_read(&ctx->notify_me)) { + if (qemu_atomic_read(&ctx->notify_me)) { event_notifier_set(&ctx->notifier); } } =20 void aio_notify_accept(AioContext *ctx) { - atomic_set(&ctx->notified, false); + qemu_atomic_set(&ctx->notified, false); =20 /* * Write ctx->notified before reading e.g. bh->flags. Pairs with smp_= wmb @@ -465,7 +466,7 @@ static bool aio_context_notifier_poll(void *opaque) EventNotifier *e =3D opaque; AioContext *ctx =3D container_of(e, AioContext, notifier); =20 - return atomic_read(&ctx->notified); + return qemu_atomic_read(&ctx->notified); } =20 static void co_schedule_bh_cb(void *opaque) @@ -489,7 +490,7 @@ static void co_schedule_bh_cb(void *opaque) aio_context_acquire(ctx); =20 /* Protected by write barrier in qemu_aio_coroutine_enter */ - atomic_set(&co->scheduled, NULL); + qemu_atomic_set(&co->scheduled, NULL); qemu_aio_coroutine_enter(ctx, co); aio_context_release(ctx); } @@ -546,7 +547,7 @@ fail: void aio_co_schedule(AioContext *ctx, Coroutine *co) { trace_aio_co_schedule(ctx, co); - const char *scheduled =3D atomic_cmpxchg(&co->scheduled, NULL, + const char *scheduled =3D qemu_atomic_cmpxchg(&co->scheduled, NULL, __func__); =20 if (scheduled) { @@ -577,7 +578,7 @@ void aio_co_wake(struct Coroutine *co) * qemu_coroutine_enter. */ smp_read_barrier_depends(); - ctx =3D atomic_read(&co->ctx); + ctx =3D qemu_atomic_read(&co->ctx); =20 aio_co_enter(ctx, co); } diff --git a/util/atomic64.c b/util/atomic64.c index b198a6c9c8..87e59bbac0 100644 --- a/util/atomic64.c +++ b/util/atomic64.c @@ -51,8 +51,8 @@ static QemuSpin *addr_to_lock(const void *addr) return ret; \ } =20 -GEN_READ(atomic_read_i64, int64_t) -GEN_READ(atomic_read_u64, uint64_t) +GEN_READ(qemu_atomic_read_i64, int64_t) +GEN_READ(qemu_atomic_read_u64, uint64_t) #undef GEN_READ =20 #define GEN_SET(name, type) \ @@ -65,11 +65,11 @@ GEN_READ(atomic_read_u64, uint64_t) qemu_spin_unlock(lock); \ } =20 -GEN_SET(atomic_set_i64, int64_t) -GEN_SET(atomic_set_u64, uint64_t) +GEN_SET(qemu_atomic_set_i64, int64_t) +GEN_SET(qemu_atomic_set_u64, uint64_t) #undef GEN_SET =20 -void atomic64_init(void) +void qemu_atomic64_init(void) { int i; =20 diff --git a/util/bitmap.c b/util/bitmap.c index 1753ff7f5b..d7995776ab 100644 --- a/util/bitmap.c +++ b/util/bitmap.c @@ -190,7 +190,7 @@ void bitmap_set_atomic(unsigned long *map, long start, = long nr) =20 /* First word */ if (nr - bits_to_set > 0) { - atomic_or(p, mask_to_set); + qemu_atomic_or(p, mask_to_set); nr -=3D bits_to_set; bits_to_set =3D BITS_PER_LONG; mask_to_set =3D ~0UL; @@ -209,9 +209,9 @@ void bitmap_set_atomic(unsigned long *map, long start, = long nr) /* Last word */ if (nr) { mask_to_set &=3D BITMAP_LAST_WORD_MASK(size); - atomic_or(p, mask_to_set); + qemu_atomic_or(p, mask_to_set); } else { - /* If we avoided the full barrier in atomic_or(), issue a + /* If we avoided the full barrier in qemu_atomic_or(), issue a * barrier to account for the assignments in the while loop. 
*/ smp_mb(); @@ -253,7 +253,7 @@ bool bitmap_test_and_clear_atomic(unsigned long *map, l= ong start, long nr) =20 /* First word */ if (nr - bits_to_clear > 0) { - old_bits =3D atomic_fetch_and(p, ~mask_to_clear); + old_bits =3D qemu_atomic_fetch_and(p, ~mask_to_clear); dirty |=3D old_bits & mask_to_clear; nr -=3D bits_to_clear; bits_to_clear =3D BITS_PER_LONG; @@ -265,7 +265,7 @@ bool bitmap_test_and_clear_atomic(unsigned long *map, l= ong start, long nr) if (bits_to_clear =3D=3D BITS_PER_LONG) { while (nr >=3D BITS_PER_LONG) { if (*p) { - old_bits =3D atomic_xchg(p, 0); + old_bits =3D qemu_atomic_xchg(p, 0); dirty |=3D old_bits; } nr -=3D BITS_PER_LONG; @@ -276,7 +276,7 @@ bool bitmap_test_and_clear_atomic(unsigned long *map, l= ong start, long nr) /* Last word */ if (nr) { mask_to_clear &=3D BITMAP_LAST_WORD_MASK(size); - old_bits =3D atomic_fetch_and(p, ~mask_to_clear); + old_bits =3D qemu_atomic_fetch_and(p, ~mask_to_clear); dirty |=3D old_bits & mask_to_clear; } else { if (!dirty) { @@ -291,7 +291,7 @@ void bitmap_copy_and_clear_atomic(unsigned long *dst, u= nsigned long *src, long nr) { while (nr > 0) { - *dst =3D atomic_xchg(src, 0); + *dst =3D qemu_atomic_xchg(src, 0); dst++; src++; nr -=3D BITS_PER_LONG; diff --git a/util/cacheinfo.c b/util/cacheinfo.c index d94dc6adc8..4881ff3568 100644 --- a/util/cacheinfo.c +++ b/util/cacheinfo.c @@ -193,5 +193,5 @@ static void __attribute__((constructor)) init_cache_inf= o(void) qemu_dcache_linesize =3D dsize; qemu_dcache_linesize_log =3D ctz32(dsize); =20 - atomic64_init(); + qemu_atomic64_init(); } diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c index fcd989d47d..02447ea89a 100644 --- a/util/fdmon-epoll.c +++ b/util/fdmon-epoll.c @@ -65,7 +65,7 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerLi= st *ready_list, struct epoll_event events[128]; =20 /* Fall back while external clients are disabled */ - if (atomic_read(&ctx->external_disable_cnt)) { + if (qemu_atomic_read(&ctx->external_disable_cnt)) { return fdmon_poll_ops.wait(ctx, ready_list, timeout); } =20 @@ -132,7 +132,7 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned = npfd) } =20 /* Do not upgrade while external clients are disabled */ - if (atomic_read(&ctx->external_disable_cnt)) { + if (qemu_atomic_read(&ctx->external_disable_cnt)) { return false; } =20 diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c index 1d14177df0..b1222e5370 100644 --- a/util/fdmon-io_uring.c +++ b/util/fdmon-io_uring.c @@ -103,7 +103,8 @@ static void enqueue(AioHandlerSList *head, AioHandler *= node, unsigned flags) { unsigned old_flags; =20 - old_flags =3D atomic_fetch_or(&node->flags, FDMON_IO_URING_PENDING | f= lags); + old_flags =3D qemu_atomic_fetch_or(&node->flags, + FDMON_IO_URING_PENDING | flags); if (!(old_flags & FDMON_IO_URING_PENDING)) { QSLIST_INSERT_HEAD_ATOMIC(head, node, node_submitted); } @@ -127,7 +128,7 @@ static AioHandler *dequeue(AioHandlerSList *head, unsig= ned *flags) * telling process_cqe() to delete the AioHandler when its * IORING_OP_POLL_ADD completes. */ - *flags =3D atomic_fetch_and(&node->flags, ~(FDMON_IO_URING_PENDING | + *flags =3D qemu_atomic_fetch_and(&node->flags, ~(FDMON_IO_URING_PENDIN= G | FDMON_IO_URING_ADD)); return node; } @@ -233,7 +234,7 @@ static bool process_cqe(AioContext *ctx, * with enqueue() here then we can safely clear the FDMON_IO_URING_REM= OVE * bit before IORING_OP_POLL_REMOVE is submitted. 
*/ - flags =3D atomic_fetch_and(&node->flags, ~FDMON_IO_URING_REMOVE); + flags =3D qemu_atomic_fetch_and(&node->flags, ~FDMON_IO_URING_REMOVE); if (flags & FDMON_IO_URING_REMOVE) { QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_delet= ed); return false; @@ -273,7 +274,7 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHand= lerList *ready_list, int ret; =20 /* Fall back while external clients are disabled */ - if (atomic_read(&ctx->external_disable_cnt)) { + if (qemu_atomic_read(&ctx->external_disable_cnt)) { return fdmon_poll_ops.wait(ctx, ready_list, timeout); } =20 @@ -312,7 +313,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx) } =20 /* Are we falling back to fdmon-poll? */ - return atomic_read(&ctx->external_disable_cnt); + return qemu_atomic_read(&ctx->external_disable_cnt); } =20 static const FDMonOps fdmon_io_uring_ops =3D { @@ -344,7 +345,7 @@ void fdmon_io_uring_destroy(AioContext *ctx) =20 /* Move handlers due to be removed onto the deleted list */ while ((node =3D QSLIST_FIRST_RCU(&ctx->submit_list))) { - unsigned flags =3D atomic_fetch_and(&node->flags, + unsigned flags =3D qemu_atomic_fetch_and(&node->flags, ~(FDMON_IO_URING_PENDING | FDMON_IO_URING_ADD | FDMON_IO_URING_REMOVE)); diff --git a/util/lockcnt.c b/util/lockcnt.c index 4f88dcf8b8..841d9df69c 100644 --- a/util/lockcnt.c +++ b/util/lockcnt.c @@ -61,7 +61,7 @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *loc= kcnt, int *val, int expected =3D *val; =20 trace_lockcnt_fast_path_attempt(lockcnt, expected, new_if_free); - *val =3D atomic_cmpxchg(&lockcnt->count, expected, new_if_free); + *val =3D qemu_atomic_cmpxchg(&lockcnt->count, expected, new_if_fre= e); if (*val =3D=3D expected) { trace_lockcnt_fast_path_success(lockcnt, expected, new_if_free= ); *val =3D new_if_free; @@ -81,7 +81,7 @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *loc= kcnt, int *val, int new =3D expected - QEMU_LOCKCNT_STATE_LOCKED + QEMU_LOCKCN= T_STATE_WAITING; =20 trace_lockcnt_futex_wait_prepare(lockcnt, expected, new); - *val =3D atomic_cmpxchg(&lockcnt->count, expected, new); + *val =3D qemu_atomic_cmpxchg(&lockcnt->count, expected, new); if (*val =3D=3D expected) { *val =3D new; } @@ -92,7 +92,7 @@ static bool qemu_lockcnt_cmpxchg_or_wait(QemuLockCnt *loc= kcnt, int *val, *waited =3D true; trace_lockcnt_futex_wait(lockcnt, *val); qemu_futex_wait(&lockcnt->count, *val); - *val =3D atomic_read(&lockcnt->count); + *val =3D qemu_atomic_read(&lockcnt->count); trace_lockcnt_futex_wait_resume(lockcnt, *val); continue; } @@ -110,19 +110,22 @@ static void lockcnt_wake(QemuLockCnt *lockcnt) =20 void qemu_lockcnt_inc(QemuLockCnt *lockcnt) { - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); bool waited =3D false; =20 for (;;) { if (val >=3D QEMU_LOCKCNT_COUNT_STEP) { int expected =3D val; - val =3D atomic_cmpxchg(&lockcnt->count, val, val + QEMU_LOCKCN= T_COUNT_STEP); + val =3D qemu_atomic_cmpxchg(&lockcnt->count, val, + val + QEMU_LOCKCNT_COUNT_STEP); if (val =3D=3D expected) { break; } } else { /* The fast path is (0, unlocked)->(1, unlocked). 
*/ - if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, QEMU_LOCKCNT_C= OUNT_STEP, + if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, + &val, + QEMU_LOCKCNT_COUNT_STEP, &waited)) { break; } @@ -142,7 +145,7 @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt) =20 void qemu_lockcnt_dec(QemuLockCnt *lockcnt) { - atomic_sub(&lockcnt->count, QEMU_LOCKCNT_COUNT_STEP); + qemu_atomic_sub(&lockcnt->count, QEMU_LOCKCNT_COUNT_STEP); } =20 /* Decrement a counter, and return locked if it is decremented to zero. @@ -151,14 +154,15 @@ void qemu_lockcnt_dec(QemuLockCnt *lockcnt) */ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) { - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); int locked_state =3D QEMU_LOCKCNT_STATE_LOCKED; bool waited =3D false; =20 for (;;) { if (val >=3D 2 * QEMU_LOCKCNT_COUNT_STEP) { int expected =3D val; - val =3D atomic_cmpxchg(&lockcnt->count, val, val - QEMU_LOCKCN= T_COUNT_STEP); + val =3D qemu_atomic_cmpxchg(&lockcnt->count, val, + val - QEMU_LOCKCNT_COUNT_STEP); if (val =3D=3D expected) { break; } @@ -166,7 +170,8 @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) /* If count is going 1->0, take the lock. The fast path is * (1, unlocked)->(0, locked) or (1, unlocked)->(0, waiting). */ - if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, locked_state, = &waited)) { + if (qemu_lockcnt_cmpxchg_or_wait(lockcnt, &val, locked_state, + &waited)) { return true; } =20 @@ -199,7 +204,7 @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) */ bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt) { - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); int locked_state =3D QEMU_LOCKCNT_STATE_LOCKED; bool waited =3D false; =20 @@ -233,7 +238,7 @@ bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt) =20 void qemu_lockcnt_lock(QemuLockCnt *lockcnt) { - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); int step =3D QEMU_LOCKCNT_STATE_LOCKED; bool waited =3D false; =20 @@ -255,12 +260,12 @@ void qemu_lockcnt_inc_and_unlock(QemuLockCnt *lockcnt) { int expected, new, val; =20 - val =3D atomic_read(&lockcnt->count); + val =3D qemu_atomic_read(&lockcnt->count); do { expected =3D val; new =3D (val + QEMU_LOCKCNT_COUNT_STEP) & ~QEMU_LOCKCNT_STATE_MASK; trace_lockcnt_unlock_attempt(lockcnt, val, new); - val =3D atomic_cmpxchg(&lockcnt->count, val, new); + val =3D qemu_atomic_cmpxchg(&lockcnt->count, val, new); } while (val !=3D expected); =20 trace_lockcnt_unlock_success(lockcnt, val, new); @@ -273,12 +278,12 @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt) { int expected, new, val; =20 - val =3D atomic_read(&lockcnt->count); + val =3D qemu_atomic_read(&lockcnt->count); do { expected =3D val; new =3D val & ~QEMU_LOCKCNT_STATE_MASK; trace_lockcnt_unlock_attempt(lockcnt, val, new); - val =3D atomic_cmpxchg(&lockcnt->count, val, new); + val =3D qemu_atomic_cmpxchg(&lockcnt->count, val, new); } while (val !=3D expected); =20 trace_lockcnt_unlock_success(lockcnt, val, new); @@ -289,7 +294,7 @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt) =20 unsigned qemu_lockcnt_count(QemuLockCnt *lockcnt) { - return atomic_read(&lockcnt->count) >> QEMU_LOCKCNT_COUNT_SHIFT; + return qemu_atomic_read(&lockcnt->count) >> QEMU_LOCKCNT_COUNT_SHIFT; } #else void qemu_lockcnt_init(QemuLockCnt *lockcnt) @@ -307,13 +312,13 @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt) { int old; for (;;) { - old =3D atomic_read(&lockcnt->count); + old =3D qemu_atomic_read(&lockcnt->count); if (old =3D=3D 0) { 
qemu_lockcnt_lock(lockcnt); qemu_lockcnt_inc_and_unlock(lockcnt); return; } else { - if (atomic_cmpxchg(&lockcnt->count, old, old + 1) =3D=3D old) { + if (qemu_atomic_cmpxchg(&lockcnt->count, old, old + 1) =3D=3D = old) { return; } } @@ -322,7 +327,7 @@ void qemu_lockcnt_inc(QemuLockCnt *lockcnt) =20 void qemu_lockcnt_dec(QemuLockCnt *lockcnt) { - atomic_dec(&lockcnt->count); + qemu_atomic_dec(&lockcnt->count); } =20 /* Decrement a counter, and return locked if it is decremented to zero. @@ -331,9 +336,9 @@ void qemu_lockcnt_dec(QemuLockCnt *lockcnt) */ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) { - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); while (val > 1) { - int old =3D atomic_cmpxchg(&lockcnt->count, val, val - 1); + int old =3D qemu_atomic_cmpxchg(&lockcnt->count, val, val - 1); if (old !=3D val) { val =3D old; continue; @@ -343,7 +348,7 @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) } =20 qemu_lockcnt_lock(lockcnt); - if (atomic_fetch_dec(&lockcnt->count) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&lockcnt->count) =3D=3D 1) { return true; } =20 @@ -360,13 +365,13 @@ bool qemu_lockcnt_dec_and_lock(QemuLockCnt *lockcnt) bool qemu_lockcnt_dec_if_lock(QemuLockCnt *lockcnt) { /* No need for acquire semantics if we return false. */ - int val =3D atomic_read(&lockcnt->count); + int val =3D qemu_atomic_read(&lockcnt->count); if (val > 1) { return false; } =20 qemu_lockcnt_lock(lockcnt); - if (atomic_fetch_dec(&lockcnt->count) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&lockcnt->count) =3D=3D 1) { return true; } =20 @@ -381,7 +386,7 @@ void qemu_lockcnt_lock(QemuLockCnt *lockcnt) =20 void qemu_lockcnt_inc_and_unlock(QemuLockCnt *lockcnt) { - atomic_inc(&lockcnt->count); + qemu_atomic_inc(&lockcnt->count); qemu_mutex_unlock(&lockcnt->mutex); } =20 @@ -392,6 +397,6 @@ void qemu_lockcnt_unlock(QemuLockCnt *lockcnt) =20 unsigned qemu_lockcnt_count(QemuLockCnt *lockcnt) { - return atomic_read(&lockcnt->count); + return qemu_atomic_read(&lockcnt->count); } #endif diff --git a/util/log.c b/util/log.c index bdb3d712e8..e2a8eb4fed 100644 --- a/util/log.c +++ b/util/log.c @@ -41,7 +41,7 @@ int qemu_log(const char *fmt, ...) 
QemuLogFile *logfile; =20 rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { va_list ap; va_start(ap, fmt); @@ -98,7 +98,7 @@ void qemu_set_log(int log_flags) QEMU_LOCK_GUARD(&qemu_logfile_mutex); if (qemu_logfile && !need_to_open_file) { logfile =3D qemu_logfile; - atomic_rcu_set(&qemu_logfile, NULL); + qemu_atomic_rcu_set(&qemu_logfile, NULL); call_rcu(logfile, qemu_logfile_free, rcu); } else if (!qemu_logfile && need_to_open_file) { logfile =3D g_new0(QemuLogFile, 1); @@ -135,7 +135,7 @@ void qemu_set_log(int log_flags) #endif log_append =3D 1; } - atomic_rcu_set(&qemu_logfile, logfile); + qemu_atomic_rcu_set(&qemu_logfile, logfile); } } =20 @@ -272,7 +272,7 @@ void qemu_log_flush(void) QemuLogFile *logfile; =20 rcu_read_lock(); - logfile =3D atomic_rcu_read(&qemu_logfile); + logfile =3D qemu_atomic_rcu_read(&qemu_logfile); if (logfile) { fflush(logfile->fd); } @@ -288,7 +288,7 @@ void qemu_log_close(void) logfile =3D qemu_logfile; =20 if (logfile) { - atomic_rcu_set(&qemu_logfile, NULL); + qemu_atomic_rcu_set(&qemu_logfile, NULL); call_rcu(logfile, qemu_logfile_free, rcu); } qemu_mutex_unlock(&qemu_logfile_mutex); diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c index 5da5234155..942f5c43f2 100644 --- a/util/qemu-coroutine-lock.c +++ b/util/qemu-coroutine-lock.c @@ -212,10 +212,10 @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(= AioContext *ctx, /* This is the "Responsibility Hand-Off" protocol; a lock() picks from * a concurrent unlock() the responsibility of waking somebody up. */ - old_handoff =3D atomic_mb_read(&mutex->handoff); + old_handoff =3D qemu_atomic_mb_read(&mutex->handoff); if (old_handoff && has_waiters(mutex) && - atomic_cmpxchg(&mutex->handoff, old_handoff, 0) =3D=3D old_handoff= ) { + qemu_atomic_cmpxchg(&mutex->handoff, old_handoff, 0) =3D=3D old_ha= ndoff) { /* There can be no concurrent pops, because there can be only * one active handoff at a time. */ @@ -250,18 +250,18 @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex) */ i =3D 0; retry_fast_path: - waiters =3D atomic_cmpxchg(&mutex->locked, 0, 1); + waiters =3D qemu_atomic_cmpxchg(&mutex->locked, 0, 1); if (waiters !=3D 0) { while (waiters =3D=3D 1 && ++i < 1000) { - if (atomic_read(&mutex->ctx) =3D=3D ctx) { + if (qemu_atomic_read(&mutex->ctx) =3D=3D ctx) { break; } - if (atomic_read(&mutex->locked) =3D=3D 0) { + if (qemu_atomic_read(&mutex->locked) =3D=3D 0) { goto retry_fast_path; } cpu_relax(); } - waiters =3D atomic_fetch_inc(&mutex->locked); + waiters =3D qemu_atomic_fetch_inc(&mutex->locked); } =20 if (waiters =3D=3D 0) { @@ -288,7 +288,7 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) mutex->ctx =3D NULL; mutex->holder =3D NULL; self->locks_held--; - if (atomic_fetch_dec(&mutex->locked) =3D=3D 1) { + if (qemu_atomic_fetch_dec(&mutex->locked) =3D=3D 1) { /* No waiting qemu_co_mutex_lock(). Pfew, that was easy! */ return; } @@ -311,7 +311,7 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) } =20 our_handoff =3D mutex->sequence; - atomic_mb_set(&mutex->handoff, our_handoff); + qemu_atomic_mb_set(&mutex->handoff, our_handoff); if (!has_waiters(mutex)) { /* The concurrent lock has not added itself yet, so it * will be able to pick our handoff. @@ -322,7 +322,8 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) /* Try to do the handoff protocol ourselves; if somebody else has * already taken it, however, we're done and they're responsible. 
*/ - if (atomic_cmpxchg(&mutex->handoff, our_handoff, 0) !=3D our_hando= ff) { + if (qemu_atomic_cmpxchg(&mutex->handoff, + our_handoff, 0) !=3D our_handoff) { break; } } diff --git a/util/qemu-coroutine-sleep.c b/util/qemu-coroutine-sleep.c index 769a76e57d..9a81a7d6ee 100644 --- a/util/qemu-coroutine-sleep.c +++ b/util/qemu-coroutine-sleep.c @@ -28,7 +28,7 @@ struct QemuCoSleepState { void qemu_co_sleep_wake(QemuCoSleepState *sleep_state) { /* Write of schedule protected by barrier write in aio_co_schedule */ - const char *scheduled =3D atomic_cmpxchg(&sleep_state->co->scheduled, + const char *scheduled =3D qemu_atomic_cmpxchg(&sleep_state->co->schedu= led, qemu_co_sleep_ns__scheduled, NU= LL); =20 assert(scheduled =3D=3D qemu_co_sleep_ns__scheduled); @@ -54,7 +54,7 @@ void coroutine_fn qemu_co_sleep_ns_wakeable(QEMUClockType= type, int64_t ns, .user_state_pointer =3D sleep_state, }; =20 - const char *scheduled =3D atomic_cmpxchg(&state.co->scheduled, NULL, + const char *scheduled =3D qemu_atomic_cmpxchg(&state.co->scheduled, NU= LL, qemu_co_sleep_ns__scheduled); if (scheduled) { fprintf(stderr, diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c index c3caa6c770..ad28603a71 100644 --- a/util/qemu-coroutine.c +++ b/util/qemu-coroutine.c @@ -60,7 +60,7 @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry, v= oid *opaque) * release_pool_size and the actual size of release_pool. = But * it is just a heuristic, it does not need to be perfect. */ - alloc_pool_size =3D atomic_xchg(&release_pool_size, 0); + alloc_pool_size =3D qemu_atomic_xchg(&release_pool_size, 0= ); QSLIST_MOVE_ATOMIC(&alloc_pool, &release_pool); co =3D QSLIST_FIRST(&alloc_pool); } @@ -88,7 +88,7 @@ static void coroutine_delete(Coroutine *co) if (CONFIG_COROUTINE_POOL) { if (release_pool_size < POOL_BATCH_SIZE * 2) { QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next); - atomic_inc(&release_pool_size); + qemu_atomic_inc(&release_pool_size); return; } if (alloc_pool_size < POOL_BATCH_SIZE) { @@ -115,7 +115,7 @@ void qemu_aio_coroutine_enter(AioContext *ctx, Coroutin= e *co) =20 /* Cannot rely on the read barrier for to in aio_co_wake(), as the= re are * callers outside of aio_co_wake() */ - const char *scheduled =3D atomic_mb_read(&to->scheduled); + const char *scheduled =3D qemu_atomic_mb_read(&to->scheduled); =20 QSIMPLEQ_REMOVE_HEAD(&pending, co_queue_next); =20 diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c index b37d288866..294ea21446 100644 --- a/util/qemu-sockets.c +++ b/util/qemu-sockets.c @@ -395,7 +395,7 @@ static struct addrinfo *inet_parse_connect_saddr(InetSo= cketAddress *saddr, memset(&ai, 0, sizeof(ai)); =20 ai.ai_flags =3D AI_CANONNAME | AI_ADDRCONFIG; - if (atomic_read(&useV4Mapped)) { + if (qemu_atomic_read(&useV4Mapped)) { ai.ai_flags |=3D AI_V4MAPPED; } ai.ai_family =3D inet_ai_family_from_address(saddr, &err); @@ -421,7 +421,7 @@ static struct addrinfo *inet_parse_connect_saddr(InetSo= cketAddress *saddr, */ if (rc =3D=3D EAI_BADFLAGS && (ai.ai_flags & AI_V4MAPPED)) { - atomic_set(&useV4Mapped, 0); + qemu_atomic_set(&useV4Mapped, 0); ai.ai_flags &=3D ~AI_V4MAPPED; rc =3D getaddrinfo(saddr->host, saddr->port, &ai, &res); } diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c index b4c2359272..6457fda165 100644 --- a/util/qemu-thread-posix.c +++ b/util/qemu-thread-posix.c @@ -414,8 +414,8 @@ void qemu_event_set(QemuEvent *ev) */ assert(ev->initialized); smp_mb(); - if (atomic_read(&ev->value) !=3D EV_SET) { - if (atomic_xchg(&ev->value, EV_SET) =3D=3D EV_BUSY) { + if 
(qemu_atomic_read(&ev->value) !=3D EV_SET) { + if (qemu_atomic_xchg(&ev->value, EV_SET) =3D=3D EV_BUSY) { /* There were waiters, wake them up. */ qemu_futex_wake(ev, INT_MAX); } @@ -427,14 +427,14 @@ void qemu_event_reset(QemuEvent *ev) unsigned value; =20 assert(ev->initialized); - value =3D atomic_read(&ev->value); + value =3D qemu_atomic_read(&ev->value); smp_mb_acquire(); if (value =3D=3D EV_SET) { /* * If there was a concurrent reset (or even reset+wait), * do nothing. Otherwise change EV_SET->EV_FREE. */ - atomic_or(&ev->value, EV_FREE); + qemu_atomic_or(&ev->value, EV_FREE); } } =20 @@ -443,7 +443,7 @@ void qemu_event_wait(QemuEvent *ev) unsigned value; =20 assert(ev->initialized); - value =3D atomic_read(&ev->value); + value =3D qemu_atomic_read(&ev->value); smp_mb_acquire(); if (value !=3D EV_SET) { if (value =3D=3D EV_FREE) { @@ -453,7 +453,7 @@ void qemu_event_wait(QemuEvent *ev) * a concurrent busy->free transition. After the CAS, the * event will be either set or busy. */ - if (atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) =3D=3D EV_SET= ) { + if (qemu_atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) =3D=3D E= V_SET) { return; } } diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c index 56a83333da..409b28d21b 100644 --- a/util/qemu-thread-win32.c +++ b/util/qemu-thread-win32.c @@ -250,8 +250,8 @@ void qemu_event_set(QemuEvent *ev) * ev->value we need a full memory barrier here. */ smp_mb(); - if (atomic_read(&ev->value) !=3D EV_SET) { - if (atomic_xchg(&ev->value, EV_SET) =3D=3D EV_BUSY) { + if (qemu_atomic_read(&ev->value) !=3D EV_SET) { + if (qemu_atomic_xchg(&ev->value, EV_SET) =3D=3D EV_BUSY) { /* There were waiters, wake them up. */ SetEvent(ev->event); } @@ -263,13 +263,13 @@ void qemu_event_reset(QemuEvent *ev) unsigned value; =20 assert(ev->initialized); - value =3D atomic_read(&ev->value); + value =3D qemu_atomic_read(&ev->value); smp_mb_acquire(); if (value =3D=3D EV_SET) { /* If there was a concurrent reset (or even reset+wait), * do nothing. Otherwise change EV_SET->EV_FREE. */ - atomic_or(&ev->value, EV_FREE); + qemu_atomic_or(&ev->value, EV_FREE); } } =20 @@ -278,7 +278,7 @@ void qemu_event_wait(QemuEvent *ev) unsigned value; =20 assert(ev->initialized); - value =3D atomic_read(&ev->value); + value =3D qemu_atomic_read(&ev->value); smp_mb_acquire(); if (value !=3D EV_SET) { if (value =3D=3D EV_FREE) { @@ -292,7 +292,7 @@ void qemu_event_wait(QemuEvent *ev) * because there cannot be a concurent busy->free transition. * After the CAS, the event will be either set or busy. 
*/ - if (atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) =3D=3D EV_SET= ) { + if (qemu_atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) =3D=3D E= V_SET) { value =3D EV_SET; } else { value =3D EV_BUSY; diff --git a/util/qemu-timer.c b/util/qemu-timer.c index 878d80fd5e..a70c03fc59 100644 --- a/util/qemu-timer.c +++ b/util/qemu-timer.c @@ -170,7 +170,7 @@ void qemu_clock_enable(QEMUClockType type, bool enabled) =20 bool timerlist_has_timers(QEMUTimerList *timer_list) { - return !!atomic_read(&timer_list->active_timers); + return !!qemu_atomic_read(&timer_list->active_timers); } =20 bool qemu_clock_has_timers(QEMUClockType type) @@ -183,7 +183,7 @@ bool timerlist_expired(QEMUTimerList *timer_list) { int64_t expire_time; =20 - if (!atomic_read(&timer_list->active_timers)) { + if (!qemu_atomic_read(&timer_list->active_timers)) { return false; } =20 @@ -213,7 +213,7 @@ int64_t timerlist_deadline_ns(QEMUTimerList *timer_list) int64_t delta; int64_t expire_time; =20 - if (!atomic_read(&timer_list->active_timers)) { + if (!qemu_atomic_read(&timer_list->active_timers)) { return -1; } =20 @@ -385,7 +385,7 @@ static void timer_del_locked(QEMUTimerList *timer_list,= QEMUTimer *ts) if (!t) break; if (t =3D=3D ts) { - atomic_set(pt, t->next); + qemu_atomic_set(pt, t->next); break; } pt =3D &t->next; @@ -408,7 +408,7 @@ static bool timer_mod_ns_locked(QEMUTimerList *timer_li= st, } ts->expire_time =3D MAX(expire_time, 0); ts->next =3D *pt; - atomic_set(pt, ts); + qemu_atomic_set(pt, ts); =20 return pt =3D=3D &timer_list->active_timers; } @@ -502,7 +502,7 @@ bool timerlist_run_timers(QEMUTimerList *timer_list) QEMUTimerCB *cb; void *opaque; =20 - if (!atomic_read(&timer_list->active_timers)) { + if (!qemu_atomic_read(&timer_list->active_timers)) { return false; } =20 diff --git a/util/qht.c b/util/qht.c index 67e5d5b916..2cedc1ae35 100644 --- a/util/qht.c +++ b/util/qht.c @@ -131,11 +131,11 @@ static inline void qht_unlock(struct qht *ht) =20 /* * Note: reading partially-updated pointers in @pointers could lead to - * segfaults. We thus access them with atomic_read/set; this guarantees + * segfaults. We thus access them with qemu_atomic_read/set; this guarante= es * that the compiler makes all those accesses atomic. We also need the - * volatile-like behavior in atomic_read, since otherwise the compiler + * volatile-like behavior in qemu_atomic_read, since otherwise the compiler * might refetch the pointer. - * atomic_read's are of course not necessary when the bucket lock is held. + * qemu_atomic_read's are of course not necessary when the bucket lock is = held. * * If both ht->lock and b->lock are grabbed, ht->lock should always * be grabbed first. 
@@ -286,7 +286,7 @@ void qht_map_lock_buckets__no_stale(struct qht *ht, str= uct qht_map **pmap) { struct qht_map *map; =20 - map =3D atomic_rcu_read(&ht->map); + map =3D qemu_atomic_rcu_read(&ht->map); qht_map_lock_buckets(map); if (likely(!qht_map_is_stale__locked(ht, map))) { *pmap =3D map; @@ -318,7 +318,7 @@ struct qht_bucket *qht_bucket_lock__no_stale(struct qht= *ht, uint32_t hash, struct qht_bucket *b; struct qht_map *map; =20 - map =3D atomic_rcu_read(&ht->map); + map =3D qemu_atomic_rcu_read(&ht->map); b =3D qht_map_to_bucket(map, hash); =20 qemu_spin_lock(&b->lock); @@ -340,7 +340,8 @@ struct qht_bucket *qht_bucket_lock__no_stale(struct qht= *ht, uint32_t hash, =20 static inline bool qht_map_needs_resize(const struct qht_map *map) { - return atomic_read(&map->n_added_buckets) > map->n_added_buckets_thres= hold; + return qemu_atomic_read(&map->n_added_buckets) > + map->n_added_buckets_threshold; } =20 static inline void qht_chain_destroy(const struct qht_bucket *head) @@ -404,7 +405,7 @@ void qht_init(struct qht *ht, qht_cmp_func_t cmp, size_= t n_elems, ht->mode =3D mode; qemu_mutex_init(&ht->lock); map =3D qht_map_create(n_buckets); - atomic_rcu_set(&ht->map, map); + qemu_atomic_rcu_set(&ht->map, map); } =20 /* call only when there are no readers/writers left */ @@ -425,8 +426,8 @@ static void qht_bucket_reset__locked(struct qht_bucket = *head) if (b->pointers[i] =3D=3D NULL) { goto done; } - atomic_set(&b->hashes[i], 0); - atomic_set(&b->pointers[i], NULL); + qemu_atomic_set(&b->hashes[i], 0); + qemu_atomic_set(&b->pointers[i], NULL); } b =3D b->next; } while (b); @@ -492,19 +493,19 @@ void *qht_do_lookup(const struct qht_bucket *head, qh= t_lookup_func_t func, =20 do { for (i =3D 0; i < QHT_BUCKET_ENTRIES; i++) { - if (atomic_read(&b->hashes[i]) =3D=3D hash) { + if (qemu_atomic_read(&b->hashes[i]) =3D=3D hash) { /* The pointer is dereferenced before seqlock_read_retry, * so (unlike qht_insert__locked) we need to use - * atomic_rcu_read here. + * qemu_atomic_rcu_read here. */ - void *p =3D atomic_rcu_read(&b->pointers[i]); + void *p =3D qemu_atomic_rcu_read(&b->pointers[i]); =20 if (likely(p) && likely(func(p, userp))) { return p; } } } - b =3D atomic_rcu_read(&b->next); + b =3D qemu_atomic_rcu_read(&b->next); } while (b); =20 return NULL; @@ -532,7 +533,7 @@ void *qht_lookup_custom(const struct qht *ht, const voi= d *userp, uint32_t hash, unsigned int version; void *ret; =20 - map =3D atomic_rcu_read(&ht->map); + map =3D qemu_atomic_rcu_read(&ht->map); b =3D qht_map_to_bucket(map, hash); =20 version =3D seqlock_read_begin(&b->sequence); @@ -584,7 +585,7 @@ static void *qht_insert__locked(const struct qht *ht, s= truct qht_map *map, memset(b, 0, sizeof(*b)); new =3D b; i =3D 0; - atomic_inc(&map->n_added_buckets); + qemu_atomic_inc(&map->n_added_buckets); if (unlikely(qht_map_needs_resize(map)) && needs_resize) { *needs_resize =3D true; } @@ -593,11 +594,11 @@ static void *qht_insert__locked(const struct qht *ht,= struct qht_map *map, /* found an empty key: acquire the seqlock and write */ seqlock_write_begin(&head->sequence); if (new) { - atomic_rcu_set(&prev->next, b); + qemu_atomic_rcu_set(&prev->next, b); } /* smp_wmb() implicit in seqlock_write_begin. 
*/ - atomic_set(&b->hashes[i], hash); - atomic_set(&b->pointers[i], p); + qemu_atomic_set(&b->hashes[i], hash); + qemu_atomic_set(&b->pointers[i], p); seqlock_write_end(&head->sequence); return NULL; } @@ -668,11 +669,11 @@ qht_entry_move(struct qht_bucket *to, int i, struct q= ht_bucket *from, int j) qht_debug_assert(to->pointers[i]); qht_debug_assert(from->pointers[j]); =20 - atomic_set(&to->hashes[i], from->hashes[j]); - atomic_set(&to->pointers[i], from->pointers[j]); + qemu_atomic_set(&to->hashes[i], from->hashes[j]); + qemu_atomic_set(&to->pointers[i], from->pointers[j]); =20 - atomic_set(&from->hashes[j], 0); - atomic_set(&from->pointers[j], NULL); + qemu_atomic_set(&from->hashes[j], 0); + qemu_atomic_set(&from->pointers[j], NULL); } =20 /* @@ -687,7 +688,7 @@ static inline void qht_bucket_remove_entry(struct qht_b= ucket *orig, int pos) =20 if (qht_entry_is_last(orig, pos)) { orig->hashes[pos] =3D 0; - atomic_set(&orig->pointers[pos], NULL); + qemu_atomic_set(&orig->pointers[pos], NULL); return; } do { @@ -803,7 +804,7 @@ do_qht_iter(struct qht *ht, const struct qht_iter *iter= , void *userp) { struct qht_map *map; =20 - map =3D atomic_rcu_read(&ht->map); + map =3D qemu_atomic_rcu_read(&ht->map); qht_map_lock_buckets(map); qht_map_iter__all_locked(map, iter, userp); qht_map_unlock_buckets(map); @@ -876,7 +877,7 @@ static void qht_do_resize_reset(struct qht *ht, struct = qht_map *new, bool reset) qht_map_iter__all_locked(old, &iter, &data); qht_map_debug__all_locked(new); =20 - atomic_rcu_set(&ht->map, new); + qemu_atomic_rcu_set(&ht->map, new); qht_map_unlock_buckets(old); call_rcu(old, qht_map_destroy, rcu); } @@ -905,7 +906,7 @@ void qht_statistics_init(const struct qht *ht, struct q= ht_stats *stats) const struct qht_map *map; int i; =20 - map =3D atomic_rcu_read(&ht->map); + map =3D qemu_atomic_rcu_read(&ht->map); =20 stats->used_head_buckets =3D 0; stats->entries =3D 0; @@ -933,13 +934,13 @@ void qht_statistics_init(const struct qht *ht, struct= qht_stats *stats) b =3D head; do { for (j =3D 0; j < QHT_BUCKET_ENTRIES; j++) { - if (atomic_read(&b->pointers[j]) =3D=3D NULL) { + if (qemu_atomic_read(&b->pointers[j]) =3D=3D NULL) { break; } entries++; } buckets++; - b =3D atomic_rcu_read(&b->next); + b =3D qemu_atomic_rcu_read(&b->next); } while (b); } while (seqlock_read_retry(&head->sequence, version)); =20 diff --git a/util/qsp.c b/util/qsp.c index 7d5147f1b2..31ec2a2482 100644 --- a/util/qsp.c +++ b/util/qsp.c @@ -245,11 +245,11 @@ static void qsp_do_init(void) =20 static __attribute__((noinline)) void qsp_init__slowpath(void) { - if (atomic_cmpxchg(&qsp_initializing, false, true) =3D=3D false) { + if (qemu_atomic_cmpxchg(&qsp_initializing, false, true) =3D=3D false) { qsp_do_init(); - atomic_set(&qsp_initialized, true); + qemu_atomic_set(&qsp_initialized, true); } else { - while (!atomic_read(&qsp_initialized)) { + while (!qemu_atomic_read(&qsp_initialized)) { cpu_relax(); } } @@ -258,7 +258,7 @@ static __attribute__((noinline)) void qsp_init__slowpat= h(void) /* qsp_init() must be called from _all_ exported functions */ static inline void qsp_init(void) { - if (likely(atomic_read(&qsp_initialized))) { + if (likely(qemu_atomic_read(&qsp_initialized))) { return; } qsp_init__slowpath(); @@ -346,9 +346,9 @@ static QSPEntry *qsp_entry_get(const void *obj, const c= har *file, int line, */ static inline void do_qsp_entry_record(QSPEntry *e, int64_t delta, bool ac= q) { - atomic_set_u64(&e->ns, e->ns + delta); + qemu_atomic_set_u64(&e->ns, e->ns + delta); if (acq) { - 
atomic_set_u64(&e->n_acqs, e->n_acqs + 1); + qemu_atomic_set_u64(&e->n_acqs, e->n_acqs + 1); } } =20 @@ -432,29 +432,29 @@ qsp_cond_timedwait(QemuCond *cond, QemuMutex *mutex, = int ms, =20 bool qsp_is_enabled(void) { - return atomic_read(&qemu_mutex_lock_func) =3D=3D qsp_mutex_lock; + return qemu_atomic_read(&qemu_mutex_lock_func) =3D=3D qsp_mutex_lock; } =20 void qsp_enable(void) { - atomic_set(&qemu_mutex_lock_func, qsp_mutex_lock); - atomic_set(&qemu_mutex_trylock_func, qsp_mutex_trylock); - atomic_set(&qemu_bql_mutex_lock_func, qsp_bql_mutex_lock); - atomic_set(&qemu_rec_mutex_lock_func, qsp_rec_mutex_lock); - atomic_set(&qemu_rec_mutex_trylock_func, qsp_rec_mutex_trylock); - atomic_set(&qemu_cond_wait_func, qsp_cond_wait); - atomic_set(&qemu_cond_timedwait_func, qsp_cond_timedwait); + qemu_atomic_set(&qemu_mutex_lock_func, qsp_mutex_lock); + qemu_atomic_set(&qemu_mutex_trylock_func, qsp_mutex_trylock); + qemu_atomic_set(&qemu_bql_mutex_lock_func, qsp_bql_mutex_lock); + qemu_atomic_set(&qemu_rec_mutex_lock_func, qsp_rec_mutex_lock); + qemu_atomic_set(&qemu_rec_mutex_trylock_func, qsp_rec_mutex_trylock); + qemu_atomic_set(&qemu_cond_wait_func, qsp_cond_wait); + qemu_atomic_set(&qemu_cond_timedwait_func, qsp_cond_timedwait); } =20 void qsp_disable(void) { - atomic_set(&qemu_mutex_lock_func, qemu_mutex_lock_impl); - atomic_set(&qemu_mutex_trylock_func, qemu_mutex_trylock_impl); - atomic_set(&qemu_bql_mutex_lock_func, qemu_mutex_lock_impl); - atomic_set(&qemu_rec_mutex_lock_func, qemu_rec_mutex_lock_impl); - atomic_set(&qemu_rec_mutex_trylock_func, qemu_rec_mutex_trylock_impl); - atomic_set(&qemu_cond_wait_func, qemu_cond_wait_impl); - atomic_set(&qemu_cond_timedwait_func, qemu_cond_timedwait_impl); + qemu_atomic_set(&qemu_mutex_lock_func, qemu_mutex_lock_impl); + qemu_atomic_set(&qemu_mutex_trylock_func, qemu_mutex_trylock_impl); + qemu_atomic_set(&qemu_bql_mutex_lock_func, qemu_mutex_lock_impl); + qemu_atomic_set(&qemu_rec_mutex_lock_func, qemu_rec_mutex_lock_impl); + qemu_atomic_set(&qemu_rec_mutex_trylock_func, qemu_rec_mutex_trylock_i= mpl); + qemu_atomic_set(&qemu_cond_wait_func, qemu_cond_wait_impl); + qemu_atomic_set(&qemu_cond_timedwait_func, qemu_cond_timedwait_impl); } =20 static gint qsp_tree_cmp(gconstpointer ap, gconstpointer bp, gpointer up) @@ -538,8 +538,8 @@ static void qsp_aggregate(void *p, uint32_t h, void *up) * The entry is in the global hash table; read from it atomically (as = in * "read once"). */ - agg->ns +=3D atomic_read_u64(&e->ns); - agg->n_acqs +=3D atomic_read_u64(&e->n_acqs); + agg->ns +=3D qemu_atomic_read_u64(&e->ns); + agg->n_acqs +=3D qemu_atomic_read_u64(&e->n_acqs); } =20 static void qsp_iter_diff(void *p, uint32_t hash, void *htp) @@ -610,7 +610,7 @@ static void qsp_mktree(GTree *tree, bool callsite_coale= sce) * with the snapshot. 
*/ WITH_RCU_READ_LOCK_GUARD() { - QSPSnapshot *snap =3D atomic_rcu_read(&qsp_snapshot); + QSPSnapshot *snap =3D qemu_atomic_rcu_read(&qsp_snapshot); =20 /* Aggregate all results from the global hash table into a local o= ne */ qht_init(&ht, qsp_entry_no_thread_cmp, QSP_INITIAL_SIZE, @@ -806,7 +806,7 @@ void qsp_reset(void) qht_iter(&qsp_ht, qsp_aggregate, &new->ht); =20 /* replace the previous snapshot, if any */ - old =3D atomic_xchg(&qsp_snapshot, new); + old =3D qemu_atomic_xchg(&qsp_snapshot, new); if (old) { call_rcu(old, qsp_snapshot_destroy, rcu); } diff --git a/util/rcu.c b/util/rcu.c index c4fefa9333..92a14e0a0f 100644 --- a/util/rcu.c +++ b/util/rcu.c @@ -57,7 +57,7 @@ static inline int rcu_gp_ongoing(unsigned long *ctr) { unsigned long v; =20 - v =3D atomic_read(ctr); + v =3D qemu_atomic_read(ctr); return v && (v !=3D rcu_gp_ctr); } =20 @@ -82,14 +82,14 @@ static void wait_for_readers(void) */ qemu_event_reset(&rcu_gp_event); =20 - /* Instead of using atomic_mb_set for index->waiting, and - * atomic_mb_read for index->ctr, memory barriers are placed + /* Instead of using qemu_atomic_mb_set for index->waiting, and + * qemu_atomic_mb_read for index->ctr, memory barriers are placed * manually since writes to different threads are independent. * qemu_event_reset has acquire semantics, so no memory barrier * is needed here. */ QLIST_FOREACH(index, ®istry, node) { - atomic_set(&index->waiting, true); + qemu_atomic_set(&index->waiting, true); } =20 /* Here, order the stores to index->waiting before the loads of @@ -106,7 +106,7 @@ static void wait_for_readers(void) /* No need for mb_set here, worst of all we * get some extra futex wakeups. */ - atomic_set(&index->waiting, false); + qemu_atomic_set(&index->waiting, false); } } =20 @@ -151,7 +151,7 @@ void synchronize_rcu(void) =20 QEMU_LOCK_GUARD(&rcu_registry_lock); if (!QLIST_EMPTY(®istry)) { - /* In either case, the atomic_mb_set below blocks stores that free + /* In either case, the qemu_atomic_mb_set below blocks stores that= free * old RCU-protected pointers. */ if (sizeof(rcu_gp_ctr) < 8) { @@ -160,12 +160,12 @@ void synchronize_rcu(void) * * Switch parity: 0 -> 1, 1 -> 0. */ - atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR); + qemu_atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR); wait_for_readers(); - atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR); + qemu_atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr ^ RCU_GP_CTR); } else { /* Increment current grace period. */ - atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr + RCU_GP_CTR); + qemu_atomic_mb_set(&rcu_gp_ctr, rcu_gp_ctr + RCU_GP_CTR); } =20 wait_for_readers(); @@ -188,8 +188,8 @@ static void enqueue(struct rcu_head *node) struct rcu_head **old_tail; =20 node->next =3D NULL; - old_tail =3D atomic_xchg(&tail, &node->next); - atomic_mb_set(old_tail, node); + old_tail =3D qemu_atomic_xchg(&tail, &node->next); + qemu_atomic_mb_set(old_tail, node); } =20 static struct rcu_head *try_dequeue(void) @@ -203,7 +203,7 @@ retry: * The tail, because it is the first step in the enqueuing. * It is only the next pointers that might be inconsistent. */ - if (head =3D=3D &dummy && atomic_mb_read(&tail) =3D=3D &dummy.next) { + if (head =3D=3D &dummy && qemu_atomic_mb_read(&tail) =3D=3D &dummy.nex= t) { abort(); } =20 @@ -211,7 +211,7 @@ retry: * wrong and we need to wait until its enqueuer finishes the update. 
*/ node =3D head; - next =3D atomic_mb_read(&head->next); + next =3D qemu_atomic_mb_read(&head->next); if (!next) { return NULL; } @@ -240,7 +240,7 @@ static void *call_rcu_thread(void *opaque) =20 for (;;) { int tries =3D 0; - int n =3D atomic_read(&rcu_call_count); + int n =3D qemu_atomic_read(&rcu_call_count); =20 /* Heuristically wait for a decent number of callbacks to pile up. * Fetch rcu_call_count now, we only must process elements that we= re @@ -250,7 +250,7 @@ static void *call_rcu_thread(void *opaque) g_usleep(10000); if (n =3D=3D 0) { qemu_event_reset(&rcu_call_ready_event); - n =3D atomic_read(&rcu_call_count); + n =3D qemu_atomic_read(&rcu_call_count); if (n =3D=3D 0) { #if defined(CONFIG_MALLOC_TRIM) malloc_trim(4 * 1024 * 1024); @@ -258,10 +258,10 @@ static void *call_rcu_thread(void *opaque) qemu_event_wait(&rcu_call_ready_event); } } - n =3D atomic_read(&rcu_call_count); + n =3D qemu_atomic_read(&rcu_call_count); } =20 - atomic_sub(&rcu_call_count, n); + qemu_atomic_sub(&rcu_call_count, n); synchronize_rcu(); qemu_mutex_lock_iothread(); while (n > 0) { @@ -289,7 +289,7 @@ void call_rcu1(struct rcu_head *node, void (*func)(stru= ct rcu_head *node)) { node->func =3D func; enqueue(node); - atomic_inc(&rcu_call_count); + qemu_atomic_inc(&rcu_call_count); qemu_event_set(&rcu_call_ready_event); } =20 diff --git a/util/stats64.c b/util/stats64.c index 389c365a9e..a93c04dce0 100644 --- a/util/stats64.c +++ b/util/stats64.c @@ -18,27 +18,27 @@ static inline void stat64_rdlock(Stat64 *s) { /* Keep out incoming writers to avoid them starving us. */ - atomic_add(&s->lock, 2); + qemu_atomic_add(&s->lock, 2); =20 /* If there is a concurrent writer, wait for it. */ - while (atomic_read(&s->lock) & 1) { + while (qemu_atomic_read(&s->lock) & 1) { cpu_relax(); } } =20 static inline void stat64_rdunlock(Stat64 *s) { - atomic_sub(&s->lock, 2); + qemu_atomic_sub(&s->lock, 2); } =20 static inline bool stat64_wrtrylock(Stat64 *s) { - return atomic_cmpxchg(&s->lock, 0, 1) =3D=3D 0; + return qemu_atomic_cmpxchg(&s->lock, 0, 1) =3D=3D 0; } =20 static inline void stat64_wrunlock(Stat64 *s) { - atomic_dec(&s->lock); + qemu_atomic_dec(&s->lock); } =20 uint64_t stat64_get(const Stat64 *s) @@ -50,8 +50,8 @@ uint64_t stat64_get(const Stat64 *s) /* 64-bit writes always take the lock, so we can read in * any order. */ - high =3D atomic_read(&s->high); - low =3D atomic_read(&s->low); + high =3D qemu_atomic_read(&s->high); + low =3D qemu_atomic_read(&s->low); stat64_rdunlock((Stat64 *)s); =20 return ((uint64_t)high << 32) | low; @@ -70,9 +70,9 @@ bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t= high) * order of our update. By updating s->low first, we can check * whether we have to carry into s->high. */ - old =3D atomic_fetch_add(&s->low, low); + old =3D qemu_atomic_fetch_add(&s->low, low); high +=3D (old + low) < old; - atomic_add(&s->high, high); + qemu_atomic_add(&s->high, high); stat64_wrunlock(s); return true; } @@ -87,8 +87,8 @@ bool stat64_min_slow(Stat64 *s, uint64_t value) return false; } =20 - high =3D atomic_read(&s->high); - low =3D atomic_read(&s->low); + high =3D qemu_atomic_read(&s->high); + low =3D qemu_atomic_read(&s->low); =20 orig =3D ((uint64_t)high << 32) | low; if (value < orig) { @@ -98,9 +98,9 @@ bool stat64_min_slow(Stat64 *s, uint64_t value) * effect on stat64_min is that the slow path may be triggered * unnecessarily. 
*/ - atomic_set(&s->low, (uint32_t)value); + qemu_atomic_set(&s->low, (uint32_t)value); smp_wmb(); - atomic_set(&s->high, value >> 32); + qemu_atomic_set(&s->high, value >> 32); } stat64_wrunlock(s); return true; @@ -116,8 +116,8 @@ bool stat64_max_slow(Stat64 *s, uint64_t value) return false; } =20 - high =3D atomic_read(&s->high); - low =3D atomic_read(&s->low); + high =3D qemu_atomic_read(&s->high); + low =3D qemu_atomic_read(&s->low); =20 orig =3D ((uint64_t)high << 32) | low; if (value > orig) { @@ -127,9 +127,9 @@ bool stat64_max_slow(Stat64 *s, uint64_t value) * effect on stat64_max is that the slow path may be triggered * unnecessarily. */ - atomic_set(&s->low, (uint32_t)value); + qemu_atomic_set(&s->low, (uint32_t)value); smp_wmb(); - atomic_set(&s->high, value >> 32); + qemu_atomic_set(&s->high, value >> 32); } stat64_wrunlock(s); return true; diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst index 445c3b3503..07ad91654e 100644 --- a/docs/devel/atomics.rst +++ b/docs/devel/atomics.rst @@ -23,9 +23,9 @@ provides macros that fall in three camps: =20 - compiler barriers: ``barrier()``; =20 -- weak atomic access and manual memory barriers: ``atomic_read()``, - ``atomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``, ``smp_mb_a= cquire()``, - ``smp_mb_release()``, ``smp_read_barrier_depends()``; +- weak atomic access and manual memory barriers: ``qemu_atomic_read()``, + ``qemu_atomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``, + ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()= ``; =20 - sequentially consistent atomic access: everything else. =20 @@ -67,23 +67,23 @@ in the order specified by its program". ``qemu/atomic.h`` provides the following set of atomic read-modify-write operations:: =20 - void atomic_inc(ptr) - void atomic_dec(ptr) - void atomic_add(ptr, val) - void atomic_sub(ptr, val) - void atomic_and(ptr, val) - void atomic_or(ptr, val) + void qemu_atomic_inc(ptr) + void qemu_atomic_dec(ptr) + void qemu_atomic_add(ptr, val) + void qemu_atomic_sub(ptr, val) + void qemu_atomic_and(ptr, val) + void qemu_atomic_or(ptr, val) =20 - typeof(*ptr) atomic_fetch_inc(ptr) - typeof(*ptr) atomic_fetch_dec(ptr) - typeof(*ptr) atomic_fetch_add(ptr, val) - typeof(*ptr) atomic_fetch_sub(ptr, val) - typeof(*ptr) atomic_fetch_and(ptr, val) - typeof(*ptr) atomic_fetch_or(ptr, val) - typeof(*ptr) atomic_fetch_xor(ptr, val) - typeof(*ptr) atomic_fetch_inc_nonzero(ptr) - typeof(*ptr) atomic_xchg(ptr, val) - typeof(*ptr) atomic_cmpxchg(ptr, old, new) + typeof(*ptr) qemu_atomic_fetch_inc(ptr) + typeof(*ptr) qemu_atomic_fetch_dec(ptr) + typeof(*ptr) qemu_atomic_fetch_add(ptr, val) + typeof(*ptr) qemu_atomic_fetch_sub(ptr, val) + typeof(*ptr) qemu_atomic_fetch_and(ptr, val) + typeof(*ptr) qemu_atomic_fetch_or(ptr, val) + typeof(*ptr) qemu_atomic_fetch_xor(ptr, val) + typeof(*ptr) qemu_atomic_fetch_inc_nonzero(ptr) + typeof(*ptr) qemu_atomic_xchg(ptr, val) + typeof(*ptr) qemu_atomic_cmpxchg(ptr, old, new) =20 all of which return the old value of ``*ptr``. These operations are polymorphic; they operate on any type that is as wide as a pointer or @@ -91,19 +91,19 @@ smaller. 
=20 Similar operations return the new value of ``*ptr``:: =20 - typeof(*ptr) atomic_inc_fetch(ptr) - typeof(*ptr) atomic_dec_fetch(ptr) - typeof(*ptr) atomic_add_fetch(ptr, val) - typeof(*ptr) atomic_sub_fetch(ptr, val) - typeof(*ptr) atomic_and_fetch(ptr, val) - typeof(*ptr) atomic_or_fetch(ptr, val) - typeof(*ptr) atomic_xor_fetch(ptr, val) + typeof(*ptr) qemu_atomic_inc_fetch(ptr) + typeof(*ptr) qemu_atomic_dec_fetch(ptr) + typeof(*ptr) qemu_atomic_add_fetch(ptr, val) + typeof(*ptr) qemu_atomic_sub_fetch(ptr, val) + typeof(*ptr) qemu_atomic_and_fetch(ptr, val) + typeof(*ptr) qemu_atomic_or_fetch(ptr, val) + typeof(*ptr) qemu_atomic_xor_fetch(ptr, val) =20 ``qemu/atomic.h`` also provides loads and stores that cannot be reordered with each other:: =20 - typeof(*ptr) atomic_mb_read(ptr) - void atomic_mb_set(ptr, val) + typeof(*ptr) qemu_atomic_mb_read(ptr) + void qemu_atomic_mb_set(ptr, val) =20 However these do not provide sequential consistency and, in particular, they do not participate in the total ordering enforced by @@ -115,12 +115,12 @@ easiest to hardest): =20 - lightweight synchronization primitives such as ``QemuEvent`` =20 -- RCU operations (``atomic_rcu_read``, ``atomic_rcu_set``) when publishing - or accessing a new version of a data structure +- RCU operations (``qemu_atomic_rcu_read``, ``qemu_atomic_rcu_set``) when + publishing or accessing a new version of a data structure =20 -- other atomic accesses: ``atomic_read`` and ``atomic_load_acquire`` for - loads, ``atomic_set`` and ``atomic_store_release`` for stores, ``smp_mb`` - to forbid reordering subsequent loads before a store. +- other atomic accesses: ``qemu_atomic_read`` and ``qemu_atomic_load_acqui= re`` + for loads, ``qemu_atomic_set`` and ``qemu_atomic_store_release`` for sto= res, + ``smp_mb`` to forbid reordering subsequent loads before a store. =20 =20 Weak atomic access and manual memory barriers @@ -149,22 +149,22 @@ The only guarantees that you can rely upon in this ca= se are: =20 When using this model, variables are accessed with: =20 -- ``atomic_read()`` and ``atomic_set()``; these prevent the compiler from - optimizing accesses out of existence and creating unsolicited +- ``qemu_atomic_read()`` and ``qemu_atomic_set()``; these prevent the comp= iler + from optimizing accesses out of existence and creating unsolicited accesses, but do not otherwise impose any ordering on loads and stores: both the compiler and the processor are free to reorder them. =20 -- ``atomic_load_acquire()``, which guarantees the LOAD to appear to +- ``qemu_atomic_load_acquire()``, which guarantees the LOAD to appear to happen, with respect to the other components of the system, before all the LOAD or STORE operations specified afterwards. - Operations coming before ``atomic_load_acquire()`` can still be + Operations coming before ``qemu_atomic_load_acquire()`` can still be reordered after it. =20 -- ``atomic_store_release()``, which guarantees the STORE to appear to +- ``qemu_atomic_store_release()``, which guarantees the STORE to appear to happen, with respect to the other components of the system, after all the LOAD or STORE operations specified before. - Operations coming after ``atomic_store_release()`` can still be + Operations coming after ``qemu_atomic_store_release()`` can still be reordered before it. =20 Restrictions to the ordering of accesses can also be specified @@ -229,18 +229,18 @@ They come in six kinds: dependency and a full read barrier or better is required. 
=20 =20 -Memory barriers and ``atomic_load_acquire``/``atomic_store_release`` are -mostly used when a data structure has one thread that is always a writer +Memory barriers and ``qemu_atomic_load_acquire``/``qemu_atomic_store_relea= se`` +are mostly used when a data structure has one thread that is always a writ= er and one thread that is always a reader: =20 - +----------------------------------+----------------------------------+ - | thread 1 | thread 2 | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ - | :: | :: | - | | | - | atomic_store_release(&a, x); | y =3D atomic_load_acquire(&b); = | - | atomic_store_release(&b, y); | x =3D atomic_load_acquire(&a); = | - +----------------------------------+----------------------------------+ + +---------------------------------------+-----------------------------= ----------+ + | thread 1 | thread 2 = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D+ + | :: | :: = | + | | = | + | qemu_atomic_store_release(&a, x); | y =3D qemu_atomic_load_acq= uire(&b); | + | qemu_atomic_store_release(&b, y); | x =3D qemu_atomic_load_acq= uire(&a); | + +---------------------------------------+-----------------------------= ----------+ =20 In this case, correctness is easy to check for using the "pairing" trick that is explained below. @@ -251,54 +251,54 @@ thread, exactly one other thread will read or write e= ach of these variables). In this case, it is possible to "hoist" the barriers outside a loop. 
For example: =20 - +------------------------------------------+--------------------------= --------+ - | before | after = | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D+ - | :: | :: = | - | | = | - | n =3D 0; | n =3D 0; = | - | for (i =3D 0; i < 10; i++) | for (i =3D 0; i < 10;= i++) | - | n +=3D atomic_load_acquire(&a[i]); | n +=3D atomic_read(= &a[i]); | - | | smp_mb_acquire(); = | - +------------------------------------------+--------------------------= --------+ - | :: | :: = | - | | = | - | | smp_mb_release(); = | - | for (i =3D 0; i < 10; i++) | for (i =3D 0; i < 10;= i++) | - | atomic_store_release(&a[i], false); | atomic_set(&a[i], fal= se); | - +------------------------------------------+--------------------------= --------+ + +-----------------------------------------------+---------------------= ------------------+ + | before | after = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ + | :: | :: = | + | | = | + | n =3D 0; | n =3D 0; = | + | for (i =3D 0; i < 10; i++) | for (i =3D 0; i = < 10; i++) | + | n +=3D qemu_atomic_load_acquire(&a[i]); | n +=3D qemu_at= omic_read(&a[i]); | + | | smp_mb_acquire(); = | + +-----------------------------------------------+---------------------= ------------------+ + | :: | :: = | + | | = | + | | smp_mb_release(); = | + | for (i =3D 0; i < 10; i++) | for (i =3D 0; i = < 10; i++) | + | qemu_atomic_store_release(&a[i], false); | qemu_atomic_set(= &a[i], false); | + +-----------------------------------------------+---------------------= ------------------+ =20 Splitting a loop can also be useful to reduce the number of barriers: =20 - +------------------------------------------+--------------------------= --------+ - | before | after = | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D+ - | :: | :: = | - | | = | - | n =3D 0; | smp_mb_release(); = | - | for (i =3D 0; i < 10; i++) { | for (i =3D 0; i < 1= 0; i++) | - | atomic_store_release(&a[i], false); | atomic_set(&a[i], f= alse); | - | smp_mb(); | smb_mb(); = | - | n +=3D atomic_read(&b[i]); | n =3D 0; = | - | } | for (i =3D 0; i < 10;= i++) | - | | n +=3D atomic_read(= &b[i]); | - +------------------------------------------+--------------------------= --------+ + +-----------------------------------------------+---------------------= ------------------+ + | before | after = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ + | :: | :: = | + | | = | + | n =3D 0; | smp_mb_release= (); | + | for (i =3D 0; i < 10; i++) { | for (i =3D 0; = i < 10; i++) | + | qemu_atomic_store_release(&a[i], false); | qemu_atomic_se= t(&a[i], false); | + | smp_mb(); | smb_mb(); = | + | n +=3D qemu_atomic_read(&b[i]); | n =3D 0; = | + | } | for (i =3D 0; i = < 10; i++) | + | | n +=3D qemu_at= 
omic_read(&b[i]); | + +-----------------------------------------------+---------------------= ------------------+ =20 In this case, a ``smp_mb_release()`` is also replaced with a (possibly che= aper, and clearer as well) ``smp_wmb()``: =20 - +------------------------------------------+--------------------------= --------+ - | before | after = | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D+ - | :: | :: = | - | | = | - | | smp_mb_release(); = | - | for (i =3D 0; i < 10; i++) { | for (i =3D 0; i < 1= 0; i++) | - | atomic_store_release(&a[i], false); | atomic_set(&a[i], f= alse); | - | atomic_store_release(&b[i], false); | smb_wmb(); = | - | } | for (i =3D 0; i < 10;= i++) | - | | atomic_set(&b[i], f= alse); | - +------------------------------------------+--------------------------= --------+ + +-----------------------------------------------+---------------------= ------------------+ + | before | after = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ + | :: | :: = | + | | = | + | | smp_mb_release()= ; | + | for (i =3D 0; i < 10; i++) { | for (i =3D 0; = i < 10; i++) | + | qemu_atomic_store_release(&a[i], false); | qemu_atomic_se= t(&a[i], false); | + | qemu_atomic_store_release(&b[i], false); | smb_wmb(); = | + | } | for (i =3D 0; i = < 10; i++) | + | | qemu_atomic_se= t(&b[i], false); | + +-----------------------------------------------+---------------------= ------------------+ =20 =20 .. _acqrel: @@ -306,8 +306,8 @@ as well) ``smp_wmb()``: Acquire/release pairing and the *synchronizes-with* relation ------------------------------------------------------------ =20 -Atomic operations other than ``atomic_set()`` and ``atomic_read()`` have -either *acquire* or *release* semantics [#rmw]_. This has two effects: +Atomic operations other than ``qemu_atomic_set()`` and ``qemu_atomic_read(= )`` +have either *acquire* or *release* semantics [#rmw]_. This has two effect= s: =20 .. [#rmw] Read-modify-write operations can have both---acquire applies to = the read part, and release to the write. @@ -357,30 +357,30 @@ thread 2 is relying on the *synchronizes-with* relati= on between ``pthread_exit`` =20 Synchronization between threads basically descends from this pairing of a release operation and an acquire operation. Therefore, atomic operations -other than ``atomic_set()`` and ``atomic_read()`` will almost always be -paired with another operation of the opposite kind: an acquire operation +other than ``qemu_atomic_set()`` and ``qemu_atomic_read()`` will almost al= ways +be paired with another operation of the opposite kind: an acquire operation will pair with a release operation and vice versa. This rule of thumb is extremely useful; in the case of QEMU, however, note that the other operation may actually be in a driver that runs in the guest! =20 ``smp_read_barrier_depends()``, ``smp_rmb()``, ``smp_mb_acquire()``, -``atomic_load_acquire()`` and ``atomic_rcu_read()`` all count +``qemu_atomic_load_acquire()`` and ``qemu_atomic_rcu_read()`` all count as acquire operations. 
``smp_wmb()``, ``smp_mb_release()``, -``atomic_store_release()`` and ``atomic_rcu_set()`` all count as release -operations. ``smp_mb()`` counts as both acquire and release, therefore +``qemu_atomic_store_release()`` and ``qemu_atomic_rcu_set()`` all count as +release operations. ``smp_mb()`` counts as both acquire and release, ther= efore it can pair with any other atomic operation. Here is an example: =20 - +----------------------+------------------------------+ - | thread 1 | thread 2 | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D+ - | :: | :: | - | | | - | atomic_set(&a, 1); | | - | smp_wmb(); | | - | atomic_set(&b, 2); | x =3D atomic_read(&b); | - | | smp_rmb(); | - | | y =3D atomic_read(&a); | - +----------------------+------------------------------+ + +---------------------------+------------------------------+ + | thread 1 | thread 2 | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ + | :: | :: | + | | | + | qemu_atomic_set(&a, 1); | | + | smp_wmb(); | | + | qemu_atomic_set(&b, 2); | x =3D qemu_atomic_read(&b); | + | | smp_rmb(); | + | | y =3D qemu_atomic_read(&a); | + +---------------------------+------------------------------+ =20 Note that a load-store pair only counts if the two operations access the same variable: that is, a store-release on a variable ``x`` *synchronizes @@ -388,15 +388,15 @@ with* a load-acquire on a variable ``x``, while a rel= ease barrier synchronizes with any acquire operation. The following example shows correct synchronization: =20 - +--------------------------------+--------------------------------+ - | thread 1 | thread 2 | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ - | :: | :: | - | | | - | atomic_set(&a, 1); | | - | atomic_store_release(&b, 2); | x =3D atomic_load_acquire(&b); | - | | y =3D atomic_read(&a); | - +--------------------------------+--------------------------------+ + +-------------------------------------+-----------------------------= --------+ + | thread 1 | thread 2 = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D+ + | :: | :: = | + | | = | + | qemu_atomic_set(&a, 1); | = | + | qemu_atomic_store_release(&b, 2); | x =3D qemu_atomic_load_acq= uire(&b); | + | | y =3D qemu_atomic_read(&a)= ; | + +-------------------------------------+-----------------------------= --------+ =20 Acquire and release semantics of higher-level primitives can also be relied upon for the purpose of establishing the *synchronizes with* @@ -412,21 +412,21 @@ Finally, this more complex example has more than two = accesses and data dependency barriers. 
It also does not use atomic accesses whenever there cannot be a data race: =20 - +----------------------+------------------------------+ - | thread 1 | thread 2 | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D+ - | :: | :: | - | | | - | b[2] =3D 1; | | - | smp_wmb(); | | - | x->i =3D 2; | | - | smp_wmb(); | | - | atomic_set(&a, x); | x =3D atomic_read(&a); | - | | smp_read_barrier_depends(); | - | | y =3D x->i; | - | | smp_read_barrier_depends(); | - | | z =3D b[y]; | - +----------------------+------------------------------+ + +---------------------------+------------------------------+ + | thread 1 | thread 2 | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ + | :: | :: | + | | | + | b[2] =3D 1; | | + | smp_wmb(); | | + | x->i =3D 2; | | + | smp_wmb(); | | + | qemu_atomic_set(&a, x); | x =3D qemu_atomic_read(&a); | + | | smp_read_barrier_depends(); | + | | y =3D x->i; | + | | smp_read_barrier_depends(); | + | | z =3D b[y]; | + +---------------------------+------------------------------+ =20 Comparison with Linux kernel primitives =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D @@ -438,50 +438,50 @@ and memory barriers, and the equivalents in QEMU: use a boxed ``atomic_t`` type; atomic operations in QEMU are polymorphic and use normal C types. =20 -- Originally, ``atomic_read`` and ``atomic_set`` in Linux gave no guarantee - at all. Linux 4.1 updated them to implement volatile +- Originally, ``qemu_atomic_read`` and ``qemu_atomic_set`` in Linux gave no + guarantee at all. Linux 4.1 updated them to implement volatile semantics via ``ACCESS_ONCE`` (or the more recent ``READ``/``WRITE_ONCE`= `). =20 - QEMU's ``atomic_read`` and ``atomic_set`` implement C11 atomic relaxed - semantics if the compiler supports it, and volatile semantics otherwise. - Both semantics prevent the compiler from doing certain transformations; - the difference is that atomic accesses are guaranteed to be atomic, - while volatile accesses aren't. Thus, in the volatile case we just cross - our fingers hoping that the compiler will generate atomic accesses, - since we assume the variables passed are machine-word sized and - properly aligned. + QEMU's ``qemu_atomic_read`` and ``qemu_atomic_set`` implement C11 atomic + relaxed semantics if the compiler supports it, and volatile semantics + otherwise. Both semantics prevent the compiler from doing certain + transformations; the difference is that atomic accesses are guaranteed t= o be + atomic, while volatile accesses aren't. Thus, in the volatile case we ju= st + cross our fingers hoping that the compiler will generate atomic accesses, + since we assume the variables passed are machine-word sized and properly + aligned. =20 - No barriers are implied by ``atomic_read`` and ``atomic_set`` in either = Linux - or QEMU. + No barriers are implied by ``qemu_atomic_read`` and ``qemu_atomic_set`` = in + either Linux or QEMU. 
=20 - atomic read-modify-write operations in Linux are of three kinds: =20 - =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D = =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D - ``atomic_OP`` returns void - ``atomic_OP_return`` returns new value of the variable - ``atomic_fetch_OP`` returns the old value of the variable - ``atomic_cmpxchg`` returns the old value of the variable - =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D = =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + ``atomic_OP`` returns void + ``atomic_OP_return`` returns new value of the variable + ``atomic_fetch_OP`` returns the old value of the variable + ``atomic_cmpxchg`` returns the old value of the variable + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =20 - In QEMU, the second kind is named ``atomic_OP_fetch``. + In QEMU, the second kind is named ``qemu_atomic_OP_fetch``. =20 - different atomic read-modify-write operations in Linux imply a different set of memory barriers; in QEMU, all of them enforce sequential consistency. =20 -- in QEMU, ``atomic_read()`` and ``atomic_set()`` do not participate in - the total ordering enforced by sequentially-consistent operations. +- in QEMU, ``qemu_atomic_read()`` and ``qemu_atomic_set()`` do not partici= pate + in the total ordering enforced by sequentially-consistent operations. This is because QEMU uses the C11 memory model. The following example is correct in Linux but not in QEMU: =20 - +----------------------------------+--------------------------------+ - | Linux (correct) | QEMU (incorrect) | - +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ - | :: | :: | - | | | - | a =3D atomic_fetch_add(&x, 2); | a =3D atomic_fetch_add(&x, = 2); | - | b =3D READ_ONCE(&y); | b =3D atomic_read(&y); = | - +----------------------------------+--------------------------------+ + +-------------------------------------+-----------------------------= --------+ + | Linux (correct) | QEMU (incorrect) = | + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D+ + | :: | :: = | + | | = | + | a =3D qemu_atomic_fetch_add(&x, 2); | a =3D qemu_atomic_fetch_= add(&x, 2); | + | b =3D READ_ONCE(&y); | b =3D qemu_atomic_read(&= y); | + +-------------------------------------+-----------------------------= --------+ =20 because the read of ``y`` can be moved (by either the processor or the compiler) before the write of ``x``. 
@@ -495,10 +495,10 @@ and memory barriers, and the equivalents in QEMU: +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D+ | :: | | | - | a =3D atomic_read(&x); | - | atomic_set(&x, a + 2); | + | a =3D qemu_atomic_read(&x); | + | qemu_atomic_set(&x, a + 2); | | smp_mb(); | - | b =3D atomic_read(&y); | + | b =3D qemu_atomic_read(&y); | +--------------------------------+ =20 Sources diff --git a/scripts/kernel-doc b/scripts/kernel-doc index 030b5c8691..9ec38a1bf1 100755 --- a/scripts/kernel-doc +++ b/scripts/kernel-doc @@ -1625,7 +1625,7 @@ sub dump_function($$) { # If you mess with these regexps, it's a good idea to check that # the following functions' documentation still comes out right: # - parport_register_device (function pointer parameters) - # - atomic_set (macro) + # - qemu_atomic_set (macro) # - pci_match_device, __copy_to_user (long return type) =20 if ($define && $prototype =3D~ m/^()([a-zA-Z0-9_~:]+)\s+/) { diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 948c35d825..33d90c6da3 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1365,7 +1365,7 @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintp= tr_t jmp_addr, i2 =3D I3401_ADDI | rt << 31 | (addr & 0xfff) << 10 | rd << 5 | rd; } pair =3D (uint64_t)i2 << 32 | i1; - atomic_set((uint64_t *)jmp_addr, pair); + qemu_atomic_set((uint64_t *)jmp_addr, pair); flush_icache_range(jmp_addr, jmp_addr + 8); } =20 diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index bd5b8e09a0..364aa2f64a 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -2662,7 +2662,7 @@ static void tcg_target_init(TCGContext *s) void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr, uintptr_t addr) { - atomic_set((uint32_t *)jmp_addr, deposit32(OPC_J, 0, 26, addr >> 2)); + qemu_atomic_set((uint32_t *)jmp_addr, deposit32(OPC_J, 0, 26, addr >> = 2)); flush_icache_range(jmp_addr, jmp_addr + 4); } =20 diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 393c4b30e0..21accf60fe 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -1756,13 +1756,13 @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uin= tptr_t jmp_addr, #endif =20 /* As per the enclosing if, this is ppc64. Avoid the _Static_asse= rt - within atomic_set that would fail to build a ppc32 host. */ - atomic_set__nocheck((uint64_t *)jmp_addr, pair); + within qemu_atomic_set that would fail to build a ppc32 host. 
= */ + qemu_atomic_set__nocheck((uint64_t *)jmp_addr, pair); flush_icache_range(jmp_addr, jmp_addr + 8); } else { intptr_t diff =3D addr - jmp_addr; tcg_debug_assert(in_range_b(diff)); - atomic_set((uint32_t *)jmp_addr, B | (diff & 0x3fffffc)); + qemu_atomic_set((uint32_t *)jmp_addr, B | (diff & 0x3fffffc)); flush_icache_range(jmp_addr, jmp_addr + 4); } } diff --git a/tcg/sparc/tcg-target.c.inc b/tcg/sparc/tcg-target.c.inc index 0f1d91fc21..c24fb403da 100644 --- a/tcg/sparc/tcg-target.c.inc +++ b/tcg/sparc/tcg-target.c.inc @@ -1839,7 +1839,8 @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintp= tr_t jmp_addr, tcg_debug_assert(br_disp =3D=3D (int32_t)br_disp); =20 if (!USE_REG_TB) { - atomic_set((uint32_t *)jmp_addr, deposit32(CALL, 0, 30, br_disp >>= 2)); + qemu_atomic_set((uint32_t *)jmp_addr, + deposit32(CALL, 0, 30, br_disp >> 2)); flush_icache_range(jmp_addr, jmp_addr + 4); return; } @@ -1863,6 +1864,6 @@ void tb_target_set_jmp_target(uintptr_t tc_ptr, uintp= tr_t jmp_addr, | INSN_IMM13((tb_disp & 0x3ff) | -0x400)); } =20 - atomic_set((uint64_t *)jmp_addr, deposit64(i2, 32, 32, i1)); + qemu_atomic_set((uint64_t *)jmp_addr, deposit64(i2, 32, 32, i1)); flush_icache_range(jmp_addr, jmp_addr + 8); } --=20 2.26.2
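As an aside, a minimal sketch of why the prefixed names can coexist with <stdatomic.h> in a single translation unit. This is not taken from qemu/atomic.h; the qemu_atomic_* macros below are simplified stand-ins built directly on the GCC/clang __atomic builtins, and the counter names are purely illustrative:

    /* Simplified stand-ins for qemu/atomic.h, for illustration only. */
    #include <stdatomic.h>
    #include <stdio.h>

    #define qemu_atomic_read(ptr) \
        __atomic_load_n(ptr, __ATOMIC_RELAXED)
    #define qemu_atomic_fetch_add(ptr, val) \
        __atomic_fetch_add(ptr, val, __ATOMIC_SEQ_CST)

    static unsigned plain_counter;        /* plain C type, QEMU style */
    static _Atomic unsigned c11_counter;  /* C11 atomic type */

    int main(void)
    {
        /* Prefixed wrapper: operates on a plain unsigned. */
        qemu_atomic_fetch_add(&plain_counter, 1);

        /* C11 library function: requires the _Atomic-qualified object. */
        atomic_fetch_add(&c11_counter, 1);

        printf("%u %u\n", qemu_atomic_read(&plain_counter),
               (unsigned)atomic_load(&c11_counter));
        return 0;
    }

Passing &plain_counter to the unprefixed atomic_fetch_add() in this file would reproduce the "address argument to atomic operation must be a pointer to _Atomic type" diagnostic quoted in the commit message, which is exactly the collision the qemu_ prefix avoids.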