From nobody Wed Nov 12 03:41:40 2025
David Alan Gilbert (git)" To: qemu-devel@nongnu.org, pbonzini@redhat.com, ehabkost@redhat.com, berrange@redhat.com, quintela@redhat.com Date: Fri, 13 Sep 2019 11:25:34 +0100 Message-Id: <20190913102538.24167-2-dgilbert@redhat.com> In-Reply-To: <20190913102538.24167-1-dgilbert@redhat.com> References: <20190913102538.24167-1-dgilbert@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.25]); Fri, 13 Sep 2019 10:25:43 +0000 (UTC) Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: "Dr. David Alan Gilbert" RCU_READ_LOCK_GUARD() takes the rcu_read_lock and then uses glib's g_auto infrastructure (and thus whatever the compiler's hooks are) to release it on all exits of the block. WITH_RCU_READ_LOCK_GUARD() is similar but is used as a wrapper for the lock, i.e.: WITH_RCU_READ_LOCK_GUARD() { stuff under lock } Signed-off-by: Dr. David Alan Gilbert Acked-by: Paolo Bonzini Reviewed-by: Daniel P. Berrang=C3=A9 --- docs/devel/rcu.txt | 16 ++++++++++++++++ include/qemu/rcu.h | 25 +++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt index c84e7f42b2..d83fed2f79 100644 --- a/docs/devel/rcu.txt +++ b/docs/devel/rcu.txt @@ -187,6 +187,22 @@ The following APIs must be used before RCU is used in = a thread: Note that these APIs are relatively heavyweight, and should _not_ be nested. =20 +Convenience macros +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + +Two macros are provided that automatically release the read lock at the +end of the scope. + + RCU_READ_LOCK_GUARD() + + Takes the lock and will release it at the end of the block it's + used in. + + WITH_RCU_READ_LOCK_GUARD() { code } + + Is used at the head of a block to protect the code within the blo= ck. + +Note that 'goto'ing out of the guarded block will also drop the lock. 
 
 DIFFERENCES WITH LINUX
 ======================
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 22876d1428..3a8d4cf28b 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -154,6 +154,31 @@ extern void call_rcu1(struct rcu_head *head, RCUCBFunc *func);
       }),                                                      \
       (RCUCBFunc *)g_free);
 
+typedef void RCUReadAuto;
+static inline RCUReadAuto *rcu_read_auto_lock(void)
+{
+    rcu_read_lock();
+    /* Anything non-NULL causes the cleanup function to be called */
+    return (void *)(uintptr_t)0x1;
+}
+
+static inline void rcu_read_auto_unlock(RCUReadAuto *r)
+{
+    rcu_read_unlock();
+}
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
+
+#define WITH_RCU_READ_LOCK_GUARD() \
+    WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
+
+#define WITH_RCU_READ_LOCK_GUARD_(var) \
+    for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
+        (var); rcu_read_auto_unlock(var), (var) = NULL)
+
+#define RCU_READ_LOCK_GUARD() \
+    g_autoptr(RCUReadAuto) _rcu_read_auto = rcu_read_auto_lock()
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.21.0
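
A minimal usage sketch of the two guard macros introduced above (the
function names here are hypothetical; the behaviour shown is the one
documented in the patch):

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    /* Whole-function guard: the RCU read lock is taken here and released
     * automatically on every return path. */
    static int count_entries(void)        /* hypothetical function */
    {
        int count = 0;

        RCU_READ_LOCK_GUARD();
        /* ... walk RCU-protected data; early returns are safe ... */
        return count;
    }

    /* Block-scoped guard: only the braced region is protected; leaving it
     * by falling off the end, 'break', 'goto' or 'return' drops the lock. */
    static void refresh_entries(void)     /* hypothetical function */
    {
        WITH_RCU_READ_LOCK_GUARD() {
            /* ... read RCU-protected data here ... */
        }
        /* the read lock is already released at this point */
    }
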
From nobody Wed Nov 12 03:41:40 2025
From: "Dr. David Alan Gilbert (git)"
To: qemu-devel@nongnu.org, pbonzini@redhat.com, ehabkost@redhat.com,
    berrange@redhat.com, quintela@redhat.com
Date: Fri, 13 Sep 2019 11:25:35 +0100
Message-Id: <20190913102538.24167-3-dgilbert@redhat.com>
In-Reply-To: <20190913102538.24167-1-dgilbert@redhat.com>
References: <20190913102538.24167-1-dgilbert@redhat.com>
Subject: [Qemu-devel] [PATCH v3 2/5] migration: Fix missing rcu_read_unlock

From: "Dr. David Alan Gilbert"

Use the automatic rcu_read unlocker to fix a missing unlock.

Signed-off-by: Dr. David Alan Gilbert
Reviewed-by: Daniel P. Berrangé
---
 migration/ram.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b2bd618a89..cff35477ec 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3445,28 +3445,27 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     }
     (*rsp)->f = f;
 
-    rcu_read_lock();
-
-    qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
+    WITH_RCU_READ_LOCK_GUARD() {
+        qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
 
-    RAMBLOCK_FOREACH_MIGRATABLE(block) {
-        if (!block->idstr[0]) {
-            error_report("%s: RAMBlock with empty name", __func__);
-            return -1;
-        }
-        qemu_put_byte(f, strlen(block->idstr));
-        qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
-        qemu_put_be64(f, block->used_length);
-        if (migrate_postcopy_ram() && block->page_size != qemu_host_page_size) {
-            qemu_put_be64(f, block->page_size);
-        }
-        if (migrate_ignore_shared()) {
-            qemu_put_be64(f, block->mr->addr);
+        RAMBLOCK_FOREACH_MIGRATABLE(block) {
+            if (!block->idstr[0]) {
+                error_report("%s: RAMBlock with empty name", __func__);
+                return -1;
+            }
+            qemu_put_byte(f, strlen(block->idstr));
+            qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
+            qemu_put_be64(f, block->used_length);
+            if (migrate_postcopy_ram() && block->page_size !=
+                qemu_host_page_size) {
+                qemu_put_be64(f, block->page_size);
+            }
+            if (migrate_ignore_shared()) {
+                qemu_put_be64(f, block->mr->addr);
+            }
         }
     }
 
-    rcu_read_unlock();
-
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-- 
2.21.0
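
The class of bug being fixed here is an early return taken while the RCU
read lock is still held. A minimal before/after sketch (the functions and
the validate() helper are hypothetical, not the actual ram_save_setup()
code):

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    static bool validate(void);         /* hypothetical helper */

    /* Before: the early return leaks the RCU read lock. */
    static int scan_blocks_buggy(void)
    {
        rcu_read_lock();
        if (!validate()) {
            return -1;                  /* BUG: rcu_read_unlock() never runs */
        }
        rcu_read_unlock();
        return 0;
    }

    /* After: every exit path, including the early return, drops the lock. */
    static int scan_blocks_fixed(void)
    {
        WITH_RCU_READ_LOCK_GUARD() {
            if (!validate()) {
                return -1;              /* lock released automatically */
            }
        }
        return 0;
    }
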
From nobody Wed Nov 12 03:41:40 2025
From: "Dr. 
David Alan Gilbert (git)" To: qemu-devel@nongnu.org, pbonzini@redhat.com, ehabkost@redhat.com, berrange@redhat.com, quintela@redhat.com Date: Fri, 13 Sep 2019 11:25:36 +0100 Message-Id: <20190913102538.24167-4-dgilbert@redhat.com> In-Reply-To: <20190913102538.24167-1-dgilbert@redhat.com> References: <20190913102538.24167-1-dgilbert@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.6.2 (mx1.redhat.com [10.5.110.62]); Fri, 13 Sep 2019 10:25:46 +0000 (UTC) Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: "Dr. David Alan Gilbert" Use the automatic read unlocker in migration/ram.c Signed-off-by: Dr. David Alan Gilbert Reviewed-by: Daniel P. Berrang=C3=A9 --- migration/ram.c | 260 ++++++++++++++++++++++-------------------------- 1 file changed, 121 insertions(+), 139 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index cff35477ec..6c5f0199fd 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -181,14 +181,14 @@ int foreach_not_ignored_block(RAMBlockIterFunc func, = void *opaque) RAMBlock *block; int ret =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); + RAMBLOCK_FOREACH_NOT_IGNORED(block) { ret =3D func(block, opaque); if (ret) { break; } } - rcu_read_unlock(); return ret; } =20 @@ -1849,12 +1849,12 @@ static void migration_bitmap_sync(RAMState *rs) memory_global_dirty_log_sync(); =20 qemu_mutex_lock(&rs->bitmap_mutex); - rcu_read_lock(); - RAMBLOCK_FOREACH_NOT_IGNORED(block) { - ramblock_sync_dirty_bitmap(rs, block); + WITH_RCU_READ_LOCK_GUARD() { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + ramblock_sync_dirty_bitmap(rs, block); + } + ram_counters.remaining =3D ram_bytes_remaining(); } - ram_counters.remaining =3D ram_bytes_remaining(); - rcu_read_unlock(); qemu_mutex_unlock(&rs->bitmap_mutex); =20 memory_global_after_dirty_log_sync(); @@ -2398,13 +2398,12 @@ static void migration_page_queue_free(RAMState *rs) /* This queue generally should be empty - but in the case of a failed * migration might have some droppings in. 
*/ - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_msp= r) { memory_region_unref(mspr->rb->mr); QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req); g_free(mspr); } - rcu_read_unlock(); } =20 /** @@ -2425,7 +2424,8 @@ int ram_save_queue_pages(const char *rbname, ram_addr= _t start, ram_addr_t len) RAMState *rs =3D ram_state; =20 ram_counters.postcopy_requests++; - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); + if (!rbname) { /* Reuse last RAMBlock */ ramblock =3D rs->last_req_rb; @@ -2467,12 +2467,10 @@ int ram_save_queue_pages(const char *rbname, ram_ad= dr_t start, ram_addr_t len) QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req); migration_make_urgent_request(); qemu_mutex_unlock(&rs->src_page_req_mutex); - rcu_read_unlock(); =20 return 0; =20 err: - rcu_read_unlock(); return -1; } =20 @@ -2712,7 +2710,8 @@ static uint64_t ram_bytes_total_common(bool count_ign= ored) RAMBlock *block; uint64_t total =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); + if (count_ignored) { RAMBLOCK_FOREACH_MIGRATABLE(block) { total +=3D block->used_length; @@ -2722,7 +2721,6 @@ static uint64_t ram_bytes_total_common(bool count_ign= ored) total +=3D block->used_length; } } - rcu_read_unlock(); return total; } =20 @@ -3086,7 +3084,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *= ms) RAMBlock *block; int ret; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); =20 /* This should be our last sync, the src is now paused */ migration_bitmap_sync(rs); @@ -3107,13 +3105,11 @@ int ram_postcopy_send_discard_bitmap(MigrationState= *ms) * point. */ error_report("migration ram resized during precopy phase"); - rcu_read_unlock(); return -EINVAL; } /* Deal with TPS !=3D HPS and huge pages */ ret =3D postcopy_chunk_hostpages(ms, block); if (ret) { - rcu_read_unlock(); return ret; } =20 @@ -3128,7 +3124,6 @@ int ram_postcopy_send_discard_bitmap(MigrationState *= ms) trace_ram_postcopy_send_discard_bitmap(); =20 ret =3D postcopy_each_ram_send_discard(ms); - rcu_read_unlock(); =20 return ret; } @@ -3149,7 +3144,7 @@ int ram_discard_range(const char *rbname, uint64_t st= art, size_t length) =20 trace_ram_discard_range(rbname, start, length); =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); RAMBlock *rb =3D qemu_ram_block_by_name(rbname); =20 if (!rb) { @@ -3169,8 +3164,6 @@ int ram_discard_range(const char *rbname, uint64_t st= art, size_t length) ret =3D ram_block_discard_range(rb, start, length); =20 err: - rcu_read_unlock(); - return ret; } =20 @@ -3303,13 +3296,12 @@ static void ram_init_bitmaps(RAMState *rs) /* For memory_global_dirty_log_start below. 
*/ qemu_mutex_lock_iothread(); qemu_mutex_lock_ramlist(); - rcu_read_lock(); =20 - ram_list_init_bitmaps(); - memory_global_dirty_log_start(); - migration_bitmap_sync_precopy(rs); - - rcu_read_unlock(); + WITH_RCU_READ_LOCK_GUARD() { + ram_list_init_bitmaps(); + memory_global_dirty_log_start(); + migration_bitmap_sync_precopy(rs); + } qemu_mutex_unlock_ramlist(); qemu_mutex_unlock_iothread(); } @@ -3500,55 +3492,57 @@ static int ram_save_iterate(QEMUFile *f, void *opaq= ue) goto out; } =20 - rcu_read_lock(); - if (ram_list.version !=3D rs->last_version) { - ram_state_reset(rs); - } - - /* Read version before ram_list.blocks */ - smp_rmb(); + WITH_RCU_READ_LOCK_GUARD() { + if (ram_list.version !=3D rs->last_version) { + ram_state_reset(rs); + } =20 - ram_control_before_iterate(f, RAM_CONTROL_ROUND); + /* Read version before ram_list.blocks */ + smp_rmb(); =20 - t0 =3D qemu_clock_get_ns(QEMU_CLOCK_REALTIME); - i =3D 0; - while ((ret =3D qemu_file_rate_limit(f)) =3D=3D 0 || - !QSIMPLEQ_EMPTY(&rs->src_page_requests)) { - int pages; + ram_control_before_iterate(f, RAM_CONTROL_ROUND); =20 - if (qemu_file_get_error(f)) { - break; - } + t0 =3D qemu_clock_get_ns(QEMU_CLOCK_REALTIME); + i =3D 0; + while ((ret =3D qemu_file_rate_limit(f)) =3D=3D 0 || + !QSIMPLEQ_EMPTY(&rs->src_page_requests)) { + int pages; =20 - pages =3D ram_find_and_save_block(rs, false); - /* no more pages to sent */ - if (pages =3D=3D 0) { - done =3D 1; - break; - } + if (qemu_file_get_error(f)) { + break; + } =20 - if (pages < 0) { - qemu_file_set_error(f, pages); - break; - } + pages =3D ram_find_and_save_block(rs, false); + /* no more pages to sent */ + if (pages =3D=3D 0) { + done =3D 1; + break; + } =20 - rs->target_page_count +=3D pages; - - /* we want to check in the 1st loop, just in case it was the 1st t= ime - and we had to sync the dirty bitmap. - qemu_clock_get_ns() is a bit expensive, so we only check each s= ome - iterations - */ - if ((i & 63) =3D=3D 0) { - uint64_t t1 =3D (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - t0) = / 1000000; - if (t1 > MAX_WAIT) { - trace_ram_save_iterate_big_wait(t1, i); + if (pages < 0) { + qemu_file_set_error(f, pages); break; } + + rs->target_page_count +=3D pages; + + /* + * we want to check in the 1st loop, just in case it was the 1= st + * time and we had to sync the dirty bitmap. 
+ * qemu_clock_get_ns() is a bit expensive, so we only check ea= ch + * some iterations + */ + if ((i & 63) =3D=3D 0) { + uint64_t t1 =3D (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - = t0) / + 1000000; + if (t1 > MAX_WAIT) { + trace_ram_save_iterate_big_wait(t1, i); + break; + } + } + i++; } - i++; } - rcu_read_unlock(); =20 /* * Must occur before EOS (or any QEMUFile operation) @@ -3586,35 +3580,33 @@ static int ram_save_complete(QEMUFile *f, void *opa= que) RAMState *rs =3D *temp; int ret =3D 0; =20 - rcu_read_lock(); - - if (!migration_in_postcopy()) { - migration_bitmap_sync_precopy(rs); - } + WITH_RCU_READ_LOCK_GUARD() { + if (!migration_in_postcopy()) { + migration_bitmap_sync_precopy(rs); + } =20 - ram_control_before_iterate(f, RAM_CONTROL_FINISH); + ram_control_before_iterate(f, RAM_CONTROL_FINISH); =20 - /* try transferring iterative blocks of memory */ + /* try transferring iterative blocks of memory */ =20 - /* flush all remaining blocks regardless of rate limiting */ - while (true) { - int pages; + /* flush all remaining blocks regardless of rate limiting */ + while (true) { + int pages; =20 - pages =3D ram_find_and_save_block(rs, !migration_in_colo_state()); - /* no more blocks to sent */ - if (pages =3D=3D 0) { - break; - } - if (pages < 0) { - ret =3D pages; - break; + pages =3D ram_find_and_save_block(rs, !migration_in_colo_state= ()); + /* no more blocks to sent */ + if (pages =3D=3D 0) { + break; + } + if (pages < 0) { + ret =3D pages; + break; + } } - } - - flush_compressed_data(rs); - ram_control_after_iterate(f, RAM_CONTROL_FINISH); =20 - rcu_read_unlock(); + flush_compressed_data(rs); + ram_control_after_iterate(f, RAM_CONTROL_FINISH); + } =20 multifd_send_sync_main(rs); qemu_put_be64(f, RAM_SAVE_FLAG_EOS); @@ -3637,9 +3629,9 @@ static void ram_save_pending(QEMUFile *f, void *opaqu= e, uint64_t max_size, if (!migration_in_postcopy() && remaining_size < max_size) { qemu_mutex_lock_iothread(); - rcu_read_lock(); - migration_bitmap_sync_precopy(rs); - rcu_read_unlock(); + WITH_RCU_READ_LOCK_GUARD() { + migration_bitmap_sync_precopy(rs); + } qemu_mutex_unlock_iothread(); remaining_size =3D rs->migration_dirty_pages * TARGET_PAGE_SIZE; } @@ -3983,7 +3975,13 @@ int colo_init_ram_cache(void) error_report("%s: Can't alloc memory for COLO cache of block %= s," "size 0x" RAM_ADDR_FMT, __func__, block->idstr, block->used_length); - goto out_locked; + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + if (block->colo_cache) { + qemu_anon_ram_free(block->colo_cache, block->used_leng= th); + block->colo_cache =3D NULL; + } + } + return -errno; } memcpy(block->colo_cache, block->host, block->used_length); } @@ -4009,18 +4007,6 @@ int colo_init_ram_cache(void) memory_global_dirty_log_start(); =20 return 0; - -out_locked: - - RAMBLOCK_FOREACH_NOT_IGNORED(block) { - if (block->colo_cache) { - qemu_anon_ram_free(block->colo_cache, block->used_length); - block->colo_cache =3D NULL; - } - } - - rcu_read_unlock(); - return -errno; } =20 /* It is need to hold the global lock to call this helper */ @@ -4034,16 +4020,14 @@ void colo_release_ram_cache(void) block->bmap =3D NULL; } =20 - rcu_read_lock(); - - RAMBLOCK_FOREACH_NOT_IGNORED(block) { - if (block->colo_cache) { - qemu_anon_ram_free(block->colo_cache, block->used_length); - block->colo_cache =3D NULL; + WITH_RCU_READ_LOCK_GUARD() { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + if (block->colo_cache) { + qemu_anon_ram_free(block->colo_cache, block->used_length); + block->colo_cache =3D NULL; + } } } - - rcu_read_unlock(); 
qemu_mutex_destroy(&ram_state->bitmap_mutex); g_free(ram_state); ram_state =3D NULL; @@ -4281,31 +4265,30 @@ static void colo_flush_ram_cache(void) unsigned long offset =3D 0; =20 memory_global_dirty_log_sync(); - rcu_read_lock(); - RAMBLOCK_FOREACH_NOT_IGNORED(block) { - ramblock_sync_dirty_bitmap(ram_state, block); + WITH_RCU_READ_LOCK_GUARD() { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + ramblock_sync_dirty_bitmap(ram_state, block); + } } - rcu_read_unlock(); =20 trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages); - rcu_read_lock(); - block =3D QLIST_FIRST_RCU(&ram_list.blocks); + WITH_RCU_READ_LOCK_GUARD() { + block =3D QLIST_FIRST_RCU(&ram_list.blocks); =20 - while (block) { - offset =3D migration_bitmap_find_dirty(ram_state, block, offset); + while (block) { + offset =3D migration_bitmap_find_dirty(ram_state, block, offse= t); =20 - if (offset << TARGET_PAGE_BITS >=3D block->used_length) { - offset =3D 0; - block =3D QLIST_NEXT_RCU(block, next); - } else { - migration_bitmap_clear_dirty(ram_state, block, offset); - dst_host =3D block->host + (offset << TARGET_PAGE_BITS); - src_host =3D block->colo_cache + (offset << TARGET_PAGE_BITS); - memcpy(dst_host, src_host, TARGET_PAGE_SIZE); + if (offset << TARGET_PAGE_BITS >=3D block->used_length) { + offset =3D 0; + block =3D QLIST_NEXT_RCU(block, next); + } else { + migration_bitmap_clear_dirty(ram_state, block, offset); + dst_host =3D block->host + (offset << TARGET_PAGE_BITS); + src_host =3D block->colo_cache + (offset << TARGET_PAGE_BI= TS); + memcpy(dst_host, src_host, TARGET_PAGE_SIZE); + } } } - - rcu_read_unlock(); trace_colo_flush_ram_cache_end(); } =20 @@ -4504,16 +4487,15 @@ static int ram_load(QEMUFile *f, void *opaque, int = version_id) * it will be necessary to reduce the granularity of this * critical section. 
      */
-    rcu_read_lock();
+    WITH_RCU_READ_LOCK_GUARD() {
+        if (postcopy_running) {
+            ret = ram_load_postcopy(f);
+        } else {
+            ret = ram_load_precopy(f);
+        }
 
-    if (postcopy_running) {
-        ret = ram_load_postcopy(f);
-    } else {
-        ret = ram_load_precopy(f);
+        ret |= wait_for_decompress_done();
     }
-
-    ret |= wait_for_decompress_done();
-    rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
 
     if (!ret && migration_incoming_in_colo_state()) {
-- 
2.21.0

From nobody Wed Nov 12 03:41:40 2025
From: "Dr. 
David Alan Gilbert (git)" To: qemu-devel@nongnu.org, pbonzini@redhat.com, ehabkost@redhat.com, berrange@redhat.com, quintela@redhat.com Date: Fri, 13 Sep 2019 11:25:37 +0100 Message-Id: <20190913102538.24167-5-dgilbert@redhat.com> In-Reply-To: <20190913102538.24167-1-dgilbert@redhat.com> References: <20190913102538.24167-1-dgilbert@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.42]); Fri, 13 Sep 2019 10:25:48 +0000 (UTC) Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: "Dr. David Alan Gilbert" Use the automatic read unlocker in migration/rdma.c. Signed-off-by: Dr. David Alan Gilbert Reviewed-by: Daniel P. Berrang=C3=A9 --- migration/rdma.c | 57 ++++++++++-------------------------------------- 1 file changed, 11 insertions(+), 46 deletions(-) diff --git a/migration/rdma.c b/migration/rdma.c index 78e6b72bac..5c9054721d 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -88,7 +88,6 @@ static uint32_t known_capabilities =3D RDMA_CAPABILITY_PI= N_ALL; " to abort!"); \ rdma->error_reported =3D 1; \ } \ - rcu_read_unlock(); \ return rdma->error_state; \ } \ } while (0) @@ -2678,11 +2677,10 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *= ioc, size_t i; size_t len =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmaout); =20 if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 @@ -2695,7 +2693,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *io= c, ret =3D qemu_rdma_write_flush(f, rdma); if (ret < 0) { rdma->error_state =3D ret; - rcu_read_unlock(); return ret; } =20 @@ -2715,7 +2712,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *io= c, =20 if (ret < 0) { rdma->error_state =3D ret; - rcu_read_unlock(); return ret; } =20 @@ -2724,7 +2720,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *io= c, } } =20 - rcu_read_unlock(); return done; } =20 @@ -2764,11 +2759,10 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *i= oc, ssize_t i; size_t done =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 @@ -2805,7 +2799,6 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc, =20 if (ret < 0) { rdma->error_state =3D ret; - rcu_read_unlock(); return ret; } =20 @@ -2819,14 +2812,12 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *i= oc, /* Still didn't get enough, so lets just return */ if (want) { if (done =3D=3D 0) { - rcu_read_unlock(); return QIO_CHANNEL_ERR_BLOCK; } else { break; } } } - rcu_read_unlock(); return done; } =20 @@ -2882,7 +2873,7 @@ qio_channel_rdma_source_prepare(GSource *source, GIOCondition cond =3D 0; *timeout =3D -1; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); } else { @@ -2891,7 +2882,6 @@ qio_channel_rdma_source_prepare(GSource *source, =20 if (!rdma) { error_report("RDMAContext is NULL when prepare Gsource"); - 
rcu_read_unlock(); return FALSE; } =20 @@ -2900,7 +2890,6 @@ qio_channel_rdma_source_prepare(GSource *source, } cond |=3D G_IO_OUT; =20 - rcu_read_unlock(); return cond & rsource->condition; } =20 @@ -2911,7 +2900,7 @@ qio_channel_rdma_source_check(GSource *source) RDMAContext *rdma; GIOCondition cond =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); } else { @@ -2920,7 +2909,6 @@ qio_channel_rdma_source_check(GSource *source) =20 if (!rdma) { error_report("RDMAContext is NULL when check Gsource"); - rcu_read_unlock(); return FALSE; } =20 @@ -2929,7 +2917,6 @@ qio_channel_rdma_source_check(GSource *source) } cond |=3D G_IO_OUT; =20 - rcu_read_unlock(); return cond & rsource->condition; } =20 @@ -2943,7 +2930,7 @@ qio_channel_rdma_source_dispatch(GSource *source, RDMAContext *rdma; GIOCondition cond =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); if (rsource->condition =3D=3D G_IO_IN) { rdma =3D atomic_rcu_read(&rsource->rioc->rdmain); } else { @@ -2952,7 +2939,6 @@ qio_channel_rdma_source_dispatch(GSource *source, =20 if (!rdma) { error_report("RDMAContext is NULL when dispatch Gsource"); - rcu_read_unlock(); return FALSE; } =20 @@ -2961,7 +2947,6 @@ qio_channel_rdma_source_dispatch(GSource *source, } cond |=3D G_IO_OUT; =20 - rcu_read_unlock(); return (*func)(QIO_CHANNEL(rsource->rioc), (cond & rsource->condition), user_data); @@ -3058,7 +3043,7 @@ qio_channel_rdma_shutdown(QIOChannel *ioc, QIOChannelRDMA *rioc =3D QIO_CHANNEL_RDMA(ioc); RDMAContext *rdmain, *rdmaout; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); =20 rdmain =3D atomic_rcu_read(&rioc->rdmain); rdmaout =3D atomic_rcu_read(&rioc->rdmain); @@ -3085,7 +3070,6 @@ qio_channel_rdma_shutdown(QIOChannel *ioc, break; } =20 - rcu_read_unlock(); return 0; } =20 @@ -3131,18 +3115,16 @@ static size_t qemu_rdma_save_page(QEMUFile *f, void= *opaque, RDMAContext *rdma; int ret; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmaout); =20 if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 CHECK_ERROR_STATE(); =20 if (migration_in_postcopy()) { - rcu_read_unlock(); return RAM_SAVE_CONTROL_NOT_SUPP; } =20 @@ -3227,11 +3209,9 @@ static size_t qemu_rdma_save_page(QEMUFile *f, void = *opaque, } } =20 - rcu_read_unlock(); return RAM_SAVE_CONTROL_DELAYED; err: rdma->error_state =3D ret; - rcu_read_unlock(); return ret; } =20 @@ -3451,11 +3431,10 @@ static int qemu_rdma_registration_handle(QEMUFile *= f, void *opaque) int count =3D 0; int i =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 @@ -3698,7 +3677,6 @@ out: if (ret < 0) { rdma->error_state =3D ret; } - rcu_read_unlock(); return ret; } =20 @@ -3716,11 +3694,10 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc= , const char *name) int curr; int found =3D -1; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmain); =20 if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 @@ -3734,7 +3711,6 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, = const char *name) =20 if (found =3D=3D -1) { error_report("RAMBlock '%s' not found on destination", name); - rcu_read_unlock(); return -ENOENT; } =20 @@ -3742,7 +3718,6 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, = const char *name) trace_rdma_block_notification_handle(name, rdma->next_src_index); rdma->next_src_index++; =20 - rcu_read_unlock(); return 0; } =20 @@ 
-3767,17 +3742,15 @@ static int qemu_rdma_registration_start(QEMUFile *f= , void *opaque, QIOChannelRDMA *rioc =3D QIO_CHANNEL_RDMA(opaque); RDMAContext *rdma; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmaout); if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 CHECK_ERROR_STATE(); =20 if (migration_in_postcopy()) { - rcu_read_unlock(); return 0; } =20 @@ -3785,7 +3758,6 @@ static int qemu_rdma_registration_start(QEMUFile *f, = void *opaque, qemu_put_be64(f, RAM_SAVE_FLAG_HOOK); qemu_fflush(f); =20 - rcu_read_unlock(); return 0; } =20 @@ -3802,17 +3774,15 @@ static int qemu_rdma_registration_stop(QEMUFile *f,= void *opaque, RDMAControlHeader head =3D { .len =3D 0, .repeat =3D 1 }; int ret =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); rdma =3D atomic_rcu_read(&rioc->rdmaout); if (!rdma) { - rcu_read_unlock(); return -EIO; } =20 CHECK_ERROR_STATE(); =20 if (migration_in_postcopy()) { - rcu_read_unlock(); return 0; } =20 @@ -3844,7 +3814,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, v= oid *opaque, qemu_rdma_reg_whole_ram_blocks : NULL); if (ret < 0) { ERROR(errp, "receiving remote info!"); - rcu_read_unlock(); return ret; } =20 @@ -3868,7 +3837,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, v= oid *opaque, "not identical on both the source and destination.= ", local->nb_blocks, nb_dest_blocks); rdma->error_state =3D -EINVAL; - rcu_read_unlock(); return -EINVAL; } =20 @@ -3885,7 +3853,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, v= oid *opaque, local->block[i].length, rdma->dest_blocks[i].length); rdma->error_state =3D -EINVAL; - rcu_read_unlock(); return -EINVAL; } local->block[i].remote_host_addr =3D @@ -3903,11 +3870,9 @@ static int qemu_rdma_registration_stop(QEMUFile *f, = void *opaque, goto err; } =20 - rcu_read_unlock(); return 0; err: rdma->error_state =3D ret; - rcu_read_unlock(); return ret; } =20 --=20 2.21.0 From nobody Wed Nov 12 03:41:40 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1568370671; cv=none; d=zoho.com; s=zohoarc; b=eJ3iL5S0FTNL/IdhclhrIeGGeGueO9g35gCABKS6MhJ5GjvaYusYzFBVfr6N4TGInoqGgei6DesjVgBbHLyg7JqQzzjeO0e88Nv8a1LwG7yCp20Gr51y2zQ+M9Y4xSW2vNJo8FrPKL3lxZnkGtRucRgiUed3j5uVJIwbinHLxZY= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1568370671; h=Content-Transfer-Encoding:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To:ARC-Authentication-Results; bh=fj3SDiPP71HseUCVDOKgnwbmxUEMpczAOqKE+FSTKmg=; b=E1LqL+tejlSPCIowkXQgJVLWyBc2o3q6jaNZJQCKQwXlHBkCouYv2VQTBbcN/DNoAVrEAHp/GcxYe5H03aF9ut5hJ9c/EPBCKHzvgX41gP76wXnlTLHI61LxWnNxDTLCTJakpGGupyDuBgXmbI6JOW+5+tnCOeuIZQErp3N7l6E= ARC-Authentication-Results: i=1; mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by 
mx.zohomail.com with SMTPS id 1568370671018471.1600265397499; Fri, 13 Sep 2019 03:31:11 -0700 (PDT) Received: from localhost ([::1]:42198 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1i8ir7-0005FU-Km for importer@patchew.org; Fri, 13 Sep 2019 06:31:09 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:35220) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1i8im0-0008Ge-Uq for qemu-devel@nongnu.org; Fri, 13 Sep 2019 06:25:55 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1i8ily-0005rg-Gm for qemu-devel@nongnu.org; Fri, 13 Sep 2019 06:25:52 -0400 Received: from mx1.redhat.com ([209.132.183.28]:43992) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1i8ily-0005rI-8T for qemu-devel@nongnu.org; Fri, 13 Sep 2019 06:25:50 -0400 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 98B9610DCC90 for ; Fri, 13 Sep 2019 10:25:49 +0000 (UTC) Received: from dgilbert-t580.localhost (unknown [10.36.118.12]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4AF255D9E1; Fri, 13 Sep 2019 10:25:48 +0000 (UTC) From: "Dr. David Alan Gilbert (git)" To: qemu-devel@nongnu.org, pbonzini@redhat.com, ehabkost@redhat.com, berrange@redhat.com, quintela@redhat.com Date: Fri, 13 Sep 2019 11:25:38 +0100 Message-Id: <20190913102538.24167-6-dgilbert@redhat.com> In-Reply-To: <20190913102538.24167-1-dgilbert@redhat.com> References: <20190913102538.24167-1-dgilbert@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.6.2 (mx1.redhat.com [10.5.110.64]); Fri, 13 Sep 2019 10:25:49 +0000 (UTC) Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PATCH v3 5/5] rcu: Use automatic rc_read unlock in core memory/exec code X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: "Dr. David Alan Gilbert" Signed-off-by: Dr. David Alan Gilbert Reviewed-by: Daniel P. 
Berrang=C3=A9 --- exec.c | 120 +++++++++++++++------------------- include/exec/ram_addr.h | 138 +++++++++++++++++++--------------------- memory.c | 15 ++--- 3 files changed, 120 insertions(+), 153 deletions(-) diff --git a/exec.c b/exec.c index 235d6bc883..e75be06819 100644 --- a/exec.c +++ b/exec.c @@ -1034,16 +1034,14 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwad= dr addr, MemTxAttrs attrs) return; } =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); mr =3D address_space_translate(as, addr, &addr, &l, false, attrs); if (!(memory_region_is_ram(mr) || memory_region_is_romd(mr))) { - rcu_read_unlock(); return; } ram_addr =3D memory_region_get_ram_addr(mr) + addr; tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0); - rcu_read_unlock(); } =20 static void breakpoint_invalidate(CPUState *cpu, target_ulong pc) @@ -1329,14 +1327,13 @@ static void tlb_reset_dirty_range_all(ram_addr_t st= art, ram_addr_t length) end =3D TARGET_PAGE_ALIGN(start + length); start &=3D TARGET_PAGE_MASK; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); block =3D qemu_get_ram_block(start); assert(block =3D=3D qemu_get_ram_block(end - 1)); start1 =3D (uintptr_t)ramblock_ptr(block, start - block->offset); CPU_FOREACH(cpu) { tlb_reset_dirty(cpu, start1, length); } - rcu_read_unlock(); } =20 /* Note: start and end must be within the same ram block. */ @@ -1357,30 +1354,29 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_a= ddr_t start, end =3D TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS; page =3D start >> TARGET_PAGE_BITS; =20 - rcu_read_lock(); - - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); - ramblock =3D qemu_get_ram_block(start); - /* Range sanity check on the ramblock */ - assert(start >=3D ramblock->offset && - start + length <=3D ramblock->offset + ramblock->used_length); - - while (page < end) { - unsigned long idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; - unsigned long offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; - unsigned long num =3D MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - of= fset); + WITH_RCU_READ_LOCK_GUARD() { + blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + ramblock =3D qemu_get_ram_block(start); + /* Range sanity check on the ramblock */ + assert(start >=3D ramblock->offset && + start + length <=3D ramblock->offset + ramblock->used_lengt= h); + + while (page < end) { + unsigned long idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; + unsigned long offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; + unsigned long num =3D MIN(end - page, + DIRTY_MEMORY_BLOCK_SIZE - offset); + + dirty |=3D bitmap_test_and_clear_atomic(blocks->blocks[idx], + offset, num); + page +=3D num; + } =20 - dirty |=3D bitmap_test_and_clear_atomic(blocks->blocks[idx], - offset, num); - page +=3D num; + mr_offset =3D (ram_addr_t)(page << TARGET_PAGE_BITS) - ramblock->o= ffset; + mr_size =3D (end - page) << TARGET_PAGE_BITS; + memory_region_clear_dirty_bitmap(ramblock->mr, mr_offset, mr_size); } =20 - mr_offset =3D (ram_addr_t)(page << TARGET_PAGE_BITS) - ramblock->offse= t; - mr_size =3D (end - page) << TARGET_PAGE_BITS; - memory_region_clear_dirty_bitmap(ramblock->mr, mr_offset, mr_size); - - rcu_read_unlock(); - if (dirty && tcg_enabled()) { tlb_reset_dirty_range_all(start, length); } @@ -1408,28 +1404,27 @@ DirtyBitmapSnapshot *cpu_physical_memory_snapshot_a= nd_clear_dirty end =3D last >> TARGET_PAGE_BITS; dest =3D 0; =20 - rcu_read_lock(); - - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + WITH_RCU_READ_LOCK_GUARD() { + blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); =20 - 
while (page < end) { - unsigned long idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; - unsigned long offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; - unsigned long num =3D MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - of= fset); + while (page < end) { + unsigned long idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; + unsigned long offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; + unsigned long num =3D MIN(end - page, + DIRTY_MEMORY_BLOCK_SIZE - offset); =20 - assert(QEMU_IS_ALIGNED(offset, (1 << BITS_PER_LEVEL))); - assert(QEMU_IS_ALIGNED(num, (1 << BITS_PER_LEVEL))); - offset >>=3D BITS_PER_LEVEL; + assert(QEMU_IS_ALIGNED(offset, (1 << BITS_PER_LEVEL))); + assert(QEMU_IS_ALIGNED(num, (1 << BITS_PER_LEVEL))); + offset >>=3D BITS_PER_LEVEL; =20 - bitmap_copy_and_clear_atomic(snap->dirty + dest, - blocks->blocks[idx] + offset, - num); - page +=3D num; - dest +=3D num >> BITS_PER_LEVEL; + bitmap_copy_and_clear_atomic(snap->dirty + dest, + blocks->blocks[idx] + offset, + num); + page +=3D num; + dest +=3D num >> BITS_PER_LEVEL; + } } =20 - rcu_read_unlock(); - if (tcg_enabled()) { tlb_reset_dirty_range_all(start, length); } @@ -1661,7 +1656,7 @@ void ram_block_dump(Monitor *mon) RAMBlock *block; char *psize; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); monitor_printf(mon, "%24s %8s %18s %18s %18s\n", "Block Name", "PSize", "Offset", "Used", "Total"); RAMBLOCK_FOREACH(block) { @@ -1673,7 +1668,6 @@ void ram_block_dump(Monitor *mon) (uint64_t)block->max_length); g_free(psize); } - rcu_read_unlock(); } =20 #ifdef __linux__ @@ -1995,11 +1989,10 @@ static unsigned long last_ram_page(void) RAMBlock *block; ram_addr_t last =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); RAMBLOCK_FOREACH(block) { last =3D MAX(last, block->offset + block->max_length); } - rcu_read_unlock(); return last >> TARGET_PAGE_BITS; } =20 @@ -2086,7 +2079,7 @@ void qemu_ram_set_idstr(RAMBlock *new_block, const ch= ar *name, DeviceState *dev) } pstrcat(new_block->idstr, sizeof(new_block->idstr), name); =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); RAMBLOCK_FOREACH(block) { if (block !=3D new_block && !strcmp(block->idstr, new_block->idstr)) { @@ -2095,7 +2088,6 @@ void qemu_ram_set_idstr(RAMBlock *new_block, const ch= ar *name, DeviceState *dev) abort(); } } - rcu_read_unlock(); } =20 /* Called with iothread lock held. 
*/ @@ -2637,17 +2629,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool = round_offset, =20 if (xen_enabled()) { ram_addr_t ram_addr; - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); ram_addr =3D xen_ram_addr_from_mapcache(ptr); block =3D qemu_get_ram_block(ram_addr); if (block) { *offset =3D ram_addr - block->offset; } - rcu_read_unlock(); return block; } =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); block =3D atomic_rcu_read(&ram_list.mru_block); if (block && block->host && host - block->host < block->max_length) { goto found; @@ -2663,7 +2654,6 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool ro= und_offset, } } =20 - rcu_read_unlock(); return NULL; =20 found: @@ -2671,7 +2661,6 @@ found: if (round_offset) { *offset &=3D TARGET_PAGE_MASK; } - rcu_read_unlock(); return block; } =20 @@ -3380,10 +3369,9 @@ MemTxResult address_space_read_full(AddressSpace *as= , hwaddr addr, FlatView *fv; =20 if (len > 0) { - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); fv =3D address_space_to_flatview(as); result =3D flatview_read(fv, addr, attrs, buf, len); - rcu_read_unlock(); } =20 return result; @@ -3397,10 +3385,9 @@ MemTxResult address_space_write(AddressSpace *as, hw= addr addr, FlatView *fv; =20 if (len > 0) { - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); fv =3D address_space_to_flatview(as); result =3D flatview_write(fv, addr, attrs, buf, len); - rcu_read_unlock(); } =20 return result; @@ -3440,7 +3427,7 @@ static inline MemTxResult address_space_write_rom_int= ernal(AddressSpace *as, hwaddr addr1; MemoryRegion *mr; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); while (len > 0) { l =3D len; mr =3D address_space_translate(as, addr, &addr1, &l, true, attrs); @@ -3465,7 +3452,6 @@ static inline MemTxResult address_space_write_rom_int= ernal(AddressSpace *as, buf +=3D l; addr +=3D l; } - rcu_read_unlock(); return MEMTX_OK; } =20 @@ -3610,10 +3596,9 @@ bool address_space_access_valid(AddressSpace *as, hw= addr addr, FlatView *fv; bool result; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); fv =3D address_space_to_flatview(as); result =3D flatview_access_valid(fv, addr, len, is_write, attrs); - rcu_read_unlock(); return result; } =20 @@ -3668,13 +3653,12 @@ void *address_space_map(AddressSpace *as, } =20 l =3D len; - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); fv =3D address_space_to_flatview(as); mr =3D flatview_translate(fv, addr, &xlat, &l, is_write, attrs); =20 if (!memory_access_is_direct(mr, is_write)) { if (atomic_xchg(&bounce.in_use, true)) { - rcu_read_unlock(); return NULL; } /* Avoid unbounded allocations */ @@ -3690,7 +3674,6 @@ void *address_space_map(AddressSpace *as, bounce.buffer, l); } =20 - rcu_read_unlock(); *plen =3D l; return bounce.buffer; } @@ -3700,7 +3683,6 @@ void *address_space_map(AddressSpace *as, *plen =3D flatview_extend_translation(fv, addr, len, mr, xlat, l, is_write, attrs); ptr =3D qemu_ram_ptr_length(mr->ram_block, xlat, plen, true); - rcu_read_unlock(); =20 return ptr; } @@ -3968,13 +3950,12 @@ bool cpu_physical_memory_is_io(hwaddr phys_addr) hwaddr l =3D 1; bool res; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); mr =3D address_space_translate(&address_space_memory, phys_addr, &phys_addr, &l, false, MEMTXATTRS_UNSPECIFIED); =20 res =3D !(memory_region_is_ram(mr) || memory_region_is_romd(mr)); - rcu_read_unlock(); return res; } =20 @@ -3983,14 +3964,13 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, v= oid *opaque) RAMBlock *block; int ret =3D 0; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); RAMBLOCK_FOREACH(block) { ret =3D func(block, opaque); if (ret) { 
break; } } - rcu_read_unlock(); return ret; } =20 diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h index a327a80cfe..be76b0bfba 100644 --- a/include/exec/ram_addr.h +++ b/include/exec/ram_addr.h @@ -199,30 +199,29 @@ static inline bool cpu_physical_memory_get_dirty(ram_= addr_t start, end =3D TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS; page =3D start >> TARGET_PAGE_BITS; =20 - rcu_read_lock(); - - blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + WITH_RCU_READ_LOCK_GUARD() { + blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); + + idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; + offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; + base =3D page - offset; + while (page < end) { + unsigned long next =3D MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE= ); + unsigned long num =3D next - base; + unsigned long found =3D find_next_bit(blocks->blocks[idx], + num, offset); + if (found < num) { + dirty =3D true; + break; + } =20 - idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; - offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; - base =3D page - offset; - while (page < end) { - unsigned long next =3D MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE); - unsigned long num =3D next - base; - unsigned long found =3D find_next_bit(blocks->blocks[idx], num, of= fset); - if (found < num) { - dirty =3D true; - break; + page =3D next; + idx++; + offset =3D 0; + base +=3D DIRTY_MEMORY_BLOCK_SIZE; } - - page =3D next; - idx++; - offset =3D 0; - base +=3D DIRTY_MEMORY_BLOCK_SIZE; } =20 - rcu_read_unlock(); - return dirty; } =20 @@ -240,7 +239,7 @@ static inline bool cpu_physical_memory_all_dirty(ram_ad= dr_t start, end =3D TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS; page =3D start >> TARGET_PAGE_BITS; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); =20 blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); =20 @@ -262,8 +261,6 @@ static inline bool cpu_physical_memory_all_dirty(ram_ad= dr_t start, base +=3D DIRTY_MEMORY_BLOCK_SIZE; } =20 - rcu_read_unlock(); - return dirty; } =20 @@ -315,13 +312,11 @@ static inline void cpu_physical_memory_set_dirty_flag= (ram_addr_t addr, idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); =20 blocks =3D atomic_rcu_read(&ram_list.dirty_memory[client]); =20 set_bit_atomic(offset, blocks->blocks[idx]); - - rcu_read_unlock(); } =20 static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start, @@ -340,39 +335,37 @@ static inline void cpu_physical_memory_set_dirty_rang= e(ram_addr_t start, end =3D TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS; page =3D start >> TARGET_PAGE_BITS; =20 - rcu_read_lock(); + WITH_RCU_READ_LOCK_GUARD() { + for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { + blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i]); + } =20 - for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { - blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i]); - } + idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; + offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; + base =3D page - offset; + while (page < end) { + unsigned long next =3D MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE= ); =20 - idx =3D page / DIRTY_MEMORY_BLOCK_SIZE; - offset =3D page % DIRTY_MEMORY_BLOCK_SIZE; - base =3D page - offset; - while (page < end) { - unsigned long next =3D MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE); + if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) { + bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[i= dx], + offset, next - page); + } + if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) { + 
bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx], + offset, next - page); + } + if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) { + bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx], + offset, next - page); + } =20 - if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) { - bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[idx], - offset, next - page); + page =3D next; + idx++; + offset =3D 0; + base +=3D DIRTY_MEMORY_BLOCK_SIZE; } - if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) { - bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx], - offset, next - page); - } - if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) { - bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx], - offset, next - page); - } - - page =3D next; - idx++; - offset =3D 0; - base +=3D DIRTY_MEMORY_BLOCK_SIZE; } =20 - rcu_read_unlock(); - xen_hvm_modified_memory(start, length); } =20 @@ -402,36 +395,35 @@ static inline void cpu_physical_memory_set_dirty_lebi= tmap(unsigned long *bitmap, offset =3D BIT_WORD((start >> TARGET_PAGE_BITS) % DIRTY_MEMORY_BLOCK_SIZE); =20 - rcu_read_lock(); + WITH_RCU_READ_LOCK_GUARD() { + for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { + blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i])->= blocks; + } =20 - for (i =3D 0; i < DIRTY_MEMORY_NUM; i++) { - blocks[i] =3D atomic_rcu_read(&ram_list.dirty_memory[i])->bloc= ks; - } + for (k =3D 0; k < nr; k++) { + if (bitmap[k]) { + unsigned long temp =3D leul_to_cpu(bitmap[k]); =20 - for (k =3D 0; k < nr; k++) { - if (bitmap[k]) { - unsigned long temp =3D leul_to_cpu(bitmap[k]); + atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp= ); =20 - atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp); + if (global_dirty_log) { + atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][off= set], + temp); + } =20 - if (global_dirty_log) { - atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset], - temp); + if (tcg_enabled()) { + atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], + temp); + } } =20 - if (tcg_enabled()) { - atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], tem= p); + if (++offset >=3D BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) { + offset =3D 0; + idx++; } } - - if (++offset >=3D BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) { - offset =3D 0; - idx++; - } } =20 - rcu_read_unlock(); - xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS); } else { uint8_t clients =3D tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIE= NTS_NOCODE; diff --git a/memory.c b/memory.c index 61a254c3f9..e867a1f2b3 100644 --- a/memory.c +++ b/memory.c @@ -799,14 +799,13 @@ FlatView *address_space_get_flatview(AddressSpace *as) { FlatView *view; =20 - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); do { view =3D address_space_to_flatview(as); /* If somebody has replaced as->current_map concurrently, * flatview_ref returns false. 
          */
     } while (!flatview_ref(view));
-    rcu_read_unlock();
     return view;
 }
 
@@ -2177,12 +2176,11 @@ int memory_region_get_fd(MemoryRegion *mr)
 {
     int fd;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     while (mr->alias) {
         mr = mr->alias;
     }
     fd = mr->ram_block->fd;
-    rcu_read_unlock();
 
     return fd;
 }
@@ -2192,14 +2190,13 @@ void *memory_region_get_ram_ptr(MemoryRegion *mr)
     void *ptr;
     uint64_t offset = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     while (mr->alias) {
         offset += mr->alias_offset;
         mr = mr->alias;
     }
     assert(mr->ram_block);
     ptr = qemu_map_ram_ptr(mr->ram_block, offset);
-    rcu_read_unlock();
 
     return ptr;
 }
@@ -2589,12 +2586,11 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
                                         hwaddr addr, uint64_t size)
 {
     MemoryRegionSection ret;
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     ret = memory_region_find_rcu(mr, addr, size);
     if (ret.mr) {
         memory_region_ref(ret.mr);
     }
-    rcu_read_unlock();
     return ret;
 }
 
@@ -2602,9 +2598,8 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
 {
     MemoryRegion *mr;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     mr = memory_region_find_rcu(container, addr, 1).mr;
-    rcu_read_unlock();
     return mr && mr != container;
 }
 
-- 
2.21.0
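
The conversions in this series generally follow one pattern: take the guard
at the top of the function, load the RCU-protected pointer with
atomic_rcu_read(), and let every return path release the lock implicitly.
A condensed sketch under assumed structure names (MyContainer and MyState
are hypothetical, not types from the patches):

    #include "qemu/osdep.h"
    #include "qemu/atomic.h"
    #include "qemu/rcu.h"

    /* Hypothetical container whose 'state' pointer is published via RCU. */
    typedef struct MyState { int value; } MyState;
    typedef struct MyContainer { MyState *state; } MyContainer;

    static int read_state(MyContainer *c)
    {
        MyState *s;

        RCU_READ_LOCK_GUARD();
        s = atomic_rcu_read(&c->state);   /* RCU-protected pointer load */
        if (!s) {
            return -EIO;                  /* no explicit unlock needed */
        }
        return s->value;
    }
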