From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 01/21] job.c: make job_mutex and job_lock/unlock() public
Date: Mon, 26 Sep 2022 05:31:54 -0400
Message-Id: <20220926093214.506243-2-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

The job mutex will be used to protect the job struct elements and list,
replacing AioContext locks.

For now, use one shared lock for all jobs, in order to keep things
simple. Once the AioContext lock is gone, we can introduce per-job
locks.

To simplify the switch from AioContext locks to the job lock, introduce
*nop* lock/unlock functions and macros. We want to always call
job_lock/unlock outside the AioContext locks, and not vice versa;
otherwise we might get a deadlock. This is not straightforward to do,
and that's why we start with nop functions. Once everything is
protected by job_lock/unlock, we can change the nops into an actual
mutex and remove the AioContext lock.

Since job_mutex is already in use, add static real_job_{lock/unlock}
for the existing usage.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 include/qemu/job.h | 24 ++++++++++++++++++++++++
 job.c              | 35 +++++++++++++++++++++++------------
 2 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index c105b31076..d1192ffd61 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -303,6 +303,30 @@ typedef enum JobCreateFlags {
     JOB_MANUAL_DISMISS = 0x04,
 } JobCreateFlags;
 
+extern QemuMutex job_mutex;
+
+#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+
+#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+
+/**
+ * job_lock:
+ *
+ * Take the mutex protecting the list of jobs and their status.
+ * Most functions called by the monitor need to call job_lock
+ * and job_unlock manually. On the other hand, functions called
+ * by the block jobs themselves and by the block layer will take the
+ * lock for you.
+ */
+void job_lock(void);
+
+/**
+ * job_unlock:
+ *
+ * Release the mutex protecting the list of jobs and their status.
+ */
+void job_unlock(void);
+
 /**
  * Allocate and return a new job transaction.  Jobs can be added to the
  * transaction using job_txn_add_job().
diff --git a/job.c b/job.c
index 075c6f3a20..2b4ffca9d4 100644
--- a/job.c
+++ b/job.c
@@ -32,6 +32,12 @@
 #include "trace/trace-root.h"
 #include "qapi/qapi-events-job.h"
 
+/*
+ * job_mutex protects the jobs list, but also makes the
+ * struct job fields thread-safe.
+ */
+QemuMutex job_mutex;
+
 static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
 
 /* Job State Transition Table */
@@ -74,17 +80,22 @@ struct JobTxn {
     int refcnt;
 };
 
-/* Right now, this mutex is only needed to synchronize accesses to job->busy
- * and job->sleep_timer, such as concurrent calls to job_do_yield and
- * job_enter. */
-static QemuMutex job_mutex;
+void job_lock(void)
+{
+    /* nop */
+}
+
+void job_unlock(void)
+{
+    /* nop */
+}
 
-static void job_lock(void)
+static void real_job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void job_unlock(void)
+static void real_job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -450,21 +461,21 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
         return;
     }
 
-    job_lock();
+    real_job_lock();
     if (job->busy) {
-        job_unlock();
+        real_job_unlock();
         return;
     }
 
     if (fn && !fn(job)) {
-        job_unlock();
+        real_job_unlock();
         return;
     }
 
     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    job_unlock();
+    real_job_unlock();
     aio_co_enter(job->aio_context, job->co);
 }
 
@@ -481,13 +492,13 @@ void job_enter(Job *job)
  * called explicitly. */
 static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
 {
-    job_lock();
+    real_job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
    }
     job->busy = false;
-    job_event_idle(job);
+    job_event_idle(job);
     real_job_unlock();
     qemu_coroutine_yield();
 
     /* Set by job_enter_cond() before re-entering the coroutine. */
-- 
2.31.1

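To make the migration pattern above concrete: the point of the nop phase
is that callers can switch to job_lock()/job_unlock() one by one without
introducing new lock-ordering hazards, since taking a nop lock inside any
real lock cannot deadlock. A minimal standalone analogue in plain C with
pthreads follows; it is illustrative only, not QEMU code, and all names
are stand-ins:

/* Standalone analogue of the nop-lock migration; illustrative only. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Phase 1: the public lock/unlock are nops, so they can be added to
 * every call site without changing behaviour or lock ordering. */
static void job_lock(void)   { /* nop; later: pthread_mutex_lock(&job_mutex) */ }
static void job_unlock(void) { /* nop; later: pthread_mutex_unlock(&job_mutex) */ }

/* The pre-existing internal users keep the real mutex under a new name. */
static void real_job_lock(void)   { pthread_mutex_lock(&job_mutex); }
static void real_job_unlock(void) { pthread_mutex_unlock(&job_mutex); }

int main(void)
{
    job_lock();        /* harmless today ...                          */
    real_job_lock();   /* ... so nesting around the real lock is safe */
    printf("critical section\n");
    real_job_unlock();
    job_unlock();
    return 0;
}

Once every access is covered, flipping the nop bodies to the commented
real implementations completes the switch with no caller changes.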
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 02/21] job.h: categorize fields in struct Job
Date: Mon, 26 Sep 2022 05:31:55 -0400
Message-Id: <20220926093214.506243-3-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

Categorize the fields in struct Job to make clear which ones need to be
protected by the job mutex and which don't.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
---
 include/qemu/job.h | 61 +++++++++++++++++++++++++++-------------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index d1192ffd61..876e13d549 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -40,27 +40,52 @@ typedef struct JobTxn JobTxn;
  * Long-running operation.
  */
 typedef struct Job {
+
+    /* Fields set at initialization (job_create), and never modified */
+
     /** The ID of the job. May be NULL for internal jobs. */
     char *id;
 
-    /** The type of this job. */
+    /**
+     * The type of this job.
+     * All callbacks are called with job_mutex *not* held.
+     */
     const JobDriver *driver;
 
-    /** Reference count of the block job */
-    int refcnt;
-
-    /** Current state; See @JobStatus for details. */
-    JobStatus status;
-
-    /** AioContext to run the job coroutine in */
-    AioContext *aio_context;
-
     /**
      * The coroutine that executes the job. If not NULL, it is reentered when
      * busy is false and the job is cancelled.
+     * Initialized in job_start()
      */
     Coroutine *co;
 
+    /** True if this job should automatically finalize itself */
+    bool auto_finalize;
+
+    /** True if this job should automatically dismiss itself */
+    bool auto_dismiss;
+
+    /** The completion function that will be called when the job completes. */
+    BlockCompletionFunc *cb;
+
+    /** The opaque value that is passed to the completion function. */
+    void *opaque;
+
+    /* ProgressMeter API is thread-safe */
+    ProgressMeter progress;
+
+
+    /** Protected by AioContext lock */
+
+    /** AioContext to run the job coroutine in */
+    AioContext *aio_context;
+
+    /** Reference count of the block job */
+    int refcnt;
+
+    /** Current state; See @JobStatus for details. */
+    JobStatus status;
+
     /**
      * Timer that is used by @job_sleep_ns. Accessed under job_mutex (in
      * job.c).
@@ -112,14 +137,6 @@ typedef struct Job {
     /** Set to true when the job has deferred work to the main loop. */
     bool deferred_to_main_loop;
 
-    /** True if this job should automatically finalize itself */
-    bool auto_finalize;
-
-    /** True if this job should automatically dismiss itself */
-    bool auto_dismiss;
-
-    ProgressMeter progress;
-
     /**
      * Return code from @run and/or @prepare callback(s).
      * Not final until the job has reached the CONCLUDED status.
@@ -134,12 +151,6 @@ typedef struct Job {
      */
     Error *err;
 
-    /** The completion function that will be called when the job completes. */
-    BlockCompletionFunc *cb;
-
-    /** The opaque value that is passed to the completion function. */
-    void *opaque;
-
     /** Notifiers called when a cancelled job is finalised */
     NotifierList on_finalize_cancelled;
 
@@ -167,6 +178,7 @@ typedef struct Job {
 
 /**
  * Callbacks and other information about a Job driver.
+ * All callbacks are invoked with job_mutex *not* held.
 */
 struct JobDriver {
 
@@ -472,7 +484,6 @@ void job_yield(Job *job);
  */
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns);
 
-
 /** Returns the JobType of a given Job. */
 JobType job_type(const Job *job);
 
-- 
2.31.1

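The same documentation technique applies to any shared structure: group
fields by the rule that protects them, so every access can be audited
against its category. A hypothetical, self-contained example in the same
style (Counter and all its fields are invented, not from this series):

/* Hypothetical struct documented in the style of the patch above. */
#include <pthread.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct Counter {
    /* Set at initialization and never modified: no lock needed */
    const char *name;

    /* Thread-safe on its own (stand-in for ProgressMeter above) */
    uint64_t progress;

    /* Protected by @mutex below */
    pthread_mutex_t mutex;
    uint64_t value;
    bool stopped;
} Counter;

int main(void)
{
    Counter c = { .name = "demo", .mutex = PTHREAD_MUTEX_INITIALIZER };
    pthread_mutex_lock(&c.mutex);   /* value is in the locked category */
    c.value++;
    pthread_mutex_unlock(&c.mutex);
    return (int)c.value - 1;        /* returns 0 */
}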
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 03/21] job.c: API functions not used outside should be static
Date: Mon, 26 Sep 2022 05:31:56 -0400
Message-Id: <20220926093214.506243-4-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

The job_event_* functions can all be static, as they are not used
outside job.c. The same applies to job_txn_add_job().
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
---
 include/qemu/job.h | 18 ------------------
 job.c              | 22 +++++++++++++++++++---
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 876e13d549..4b64eb15f7 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -358,18 +358,6 @@ JobTxn *job_txn_new(void);
  */
 void job_txn_unref(JobTxn *txn);
 
-/**
- * @txn: The transaction (may be NULL)
- * @job: Job to add to the transaction
- *
- * Add @job to the transaction.  The @job must not already be in a transaction.
- * The caller must call either job_txn_unref() or job_completed() to release
- * the reference that is automatically grabbed here.
- *
- * If @txn is NULL, the function does nothing.
- */
-void job_txn_add_job(JobTxn *txn, Job *job);
-
 /**
  * Create a new long-running job and return it.
  *
@@ -431,12 +419,6 @@ void job_progress_set_remaining(Job *job, uint64_t remaining);
  */
 void job_progress_increase_remaining(Job *job, uint64_t delta);
 
-/** To be called when a cancelled job is finalised. */
-void job_event_cancelled(Job *job);
-
-/** To be called when a successfully completed job is finalised. */
-void job_event_completed(Job *job);
-
 /**
  * Conditionally enter the job coroutine if the job is ready to run, not
  * already busy and fn() returns true. fn() is called while under the job_lock
diff --git a/job.c b/job.c
index 2b4ffca9d4..cafd597ba4 100644
--- a/job.c
+++ b/job.c
@@ -125,7 +125,17 @@ void job_txn_unref(JobTxn *txn)
     }
 }
 
-void job_txn_add_job(JobTxn *txn, Job *job)
+/**
+ * @txn: The transaction (may be NULL)
+ * @job: Job to add to the transaction
+ *
+ * Add @job to the transaction.  The @job must not already be in a transaction.
+ * The caller must call either job_txn_unref() or job_completed() to release
+ * the reference that is automatically grabbed here.
+ *
+ * If @txn is NULL, the function does nothing.
+ */
+static void job_txn_add_job(JobTxn *txn, Job *job)
 {
     if (!txn) {
         return;
@@ -427,12 +437,18 @@ void job_progress_increase_remaining(Job *job, uint64_t delta)
     progress_increase_remaining(&job->progress, delta);
 }
 
-void job_event_cancelled(Job *job)
+/**
+ * To be called when a cancelled job is finalised.
+ */
+static void job_event_cancelled(Job *job)
 {
     notifier_list_notify(&job->on_finalize_cancelled, job);
 }
 
-void job_event_completed(Job *job)
+/**
+ * To be called when a successfully completed job is finalised.
+ */
+static void job_event_completed(Job *job)
 {
     notifier_list_notify(&job->on_finalize_completed, job);
 }
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 04/21] aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED
Date: Mon, 26 Sep 2022 05:31:57 -0400
Message-Id: <20220926093214.506243-5-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

Same as the AIO_WAIT_WHILE macro, but if we are in the main loop do not
release and then acquire ctx_'s AioContext.

Once all AioContext locks go away, this macro will replace
AIO_WAIT_WHILE.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 include/block/aio-wait.h | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index 54840f8622..dd9a7f6461 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -59,10 +59,13 @@ typedef struct {
 extern AioWait global_aio_wait;
 
 /**
- * AIO_WAIT_WHILE:
+ * AIO_WAIT_WHILE_INTERNAL:
  * @ctx: the aio context, or NULL if multiple aio contexts (for which the
  *       caller does not hold a lock) are involved in the polling condition.
  * @cond: wait while this conditional expression is true
+ * @unlock: whether to unlock and then lock again @ctx. This applies
+ *          only when waiting for another AioContext from the main loop.
+ *          Otherwise it's ignored.
  *
  * Wait while a condition is true.  Use this to implement synchronous
  * operations that require event loop activity.
@@ -75,7 +78,7 @@ extern AioWait global_aio_wait;
 * wait on conditions between two IOThreads since that could lead to deadlock,
 * go via the main loop instead.
 */
-#define AIO_WAIT_WHILE(ctx, cond) ({                               \
+#define AIO_WAIT_WHILE_INTERNAL(ctx, cond, unlock) ({              \
     bool waited_ = false;                                          \
     AioWait *wait_ = &global_aio_wait;                             \
     AioContext *ctx_ = (ctx);                                      \
@@ -92,11 +95,11 @@ extern AioWait global_aio_wait;
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
         while ((cond)) {                                           \
-            if (ctx_) {                                            \
+            if (unlock && ctx_) {                                  \
                 aio_context_release(ctx_);                         \
             }                                                      \
             aio_poll(qemu_get_aio_context(), true);                \
-            if (ctx_) {                                            \
+            if (unlock && ctx_) {                                  \
                 aio_context_acquire(ctx_);                         \
             }                                                      \
             waited_ = true;                                        \
@@ -105,6 +108,12 @@ extern AioWait global_aio_wait;
     qatomic_dec(&wait_->num_waiters);                              \
     waited_; })
 
+#define AIO_WAIT_WHILE(ctx, cond)                                  \
+    AIO_WAIT_WHILE_INTERNAL(ctx, cond, true)
+
+#define AIO_WAIT_WHILE_UNLOCKED(ctx, cond)                         \
+    AIO_WAIT_WHILE_INTERNAL(ctx, cond, false)
+
 /**
  * aio_wait_kick:
  * Wake up the main thread if it is waiting on AIO_WAIT_WHILE().  During
-- 
2.31.1

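The shape of this change is easier to see outside the QEMU tree: thread a
boolean through an internal polling macro and define the old and new names
in terms of it. A compilable toy version follows; my_lock, my_unlock and
poll_once are stand-ins, not QEMU API:

/* Toy model of the unlock-parameter pattern; not the real macro. */
#include <stdbool.h>
#include <stdio.h>

static void my_lock(void)   { /* stands in for aio_context_acquire() */ }
static void my_unlock(void) { /* stands in for aio_context_release() */ }
static void poll_once(void) { /* stands in for aio_poll()            */ }

/* Drop and retake the lock around each poll iteration only if @unlock. */
#define WAIT_WHILE_INTERNAL(cond, unlock) \
    do {                                  \
        while ((cond)) {                  \
            if (unlock) {                 \
                my_unlock();              \
            }                             \
            poll_once();                  \
            if (unlock) {                 \
                my_lock();                \
            }                             \
        }                                 \
    } while (0)

#define WAIT_WHILE(cond)          WAIT_WHILE_INTERNAL(cond, true)
#define WAIT_WHILE_UNLOCKED(cond) WAIT_WHILE_INTERNAL(cond, false)

int main(void)
{
    int pending = 3;
    WAIT_WHILE(pending-- > 0);            /* caller holds the lock */
    pending = 3;
    WAIT_WHILE_UNLOCKED(pending-- > 0);   /* caller holds no lock  */
    printf("both loops terminated\n");
    return 0;
}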
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 05/21] job.c: add job_lock/unlock while keeping job.h intact
Date: Mon, 26 Sep 2022 05:31:58 -0400
Message-Id: <20220926093214.506243-6-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

By "intact" we mean that all job.h functions implicitly take the lock,
so API callers are unmodified. This means that:

- many static functions that will always be called with the job lock
  held become _locked, and call _locked functions;
- all public functions take the lock internally if needed, and call
  _locked functions;
- all public functions called internally by other functions in job.c
  will have a _locked counterpart (sometimes public), to avoid
  deadlocks (job lock already taken); these functions are not used
  for now;
- some public functions called only from external files (not job.c)
  do not have a _locked() counterpart and take the lock inside;
  others won't need the lock at all, because they use fields only set
  at initialization and never modified.

job_{lock/unlock} is independent from real_job_{lock/unlock}.
Note: at this stage, job_{lock/unlock} and the job lock guard macros
are still *nop*.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
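The public/_locked convention described in the commit message is easiest
to see reduced to a single function pair: the public name takes the mutex
and delegates, while the _locked name assumes the caller already holds it.
A self-contained sketch, with plain pthreads standing in for job_mutex
(job_count() and n_jobs are invented names, not from this series):

/* Sketch of the public/_locked split; names here are illustrative. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static int n_jobs = 2;

/* Worker: caller must already hold job_mutex. */
static int job_count_locked(void)
{
    return n_jobs;
}

/* Public wrapper: takes the lock and delegates to the worker.
 * Internal callers that already hold job_mutex call the _locked
 * variant directly, which is what prevents double-lock deadlocks. */
static int job_count(void)
{
    pthread_mutex_lock(&job_mutex);
    int n = job_count_locked();
    pthread_mutex_unlock(&job_mutex);
    return n;
}

int main(void)
{
    printf("%d jobs\n", job_count());   /* external caller: no lock held */
    return 0;
}
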
 include/qemu/job.h | 138 +++++++++-
 job.c              | 610 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 561 insertions(+), 187 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 4b64eb15f7..5709e8d4a8 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -358,8 +358,15 @@ JobTxn *job_txn_new(void);
  */
 void job_txn_unref(JobTxn *txn);
 
+/*
+ * Same as job_txn_unref(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+void job_txn_unref_locked(JobTxn *txn);
+
 /**
  * Create a new long-running job and return it.
+ * Called with job_mutex *not* held.
  *
  * @job_id: The id of the newly-created job, or %NULL for internal jobs
  * @driver: The class object for the newly-created job.
@@ -380,17 +387,25 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
  */
 void job_ref(Job *job);
 
+/* Same as job_ref(), but called with job lock held. */
+void job_ref_locked(Job *job);
+
 /**
  * Release a reference that was previously acquired with job_ref() or
  * job_create(). If it's the last reference to the object, it will be freed.
  */
 void job_unref(Job *job);
 
+/* Same as job_unref(), but called with job lock held. */
+void job_unref_locked(Job *job);
+
 /**
  * @job: The job that has made progress
  * @done: How much progress the job made since the last call
  *
  * Updates the progress counter of the job.
+ *
+ * May be called with mutex held or not held.
  */
 void job_progress_update(Job *job, uint64_t done);
 
@@ -401,6 +416,8 @@ void job_progress_update(Job *job, uint64_t done);
  *
  * Sets the expected end value of the progress counter of a job so that a
  * completion percentage can be calculated when the progress is updated.
+ *
+ * May be called with mutex held or not held.
  */
 void job_progress_set_remaining(Job *job, uint64_t remaining);
 
@@ -416,6 +433,8 @@ void job_progress_set_remaining(Job *job, uint64_t remaining);
  * length before, and job_progress_update() afterwards.
  * (So the operation acts as a parenthesis in regards to the main job
  * operation running in background.)
+ *
+ * May be called with mutex held or not held.
  */
 void job_progress_increase_remaining(Job *job, uint64_t delta);
 
@@ -426,11 +445,19 @@ void job_progress_increase_remaining(Job *job, uint64_t delta);
  */
 void job_enter_cond(Job *job, bool(*fn)(Job *job));
 
+/*
+ * Same as job_enter_cond(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+void job_enter_cond_locked(Job *job, bool(*fn)(Job *job));
+
 /**
  * @job: A job that has not yet been started.
  *
  * Begins execution of a job.
  * Takes ownership of one reference to the job object.
+ *
+ * Called with job_mutex *not* held.
  */
 void job_start(Job *job);
 
@@ -438,6 +465,7 @@ void job_start(Job *job);
  * @job: The job to enter.
  *
  * Continue the specified job by entering the coroutine.
+ * Called with job_mutex *not* held.
  */
 void job_enter(Job *job);
 
@@ -446,6 +474,8 @@ void job_enter(Job *job);
 *
 * Pause now if job_pause() has been called.  Jobs that perform lots of I/O
 * must call this between requests so that the job can be paused.
+ *
+ * Called with job_mutex *not* held.
 */
 void coroutine_fn job_pause_point(Job *job);
 
@@ -453,6 +483,7 @@ void coroutine_fn job_pause_point(Job *job);
  * @job: The job that calls the function.
  *
  * Yield the job coroutine.
+ * Called with job_mutex *not* held.
  */
 void job_yield(Job *job);
 
@@ -463,6 +494,8 @@ void job_yield(Job *job);
  * Put the job to sleep (assuming that it wasn't canceled) for @ns
  * %QEMU_CLOCK_REALTIME nanoseconds.  Canceling the job will immediately
  * interrupt the wait.
+ *
+ * Called with job_mutex *not* held.
  */
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns);
 
@@ -475,21 +508,40 @@ const char *job_type_str(const Job *job);
 /** Returns true if the job should not be visible to the management layer. */
 bool job_is_internal(Job *job);
 
-/** Returns whether the job is being cancelled. */
+/**
+ * Returns whether the job is being cancelled.
+ * Called with job_mutex *not* held.
+ */
 bool job_is_cancelled(Job *job);
 
+/* Same as job_is_cancelled(), but called with job lock held. */
+bool job_is_cancelled_locked(Job *job);
+
 /**
  * Returns whether the job is scheduled for cancellation (at an
  * indefinite point).
+ * Called with job_mutex *not* held.
 */
 bool job_cancel_requested(Job *job);
 
-/** Returns whether the job is in a completed state. */
+/**
+ * Returns whether the job is in a completed state.
+ * Called with job_mutex *not* held.
+ */
 bool job_is_completed(Job *job);
 
-/** Returns whether the job is ready to be completed. */
+/* Same as job_is_completed(), but called with job lock held. */
+bool job_is_completed_locked(Job *job);
+
+/**
+ * Returns whether the job is ready to be completed.
+ * Called with job_mutex *not* held.
+ */
 bool job_is_ready(Job *job);
 
+/* Same as job_is_ready(), but called with job lock held. */
+bool job_is_ready_locked(Job *job);
+
 /**
  * Request @job to pause at the next pause point. Must be paired with
  * job_resume(). If the job is supposed to be resumed by user action, call
@@ -497,24 +549,45 @@ bool job_is_ready(Job *job);
  */
 void job_pause(Job *job);
 
+/* Same as job_pause(), but called with job lock held. */
+void job_pause_locked(Job *job);
+
 /** Resumes a @job paused with job_pause. */
 void job_resume(Job *job);
 
+/*
+ * Same as job_resume(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+void job_resume_locked(Job *job);
+
 /**
  * Asynchronously pause the specified @job.
  * Do not allow a resume until a matching call to job_user_resume.
 */
 void job_user_pause(Job *job, Error **errp);
 
+/* Same as job_user_pause(), but called with job lock held. */
+void job_user_pause_locked(Job *job, Error **errp);
+
 /** Returns true if the job is user-paused. */
 bool job_user_paused(Job *job);
 
+/* Same as job_user_paused(), but called with job lock held. */
+bool job_user_paused_locked(Job *job);
+
 /**
  * Resume the specified @job.
  * Must be paired with a preceding job_user_pause.
 */
 void job_user_resume(Job *job, Error **errp);
 
+/*
+ * Same as job_user_resume(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+void job_user_resume_locked(Job *job, Error **errp);
+
 /**
  * Get the next element from the list of block jobs after @job, or the
  * first one if @job is %NULL.
@@ -523,6 +596,9 @@ void job_user_resume(Job *job, Error **errp);
  */
 Job *job_next(Job *job);
 
+/* Same as job_next(), but called with job lock held. */
+Job *job_next_locked(Job *job);
+
 /**
  * Get the job identified by @id (which must not be %NULL).
  *
@@ -530,6 +606,9 @@ Job *job_next(Job *job);
  */
 Job *job_get(const char *id);
 
+/* Same as job_get(), but called with job lock held. */
+Job *job_get_locked(const char *id);
+
 /**
  * Check whether the verb @verb can be applied to @job in its current state.
  * Returns 0 if the verb can be applied; otherwise errp is set and -EPERM
@@ -537,27 +616,48 @@ Job *job_get(const char *id);
  */
 int job_apply_verb(Job *job, JobVerb verb, Error **errp);
 
-/** The @job could not be started, free it. */
+/* Same as job_apply_verb, but called with job lock held. */
+int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp);
+
+/**
+ * The @job could not be started, free it.
+ * Called with job_mutex *not* held.
+ */
 void job_early_fail(Job *job);
 
-/** Moves the @job from RUNNING to READY */
+/**
+ * Moves the @job from RUNNING to READY.
+ * Called with job_mutex *not* held.
+ */
 void job_transition_to_ready(Job *job);
 
 /** Asynchronously complete the specified @job. */
 void job_complete(Job *job, Error **errp);
 
+/*
+ * Same as job_complete(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+void job_complete_locked(Job *job, Error **errp);
+
 /**
  * Asynchronously cancel the specified @job. If @force is true, the job should
  * be cancelled immediately without waiting for a consistent state.
 */
 void job_cancel(Job *job, bool force);
 
+/* Same as job_cancel(), but called with job lock held. */
+void job_cancel_locked(Job *job, bool force);
+
 /**
  * Cancels the specified job like job_cancel(), but may refuse to do so if the
  * operation isn't meaningful in the current state of the job.
 */
 void job_user_cancel(Job *job, bool force, Error **errp);
 
+/* Same as job_user_cancel(), but called with job lock held. */
+void job_user_cancel_locked(Job *job, bool force, Error **errp);
+
 /**
  * Synchronously cancel the @job.  The completion callback is called
  * before the function returns.  If @force is false, the job may
@@ -571,7 +671,14 @@ void job_user_cancel(Job *job, bool force, Error **errp);
  */
 int job_cancel_sync(Job *job, bool force);
 
-/** Synchronously force-cancels all jobs using job_cancel_sync(). */
+/* Same as job_cancel_sync, but called with job lock held. */
+int job_cancel_sync_locked(Job *job, bool force);
+
+/**
+ * Synchronously force-cancels all jobs using job_cancel_sync_locked().
+ *
+ * Called with job_lock *not* held.
+ */
 void job_cancel_sync_all(void);
 
 /**
@@ -590,6 +697,9 @@ void job_cancel_sync_all(void);
  */
 int job_complete_sync(Job *job, Error **errp);
 
+/* Same as job_complete_sync, but called with job lock held. */
+int job_complete_sync_locked(Job *job, Error **errp);
+
 /**
  * For a @job that has finished its work and is pending awaiting explicit
  * acknowledgement to commit its work, this will commit that work.
@@ -600,12 +710,18 @@ int job_complete_sync(Job *job, Error **errp);
  */
 void job_finalize(Job *job, Error **errp);
 
+/* Same as job_finalize(), but called with job lock held. */
+void job_finalize_locked(Job *job, Error **errp);
+
 /**
 * Remove the concluded @job from the query list and resets the passed pointer
 * to %NULL. Returns an error if the job is not actually concluded.
 */
 void job_dismiss(Job **job, Error **errp);
 
+/* Same as job_dismiss(), but called with job lock held. */
+void job_dismiss_locked(Job **job, Error **errp);
+
 /**
 * Synchronously finishes the given @job. If @finish is given, it is called to
 * trigger completion or cancellation of the job.
@@ -615,6 +731,14 @@ void job_dismiss(Job **job, Error **errp);
 *
 * Callers must hold the AioContext lock of job->aio_context.
 */
-int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp);
+int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp),
+                    Error **errp);
+
+/*
+ * Same as job_finish_sync(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
+                           Error **errp);
 
 #endif
diff --git a/job.c b/job.c
index cafd597ba4..8967ec671e 100644
--- a/job.c
+++ b/job.c
@@ -38,6 +38,7 @@
 */
 QemuMutex job_mutex;
 
+/* Protected by job_mutex */
 static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
 
 /* Job State Transition Table */
@@ -113,18 +114,25 @@ JobTxn *job_txn_new(void)
     return txn;
 }
 
-static void job_txn_ref(JobTxn *txn)
+/* Called with job_mutex held. */
+static void job_txn_ref_locked(JobTxn *txn)
 {
     txn->refcnt++;
 }
 
-void job_txn_unref(JobTxn *txn)
+void job_txn_unref_locked(JobTxn *txn)
 {
     if (txn && --txn->refcnt == 0) {
         g_free(txn);
     }
 }
 
+void job_txn_unref(JobTxn *txn)
+{
+    JOB_LOCK_GUARD();
+    job_txn_unref_locked(txn);
+}
+
 /**
  * @txn: The transaction (may be NULL)
  * @job: Job to add to the transaction
@@ -134,8 +142,10 @@ void job_txn_unref(JobTxn *txn)
  * the reference that is automatically grabbed here.
  *
  * If @txn is NULL, the function does nothing.
+ *
+ * Called with job_mutex held.
 */
-static void job_txn_add_job(JobTxn *txn, Job *job)
+static void job_txn_add_job_locked(JobTxn *txn, Job *job)
 {
     if (!txn) {
         return;
@@ -145,19 +155,21 @@ static void job_txn_add_job(JobTxn *txn, Job *job)
     job->txn = txn;
 
     QLIST_INSERT_HEAD(&txn->jobs, job, txn_list);
-    job_txn_ref(txn);
+    job_txn_ref_locked(txn);
 }
 
-static void job_txn_del_job(Job *job)
+/* Called with job_mutex held. */
+static void job_txn_del_job_locked(Job *job)
 {
     if (job->txn) {
         QLIST_REMOVE(job, txn_list);
-        job_txn_unref(job->txn);
+        job_txn_unref_locked(job->txn);
         job->txn = NULL;
     }
 }
 
-static int job_txn_apply(Job *job, int fn(Job *))
+/* Called with job_mutex held, but releases it temporarily. */
+static int job_txn_apply_locked(Job *job, int fn(Job *))
 {
     AioContext *inner_ctx;
     Job *other_job, *next;
@@ -170,7 +182,7 @@ static int job_txn_apply(Job *job, int fn(Job *))
      * we need to release it here to avoid holding the lock twice - which would
      * break AIO_WAIT_WHILE from within fn.
      */
-    job_ref(job);
+    job_ref_locked(job);
     aio_context_release(job->aio_context);
 
     QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
@@ -188,7 +200,7 @@ static int job_txn_apply(Job *job, int fn(Job *))
      * can't use a local variable to cache it.
      */
     aio_context_acquire(job->aio_context);
-    job_unref(job);
+    job_unref_locked(job);
     return rc;
 }
 
@@ -197,7 +209,8 @@ bool job_is_internal(Job *job)
     return (job->id == NULL);
 }
 
-static void job_state_transition(Job *job, JobStatus s1)
+/* Called with job_mutex held. */
+static void job_state_transition_locked(Job *job, JobStatus s1)
 {
     JobStatus s0 = job->status;
     assert(s1 >= 0 && s1 < JOB_STATUS__MAX);
@@ -212,7 +225,7 @@ static void job_state_transition(Job *job, JobStatus s1)
     }
 }
 
-int job_apply_verb(Job *job, JobVerb verb, Error **errp)
+int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp)
 {
     JobStatus s0 = job->status;
     assert(verb >= 0 && verb < JOB_VERB__MAX);
@@ -226,6 +239,12 @@ int job_apply_verb(Job *job, JobVerb verb, Error **errp)
     return -EPERM;
 }
 
+int job_apply_verb(Job *job, JobVerb verb, Error **errp)
+{
+    JOB_LOCK_GUARD();
+    return job_apply_verb_locked(job, verb, errp);
+}
+
 JobType job_type(const Job *job)
 {
     return job->driver->job_type;
@@ -236,19 +255,32 @@ const char *job_type_str(const Job *job)
     return JobType_str(job_type(job));
 }
 
-bool job_is_cancelled(Job *job)
+bool job_is_cancelled_locked(Job *job)
 {
     /* force_cancel may be true only if cancelled is true, too */
     assert(job->cancelled || !job->force_cancel);
     return job->force_cancel;
 }
 
-bool job_cancel_requested(Job *job)
+bool job_is_cancelled(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_is_cancelled_locked(job);
+}
+
+/* Called with job_mutex held. */
+static bool job_cancel_requested_locked(Job *job)
 {
     return job->cancelled;
 }
 
-bool job_is_ready(Job *job)
+bool job_cancel_requested(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_cancel_requested_locked(job);
+}
+
+bool job_is_ready_locked(Job *job)
 {
     switch (job->status) {
     case JOB_STATUS_UNDEFINED:
@@ -270,7 +302,13 @@ bool job_is_ready(Job *job)
     return false;
 }
 
-bool job_is_completed(Job *job)
+bool job_is_ready(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_is_ready_locked(job);
+}
+
+bool job_is_completed_locked(Job *job)
 {
     switch (job->status) {
     case JOB_STATUS_UNDEFINED:
@@ -292,17 +330,24 @@ bool job_is_completed(Job *job)
     return false;
 }
 
-static bool job_started(Job *job)
+bool job_is_completed(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_is_completed_locked(job);
+}
+
+static bool job_started_locked(Job *job)
 {
     return job->co;
 }
 
-static bool job_should_pause(Job *job)
+/* Called with job_mutex held. */
+static bool job_should_pause_locked(Job *job)
 {
     return job->pause_count > 0;
 }
 
-Job *job_next(Job *job)
+Job *job_next_locked(Job *job)
 {
     if (!job) {
         return QLIST_FIRST(&jobs);
@@ -310,7 +355,13 @@ Job *job_next(Job *job)
     return QLIST_NEXT(job, job_list);
 }
 
-Job *job_get(const char *id)
+Job *job_next(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_next_locked(job);
+}
+
+Job *job_get_locked(const char *id)
 {
     Job *job;
 
@@ -323,6 +374,13 @@ Job *job_get(const char *id)
     return NULL;
 }
 
+Job *job_get(const char *id)
+{
+    JOB_LOCK_GUARD();
+    return job_get_locked(id);
+}
+
+/* Called with job_mutex *not* held. */
 static void job_sleep_timer_cb(void *opaque)
 {
     Job *job = opaque;
@@ -336,6 +394,8 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
 {
     Job *job;
 
+    JOB_LOCK_GUARD();
+
     if (job_id) {
         if (flags & JOB_INTERNAL) {
             error_setg(errp, "Cannot specify job ID for internal job");
@@ -345,7 +405,7 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
             error_setg(errp, "Invalid job ID '%s'", job_id);
             return NULL;
         }
-        if (job_get(job_id)) {
+        if (job_get_locked(job_id)) {
             error_setg(errp, "Job ID '%s' already in use", job_id);
             return NULL;
         }
@@ -375,7 +435,7 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
     notifier_list_init(&job->on_ready);
     notifier_list_init(&job->on_idle);
 
-    job_state_transition(job, JOB_STATUS_CREATED);
+    job_state_transition_locked(job, JOB_STATUS_CREATED);
     aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
                    QEMU_CLOCK_REALTIME, SCALE_NS,
                    job_sleep_timer_cb, job);
@@ -386,21 +446,27 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
      * consolidating the job management logic */
     if (!txn) {
         txn = job_txn_new();
-        job_txn_add_job(txn, job);
-        job_txn_unref(txn);
+        job_txn_add_job_locked(txn, job);
+        job_txn_unref_locked(txn);
     } else {
-        job_txn_add_job(txn, job);
+        job_txn_add_job_locked(txn, job);
     }
 
     return job;
 }
 
-void job_ref(Job *job)
+void job_ref_locked(Job *job)
 {
     ++job->refcnt;
 }
 
-void job_unref(Job *job)
+void job_ref(Job *job)
+{
+    JOB_LOCK_GUARD();
+    job_ref_locked(job);
+}
+
+void job_unref_locked(Job *job)
 {
     GLOBAL_STATE_CODE();
 
@@ -410,7 +476,9 @@ void job_unref(Job *job)
     assert(!job->txn);
 
     if (job->driver->free) {
+        job_unlock();
         job->driver->free(job);
+        job_lock();
     }
 
     QLIST_REMOVE(job, job_list);
@@ -422,6 +490,12 @@ void job_unref(Job *job)
     }
 }
 
+void job_unref(Job *job)
+{
+    JOB_LOCK_GUARD();
+    job_unref_locked(job);
+}
+
 void job_progress_update(Job *job, uint64_t done)
 {
     progress_work_done(&job->progress, done);
@@ -439,38 +513,43 @@ void job_progress_increase_remaining(Job *job, uint64_t delta)
 
 /**
  * To be called when a cancelled job is finalised.
+ * Called with job_mutex held.
 */
-static void job_event_cancelled(Job *job)
+static void job_event_cancelled_locked(Job *job)
 {
     notifier_list_notify(&job->on_finalize_cancelled, job);
 }
 
 /**
  * To be called when a successfully completed job is finalised.
+ * Called with job_mutex held.
 */
-static void job_event_completed(Job *job)
+static void job_event_completed_locked(Job *job)
 {
     notifier_list_notify(&job->on_finalize_completed, job);
 }
 
-static void job_event_pending(Job *job)
+/* Called with job_mutex held. */
+static void job_event_pending_locked(Job *job)
 {
     notifier_list_notify(&job->on_pending, job);
 }
 
-static void job_event_ready(Job *job)
+/* Called with job_mutex held. */
+static void job_event_ready_locked(Job *job)
 {
     notifier_list_notify(&job->on_ready, job);
 }
 
-static void job_event_idle(Job *job)
+/* Called with job_mutex held. */
*/ +static void job_event_idle_locked(Job *job) { notifier_list_notify(&job->on_idle, job); } =20 -void job_enter_cond(Job *job, bool(*fn)(Job *job)) +void job_enter_cond_locked(Job *job, bool(*fn)(Job *job)) { - if (!job_started(job)) { + if (!job_started_locked(job)) { return; } if (job->deferred_to_main_loop) { @@ -492,12 +571,21 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job)) timer_del(&job->sleep_timer); job->busy =3D true; real_job_unlock(); + job_unlock(); aio_co_enter(job->aio_context, job->co); + job_lock(); +} + +void job_enter_cond(Job *job, bool(*fn)(Job *job)) +{ + JOB_LOCK_GUARD(); + job_enter_cond_locked(job, fn); } =20 void job_enter(Job *job) { - job_enter_cond(job, NULL); + JOB_LOCK_GUARD(); + job_enter_cond_locked(job, NULL); } =20 /* Yield, and schedule a timer to reenter the coroutine after @ns nanoseco= nds. @@ -505,100 +593,129 @@ void job_enter(Job *job) * is allowed and cancels the timer. * * If @ns is (uint64_t) -1, no timer is scheduled and job_enter() must be - * called explicitly. */ -static void coroutine_fn job_do_yield(Job *job, uint64_t ns) + * called explicitly. + * + * Called with job_mutex held, but releases it temporarily. + */ +static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns) { real_job_lock(); if (ns !=3D -1) { timer_mod(&job->sleep_timer, ns); } job->busy =3D false; - job_event_idle(job); + job_event_idle_locked(job); real_job_unlock(); + job_unlock(); qemu_coroutine_yield(); + job_lock(); =20 /* Set by job_enter_cond() before re-entering the coroutine. */ assert(job->busy); } =20 -void coroutine_fn job_pause_point(Job *job) +/* Called with job_mutex held, but releases it temporarily. */ +static void coroutine_fn job_pause_point_locked(Job *job) { - assert(job && job_started(job)); + assert(job && job_started_locked(job)); =20 - if (!job_should_pause(job)) { + if (!job_should_pause_locked(job)) { return; } - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { return; } =20 if (job->driver->pause) { + job_unlock(); job->driver->pause(job); + job_lock(); } =20 - if (job_should_pause(job) && !job_is_cancelled(job)) { + if (job_should_pause_locked(job) && !job_is_cancelled_locked(job)) { JobStatus status =3D job->status; - job_state_transition(job, status =3D=3D JOB_STATUS_READY - ? JOB_STATUS_STANDBY - : JOB_STATUS_PAUSED); + job_state_transition_locked(job, status =3D=3D JOB_STATUS_READY + ? JOB_STATUS_STANDBY + : JOB_STATUS_PAUSED); job->paused =3D true; - job_do_yield(job, -1); + job_do_yield_locked(job, -1); job->paused =3D false; - job_state_transition(job, status); + job_state_transition_locked(job, status); } =20 if (job->driver->resume) { + job_unlock(); job->driver->resume(job); + job_lock(); } } =20 -void job_yield(Job *job) +void coroutine_fn job_pause_point(Job *job) +{ + JOB_LOCK_GUARD(); + job_pause_point_locked(job); +} + +static void job_yield_locked(Job *job) { assert(job->busy); =20 /* Check cancellation *before* setting busy =3D false, too! */ - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { return; } =20 - if (!job_should_pause(job)) { - job_do_yield(job, -1); + if (!job_should_pause_locked(job)) { + job_do_yield_locked(job, -1); } =20 - job_pause_point(job); + job_pause_point_locked(job); +} + +void job_yield(Job *job) +{ + JOB_LOCK_GUARD(); + job_yield_locked(job); } =20 void coroutine_fn job_sleep_ns(Job *job, int64_t ns) { + JOB_LOCK_GUARD(); assert(job->busy); =20 /* Check cancellation *before* setting busy =3D false, too! 
*/ - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { return; } =20 - if (!job_should_pause(job)) { - job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns); + if (!job_should_pause_locked(job)) { + job_do_yield_locked(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + = ns); } =20 - job_pause_point(job); + job_pause_point_locked(job); } =20 -/* Assumes the block_job_mutex is held */ -static bool job_timer_not_pending(Job *job) +/* Assumes the job_mutex is held */ +static bool job_timer_not_pending_locked(Job *job) { return !timer_pending(&job->sleep_timer); } =20 -void job_pause(Job *job) +void job_pause_locked(Job *job) { job->pause_count++; if (!job->paused) { - job_enter(job); + job_enter_cond_locked(job, NULL); } } =20 -void job_resume(Job *job) +void job_pause(Job *job) +{ + JOB_LOCK_GUARD(); + job_pause_locked(job); +} + +void job_resume_locked(Job *job) { assert(job->pause_count > 0); job->pause_count--; @@ -607,12 +724,18 @@ void job_resume(Job *job) } =20 /* kick only if no timer is pending */ - job_enter_cond(job, job_timer_not_pending); + job_enter_cond_locked(job, job_timer_not_pending_locked); } =20 -void job_user_pause(Job *job, Error **errp) +void job_resume(Job *job) { - if (job_apply_verb(job, JOB_VERB_PAUSE, errp)) { + JOB_LOCK_GUARD(); + job_resume_locked(job); +} + +void job_user_pause_locked(Job *job, Error **errp) +{ + if (job_apply_verb_locked(job, JOB_VERB_PAUSE, errp)) { return; } if (job->user_paused) { @@ -620,15 +743,27 @@ void job_user_pause(Job *job, Error **errp) return; } job->user_paused =3D true; - job_pause(job); + job_pause_locked(job); } =20 -bool job_user_paused(Job *job) +void job_user_pause(Job *job, Error **errp) +{ + JOB_LOCK_GUARD(); + job_user_pause_locked(job, errp); +} + +bool job_user_paused_locked(Job *job) { return job->user_paused; } =20 -void job_user_resume(Job *job, Error **errp) +bool job_user_paused(Job *job) +{ + JOB_LOCK_GUARD(); + return job_user_paused_locked(job); +} + +void job_user_resume_locked(Job *job, Error **errp) { assert(job); GLOBAL_STATE_CODE(); @@ -636,66 +771,84 @@ void job_user_resume(Job *job, Error **errp) error_setg(errp, "Can't resume a job that was not paused"); return; } - if (job_apply_verb(job, JOB_VERB_RESUME, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_RESUME, errp)) { return; } if (job->driver->user_resume) { + job_unlock(); job->driver->user_resume(job); + job_lock(); } job->user_paused =3D false; - job_resume(job); + job_resume_locked(job); } =20 -static void job_do_dismiss(Job *job) +void job_user_resume(Job *job, Error **errp) +{ + JOB_LOCK_GUARD(); + job_user_resume_locked(job, errp); +} + +/* Called with job_mutex held, but releases it temporarily. */ +static void job_do_dismiss_locked(Job *job) { assert(job); job->busy =3D false; job->paused =3D false; job->deferred_to_main_loop =3D true; =20 - job_txn_del_job(job); + job_txn_del_job_locked(job); =20 - job_state_transition(job, JOB_STATUS_NULL); - job_unref(job); + job_state_transition_locked(job, JOB_STATUS_NULL); + job_unref_locked(job); } =20 -void job_dismiss(Job **jobptr, Error **errp) +void job_dismiss_locked(Job **jobptr, Error **errp) { Job *job =3D *jobptr; /* similarly to _complete, this is QMP-interface only. 
*/ assert(job->id); - if (job_apply_verb(job, JOB_VERB_DISMISS, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_DISMISS, errp)) { return; } =20 - job_do_dismiss(job); + job_do_dismiss_locked(job); *jobptr =3D NULL; } =20 +void job_dismiss(Job **jobptr, Error **errp) +{ + JOB_LOCK_GUARD(); + job_dismiss_locked(jobptr, errp); +} + void job_early_fail(Job *job) { + JOB_LOCK_GUARD(); assert(job->status =3D=3D JOB_STATUS_CREATED); - job_do_dismiss(job); + job_do_dismiss_locked(job); } =20 -static void job_conclude(Job *job) +/* Called with job_mutex held. */ +static void job_conclude_locked(Job *job) { - job_state_transition(job, JOB_STATUS_CONCLUDED); - if (job->auto_dismiss || !job_started(job)) { - job_do_dismiss(job); + job_state_transition_locked(job, JOB_STATUS_CONCLUDED); + if (job->auto_dismiss || !job_started_locked(job)) { + job_do_dismiss_locked(job); } } =20 -static void job_update_rc(Job *job) +/* Called with job_mutex held. */ +static void job_update_rc_locked(Job *job) { - if (!job->ret && job_is_cancelled(job)) { + if (!job->ret && job_is_cancelled_locked(job)) { job->ret =3D -ECANCELED; } if (job->ret) { if (!job->err) { error_setg(&job->err, "%s", strerror(-job->ret)); } - job_state_transition(job, JOB_STATUS_ABORTING); + job_state_transition_locked(job, JOB_STATUS_ABORTING); } } =20 @@ -725,43 +878,57 @@ static void job_clean(Job *job) } } =20 -static int job_finalize_single(Job *job) +/* Called with job_mutex held, but releases it temporarily */ +static int job_finalize_single_locked(Job *job) { - assert(job_is_completed(job)); + int job_ret; + + assert(job_is_completed_locked(job)); =20 /* Ensure abort is called for late-transactional failures */ - job_update_rc(job); + job_update_rc_locked(job); + + job_ret =3D job->ret; + job_unlock(); =20 - if (!job->ret) { + if (!job_ret) { job_commit(job); } else { job_abort(job); } job_clean(job); =20 + job_lock(); + if (job->cb) { - job->cb(job->opaque, job->ret); + job_ret =3D job->ret; + job_unlock(); + job->cb(job->opaque, job_ret); + job_lock(); } =20 /* Emit events only if we actually started */ - if (job_started(job)) { - if (job_is_cancelled(job)) { - job_event_cancelled(job); + if (job_started_locked(job)) { + if (job_is_cancelled_locked(job)) { + job_event_cancelled_locked(job); } else { - job_event_completed(job); + job_event_completed_locked(job); } } =20 - job_txn_del_job(job); - job_conclude(job); + job_txn_del_job_locked(job); + job_conclude_locked(job); return 0; } =20 -static void job_cancel_async(Job *job, bool force) +/* Called with job_mutex held, but releases it temporarily */ +static void job_cancel_async_locked(Job *job, bool force) { GLOBAL_STATE_CODE(); if (job->driver->cancel) { + job_unlock(); force =3D job->driver->cancel(job, force); + job_lock(); } else { /* No .cancel() means the job will behave as if force-cancelled */ force =3D true; @@ -770,7 +937,9 @@ static void job_cancel_async(Job *job, bool force) if (job->user_paused) { /* Do not call job_enter here, the caller will handle it. */ if (job->driver->user_resume) { + job_unlock(); job->driver->user_resume(job); + job_lock(); } job->user_paused =3D false; assert(job->pause_count > 0); @@ -791,7 +960,8 @@ static void job_cancel_async(Job *job, bool force) } } =20 -static void job_completed_txn_abort(Job *job) +/* Called with job_mutex held, but releases it temporarily. 
*/ +static void job_completed_txn_abort_locked(Job *job) { AioContext *ctx; JobTxn *txn =3D job->txn; @@ -804,7 +974,7 @@ static void job_completed_txn_abort(Job *job) return; } txn->aborting =3D true; - job_txn_ref(txn); + job_txn_ref_locked(txn); =20 /* * We can only hold the single job's AioContext lock while calling @@ -812,7 +982,7 @@ static void job_completed_txn_abort(Job *job) * calls of AIO_WAIT_WHILE(), which could deadlock otherwise. * Note that the job's AioContext may change when it is finalized. */ - job_ref(job); + job_ref_locked(job); aio_context_release(job->aio_context); =20 /* Other jobs are effectively cancelled by us, set the status for @@ -827,7 +997,7 @@ static void job_completed_txn_abort(Job *job) * Therefore, pass force=3Dtrue to terminate all other jobs as= quickly * as possible. */ - job_cancel_async(other_job, true); + job_cancel_async_locked(other_job, true); aio_context_release(ctx); } } @@ -839,11 +1009,11 @@ static void job_completed_txn_abort(Job *job) */ ctx =3D other_job->aio_context; aio_context_acquire(ctx); - if (!job_is_completed(other_job)) { - assert(job_cancel_requested(other_job)); - job_finish_sync(other_job, NULL, NULL); + if (!job_is_completed_locked(other_job)) { + assert(job_cancel_requested_locked(other_job)); + job_finish_sync_locked(other_job, NULL, NULL); } - job_finalize_single(other_job); + job_finalize_single_locked(other_job); aio_context_release(ctx); } =20 @@ -852,110 +1022,132 @@ static void job_completed_txn_abort(Job *job) * even if the job went away during job_finalize_single(). */ aio_context_acquire(job->aio_context); - job_unref(job); + job_unref_locked(job); =20 - job_txn_unref(txn); + job_txn_unref_locked(txn); } =20 -static int job_prepare(Job *job) +/* Called with job_mutex held, but releases it temporarily */ +static int job_prepare_locked(Job *job) { + int ret; + GLOBAL_STATE_CODE(); if (job->ret =3D=3D 0 && job->driver->prepare) { - job->ret =3D job->driver->prepare(job); - job_update_rc(job); + job_unlock(); + ret =3D job->driver->prepare(job); + job_lock(); + job->ret =3D ret; + job_update_rc_locked(job); } return job->ret; } =20 -static int job_needs_finalize(Job *job) +/* Called with job_mutex held */ +static int job_needs_finalize_locked(Job *job) { return !job->auto_finalize; } =20 -static void job_do_finalize(Job *job) +/* Called with job_mutex held */ +static void job_do_finalize_locked(Job *job) { int rc; assert(job && job->txn); =20 /* prepare the transaction to complete */ - rc =3D job_txn_apply(job, job_prepare); + rc =3D job_txn_apply_locked(job, job_prepare_locked); if (rc) { - job_completed_txn_abort(job); + job_completed_txn_abort_locked(job); } else { - job_txn_apply(job, job_finalize_single); + job_txn_apply_locked(job, job_finalize_single_locked); } } =20 -void job_finalize(Job *job, Error **errp) +void job_finalize_locked(Job *job, Error **errp) { assert(job && job->id); - if (job_apply_verb(job, JOB_VERB_FINALIZE, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_FINALIZE, errp)) { return; } - job_do_finalize(job); + job_do_finalize_locked(job); } =20 -static int job_transition_to_pending(Job *job) +void job_finalize(Job *job, Error **errp) { - job_state_transition(job, JOB_STATUS_PENDING); + JOB_LOCK_GUARD(); + job_finalize_locked(job, errp); +} + +/* Called with job_mutex held. 
*/ +static int job_transition_to_pending_locked(Job *job) +{ + job_state_transition_locked(job, JOB_STATUS_PENDING); if (!job->auto_finalize) { - job_event_pending(job); + job_event_pending_locked(job); } return 0; } =20 void job_transition_to_ready(Job *job) { - job_state_transition(job, JOB_STATUS_READY); - job_event_ready(job); + JOB_LOCK_GUARD(); + job_state_transition_locked(job, JOB_STATUS_READY); + job_event_ready_locked(job); } =20 -static void job_completed_txn_success(Job *job) +/* Called with job_mutex held. */ +static void job_completed_txn_success_locked(Job *job) { JobTxn *txn =3D job->txn; Job *other_job; =20 - job_state_transition(job, JOB_STATUS_WAITING); + job_state_transition_locked(job, JOB_STATUS_WAITING); =20 /* * Successful completion, see if there are other running jobs in this * txn. */ QLIST_FOREACH(other_job, &txn->jobs, txn_list) { - if (!job_is_completed(other_job)) { + if (!job_is_completed_locked(other_job)) { return; } assert(other_job->ret =3D=3D 0); } =20 - job_txn_apply(job, job_transition_to_pending); + job_txn_apply_locked(job, job_transition_to_pending_locked); =20 /* If no jobs need manual finalization, automatically do so */ - if (job_txn_apply(job, job_needs_finalize) =3D=3D 0) { - job_do_finalize(job); + if (job_txn_apply_locked(job, job_needs_finalize_locked) =3D=3D 0) { + job_do_finalize_locked(job); } } =20 -static void job_completed(Job *job) +/* Called with job_mutex held. */ +static void job_completed_locked(Job *job) { - assert(job && job->txn && !job_is_completed(job)); + assert(job && job->txn && !job_is_completed_locked(job)); =20 - job_update_rc(job); + job_update_rc_locked(job); trace_job_completed(job, job->ret); if (job->ret) { - job_completed_txn_abort(job); + job_completed_txn_abort_locked(job); } else { - job_completed_txn_success(job); + job_completed_txn_success_locked(job); } } =20 -/** Useful only as a type shim for aio_bh_schedule_oneshot. */ +/** + * Useful only as a type shim for aio_bh_schedule_oneshot. + * Called with job_mutex *not* held. + */ static void job_exit(void *opaque) { Job *job =3D (Job *)opaque; AioContext *ctx; + JOB_LOCK_GUARD(); =20 - job_ref(job); + job_ref_locked(job); aio_context_acquire(job->aio_context); =20 /* This is a lie, we're not quiescent, but still doing the completion @@ -963,9 +1155,9 @@ static void job_exit(void *opaque) * drain block nodes, and if .drained_poll still returned true, we wou= ld * deadlock. */ job->busy =3D false; - job_event_idle(job); + job_event_idle_locked(job); =20 - job_completed(job); + job_completed_locked(job); =20 /* * Note that calling job_completed can move the job to a different @@ -974,7 +1166,7 @@ static void job_exit(void *opaque) * the job underneath us. 
*/ ctx =3D job->aio_context; - job_unref(job); + job_unref_locked(job); aio_context_release(ctx); } =20 @@ -985,37 +1177,47 @@ static void job_exit(void *opaque) static void coroutine_fn job_co_entry(void *opaque) { Job *job =3D opaque; + int ret; =20 assert(job && job->driver && job->driver->run); - assert(job->aio_context =3D=3D qemu_get_current_aio_context()); - job_pause_point(job); - job->ret =3D job->driver->run(job, &job->err); - job->deferred_to_main_loop =3D true; - job->busy =3D true; + WITH_JOB_LOCK_GUARD() { + assert(job->aio_context =3D=3D qemu_get_current_aio_context()); + job_pause_point_locked(job); + } + ret =3D job->driver->run(job, &job->err); + WITH_JOB_LOCK_GUARD() { + job->ret =3D ret; + job->deferred_to_main_loop =3D true; + job->busy =3D true; + } aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job); } =20 void job_start(Job *job) { - assert(job && !job_started(job) && job->paused && - job->driver && job->driver->run); - job->co =3D qemu_coroutine_create(job_co_entry, job); - job->pause_count--; - job->busy =3D true; - job->paused =3D false; - job_state_transition(job, JOB_STATUS_RUNNING); + assert(qemu_in_main_thread()); + + WITH_JOB_LOCK_GUARD() { + assert(job && !job_started_locked(job) && job->paused && + job->driver && job->driver->run); + job->co =3D qemu_coroutine_create(job_co_entry, job); + job->pause_count--; + job->busy =3D true; + job->paused =3D false; + job_state_transition_locked(job, JOB_STATUS_RUNNING); + } aio_co_enter(job->aio_context, job->co); } =20 -void job_cancel(Job *job, bool force) +void job_cancel_locked(Job *job, bool force) { if (job->status =3D=3D JOB_STATUS_CONCLUDED) { - job_do_dismiss(job); + job_do_dismiss_locked(job); return; } - job_cancel_async(job, force); - if (!job_started(job)) { - job_completed(job); + job_cancel_async_locked(job, force); + if (!job_started_locked(job)) { + job_completed_locked(job); } else if (job->deferred_to_main_loop) { /* * job_cancel_async() ignores soft-cancel requests for jobs @@ -1027,102 +1229,150 @@ void job_cancel(Job *job, bool force) * choose to call job_is_cancelled() to show that we invoke * job_completed_txn_abort() only for force-cancelled jobs.) */ - if (job_is_cancelled(job)) { - job_completed_txn_abort(job); + if (job_is_cancelled_locked(job)) { + job_completed_txn_abort_locked(job); } } else { - job_enter(job); + job_enter_cond_locked(job, NULL); } } =20 -void job_user_cancel(Job *job, bool force, Error **errp) +void job_cancel(Job *job, bool force) { - if (job_apply_verb(job, JOB_VERB_CANCEL, errp)) { + JOB_LOCK_GUARD(); + job_cancel_locked(job, force); +} + +void job_user_cancel_locked(Job *job, bool force, Error **errp) +{ + if (job_apply_verb_locked(job, JOB_VERB_CANCEL, errp)) { return; } - job_cancel(job, force); + job_cancel_locked(job, force); +} + +void job_user_cancel(Job *job, bool force, Error **errp) +{ + JOB_LOCK_GUARD(); + job_user_cancel_locked(job, force, errp); } =20 /* A wrapper around job_cancel() taking an Error ** parameter so it may be * used with job_finish_sync() without the need for (rather nasty) function - * pointer casts there. */ -static void job_cancel_err(Job *job, Error **errp) + * pointer casts there. + * + * Called with job_mutex held. + */ +static void job_cancel_err_locked(Job *job, Error **errp) { - job_cancel(job, false); + job_cancel_locked(job, false); } =20 /** * Same as job_cancel_err(), but force-cancel. + * Called with job_mutex held. 
*/ -static void job_force_cancel_err(Job *job, Error **errp) +static void job_force_cancel_err_locked(Job *job, Error **errp) { - job_cancel(job, true); + job_cancel_locked(job, true); } =20 -int job_cancel_sync(Job *job, bool force) +int job_cancel_sync_locked(Job *job, bool force) { if (force) { - return job_finish_sync(job, &job_force_cancel_err, NULL); + return job_finish_sync_locked(job, &job_force_cancel_err_locked, N= ULL); } else { - return job_finish_sync(job, &job_cancel_err, NULL); + return job_finish_sync_locked(job, &job_cancel_err_locked, NULL); } } =20 +int job_cancel_sync(Job *job, bool force) +{ + JOB_LOCK_GUARD(); + return job_cancel_sync_locked(job, force); +} + void job_cancel_sync_all(void) { Job *job; AioContext *aio_context; + JOB_LOCK_GUARD(); =20 - while ((job =3D job_next(NULL))) { + while ((job =3D job_next_locked(NULL))) { aio_context =3D job->aio_context; aio_context_acquire(aio_context); - job_cancel_sync(job, true); + job_cancel_sync_locked(job, true); aio_context_release(aio_context); } } =20 +int job_complete_sync_locked(Job *job, Error **errp) +{ + return job_finish_sync_locked(job, job_complete_locked, errp); +} + int job_complete_sync(Job *job, Error **errp) { - return job_finish_sync(job, job_complete, errp); + JOB_LOCK_GUARD(); + return job_complete_sync_locked(job, errp); } =20 -void job_complete(Job *job, Error **errp) +void job_complete_locked(Job *job, Error **errp) { /* Should not be reachable via external interface for internal jobs */ assert(job->id); GLOBAL_STATE_CODE(); - if (job_apply_verb(job, JOB_VERB_COMPLETE, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_COMPLETE, errp)) { return; } - if (job_cancel_requested(job) || !job->driver->complete) { + if (job_cancel_requested_locked(job) || !job->driver->complete) { error_setg(errp, "The active block job '%s' cannot be completed", job->id); return; } =20 + job_unlock(); job->driver->complete(job, errp); + job_lock(); } =20 -int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp) +void job_complete(Job *job, Error **errp) +{ + JOB_LOCK_GUARD(); + job_complete_locked(job, errp); +} + +int job_finish_sync_locked(Job *job, + void (*finish)(Job *, Error **errp), + Error **errp) { Error *local_err =3D NULL; int ret; =20 - job_ref(job); + job_ref_locked(job); =20 if (finish) { finish(job, &local_err); } if (local_err) { error_propagate(errp, local_err); - job_unref(job); + job_unref_locked(job); return -EBUSY; } =20 + job_unlock(); AIO_WAIT_WHILE(job->aio_context, (job_enter(job), !job_is_completed(job))); + job_lock(); =20 - ret =3D (job_is_cancelled(job) && job->ret =3D=3D 0) ? -ECANCELED : jo= b->ret; - job_unref(job); + ret =3D (job_is_cancelled_locked(job) && job->ret =3D=3D 0) + ? 
-ECANCELED : job->ret; + job_unref_locked(job); return ret; } + +int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp) +{ + JOB_LOCK_GUARD(); + return job_finish_sync_locked(job, finish, errp); +} --=20 2.31.1 From nobody Wed May 15 16:20:35 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1664185397; cv=none; d=zohomail.com; s=zohoarc; b=i1T23G36N23qs0PRpA4vjU8iCtEC2iQXrLpk/HidPkAJ4vshalscgySnDZ8a7feDUR8dXQyPBap70C9OAneZ88aU8eumFREGyXkaYsSGKuyFS6hoaoo40ksgAcmZZJZchhKUbekPlsjXVB5A4VCRh0ZdNrN0wn0BPJQSq72MAOc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1664185397; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=xp+NTWbHv+civvM6QLNy8yaNCZ0wCMvaL1gMf2edFkg=; b=k5mGjF9QkG2OGulCYtIgAeGDCALTsV1wPubhhQVvibXs06tHDwcEXF4aCt1Ise40Qg+yDJc6Q7gNsu1GoAugkCTC0Pr7NTUzkeUa6/EznxKEIVViEPEFVpQOhBPGWhKbl2VyDuRyKzax1cqSjsimJmuVp8TtRZMvEjQXpSZqDTc= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1664185397550668.4147084692028; Mon, 26 Sep 2022 02:43:17 -0700 (PDT) Received: from localhost ([::1]:46840 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ockdw-0004WE-9z for importer@patchew.org; Mon, 26 Sep 2022 05:43:16 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42834) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ockTa-0004jJ-O3 for qemu-devel@nongnu.org; Mon, 26 Sep 2022 05:32:34 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]:33963) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ockTS-0000HI-At for qemu-devel@nongnu.org; Mon, 26 Sep 2022 05:32:34 -0400 Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-151-SwNQ1abLOiiNz7Pg_TAwqw-1; Mon, 26 Sep 2022 05:32:20 -0400 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 16559872846; Mon, 26 Sep 2022 09:32:20 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id B42F749BB62; Mon, 26 Sep 2022 09:32:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1664184744; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
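[Editor's note: the hunks above repeatedly apply one pattern: each public function keeps its name but becomes a thin wrapper that takes job_mutex and delegates to a new *_locked() variant, and the lock is dropped around driver callbacks that may block or re-enter job code. A minimal, self-contained sketch of that pattern follows, using plain pthreads rather than QEMU's job_mutex and guard macros; every demo_* name here is invented for illustration and is not a QEMU API.]

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t demo_job_mutex = PTHREAD_MUTEX_INITIALIZER;

struct DemoJob {
    bool paused;                                /* protected by demo_job_mutex */
    void (*driver_pause)(struct DemoJob *job);  /* may block, runs unlocked */
};

/* Core implementation: the caller must already hold demo_job_mutex. */
static void demo_job_pause_locked(struct DemoJob *job)
{
    job->paused = true;
    if (job->driver_pause) {
        /*
         * Drop the lock around the driver callback, mirroring what
         * job.c does around job->driver->pause(), so the callback can
         * take other locks or call back into the job API.
         */
        pthread_mutex_unlock(&demo_job_mutex);
        job->driver_pause(job);
        pthread_mutex_lock(&demo_job_mutex);
    }
}

/* Public wrapper: takes the lock, then delegates to the _locked variant. */
static void demo_job_pause(struct DemoJob *job)
{
    pthread_mutex_lock(&demo_job_mutex);
    demo_job_pause_locked(job);
    pthread_mutex_unlock(&demo_job_mutex);
}

[The point of the split is that a caller which already holds the lock — e.g. monitor code that just looked the job up — can call the _locked() variant directly without recursive locking, while existing callers keep working through the wrapper.]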
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH v12 06/21] job: move and update comments from blockjob.c
Date: Mon, 26 Sep 2022 05:31:59 -0400
Message-Id: <20220926093214.506243-7-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

This comment applies more to job than to blockjob; it was left in
blockjob.c only because, in the past, the whole job logic was
implemented there.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.

No functional change intended.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Stefan Hajnoczi
---
 blockjob.c | 20 --------------------
 job.c      | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 4868453d74..7da59a1f1c 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -36,21 +36,6 @@
 #include "qemu/main-loop.h"
 #include "qemu/timer.h"
 
-/*
- * The block job API is composed of two categories of functions.
- *
- * The first includes functions used by the monitor.  The monitor is
- * peculiar in that it accesses the block job list with block_job_get, and
- * therefore needs consistency across block_job_get and the actual operation
- * (e.g. block_job_set_speed).  The consistency is achieved with
- * aio_context_acquire/release.  These functions are declared in blockjob.h.
- *
- * The second includes functions used by the block job drivers and sometimes
- * by the core block layer.  These do not care about locking, because the
- * whole coroutine runs under the AioContext lock, and are declared in
- * blockjob_int.h.
- */
-
 static bool is_block_job(Job *job)
 {
     return job_type(job) == JOB_TYPE_BACKUP ||
@@ -433,11 +418,6 @@ static void block_job_event_ready(Notifier *n, void *opaque)
 }
 
 
-/*
- * API for block job drivers and the block layer.  These functions are
- * declared in blockjob_int.h.
- */
-
 void *block_job_create(const char *job_id, const BlockJobDriver *driver,
                        JobTxn *txn, BlockDriverState *bs, uint64_t perm,
                        uint64_t shared_perm, int64_t speed, int flags,
diff --git a/job.c b/job.c
index 8967ec671e..e336af0c1c 100644
--- a/job.c
+++ b/job.c
@@ -32,6 +32,22 @@
 #include "trace/trace-root.h"
 #include "qapi/qapi-events-job.h"
 
+/*
+ * The job API is composed of two categories of functions.
+ *
+ * The first includes functions used by the monitor.  The monitor is
+ * peculiar in that it accesses the job list with job_get, and
+ * therefore needs consistency across job_get and the actual operation
+ * (e.g. job_user_cancel). To achieve this consistency, the caller
+ * calls job_lock/job_unlock itself around the whole operation.
+ *
+ *
+ * The second includes functions used by the job drivers and sometimes
+ * by the core block layer.  These delegate the locking to the callee instead.
+ *
+ * TODO Actually make this true
+ */
+
 /*
  * job_mutex protects the jobs list, but also makes the
  * struct job fields thread-safe.
-- 
2.31.1
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH v12 07/21] blockjob: introduce block_job _locked() APIs
Date: Mon, 26 Sep 2022 05:32:00 -0400
Message-Id: <20220926093214.506243-8-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

Just as done with job.h, create _locked() versions of the functions in
blockjob.h.

These will be useful later, when the caller has already taken the lock.
All block job _locked functions call job _locked functions.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
---
 blockjob.c               | 52 ++++++++++++++++++++++++++++++++--------
 include/block/blockjob.h | 18 ++++++++++++++
 2 files changed, 60 insertions(+), 10 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 7da59a1f1c..0d59aba439 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,21 +44,27 @@ static bool is_block_job(Job *job)
            job_type(job) == JOB_TYPE_STREAM;
 }
 
-BlockJob *block_job_next(BlockJob *bjob)
+BlockJob *block_job_next_locked(BlockJob *bjob)
 {
     Job *job = bjob ? &bjob->job : NULL;
     GLOBAL_STATE_CODE();
 
     do {
-        job = job_next(job);
+        job = job_next_locked(job);
     } while (job && !is_block_job(job));
 
     return job ? container_of(job, BlockJob, job) : NULL;
 }
 
-BlockJob *block_job_get(const char *id)
+BlockJob *block_job_next(BlockJob *bjob)
 {
-    Job *job = job_get(id);
+    JOB_LOCK_GUARD();
+    return block_job_next_locked(bjob);
+}
+
+BlockJob *block_job_get_locked(const char *id)
+{
+    Job *job = job_get_locked(id);
     GLOBAL_STATE_CODE();
 
     if (job && is_block_job(job)) {
@@ -68,6 +74,12 @@ BlockJob *block_job_get(const char *id)
     }
 }
 
+BlockJob *block_job_get(const char *id)
+{
+    JOB_LOCK_GUARD();
+    return block_job_get_locked(id);
+}
+
 void block_job_free(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
@@ -256,14 +268,14 @@ static bool job_timer_pending(Job *job)
     return timer_pending(&job->sleep_timer);
 }
 
-bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
+bool block_job_set_speed_locked(BlockJob *job, int64_t speed, Error **errp)
 {
     const BlockJobDriver *drv = block_job_driver(job);
     int64_t old_speed = job->speed;
 
     GLOBAL_STATE_CODE();
 
-    if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp) < 0) {
+    if (job_apply_verb_locked(&job->job, JOB_VERB_SET_SPEED, errp) < 0) {
         return false;
     }
     if (speed < 0) {
@@ -277,7 +289,9 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     job->speed = speed;
 
     if (drv->set_speed) {
+        job_unlock();
         drv->set_speed(job, speed);
+        job_lock();
     }
 
     if (speed && speed <= old_speed) {
@@ -285,18 +299,24 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     }
 
     /* kick only if a timer is pending */
-    job_enter_cond(&job->job, job_timer_pending);
+    job_enter_cond_locked(&job->job, job_timer_pending);
 
     return true;
 }
 
+bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
+{
+    JOB_LOCK_GUARD();
+    return block_job_set_speed_locked(job, speed, errp);
+}
+
 int64_t block_job_ratelimit_get_delay(BlockJob *job, uint64_t n)
 {
     IO_CODE();
     return ratelimit_calculate_delay(&job->limit, n);
 }
 
-BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
+BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
 {
     BlockJobInfo *info;
     uint64_t progress_current, progress_total;
@@ -320,7 +340,7 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     info->len       = progress_total;
     info->speed     = job->speed;
     info->io_status = job->iostatus;
-    info->ready     = job_is_ready(&job->job),
+    info->ready     = job_is_ready_locked(&job->job),
     info->status    = job->job.status;
     info->auto_finalize = job->job.auto_finalize;
     info->auto_dismiss  = job->job.auto_dismiss;
@@ -333,6 +353,12 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     return info;
 }
 
+BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
+{
+    JOB_LOCK_GUARD();
+    return block_job_query_locked(job, errp);
+}
+
 static void block_job_iostatus_set_err(BlockJob *job, int error)
 {
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
@@ -478,7 +504,7 @@ fail:
     return NULL;
 }
 
-void block_job_iostatus_reset(BlockJob *job)
+void block_job_iostatus_reset_locked(BlockJob *job)
 {
     GLOBAL_STATE_CODE();
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
@@ -488,6 +514,12 @@ void block_job_iostatus_reset(BlockJob *job)
     job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
 }
 
+void block_job_iostatus_reset(BlockJob *job)
+{
+    JOB_LOCK_GUARD();
+    block_job_iostatus_reset_locked(job);
+}
+
 void block_job_user_resume(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 6525e16fd5..8b65d3949d 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -92,6 +92,9 @@ typedef struct BlockJob {
  */
 BlockJob *block_job_next(BlockJob *job);
 
+/* Same as block_job_next(), but called with job lock held. */
+BlockJob *block_job_next_locked(BlockJob *job);
+
 /**
  * block_job_get:
  * @id: The id of the block job.
@@ -102,6 +105,9 @@ BlockJob *block_job_next(BlockJob *job);
  */
 BlockJob *block_job_get(const char *id);
 
+/* Same as block_job_get(), but called with job lock held. */
+BlockJob *block_job_get_locked(const char *id);
+
 /**
  * block_job_add_bdrv:
  * @job: A block job
@@ -145,6 +151,12 @@ bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs);
  */
 bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp);
 
+/*
+ * Same as block_job_set_speed(), but called with job lock held.
+ * Might release the lock temporarily.
+ */
+bool block_job_set_speed_locked(BlockJob *job, int64_t speed, Error **errp);
+
 /**
  * block_job_query:
  * @job: The job to get information about.
@@ -153,6 +165,9 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp);
  */
 BlockJobInfo *block_job_query(BlockJob *job, Error **errp);
 
+/* Same as block_job_query(), but called with job lock held. */
+BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp);
+
 /**
  * block_job_iostatus_reset:
 * @job: The job whose I/O status should be reset.
@@ -162,6 +177,9 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp);
  */
 void block_job_iostatus_reset(BlockJob *job);
 
+/* Same as block_job_iostatus_reset(), but called with job lock held. */
+void block_job_iostatus_reset_locked(BlockJob *job);
+
 /*
  * block_job_get_aio_context:
 *
-- 
2.31.1
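[Editor's note: JOB_LOCK_GUARD() and WITH_JOB_LOCK_GUARD(), used throughout this series, are QEMU's scoped job-lock guards: the first holds job_mutex until the end of the enclosing scope, the second for one block. As a rough illustration of how such a scope-bound guard can be built in GNU C — an assumption for illustration only, not QEMU's actual implementation — one can rely on __attribute__((cleanup)); all demo_* names are invented:]

#include <pthread.h>

static pthread_mutex_t demo_job_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Cleanup handler invoked automatically when the guard variable dies. */
static inline void demo_unlock_cleanup(pthread_mutex_t **m)
{
    if (*m) {
        pthread_mutex_unlock(*m);
    }
}

/* Locks on entry; unlocks automatically when the enclosing scope ends. */
#define DEMO_LOCK_GUARD()                                                \
    __attribute__((cleanup(demo_unlock_cleanup)))                        \
    pthread_mutex_t *demo_guard_ = (pthread_mutex_lock(&demo_job_mutex), \
                                    &demo_job_mutex)

static int demo_counter; /* protected by demo_job_mutex */

static int demo_read_counter(void)
{
    DEMO_LOCK_GUARD();
    /* The return value is computed before the cleanup handler unlocks. */
    return demo_counter;
}

[The appeal of the guard form, visible in the wrappers above, is that every early return path unlocks correctly without explicit unlock calls.]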
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH v12 08/21] jobs: add job lock in find_* functions
Date: Mon, 26 Sep 2022 05:32:01 -0400
Message-Id: <20220926093214.506243-9-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

Both blockdev.c and job-qmp.c have TOC/TOU conditions, because they
first search for the job and then perform an action on it. Therefore,
we need to do the search + action under the same job mutex critical
section.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
---
 blockdev.c | 67 +++++++++++++++++++++++++++++++-----------------------
 job-qmp.c  | 57 ++++++++++++++++++++++++++++++--------------
 2 files changed, 86 insertions(+), 38 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 392d9476e6..2e941e2979 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3313,9 +3313,13 @@ out:
     aio_context_release(aio_context);
 }
 
-/* Get a block job using its ID and acquire its AioContext */
-static BlockJob *find_block_job(const char *id, AioContext **aio_context,
-                                Error **errp)
+/*
+ * Get a block job using its ID and acquire its AioContext.
+ * Called with job_mutex held.
+ */
+static BlockJob *find_block_job_locked(const char *id,
+                                       AioContext **aio_context,
+                                       Error **errp)
 {
     BlockJob *job;
 
     *aio_context = NULL;
 
-    job = block_job_get(id);
+    job = block_job_get_locked(id);
 
     if (!job) {
         error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
@@ -3340,13 +3344,16 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
-    block_job_set_speed(job, speed, errp);
+    block_job_set_speed_locked(job, speed, errp);
     aio_context_release(aio_context);
 }
 
@@ -3354,7 +3361,10 @@ void qmp_block_job_cancel(const char *device,
                           bool has_force, bool force, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
@@ -3364,14 +3374,14 @@ void qmp_block_job_cancel(const char *device,
         force = false;
     }
 
-    if (job_user_paused(&job->job) && !force) {
+    if (job_user_paused_locked(&job->job) && !force) {
         error_setg(errp, "The block job for device '%s' is currently paused",
                    device);
         goto out;
     }
 
     trace_qmp_block_job_cancel(job);
-    job_user_cancel(&job->job, force, errp);
+    job_user_cancel_locked(&job->job, force, errp);
 out:
     aio_context_release(aio_context);
 }
@@ -3379,57 +3389,69 @@ out:
 void qmp_block_job_pause(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_pause(job);
-    job_user_pause(&job->job, errp);
+    job_user_pause_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_resume(job);
-    job_user_resume(&job->job, errp);
+    job_user_resume_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_complete(job);
-    job_complete(&job->job, errp);
+    job_complete_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(id, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_finalize(job);
-    job_ref(&job->job);
-    job_finalize(&job->job, errp);
+    job_ref_locked(&job->job);
+    job_finalize_locked(&job->job, errp);
 
     /*
      * Job's context might have changed via job_finalize (and job_txn_apply
@@ -3437,23 +3459,26 @@ void qmp_block_job_finalize(const char *id, Error **errp)
      * one.
      */
     aio_context = block_job_get_aio_context(job);
-    job_unref(&job->job);
+    job_unref_locked(&job->job);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *bjob = find_block_job(id, &aio_context, errp);
+    BlockJob *bjob;
     Job *job;
 
+    JOB_LOCK_GUARD();
+    bjob = find_block_job_locked(id, &aio_context, errp);
+
     if (!bjob) {
         return;
     }
 
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
-    job_dismiss(&job, errp);
+    job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
 }
 
diff --git a/job-qmp.c b/job-qmp.c
index 829a28aa70..b1c456482a 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -29,14 +29,19 @@
 #include "qapi/error.h"
 #include "trace/trace-root.h"
 
-/* Get a job using its ID and acquire its AioContext */
-static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
+/*
+ * Get a job using its ID and acquire its AioContext.
+ * Called with job_mutex held.
+ */
+static Job *find_job_locked(const char *id,
+                            AioContext **aio_context,
+                            Error **errp)
 {
     Job *job;
 
     *aio_context = NULL;
 
-    job = job_get(id);
+    job = job_get_locked(id);
     if (!job) {
         error_setg(errp, "Job not found");
         return NULL;
@@ -51,71 +56,86 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
 void qmp_job_cancel(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_cancel(job);
-    job_user_cancel(job, true, errp);
+    job_user_cancel_locked(job, true, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_pause(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_pause(job);
-    job_user_pause(job, errp);
+    job_user_pause_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_resume(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_resume(job);
-    job_user_resume(job, errp);
+    job_user_resume_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_complete(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_complete(job);
-    job_complete(job, errp);
+    job_complete_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_finalize(job);
-    job_ref(job);
-    job_finalize(job, errp);
+    job_ref_locked(job);
+    job_finalize_locked(job, errp);
 
     /*
      * Job's context might have changed via job_finalize (and job_txn_apply
@@ -123,21 +143,24 @@ void qmp_job_finalize(const char *id, Error **errp)
      * one.
      */
     aio_context = job->aio_context;
-    job_unref(job);
+    job_unref_locked(job);
     aio_context_release(aio_context);
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_dismiss(job);
-    job_dismiss(&job, errp);
+    job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
 }
 
-- 
2.31.1
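[Editor's note: the patch above closes a time-of-check/time-of-use window: each QMP handler previously looked the job up (taking and dropping the lock inside the lookup helper) and only then acted on it, so the job could be dismissed and freed in between. A self-contained pthreads sketch of the bug and the fix, with invented demo_* names that do not exist in QEMU:]

#include <pthread.h>
#include <stddef.h>
#include <string.h>

static pthread_mutex_t demo_job_mutex = PTHREAD_MUTEX_INITIALIZER;

struct DemoJob {
    const char *id;
    int pause_count;
    struct DemoJob *next;
};

static struct DemoJob *demo_jobs; /* list head, protected by demo_job_mutex */

/* Lookup; the caller must hold demo_job_mutex. */
static struct DemoJob *demo_find_locked(const char *id)
{
    for (struct DemoJob *j = demo_jobs; j; j = j->next) {
        if (strcmp(j->id, id) == 0) {
            return j;
        }
    }
    return NULL;
}

/* Racy: another thread may dismiss and free the job in the unlocked
 * window, so the second critical section can touch freed memory. */
int demo_pause_racy(const char *id)
{
    struct DemoJob *job;

    pthread_mutex_lock(&demo_job_mutex);
    job = demo_find_locked(id);
    pthread_mutex_unlock(&demo_job_mutex);

    if (!job) {
        return -1;
    }
    /* <-- TOC/TOU window: the job may be gone by now */
    pthread_mutex_lock(&demo_job_mutex);
    job->pause_count++;
    pthread_mutex_unlock(&demo_job_mutex);
    return 0;
}

/* Fixed: search + action share one critical section, as the patched
 * find_block_job_locked()/find_job_locked() callers do. */
int demo_pause_fixed(const char *id)
{
    int ret = -1;

    pthread_mutex_lock(&demo_job_mutex);
    struct DemoJob *job = demo_find_locked(id);
    if (job) {
        job->pause_count++;
        ret = 0;
    }
    pthread_mutex_unlock(&demo_job_mutex);
    return ret;
}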
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy
Subject: [PATCH v12 09/21] jobs: use job locks also in the unit tests
Date: Mon, 26 Sep 2022 05:32:02 -0400
Message-Id: <20220926093214.506243-10-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

Add the missing job synchronization in the unit tests, using explicit
locks.

We deliberately use the _locked functions wrapped by a guard rather
than the plain wrappers, because the plain wrappers will be removed in
the future once their only remaining users are the tests. In other
words, if a function like job_pause() ends up being used only in tests,
merely to avoid writing

    WITH_JOB_LOCK_GUARD() {
        job_pause_locked();
    }

then it is not worth keeping job_pause() at all, and the guard should
be used directly instead.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.
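For readers unfamiliar with scoped lock guards, the following
standalone sketch models the pattern the tests adopt. It is
deliberately not QEMU code: the toy job_mutex, ToyJob struct and
WITH_TOY_JOB_LOCK_GUARD macro are invented for illustration, and
QEMU's real WITH_JOB_LOCK_GUARD() is built on cleanup attributes so
that early exits from the body also unlock.

/* Toy model of a scoped lock guard; standalone C11 + pthreads,
 * illustrative only, not the QEMU implementation. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

typedef struct ToyJob {
    int pause_count;   /* protected by job_mutex */
    bool paused;       /* protected by job_mutex */
} ToyJob;

/* Caller must hold job_mutex, by analogy with job_pause_locked(). */
static void toy_job_pause_locked(ToyJob *job)
{
    job->pause_count++;
    job->paused = true;
}

/* Scoped guard: the for-loop runs the body exactly once, locking on
 * entry and unlocking on normal exit. (A break inside the body would
 * skip the unlock; QEMU's real macro avoids that pitfall.) */
#define WITH_TOY_JOB_LOCK_GUARD()                                   \
    for (int done_ = (pthread_mutex_lock(&job_mutex), 0); !done_;   \
         done_ = (pthread_mutex_unlock(&job_mutex), 1))

int main(void)
{
    ToyJob job = {0};

    WITH_TOY_JOB_LOCK_GUARD() {
        toy_job_pause_locked(&job);
        assert(job.pause_count == 1);
        assert(job.paused);
    }
    return 0;
}

The shape is the point: take the mutex once for a scoped block, and
call only _locked helpers inside it.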
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
---
 tests/unit/test-bdrv-drain.c     |  76 ++++++++++++--------
 tests/unit/test-block-iothread.c |   8 ++-
 tests/unit/test-blockjob-txn.c   |  24 ++++---
 tests/unit/test-blockjob.c       | 115 +++++++++++++++++++------------
 4 files changed, 140 insertions(+), 83 deletions(-)

diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 36be84ae55..0db056ea63 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -943,61 +943,83 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         }
     }

-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(tjob->running);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    WITH_JOB_LOCK_GUARD() {
+        g_assert_cmpint(job->job.pause_count, ==, 0);
+        g_assert_false(job->job.paused);
+        g_assert_true(tjob->running);
+        g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    }

     do_drain_begin_unlocked(drain_type, drain_bs);

-    if (drain_type == BDRV_DRAIN_ALL) {
-        /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
-    } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+    WITH_JOB_LOCK_GUARD() {
+        if (drain_type == BDRV_DRAIN_ALL) {
+            /* bdrv_drain_all() drains both src and target */
+            g_assert_cmpint(job->job.pause_count, ==, 2);
+        } else {
+            g_assert_cmpint(job->job.pause_count, ==, 1);
+        }
+        g_assert_true(job->job.paused);
+        g_assert_false(job->job.busy); /* The job is paused */
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */

     do_drain_end_unlocked(drain_type, drain_bs);

     if (use_iothread) {
-        /* paused is reset in the I/O thread, wait for it */
+        /*
+         * Here we are waiting for the paused status to change,
+         * so don't bother protecting the read every time.
+         *
+         * paused is reset in the I/O thread, wait for it
+         */
         while (job->job.paused) {
            aio_poll(qemu_get_aio_context(), false);
         }
     }

-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    WITH_JOB_LOCK_GUARD() {
+        g_assert_cmpint(job->job.pause_count, ==, 0);
+        g_assert_false(job->job.paused);
+        g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    }

     do_drain_begin_unlocked(drain_type, target);

-    if (drain_type == BDRV_DRAIN_ALL) {
-        /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
-    } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+    WITH_JOB_LOCK_GUARD() {
+        if (drain_type == BDRV_DRAIN_ALL) {
+            /* bdrv_drain_all() drains both src and target */
+            g_assert_cmpint(job->job.pause_count, ==, 2);
+        } else {
+            g_assert_cmpint(job->job.pause_count, ==, 1);
+        }
+        g_assert_true(job->job.paused);
+        g_assert_false(job->job.busy); /* The job is paused */
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */

     do_drain_end_unlocked(drain_type, target);

     if (use_iothread) {
-        /* paused is reset in the I/O thread, wait for it */
+        /*
+         * Here we are waiting for the paused status to change,
+         * so don't bother protecting the read every time.
+         *
+         * paused is reset in the I/O thread, wait for it
+         */
         while (job->job.paused) {
             aio_poll(qemu_get_aio_context(), false);
         }
     }

-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    WITH_JOB_LOCK_GUARD() {
+        g_assert_cmpint(job->job.pause_count, ==, 0);
+        g_assert_false(job->job.paused);
+        g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    }

     aio_context_acquire(ctx);
-    ret = job_complete_sync(&job->job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        ret = job_complete_sync_locked(&job->job, &error_abort);
+    }
     g_assert_cmpint(ret, ==, (result == TEST_JOB_SUCCESS ? 0 : -EIO));

     if (use_iothread) {
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index 8b55eccc89..96fd21c00a 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -583,7 +583,9 @@ static void test_attach_blockjob(void)
     }

     aio_context_acquire(ctx);
-    job_complete_sync(&tjob->common.job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        job_complete_sync_locked(&tjob->common.job, &error_abort);
+    }
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);

@@ -757,7 +759,9 @@ static void test_propagate_mirror(void)
                  BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
                  false, "filter_node", MIRROR_COPY_MODE_BACKGROUND,
                  &error_abort);
-    job = job_get("job0");
+    WITH_JOB_LOCK_GUARD() {
+        job = job_get_locked("job0");
+    }
     filter = bdrv_find_node("filter_node");

     /* Change the AioContext of src */
diff --git a/tests/unit/test-blockjob-txn.c b/tests/unit/test-blockjob-txn.c
index c69028b450..d3b0bb24be 100644
--- a/tests/unit/test-blockjob-txn.c
+++ b/tests/unit/test-blockjob-txn.c
@@ -116,8 +116,10 @@ static void test_single_job(int expected)
     job = test_block_job_start(1, true, expected, &result, txn);
     job_start(&job->job);

-    if (expected == -ECANCELED) {
-        job_cancel(&job->job, false);
+    WITH_JOB_LOCK_GUARD() {
+        if (expected == -ECANCELED) {
+            job_cancel_locked(&job->job, false);
+        }
     }

     while (result == -EINPROGRESS) {
@@ -160,13 +162,15 @@ static void test_pair_jobs(int expected1, int expected2)
     /* Release our reference now to trigger as many nice
      * use-after-free bugs as possible.
      */
-    job_txn_unref(txn);
+    WITH_JOB_LOCK_GUARD() {
+        job_txn_unref_locked(txn);

-    if (expected1 == -ECANCELED) {
-        job_cancel(&job1->job, false);
-    }
-    if (expected2 == -ECANCELED) {
-        job_cancel(&job2->job, false);
+        if (expected1 == -ECANCELED) {
+            job_cancel_locked(&job1->job, false);
+        }
+        if (expected2 == -ECANCELED) {
+            job_cancel_locked(&job2->job, false);
+        }
     }

     while (result1 == -EINPROGRESS || result2 == -EINPROGRESS) {
@@ -219,7 +223,9 @@ static void test_pair_jobs_fail_cancel_race(void)
     job_start(&job1->job);
     job_start(&job2->job);

-    job_cancel(&job1->job, false);
+    WITH_JOB_LOCK_GUARD() {
+        job_cancel_locked(&job1->job, false);
+    }

     /* Now make job2 finish before the main loop kicks jobs. This simulates
      * the race between a pending kick and another job completing.
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index 4c9e1bf1e5..e4f126bb6d 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -211,8 +211,11 @@ static CancelJob *create_common(Job **pjob)
     bjob = mk_job(blk, "Steve", &test_cancel_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
-    job_ref(job);
-    assert(job->status == JOB_STATUS_CREATED);
+    WITH_JOB_LOCK_GUARD() {
+        job_ref_locked(job);
+        assert(job->status == JOB_STATUS_CREATED);
+    }
+
     s = container_of(bjob, CancelJob, common);
     s->blk = blk;

@@ -231,12 +234,14 @@ static void cancel_common(CancelJob *s)
     aio_context_acquire(ctx);

     job_cancel_sync(&job->job, true);
-    if (sts != JOB_STATUS_CREATED && sts != JOB_STATUS_CONCLUDED) {
-        Job *dummy = &job->job;
-        job_dismiss(&dummy, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        if (sts != JOB_STATUS_CREATED && sts != JOB_STATUS_CONCLUDED) {
+            Job *dummy = &job->job;
+            job_dismiss_locked(&dummy, &error_abort);
+        }
+        assert(job->job.status == JOB_STATUS_NULL);
+        job_unref_locked(&job->job);
     }
-    assert(job->job.status == JOB_STATUS_NULL);
-    job_unref(&job->job);
     destroy_blk(blk);

     aio_context_release(ctx);
@@ -251,6 +256,13 @@ static void test_cancel_created(void)
     cancel_common(s);
 }

+static void assert_job_status_is(Job *job, int status)
+{
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->status == status);
+    }
+}
+
 static void test_cancel_running(void)
 {
     Job *job;
@@ -259,7 +271,7 @@ static void test_cancel_running(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert_job_status_is(job, JOB_STATUS_RUNNING);

     cancel_common(s);
 }
@@ -272,11 +284,12 @@ static void test_cancel_paused(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
-
-    job_user_pause(job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->status == JOB_STATUS_RUNNING);
+        job_user_pause_locked(job, &error_abort);
+    }
     job_enter(job);
-    assert(job->status == JOB_STATUS_PAUSED);
+    assert_job_status_is(job, JOB_STATUS_PAUSED);

     cancel_common(s);
 }
@@ -289,11 +302,11 @@ static void test_cancel_ready(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert_job_status_is(job, JOB_STATUS_RUNNING);

     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert_job_status_is(job, JOB_STATUS_READY);

     cancel_common(s);
 }
@@ -306,15 +319,16 @@ static void test_cancel_standby(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert_job_status_is(job, JOB_STATUS_RUNNING);

     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
-
-    job_user_pause(job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->status == JOB_STATUS_READY);
+        job_user_pause_locked(job, &error_abort);
+    }
     job_enter(job);
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert_job_status_is(job, JOB_STATUS_STANDBY);

     cancel_common(s);
 }
@@ -327,20 +341,21 @@ static void test_cancel_pending(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert_job_status_is(job, JOB_STATUS_RUNNING);

     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
-
-    job_complete(job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->status == JOB_STATUS_READY);
+
+        job_complete_locked(job, &error_abort);
+    }
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert_job_status_is(job, JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert_job_status_is(job, JOB_STATUS_PENDING);

     cancel_common(s);
 }
@@ -353,25 +368,28 @@ static void test_cancel_concluded(void)
     s = create_common(&job);

     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert_job_status_is(job, JOB_STATUS_RUNNING);

     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
-
-    job_complete(job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->status == JOB_STATUS_READY);
+        job_complete_locked(job, &error_abort);
+    }
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert_job_status_is(job, JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert_job_status_is(job, JOB_STATUS_PENDING);

     aio_context_acquire(job->aio_context);
-    job_finalize(job, &error_abort);
+    WITH_JOB_LOCK_GUARD() {
+        job_finalize_locked(job, &error_abort);
+    }
     aio_context_release(job->aio_context);
-    assert(job->status == JOB_STATUS_CONCLUDED);
+    assert_job_status_is(job, JOB_STATUS_CONCLUDED);

     cancel_common(s);
 }
@@ -459,36 +477,43 @@ static void test_complete_in_standby(void)
     bjob = mk_job(blk, "job", &test_yielding_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
-    assert(job->status == JOB_STATUS_CREATED);
+    assert_job_status_is(job, JOB_STATUS_CREATED);

     /* Wait for the job to become READY */
     job_start(job);
     aio_context_acquire(ctx);
+    /*
+     * Here we are waiting for the status to change, so don't bother
+     * protecting the read every time.
+     */
     AIO_WAIT_WHILE(ctx, job->status != JOB_STATUS_READY);
     aio_context_release(ctx);

     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert_job_status_is(job, JOB_STATUS_STANDBY);
+
     /* Lock the IO thread to prevent the job from being run */
     aio_context_acquire(ctx);
     /* This will schedule the job to resume it */
     bdrv_drain_all_end();

-    /* But the job cannot run, so it will remain on standby */
-    assert(job->status == JOB_STATUS_STANDBY);
+    WITH_JOB_LOCK_GUARD() {
+        /* But the job cannot run, so it will remain on standby */
+        assert(job->status == JOB_STATUS_STANDBY);

-    /* Even though the job is on standby, this should work */
-    job_complete(job, &error_abort);
+        /* Even though the job is on standby, this should work */
+        job_complete_locked(job, &error_abort);

-    /* The test is done now, clean up. */
-    job_finish_sync(job, NULL, &error_abort);
-    assert(job->status == JOB_STATUS_PENDING);
+        /* The test is done now, clean up. */
+        job_finish_sync_locked(job, NULL, &error_abort);
+        assert(job->status == JOB_STATUS_PENDING);

-    job_finalize(job, &error_abort);
-    assert(job->status == JOB_STATUS_CONCLUDED);
+        job_finalize_locked(job, &error_abort);
+        assert(job->status == JOB_STATUS_CONCLUDED);

-    job_dismiss(&job, &error_abort);
+        job_dismiss_locked(&job, &error_abort);
+    }

     destroy_blk(blk);
     aio_context_release(ctx);
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LEmsgUzZ542FjhZpozZ88WhYEU7aXdlubkiUIC6XvIc=; b=Enb+IU9Zc94A+ngyGWmWPoSHJes9LTkdphbL0NvsooVOwKtRuMhiPpKJROj4/qXbtuYjqM SLm2MSSuoMrD8I3Bj54fembpU7j6YSP+pfy2KlhrtxpJ+rJZtJNULHkAR1vNfWYmkbVhWI G095etblXpIK8k5YUM1i5no1+XrRsws= X-MC-Unique: v3hrJ5u_MRScOTCmc-p-Hg-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Cc: Kevin Wolf , Hanna Reitz , Paolo Bonzini , John Snow , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Markus Armbruster , Stefan Hajnoczi , Fam Zheng , qemu-devel@nongnu.org, Emanuele Giuseppe Esposito , Vladimir Sementsov-Ogievskiy Subject: [PATCH v12 10/21] block/mirror.c: use of job helpers in drivers Date: Mon, 26 Sep 2022 05:32:03 -0400 Message-Id: <20220926093214.506243-11-eesposit@redhat.com> In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com> References: <20220926093214.506243-1-eesposit@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -28 X-Spam_score: -2.9 X-Spam_bar: -- X-Spam_report: (-2.9 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.082, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1664185024140100001 Content-Type: text/plain; charset="utf-8" Once job lock is used and aiocontext is removed, mirror has to perform job operations under the same critical section, Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Vladimir Sementsov-Ogievskiy --- block/mirror.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/block/mirror.c b/block/mirror.c index 3c4ab1159d..c6bf7f40ce 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -1152,8 +1152,10 @@ static void mirror_complete(Job *job, Error **errp) s->should_complete =3D true; =20 /* If the job is paused, it will be re-entered when it is resumed */ - if (!job->paused) { - job_enter(job); + WITH_JOB_LOCK_GUARD() { + if (!job->paused) { + job_enter_cond_locked(job, NULL); + } } } =20 @@ -1173,8 +1175,11 @@ static bool mirror_drained_poll(BlockJob *job) * from one of our own drain sections, to avoid a deadlock waiting for * ourselves. 
      */
-    if (!s->common.job.paused && !job_is_cancelled(&job->job) && !s->in_drain) {
-        return true;
+    WITH_JOB_LOCK_GUARD() {
+        if (!s->common.job.paused && !job_is_cancelled_locked(&job->job)
+            && !s->in_drain) {
+            return true;
+        }
     }

     return !!s->in_flight;
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
in-reply-to:in-reply-to:references:references; bh=EhuGqqsKp/Kv7xpRZpYrfp9gdJs5Er8o/DGIYrHmh7k=; b=Yy1y164u7UYGiNjnrOa9Fv+5dJTamjJsXPKt21GZYNwzNrFtv6fEYCk5d4s8AmsfqAAsV/ 22bM2QytxNrt+s6UcwiVyhXjqQAyEPFqpX7/wgGihZAH7LDQNrg9TAoj5MmVubI+WupFuk Bhe9fvxMRyfYY1OtWxw4GZBW5uIAu84= X-MC-Unique: OOvOcLMXNQu01Fh2fm8gPw-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Cc: Kevin Wolf , Hanna Reitz , Paolo Bonzini , John Snow , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Markus Armbruster , Stefan Hajnoczi , Fam Zheng , qemu-devel@nongnu.org, Emanuele Giuseppe Esposito , Vladimir Sementsov-Ogievskiy Subject: [PATCH v12 11/21] jobs: group together API calls under the same job lock Date: Mon, 26 Sep 2022 05:32:04 -0400 Message-Id: <20220926093214.506243-12-eesposit@redhat.com> In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com> References: <20220926093214.506243-1-eesposit@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -21 X-Spam_score: -2.2 X-Spam_bar: -- X-Spam_report: (-2.2 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.082, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1664185967641100001 Content-Type: text/plain; charset="utf-8" Now that the API offers also _locked() functions, take advantage of it and give also the caller control to take the lock and call _locked functions. This makes sense especially when we have for loops, because it makes no sense to have: for(job =3D job_next(); ...) where each job_next() takes the lock internally. Instead we want JOB_LOCK_GUARD(); for(job =3D job_next_locked(); ...) In addition, protect also direct field accesses, by either creating a new critical section or widening the existing ones. Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*. 
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 block.c            | 17 ++++++++++-------
 blockdev.c         | 14 ++++++++++----
 blockjob.c         | 35 ++++++++++++++++++++++-------------
 job-qmp.c          |  9 ++++++---
 monitor/qmp-cmds.c |  7 +++++--
 qemu-img.c         | 15 ++++++++++-----
 6 files changed, 63 insertions(+), 34 deletions(-)

diff --git a/block.c b/block.c
index bc85f46eed..2c6a4f62c9 100644
--- a/block.c
+++ b/block.c
@@ -4980,8 +4980,8 @@ static void bdrv_close(BlockDriverState *bs)

 void bdrv_close_all(void)
 {
-    assert(job_next(NULL) == NULL);
     GLOBAL_STATE_CODE();
+    assert(job_next(NULL) == NULL);

     /* Drop references from requests still in flight, such as canceled block
      * jobs whose AIO context has not been polled yet */
@@ -6167,13 +6167,16 @@ XDbgBlockGraph *bdrv_get_xdbg_block_graph(Error **errp)
         }
     }

-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
-        GSList *el;
+    WITH_JOB_LOCK_GUARD() {
+        for (job = block_job_next_locked(NULL); job;
+             job = block_job_next_locked(job)) {
+            GSList *el;

-        xdbg_graph_add_node(gr, job, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_JOB,
-                            job->job.id);
-        for (el = job->nodes; el; el = el->next) {
-            xdbg_graph_add_edge(gr, job, (BdrvChild *)el->data);
+            xdbg_graph_add_node(gr, job, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_JOB,
+                                job->job.id);
+            for (el = job->nodes; el; el = el->next) {
+                xdbg_graph_add_edge(gr, job, (BdrvChild *)el->data);
+            }
         }
     }

diff --git a/blockdev.c b/blockdev.c
index 2e941e2979..46090bb0aa 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -150,12 +150,15 @@ void blockdev_mark_auto_del(BlockBackend *blk)
         return;
     }

-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
+    JOB_LOCK_GUARD();
+
+    for (job = block_job_next_locked(NULL); job;
+         job = block_job_next_locked(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
             AioContext *aio_context = job->job.aio_context;
             aio_context_acquire(aio_context);

-            job_cancel(&job->job, false);
+            job_cancel_locked(&job->job, false);

             aio_context_release(aio_context);
         }
@@ -3756,7 +3759,10 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
     BlockJobInfoList *head = NULL, **tail = &head;
     BlockJob *job;

-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
+    JOB_LOCK_GUARD();
+
+    for (job = block_job_next_locked(NULL); job;
+         job = block_job_next_locked(job)) {
         BlockJobInfo *value;
         AioContext *aio_context;

@@ -3765,7 +3771,7 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
         }
         aio_context = block_job_get_aio_context(job);
         aio_context_acquire(aio_context);
-        value = block_job_query(job, errp);
+        value = block_job_query_locked(job, errp);
         aio_context_release(aio_context);
         if (!value) {
             qapi_free_BlockJobInfoList(head);
diff --git a/blockjob.c b/blockjob.c
index 0d59aba439..96fb9d9f73 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -111,8 +111,10 @@ static bool child_job_drained_poll(BdrvChild *c)
     /* An inactive or completed job doesn't have any pending requests. Jobs
      * with !job->busy are either already paused or have a pause point after
      * being reentered, so no job driver code will run before they pause.
      */
-    if (!job->busy || job_is_completed(job)) {
-        return false;
+    WITH_JOB_LOCK_GUARD() {
+        if (!job->busy || job_is_completed_locked(job)) {
+            return false;
+        }
     }

     /* Otherwise, assume that it isn't fully stopped yet, but allow the job to
@@ -475,13 +477,15 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job->ready_notifier.notify = block_job_event_ready;
     job->idle_notifier.notify = block_job_on_idle;

-    notifier_list_add(&job->job.on_finalize_cancelled,
-                      &job->finalize_cancelled_notifier);
-    notifier_list_add(&job->job.on_finalize_completed,
-                      &job->finalize_completed_notifier);
-    notifier_list_add(&job->job.on_pending, &job->pending_notifier);
-    notifier_list_add(&job->job.on_ready, &job->ready_notifier);
-    notifier_list_add(&job->job.on_idle, &job->idle_notifier);
+    WITH_JOB_LOCK_GUARD() {
+        notifier_list_add(&job->job.on_finalize_cancelled,
+                          &job->finalize_cancelled_notifier);
+        notifier_list_add(&job->job.on_finalize_completed,
+                          &job->finalize_completed_notifier);
+        notifier_list_add(&job->job.on_pending, &job->pending_notifier);
+        notifier_list_add(&job->job.on_ready, &job->ready_notifier);
+        notifier_list_add(&job->job.on_idle, &job->idle_notifier);
+    }

     error_setg(&job->blocker, "block device is in use by block job: %s",
                job_type_str(&job->job));
@@ -558,10 +562,15 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
                                         action);
     }
     if (action == BLOCK_ERROR_ACTION_STOP) {
-        if (!job->job.user_paused) {
-            job_pause(&job->job);
-            /* make the pause user visible, which will be resumed from QMP. */
-            job->job.user_paused = true;
+        WITH_JOB_LOCK_GUARD() {
+            if (!job->job.user_paused) {
+                job_pause_locked(&job->job);
+                /*
+                 * make the pause user visible, which will be
+                 * resumed from QMP.
+                 */
+                job->job.user_paused = true;
+            }
         }
         block_job_iostatus_set_err(job, error);
     }
diff --git a/job-qmp.c b/job-qmp.c
index b1c456482a..393d3a5b81 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -164,7 +164,8 @@ void qmp_job_dismiss(const char *id, Error **errp)
     aio_context_release(aio_context);
 }

-static JobInfo *job_query_single(Job *job, Error **errp)
+/* Called with job_mutex held. */
+static JobInfo *job_query_single_locked(Job *job, Error **errp)
 {
     JobInfo *info;
     uint64_t progress_current;
@@ -194,7 +195,9 @@ JobInfoList *qmp_query_jobs(Error **errp)
     JobInfoList *head = NULL, **tail = &head;
     Job *job;

-    for (job = job_next(NULL); job; job = job_next(job)) {
+    JOB_LOCK_GUARD();
+
+    for (job = job_next_locked(NULL); job; job = job_next_locked(job)) {
         JobInfo *value;
         AioContext *aio_context;

@@ -203,7 +206,7 @@ JobInfoList *qmp_query_jobs(Error **errp)
         }
         aio_context = job->aio_context;
         aio_context_acquire(aio_context);
-        value = job_query_single(job, errp);
+        value = job_query_single_locked(job, errp);
         aio_context_release(aio_context);
         if (!value) {
             qapi_free_JobInfoList(head);
diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c
index 7314cd813d..81c8fdadf8 100644
--- a/monitor/qmp-cmds.c
+++ b/monitor/qmp-cmds.c
@@ -135,8 +135,11 @@ void qmp_cont(Error **errp)
         blk_iostatus_reset(blk);
     }

-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
-        block_job_iostatus_reset(job);
+    WITH_JOB_LOCK_GUARD() {
+        for (job = block_job_next_locked(NULL); job;
+             job = block_job_next_locked(job)) {
+            block_job_iostatus_reset_locked(job);
+        }
     }

     /* Continuing after completed migration. Images have been inactivated to
diff --git a/qemu-img.c b/qemu-img.c
index cab9776f42..e0a30b1f4c 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -912,9 +912,11 @@ static void run_block_job(BlockJob *job, Error **errp)
     int ret = 0;

     aio_context_acquire(aio_context);
-    job_ref(&job->job);
+    job_lock();
+    job_ref_locked(&job->job);
     do {
         float progress = 0.0f;
+        job_unlock();
         aio_poll(aio_context, true);

         progress_get_snapshot(&job->job.progress, &progress_current,
@@ -923,14 +925,17 @@ static void run_block_job(BlockJob *job, Error **errp)
             progress = (float)progress_current / progress_total * 100.f;
         }
         qemu_progress_print(progress, 0);
-    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));
+        job_lock();
+    } while (!job_is_ready_locked(&job->job) &&
+             !job_is_completed_locked(&job->job));

-    if (!job_is_completed(&job->job)) {
-        ret = job_complete_sync(&job->job, errp);
+    if (!job_is_completed_locked(&job->job)) {
+        ret = job_complete_sync_locked(&job->job, errp);
     } else {
         ret = job->job.ret;
     }
-    job_unref(&job->job);
+    job_unref_locked(&job->job);
+    job_unlock();
     aio_context_release(aio_context);

     /* publish completion progress only when success */
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Vladimir Sementsov-Ogievskiy
Subject: [PATCH v12 12/21] job: detect change of aiocontext within job coroutine
Date: Mon, 26 Sep 2022 05:32:05 -0400
Message-Id: <20220926093214.506243-13-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

From: Paolo Bonzini

We want to make sure access of job->aio_context is always done
under either BQL or job_mutex. The problem is that using
aio_co_enter(job->aiocontext, job->co) in job_start and job_enter_cond
makes the coroutine immediately resume, so we can't hold the job lock.
And caching it is not safe either, as it might change.

job_start is under BQL, so it can freely read job->aiocontext, but
job_enter_cond is not.
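To make the hazard concrete before describing the fix, here is a
deliberately simplified standalone sketch (invented names, plain
pthreads, not QEMU code) of why caching such a pointer outside the
lock is unsafe; the solution described next re-reads it under the lock
at the point where the coroutine resumes:

/* Toy race illustration; standalone C11 + pthreads, not QEMU code.
 * A "context" pointer that another thread may swap stands in for
 * job->aio_context. */
#include <assert.h>
#include <pthread.h>

typedef struct Ctx { int id; } Ctx;

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static Ctx ctx_a = { 1 }, ctx_b = { 2 };
static Ctx *job_ctx = &ctx_a;        /* protected by job_mutex */

/* Analogous to bdrv_try_set_aio_context() moving a quiescent job. */
static void *move_job(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&job_mutex);
    job_ctx = &ctx_b;
    pthread_mutex_unlock(&job_mutex);
    return NULL;
}

int main(void)
{
    pthread_t t;

    /* UNSAFE pattern: cache the pointer outside the lock... */
    Ctx *cached = job_ctx;

    pthread_create(&t, NULL, move_job, NULL);
    pthread_join(t, NULL);

    /* ...by now 'cached' may be stale and no longer name the job's
     * context. The safe pattern re-reads under the lock after any
     * point where the context may have changed, which is what
     * job_do_yield_locked() below does after the coroutine resumes. */
    pthread_mutex_lock(&job_mutex);
    assert(job_ctx == &ctx_b);       /* a fresh read sees the move */
    pthread_mutex_unlock(&job_mutex);

    (void)cached;
    return 0;
}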
We want to avoid reading job->aio_context in job_enter_cond, therefore:
1) use aio_co_wake(), since it doesn't want an aiocontext as argument
   but uses job->co->ctx
2) detect possible discrepancy between job->co->ctx and job->aio_context
   by checking right after the coroutine resumes back from yielding if
   job->aio_context has changed. If so, reschedule the coroutine to the
   new context.

Calling bdrv_try_set_aio_context() will issue the following calls
(simplified):
* in terms of bdrv callbacks:
  .drained_begin -> .set_aio_context -> .drained_end
* in terms of child_job functions:
  child_job_drained_begin -> child_job_set_aio_context -> child_job_drained_end
* in terms of job functions:
  job_pause_locked -> job_set_aio_context -> job_resume_locked

We can see that after setting the new aio_context, job_resume_locked
calls again job_enter_cond, which then invokes aio_co_wake(). But
while job->aiocontext has been set in job_set_aio_context,
job->co->ctx has not changed, so the coroutine would be entering in
the wrong aiocontext.

Using aio_co_schedule in job_resume_locked() might seem a valid
alternative, but the problem is that the bh resuming the coroutine
is not scheduled immediately, and if in the meanwhile another
bdrv_try_set_aio_context() is run (see test_propagate_mirror() in
test-block-iothread.c), we would have the first schedule in the
wrong aiocontext, and the second set of drains won't even manage
to schedule the coroutine, as job->busy would still be true from
the previous job_resume_locked().

The solution is to stick with aio_co_wake() and detect every time
the coroutine resumes back from yielding whether job->aio_context
has changed. If so, we can reschedule it to the new context.

Check for the aiocontext change in job_do_yield_locked because:
1) aio_co_reschedule_self requires to be in the running coroutine
2) since child_job_set_aio_context allows changing the aiocontext only
   while the job is paused, this is the exact place where the coroutine
   resumes, before running JobDriver's code.

Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 job.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/job.c b/job.c
index e336af0c1c..85ae843f03 100644
--- a/job.c
+++ b/job.c
@@ -588,7 +588,7 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
     job->busy = true;
     real_job_unlock();
     job_unlock();
-    aio_co_enter(job->aio_context, job->co);
+    aio_co_wake(job->co);
     job_lock();
 }

@@ -615,6 +615,8 @@ void job_enter(Job *job)
  */
 static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
 {
+    AioContext *next_aio_context;
+
     real_job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
@@ -626,7 +628,20 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
     qemu_coroutine_yield();
     job_lock();

-    /* Set by job_enter_cond() before re-entering the coroutine. */
+    next_aio_context = job->aio_context;
+    /*
+     * Coroutine has resumed, but in the meanwhile the job AioContext
+     * might have changed via bdrv_try_set_aio_context(), so we need to move
+     * the coroutine too in the new aiocontext.
+     */
+    while (qemu_get_current_aio_context() != next_aio_context) {
+        job_unlock();
+        aio_co_reschedule_self(next_aio_context);
+        job_lock();
+        next_aio_context = job->aio_context;
+    }
+
+    /* Set by job_enter_cond_locked() before re-entering the coroutine. */
     assert(job->busy);
 }

-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH v12 13/21] jobs: protect job.aio_context with BQL and job_mutex
Date: Mon, 26 Sep 2022 05:32:06 -0400
Message-Id: <20220926093214.506243-14-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

In order to make it thread safe, implement a "fake rwlock",
where we allow reads under BQL *or* job_mutex held, but
writes only under BQL *and* job_mutex.

The only write we have is in child_job_set_aio_ctx, which always
happens under drain (so the job is paused).
For this reason, introduce job_set_aio_context and make sure that
the context is set under BQL, job_mutex and drain.
Also make sure all other places where the aiocontext is read
are protected.

The reads in commit.c and mirror.c are actually safe, because they are
always done under BQL.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.
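The "fake rwlock" is easy to express in miniature. The sketch below
uses invented names and plain pthreads rather than QEMU's primitives;
the invariant is that a writer holds both locks, so it excludes any
reader, who holds at least one of the two:

/* Toy model of the BQL-or-job_mutex "fake rwlock"; standalone
 * C11 + pthreads, names invented for illustration. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

static int aio_context_id = 1;  /* stands in for job->aio_context */

/* Read rule: hold *either* lock. A writer must hold both, so it
 * cannot run concurrently with us while we hold one of them. */
static int read_ctx_under_job_mutex(void)
{
    pthread_mutex_lock(&job_mutex);
    int id = aio_context_id;
    pthread_mutex_unlock(&job_mutex);
    return id;
}

/* Write rule: hold *both* locks, like job_set_aio_context(). */
static void write_ctx(int id)
{
    pthread_mutex_lock(&bql);
    pthread_mutex_lock(&job_mutex);
    aio_context_id = id;
    pthread_mutex_unlock(&job_mutex);
    pthread_mutex_unlock(&bql);
}

int main(void)
{
    write_ctx(2);
    printf("ctx id: %d\n", read_ctx_under_job_mutex());
    return 0;
}

The payoff is cheap reads on both hot paths: main-loop code already
holds the BQL and job code already holds job_mutex, so neither needs
to take an extra lock just to read the field, while the rare writer
pays for both.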
Suggested-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 block/replication.c |  1 +
 blockjob.c          |  3 ++-
 include/qemu/job.h  | 23 ++++++++++++++++++++---
 job.c               | 12 ++++++++++++
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/block/replication.c b/block/replication.c
index 55c8f894aa..5977f7a833 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -142,6 +142,7 @@ static void replication_close(BlockDriverState *bs)
 {
     BDRVReplicationState *s = bs->opaque;
     Job *commit_job;
+    GLOBAL_STATE_CODE();

     if (s->stage == BLOCK_REPLICATION_RUNNING) {
         replication_stop(s->rs, false, NULL);
diff --git a/blockjob.c b/blockjob.c
index 96fb9d9f73..c8919cef9b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -162,12 +162,13 @@ static void child_job_set_aio_ctx(BdrvChild *c, AioContext *ctx,
         bdrv_set_aio_context_ignore(sibling->bs, ctx, ignore);
     }

-    job->job.aio_context = ctx;
+    job_set_aio_context(&job->job, ctx);
 }

 static AioContext *child_job_get_parent_aio_context(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
+    GLOBAL_STATE_CODE();

     return job->job.aio_context;
 }
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 5709e8d4a8..50a4c06b93 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -74,11 +74,17 @@ typedef struct Job {
     /* ProgressMeter API is thread-safe */
     ProgressMeter progress;

+    /**
+     * AioContext to run the job coroutine in.
+     * The job Aiocontext can be read when holding *either*
+     * the BQL (so we are in the main loop) or the job_mutex.
+     * It can only be written when we hold *both* BQL
+     * and the job_mutex.
+     */
+    AioContext *aio_context;

-    /** Protected by AioContext lock */

-    /** AioContext to run the job coroutine in */
-    AioContext *aio_context;
+    /** Protected by AioContext lock */

     /** Reference count of the block job */
     int refcnt;
@@ -741,4 +747,15 @@ int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp),
 int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
                            Error **errp);

+/**
+ * Sets the @job->aio_context.
+ * Called with job_mutex *not* held.
+ *
+ * This function must run in the main thread to protect against
+ * concurrent read in job_finish_sync_locked(), takes the job_mutex
+ * lock to protect against the read in job_do_yield_locked(), and must
+ * be called when the job is quiescent.
+ */
+void job_set_aio_context(Job *job, AioContext *ctx);
+
 #endif
diff --git a/job.c b/job.c
index 85ae843f03..c4ac363f08 100644
--- a/job.c
+++ b/job.c
@@ -396,6 +396,17 @@ Job *job_get(const char *id)
     return job_get_locked(id);
 }

+void job_set_aio_context(Job *job, AioContext *ctx)
+{
+    /* protect against read in job_finish_sync_locked and job_start */
+    GLOBAL_STATE_CODE();
+    /* protect against read in job_do_yield_locked */
+    JOB_LOCK_GUARD();
+    /* ensure the job is quiescent while the AioContext is changed */
+    assert(job->paused || job_is_completed_locked(job));
+    job->aio_context = ctx;
+}
+
 /* Called with job_mutex *not* held. */
 static void job_sleep_timer_cb(void *opaque)
 {
@@ -1379,6 +1390,7 @@ int job_finish_sync_locked(Job *job,
 {
     Error *local_err = NULL;
     int ret;
+    GLOBAL_STATE_CODE();

     job_ref_locked(job);

-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito
Subject: [PATCH v12 14/21] blockjob.h: categorize fields in struct BlockJob
Date: Mon, 26 Sep 2022 05:32:07 -0400
Message-Id: <20220926093214.506243-15-eesposit@redhat.com>

The same job lock is also used to protect some of the BlockJob fields.
Categorize them just as was done in job.h.

Reviewed-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Emanuele Giuseppe Esposito
---
 include/block/blockjob.h | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 8b65d3949d..10c24e240a 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -40,21 +40,38 @@ typedef struct BlockJobDriver BlockJobDriver;
  * Long-running operation on a BlockDriverState.
  */
 typedef struct BlockJob {
-    /** Data belonging to the generic Job infrastructure */
+    /**
+     * Data belonging to the generic Job infrastructure.
+     * Protected by job mutex.
+     */
     Job job;
 
-    /** Status that is published by the query-block-jobs QMP API */
+    /**
+     * Status that is published by the query-block-jobs QMP API.
+     * Protected by job mutex.
+     */
     BlockDeviceIoStatus iostatus;
 
-    /** Speed that was set with @block_job_set_speed. */
+    /**
+     * Speed that was set with @block_job_set_speed.
+     * Always modified and read under the QEMU global mutex (GLOBAL_STATE_CODE).
+     */
     int64_t speed;
 
-    /** Rate limiting data structure for implementing @speed. */
+    /**
+     * Rate limiting data structure for implementing @speed.
+     * RateLimit API is thread-safe.
+     */
     RateLimit limit;
 
-    /** Block other operations when block job is running */
+    /**
+     * Block other operations when block job is running.
+     * Always modified and read under the QEMU global mutex (GLOBAL_STATE_CODE).
+     */
     Error *blocker;
 
+    /** All notifiers are set once in block_job_create() and never modified. */
+
     /** Called when a cancelled job is finalised. */
     Notifier finalize_cancelled_notifier;
 
@@ -70,7 +87,10 @@ typedef struct BlockJob {
     /** Called when the job coroutine yields or terminates */
     Notifier idle_notifier;
 
-    /** BlockDriverStates that are involved in this block job */
+    /**
+     * BlockDriverStates that are involved in this block job.
+     * Always modified and read under the QEMU global mutex (GLOBAL_STATE_CODE).
+     */
     GSList *nodes;
 } BlockJob;
 
-- 
2.31.1
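As a quick illustration of these categories (a sketch only, with a
hypothetical helper; it assumes the usual QEMU includes), main-loop code
would read the fields like this:

    /* Hypothetical helper: returns the speed, or -1 on I/O error. */
    static int64_t example_query(BlockJob *bjob)
    {
        BlockDeviceIoStatus ios;

        GLOBAL_STATE_CODE();      /* bjob->speed is a GLOBAL_STATE field */

        WITH_JOB_LOCK_GUARD() {   /* bjob->iostatus is under job_mutex */
            ios = bjob->iostatus;
        }
        return ios == BLOCK_DEVICE_IO_STATUS_OK ? bjob->speed : -1;
    }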
From: Emanuele Giuseppe Esposito
Subject: [PATCH v12 15/21] blockjob: rename notifier callbacks as _locked
Date: Mon, 26 Sep 2022 05:32:08 -0400
Message-Id: <20220926093214.506243-16-eesposit@redhat.com>

They are all called with job_lock held, from within
job_event_*_locked().

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
---
 blockjob.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index c8919cef9b..d8fb5311c7 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -250,7 +250,8 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
     return 0;
 }
 
-static void block_job_on_idle(Notifier *n, void *opaque)
+/* Called with job_mutex lock held. */
+static void block_job_on_idle_locked(Notifier *n, void *opaque)
 {
     aio_wait_kick();
 }
@@ -370,7 +371,8 @@ static void block_job_iostatus_set_err(BlockJob *job, int error)
     }
 }
 
-static void block_job_event_cancelled(Notifier *n, void *opaque)
+/* Called with job_mutex lock held. */
+static void block_job_event_cancelled_locked(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
     uint64_t progress_current, progress_total;
@@ -389,7 +391,8 @@ static void block_job_event_cancelled(Notifier *n, void *opaque)
                                         job->speed);
 }
 
-static void block_job_event_completed(Notifier *n, void *opaque)
+/* Called with job_mutex lock held. */
+static void block_job_event_completed_locked(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
     const char *msg = NULL;
@@ -415,7 +418,8 @@ static void block_job_event_completed(Notifier *n, void *opaque)
                                         msg);
 }
 
-static void block_job_event_pending(Notifier *n, void *opaque)
+/* Called with job_mutex lock held. */
+static void block_job_event_pending_locked(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
 
@@ -427,7 +431,8 @@ static void block_job_event_pending(Notifier *n, void *opaque)
                                       job->job.id);
 }
 
-static void block_job_event_ready(Notifier *n, void *opaque)
+/* Called with job_mutex lock held. */
+static void block_job_event_ready_locked(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
     uint64_t progress_current, progress_total;
@@ -472,11 +477,11 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
 
     ratelimit_init(&job->limit);
 
-    job->finalize_cancelled_notifier.notify = block_job_event_cancelled;
-    job->finalize_completed_notifier.notify = block_job_event_completed;
-    job->pending_notifier.notify = block_job_event_pending;
-    job->ready_notifier.notify = block_job_event_ready;
-    job->idle_notifier.notify = block_job_on_idle;
+    job->finalize_cancelled_notifier.notify = block_job_event_cancelled_locked;
+    job->finalize_completed_notifier.notify = block_job_event_completed_locked;
+    job->pending_notifier.notify = block_job_event_pending_locked;
+    job->ready_notifier.notify = block_job_event_ready_locked;
+    job->idle_notifier.notify = block_job_on_idle_locked;
 
     WITH_JOB_LOCK_GUARD() {
         notifier_list_add(&job->job.on_finalize_cancelled,
-- 
2.31.1
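The convention is worth spelling out (a sketch only; the callback and the
trace point are hypothetical): a notifier on a job's notifier lists is
invoked from job_event_*_locked(), i.e. with job_mutex already held, so it
must neither take job_mutex again nor call a non-_locked job API.

    /* Hypothetical _locked notifier callback. */
    static void example_job_ready_locked(Notifier *n, void *opaque)
    {
        Job *job = opaque;

        /* Reading job fields is safe: the caller holds job_mutex. */
        trace_example_job_ready(job->id);   /* hypothetical trace point */
    }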
From: Emanuele Giuseppe Esposito
Subject: [PATCH v12 16/21] blockjob: protect iostatus field in BlockJob struct
Date: Mon, 26 Sep 2022 05:32:09 -0400
Message-Id: <20220926093214.506243-17-eesposit@redhat.com>
iostatus is the only field (together with .job) that needs protection
through the job mutex: it is set in the main loop (GLOBAL_STATE
functions) but read in I/O code (block_job_error_action).

In order to protect it, change block_job_iostatus_set_err() to
block_job_iostatus_set_err_locked(), always called under the job lock.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Kevin Wolf
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 block/mirror.c | 7 +++++--
 blockjob.c     | 5 +++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index c6bf7f40ce..7e32ee1d31 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -893,7 +893,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
     BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     BlockDriverState *target_bs = blk_bs(s->target);
-    bool need_drain = true;
+    bool need_drain = true, iostatus;
     int64_t length;
     int64_t target_length;
     BlockDriverInfo bdi;
@@ -1016,8 +1016,11 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
          * We do so every BLKOCK_JOB_SLICE_TIME nanoseconds, or when there is
          * an error, or when the source is clean, whichever comes first. */
         delta = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - s->last_pause_ns;
+        WITH_JOB_LOCK_GUARD() {
+            iostatus = s->common.iostatus;
+        }
         if (delta < BLOCK_JOB_SLICE_TIME &&
-            s->common.iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
+            iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
             if (s->in_flight >= MAX_IN_FLIGHT || s->buf_free_count == 0 ||
                 (cnt == 0 && s->in_flight > 0)) {
                 trace_mirror_yield(s, cnt, s->buf_free_count, s->in_flight);
diff --git a/blockjob.c b/blockjob.c
index d8fb5311c7..d04f804001 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -363,7 +364,8 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     return block_job_query_locked(job, errp);
 }
 
-static void block_job_iostatus_set_err(BlockJob *job, int error)
+/* Called with job lock held */
+static void block_job_iostatus_set_err_locked(BlockJob *job, int error)
 {
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
         job->iostatus = error == ENOSPC ? BLOCK_DEVICE_IO_STATUS_NOSPACE :
@@ -577,8 +578,8 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
              */
             job->job.user_paused = true;
         }
+        block_job_iostatus_set_err_locked(job, error);
     }
-    block_job_iostatus_set_err(job, error);
     return action;
 }
-- 
2.31.1
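The mirror hunk above is an instance of a general pattern worth noting (a
sketch only; the helper is hypothetical): I/O code must not hold job_mutex
across long-running work, so it snapshots iostatus under the lock and then
operates on the local copy.

    /* Hypothetical helper around the pattern used in mirror_run(). */
    static bool example_io_ok(MirrorBlockJob *s)
    {
        BlockDeviceIoStatus iostatus;

        WITH_JOB_LOCK_GUARD() {
            iostatus = s->common.iostatus;
        }
        return iostatus == BLOCK_DEVICE_IO_STATUS_OK;
    }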
From: Emanuele Giuseppe Esposito
Subject: [PATCH v12 17/21] job.h: categorize JobDriver callbacks that need the AioContext lock
Date: Mon, 26 Sep 2022 05:32:10 -0400
Message-Id: <20220926093214.506243-18-eesposit@redhat.com>

Some callback implementations use bdrv_* APIs that assume the
AioContext lock is held. Make sure this invariant is documented.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 include/qemu/job.h | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 50a4c06b93..a954f8f992 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -65,7 +65,11 @@ typedef struct Job {
     /** True if this job should automatically dismiss itself */
     bool auto_dismiss;
 
-    /** The completion function that will be called when the job completes. */
+    /**
+     * The completion function that will be called when the job completes.
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
+     */
     BlockCompletionFunc *cb;
 
     /** The opaque value that is passed to the completion function. */
@@ -260,6 +264,9 @@ struct JobDriver {
      *
      * This callback will not be invoked if the job has already failed.
      * If it fails, abort and then clean will be called.
+     *
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
      */
     int (*prepare)(Job *job);
 
@@ -270,6 +277,9 @@ struct JobDriver {
      *
      * All jobs will complete with a call to either .commit() or .abort() but
      * never both.
+     *
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
      */
     void (*commit)(Job *job);
 
@@ -280,6 +290,9 @@ struct JobDriver {
      *
      * All jobs will complete with a call to either .commit() or .abort() but
      * never both.
+     *
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
      */
     void (*abort)(Job *job);
 
@@ -288,6 +301,9 @@ struct JobDriver {
      * .commit() or .abort(). Regardless of which callback is invoked after
      * completion, .clean() will always be called, even if the job does not
      * belong to a transaction group.
+     *
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
      */
     void (*clean)(Job *job);
 
@@ -302,11 +318,18 @@ struct JobDriver {
      * READY).
      * (If the callback is NULL, the job is assumed to terminate
      * without I/O.)
+     *
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
      */
     bool (*cancel)(Job *job, bool force);
 
 
-    /** Called when the job is freed */
+    /**
+     * Called when the job is freed.
+     * Called with AioContext lock held, since many callback implementations
+     * use bdrv_* functions that require the lock to be held.
+     */
     void (*free)(Job *job);
 };
 
-- 
2.31.1
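As a sketch of what this invariant buys a driver (illustrative only; the
ExampleJob type and its members are hypothetical, bdrv_flush() is a real
bdrv_* call that expects the AioContext lock):

    /* Hypothetical driver state for the sketch. */
    typedef struct ExampleJob {
        Job common;
        BlockDriverState *bs;
    } ExampleJob;

    /* Hypothetical .prepare() implementation relying on the invariant. */
    static int example_job_prepare(Job *job)
    {
        ExampleJob *s = container_of(job, ExampleJob, common);

        /* No aio_context_acquire() here: the job core already holds it. */
        return bdrv_flush(s->bs);
    }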
From: Emanuele Giuseppe Esposito
Subject: [PATCH v12 18/21] job.c: enable job lock/unlock and remove AioContext locks
Date: Mon, 26 Sep 2022 05:32:11 -0400
Message-Id: <20220926093214.506243-19-eesposit@redhat.com>

Change job_{lock/unlock} and the macros to use job_mutex. Now that they
are no longer nops, remove the AioContext locks to avoid deadlocks.
Therefore:

- when possible, remove the AioContext lock/unlock pair completely;

- if the AioContext lock is also used by some other function, reduce
  the locking section as much as possible, leaving the job API outside;

- change AIO_WAIT_WHILE into AIO_WAIT_WHILE_UNLOCKED, since we are not
  using the AioContext lock anymore.

The only functions that still need the AioContext lock are:

- the JobDriver callbacks, already documented in job.h;

- job_cancel_sync() in replication.c, which is called with the
  AioContext lock taken; but now the job layer uses
  AIO_WAIT_WHILE_UNLOCKED, so we need to release the lock around it
  (see the sketch after this message).

Reduce the locking sections to only cover the callback invocations and
document the functions that take the AioContext lock, to avoid taking
it twice.

Also remove real_job_{lock/unlock}, as they are replaced by the public
functions.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
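Before the diff, a sketch of the second rule in practice (illustrative
only; the helper is hypothetical): a caller that holds the AioContext
lock for its own reasons must now drop it around the job API, because
job_cancel_sync() waits with AIO_WAIT_WHILE_UNLOCKED().

    /* Hypothetical caller, mirroring the replication.c change below. */
    static void example_stop(Job *job, AioContext *aio_context)
    {
        aio_context_release(aio_context);   /* keep the job API outside */
        job_cancel_sync(job, true);
        aio_context_acquire(aio_context);
    }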
---
 block/replication.c              |   2 +
 blockdev.c                       |  72 +++-----------------
 include/qemu/job.h               |  17 ++---
 job-qmp.c                        |  46 +++----------
 job.c                            | 111 +++++++++----------------
 qemu-img.c                       |   2 -
 tests/unit/test-bdrv-drain.c     |   4 +-
 tests/unit/test-block-iothread.c |   2 +-
 tests/unit/test-blockjob.c       |  19 +++--
 9 files changed, 72 insertions(+), 203 deletions(-)

diff --git a/block/replication.c b/block/replication.c
index 5977f7a833..c67f931f37 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -727,7 +727,9 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
          * disk, secondary disk in backup_job_completed().
          */
         if (s->backup_job) {
+            aio_context_release(aio_context);
             job_cancel_sync(&s->backup_job->job, true);
+            aio_context_acquire(aio_context);
         }
 
         if (!failover) {
diff --git a/blockdev.c b/blockdev.c
index 46090bb0aa..a32bafc07a 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -155,12 +155,7 @@ void blockdev_mark_auto_del(BlockBackend *blk)
     for (job = block_job_next_locked(NULL); job;
          job = block_job_next_locked(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
-            AioContext *aio_context = job->job.aio_context;
-            aio_context_acquire(aio_context);
-
             job_cancel_locked(&job->job, false);
-
-            aio_context_release(aio_context);
         }
     }
 
@@ -1847,14 +1842,7 @@ static void drive_backup_abort(BlkActionState *common)
     DriveBackupState *state = DO_UPCAST(DriveBackupState, common, common);
 
     if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
         job_cancel_sync(&state->job->job, true);
-
-        aio_context_release(aio_context);
     }
 }
 
@@ -1948,14 +1936,7 @@ static void blockdev_backup_abort(BlkActionState *common)
     BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common);
 
     if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
         job_cancel_sync(&state->job->job, true);
-
-        aio_context_release(aio_context);
     }
 }
 
@@ -3317,19 +3298,14 @@ out:
 }
 
 /*
- * Get a block job using its ID and acquire its AioContext.
- * Called with job_mutex held.
+ * Get a block job using its ID. Called with job_mutex held.
  */
-static BlockJob *find_block_job_locked(const char *id,
-                                       AioContext **aio_context,
-                                       Error **errp)
+static BlockJob *find_block_job_locked(const char *id, Error **errp)
 {
     BlockJob *job;
 
     assert(id != NULL);
 
-    *aio_context = NULL;
-
     job = block_job_get_locked(id);
 
     if (!job) {
@@ -3338,36 +3314,30 @@ static BlockJob *find_block_job_locked(const char *id,
         return NULL;
     }
 
-    *aio_context = block_job_get_aio_context(job);
-    aio_context_acquire(*aio_context);
-
     return job;
 }
 
 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(device, &aio_context, errp);
+    job = find_block_job_locked(device, errp);
 
     if (!job) {
         return;
     }
 
     block_job_set_speed_locked(job, speed, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_cancel(const char *device,
                           bool has_force, bool force, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(device, &aio_context, errp);
+    job = find_block_job_locked(device, errp);
 
     if (!job) {
         return;
@@ -3380,22 +3350,19 @@ void qmp_block_job_cancel(const char *device,
     if (job_user_paused_locked(&job->job) && !force) {
         error_setg(errp, "The block job for device '%s' is currently paused",
                    device);
-        goto out;
+        return;
     }
 
     trace_qmp_block_job_cancel(job);
     job_user_cancel_locked(&job->job, force, errp);
-out:
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_pause(const char *device, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(device, &aio_context, errp);
+    job = find_block_job_locked(device, errp);
 
     if (!job) {
         return;
@@ -3403,16 +3370,14 @@ void qmp_block_job_pause(const char *device, Error **errp)
 
     trace_qmp_block_job_pause(job);
     job_user_pause_locked(&job->job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(device, &aio_context, errp);
+    job = find_block_job_locked(device, errp);
 
     if (!job) {
         return;
@@ -3420,16 +3385,14 @@ void qmp_block_job_resume(const char *device, Error **errp)
 
     trace_qmp_block_job_resume(job);
     job_user_resume_locked(&job->job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(device, &aio_context, errp);
+    job = find_block_job_locked(device, errp);
 
     if (!job) {
         return;
@@ -3437,16 +3400,14 @@ void qmp_block_job_complete(const char *device, Error **errp)
 
     trace_qmp_block_job_complete(job);
     job_complete_locked(&job->job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *job;
 
     JOB_LOCK_GUARD();
-    job = find_block_job_locked(id, &aio_context, errp);
+    job = find_block_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -3456,24 +3417,16 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     job_ref_locked(&job->job);
     job_finalize_locked(&job->job, errp);
 
-    /*
-     * Job's context might have changed via job_finalize (and job_txn_apply
-     * automatically acquires the new one), so make sure we release the correct
-     * one.
-     */
-    aio_context = block_job_get_aio_context(job);
     job_unref_locked(&job->job);
-    aio_context_release(aio_context);
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     BlockJob *bjob;
     Job *job;
 
     JOB_LOCK_GUARD();
-    bjob = find_block_job_locked(id, &aio_context, errp);
+    bjob = find_block_job_locked(id, errp);
 
     if (!bjob) {
         return;
@@ -3482,7 +3435,6 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
     job_dismiss_locked(&job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_change_backing_file(const char *device,
@@ -3764,15 +3716,11 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
     for (job = block_job_next_locked(NULL); job;
          job = block_job_next_locked(job)) {
         BlockJobInfo *value;
-        AioContext *aio_context;
 
         if (block_job_is_internal(job)) {
             continue;
         }
-        aio_context = block_job_get_aio_context(job);
-        aio_context_acquire(aio_context);
         value = block_job_query_locked(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_BlockJobInfoList(head);
             return NULL;
diff --git a/include/qemu/job.h b/include/qemu/job.h
index a954f8f992..8c8c58dada 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -88,7 +88,7 @@ typedef struct Job {
     AioContext *aio_context;
 
 
-    /** Protected by AioContext lock */
+    /** Protected by job_mutex */
 
     /** Reference count of the block job */
     int refcnt;
@@ -111,7 +111,7 @@ typedef struct Job {
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by job_enter(). There may still be I/O or event loop activity
-     * pending. Accessed under block_job_mutex (in blockjob.c).
+     * pending. Accessed under job_mutex.
      *
      * When the job is deferred to the main loop, busy is true as long as the
      * bottom half is still pending.
@@ -346,9 +346,9 @@ typedef enum JobCreateFlags {
 
 extern QemuMutex job_mutex;
 
-#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+#define JOB_LOCK_GUARD() QEMU_LOCK_GUARD(&job_mutex)
 
-#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+#define WITH_JOB_LOCK_GUARD() WITH_QEMU_LOCK_GUARD(&job_mutex)
 
 /**
  * job_lock:
@@ -422,6 +422,8 @@ void job_ref_locked(Job *job);
 /**
  * Release a reference that was previously acquired with job_ref() or
  * job_create(). If it's the last reference to the object, it will be freed.
+ *
+ * Takes AioContext lock internally to invoke a job->driver callback.
  */
 void job_unref(Job *job);
 
@@ -696,7 +698,7 @@ void job_user_cancel_locked(Job *job, bool force, Error **errp);
  * Returns the return value from the job if the job actually completed
  * during the call, or -ECANCELED if it was canceled.
  *
- * Callers must hold the AioContext lock of job->aio_context.
+ * Called with job_lock *not* held.
  */
 int job_cancel_sync(Job *job, bool force);
 
@@ -721,8 +723,7 @@ void job_cancel_sync_all(void);
  * function).
 *
  * Returns the return value from the job.
- *
- * Callers must hold the AioContext lock of job->aio_context.
+ * Called with job_lock *not* held.
 */
 int job_complete_sync(Job *job, Error **errp);
 
@@ -758,7 +759,7 @@ void job_dismiss_locked(Job **job, Error **errp);
  * Returns 0 if the job is successfully completed, -ECANCELED if the job was
  * cancelled before completing, and -errno in other error cases.
  *
- * Callers must hold the AioContext lock of job->aio_context.
+ * Called with job_lock *not* held.
  */
 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp),
                     Error **errp);
diff --git a/job-qmp.c b/job-qmp.c
index 393d3a5b81..d498fc89c0 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -30,36 +30,27 @@
 #include "trace/trace-root.h"
 
 /*
- * Get a job using its ID and acquire its AioContext.
- * Called with job_mutex held.
+ * Get a job using its ID. Called with job_mutex held.
 */
-static Job *find_job_locked(const char *id,
-                            AioContext **aio_context,
-                            Error **errp)
+static Job *find_job_locked(const char *id, Error **errp)
 {
     Job *job;
 
-    *aio_context = NULL;
-
     job = job_get_locked(id);
     if (!job) {
         error_setg(errp, "Job not found");
         return NULL;
     }
 
-    *aio_context = job->aio_context;
-    aio_context_acquire(*aio_context);
-
     return job;
 }
 
 void qmp_job_cancel(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -67,16 +58,14 @@ void qmp_job_cancel(const char *id, Error **errp)
 
     trace_qmp_job_cancel(job);
     job_user_cancel_locked(job, true, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_job_pause(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -84,16 +73,14 @@ void qmp_job_pause(const char *id, Error **errp)
 
     trace_qmp_job_pause(job);
     job_user_pause_locked(job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_job_resume(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -101,16 +88,14 @@ void qmp_job_resume(const char *id, Error **errp)
 
     trace_qmp_job_resume(job);
     job_user_resume_locked(job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_job_complete(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -118,16 +103,14 @@ void qmp_job_complete(const char *id, Error **errp)
 
     trace_qmp_job_complete(job);
     job_complete_locked(job, errp);
-    aio_context_release(aio_context);
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -137,23 +120,15 @@ void qmp_job_finalize(const char *id, Error **errp)
     job_ref_locked(job);
     job_finalize_locked(job, errp);
 
-    /*
-     * Job's context might have changed via job_finalize (and job_txn_apply
-     * automatically acquires the new one), so make sure we release the correct
-     * one.
-     */
-    aio_context = job->aio_context;
     job_unref_locked(job);
-    aio_context_release(aio_context);
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
     Job *job;
 
     JOB_LOCK_GUARD();
-    job = find_job_locked(id, &aio_context, errp);
+    job = find_job_locked(id, errp);
 
     if (!job) {
         return;
@@ -161,7 +136,6 @@ void qmp_job_dismiss(const char *id, Error **errp)
 
     trace_qmp_job_dismiss(job);
     job_dismiss_locked(&job, errp);
-    aio_context_release(aio_context);
 }
 
 /* Called with job_mutex held. */
@@ -199,15 +173,11 @@ JobInfoList *qmp_query_jobs(Error **errp)
 
     for (job = job_next_locked(NULL); job; job = job_next_locked(job)) {
         JobInfo *value;
-        AioContext *aio_context;
 
         if (job_is_internal(job)) {
             continue;
         }
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         value = job_query_single_locked(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_JobInfoList(head);
             return NULL;
diff --git a/job.c b/job.c
index c4ac363f08..59c7bf5aa9 100644
--- a/job.c
+++ b/job.c
@@ -44,8 +44,6 @@
  *
  * The second includes functions used by the job drivers and sometimes
  * by the core block layer. These delegate the locking to the callee instead.
- *
- * TODO Actually make this true
 */
 
 /*
@@ -98,21 +96,11 @@ struct JobTxn {
 };
 
 void job_lock(void)
-{
-    /* nop */
-}
-
-void job_unlock(void)
-{
-    /* nop */
-}
-
-static void real_job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void real_job_unlock(void)
+void job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -187,7 +175,6 @@ static void job_txn_del_job_locked(Job *job)
 /* Called with job_mutex held, but releases it temporarily. */
 static int job_txn_apply_locked(Job *job, int fn(Job *))
 {
-    AioContext *inner_ctx;
     Job *other_job, *next;
     JobTxn *txn = job->txn;
     int rc = 0;
@@ -199,23 +186,14 @@ static int job_txn_apply_locked(Job *job, int fn(Job *))
      * break AIO_WAIT_WHILE from within fn.
      */
     job_ref_locked(job);
-    aio_context_release(job->aio_context);
 
     QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
-        inner_ctx = other_job->aio_context;
-        aio_context_acquire(inner_ctx);
         rc = fn(other_job);
-        aio_context_release(inner_ctx);
         if (rc) {
             break;
         }
     }
 
-    /*
-     * Note that job->aio_context might have been changed by calling fn, so we
-     * can't use a local variable to cache it.
-     */
-    aio_context_acquire(job->aio_context);
     job_unref_locked(job);
     return rc;
 }
@@ -503,8 +481,12 @@ void job_unref_locked(Job *job)
         assert(!job->txn);
 
         if (job->driver->free) {
+            AioContext *aio_context = job->aio_context;
             job_unlock();
+            /* FIXME: aiocontext lock is required because cb calls blk_unref */
+            aio_context_acquire(aio_context);
             job->driver->free(job);
+            aio_context_release(aio_context);
             job_lock();
         }
 
@@ -583,21 +565,17 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
         return;
     }
 
-    real_job_lock();
     if (job->busy) {
-        real_job_unlock();
         return;
     }
 
     if (fn && !fn(job)) {
-        real_job_unlock();
        return;
     }
 
     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    real_job_unlock();
     job_unlock();
     aio_co_wake(job->co);
     job_lock();
@@ -628,13 +606,11 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
 {
     AioContext *next_aio_context;
 
-    real_job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
     }
     job->busy = false;
     job_event_idle_locked(job);
-    real_job_unlock();
     job_unlock();
     qemu_coroutine_yield();
     job_lock();
@@ -920,10 +896,14 @@ static void job_clean(Job *job)
     }
 }
 
-/* Called with job_mutex held, but releases it temporarily */
+/*
+ * Called with job_mutex held, but releases it temporarily.
+ * Takes AioContext lock internally to invoke a job->driver callback.
+ */
 static int job_finalize_single_locked(Job *job)
 {
     int job_ret;
+    AioContext *ctx = job->aio_context;
 
     assert(job_is_completed_locked(job));
 
@@ -932,6 +912,7 @@ static int job_finalize_single_locked(Job *job)
 
     job_ret = job->ret;
     job_unlock();
+    aio_context_acquire(ctx);
 
     if (!job_ret) {
         job_commit(job);
@@ -940,15 +921,13 @@ static int job_finalize_single_locked(Job *job)
     }
     job_clean(job);
 
-    job_lock();
-
     if (job->cb) {
-        job_ret = job->ret;
-        job_unlock();
         job->cb(job->opaque, job_ret);
-        job_lock();
     }
 
+    aio_context_release(ctx);
+    job_lock();
+
     /* Emit events only if we actually started */
     if (job_started_locked(job)) {
         if (job_is_cancelled_locked(job)) {
@@ -963,13 +942,19 @@ static int job_finalize_single_locked(Job *job)
     return 0;
 }
 
-/* Called with job_mutex held, but releases it temporarily */
+/*
+ * Called with job_mutex held, but releases it temporarily.
+ * Takes AioContext lock internally to invoke a job->driver callback.
+ */
 static void job_cancel_async_locked(Job *job, bool force)
 {
+    AioContext *ctx = job->aio_context;
     GLOBAL_STATE_CODE();
     if (job->driver->cancel) {
         job_unlock();
+        aio_context_acquire(ctx);
         force = job->driver->cancel(job, force);
+        aio_context_release(ctx);
         job_lock();
     } else {
         /* No .cancel() means the job will behave as if force-cancelled */
@@ -1002,10 +987,12 @@ static void job_cancel_async_locked(Job *job, bool force)
     }
 }
 
-/* Called with job_mutex held, but releases it temporarily. */
+/*
+ * Called with job_mutex held, but releases it temporarily.
+ * Takes AioContext lock internally to invoke a job->driver callback.
+ */
 static void job_completed_txn_abort_locked(Job *job)
 {
-    AioContext *ctx;
     JobTxn *txn = job->txn;
     Job *other_job;
 
@@ -1018,54 +1005,31 @@ static void job_completed_txn_abort_locked(Job *job)
     txn->aborting = true;
     job_txn_ref_locked(txn);
 
-    /*
-     * We can only hold the single job's AioContext lock while calling
-     * job_finalize_single() because the finalization callbacks can involve
-     * calls of AIO_WAIT_WHILE(), which could deadlock otherwise.
-     * Note that the job's AioContext may change when it is finalized.
-     */
     job_ref_locked(job);
-    aio_context_release(job->aio_context);
 
     /* Other jobs are effectively cancelled by us, set the status for
      * them; this job, however, may or may not be cancelled, depending
      * on the caller, so leave it. */
     QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
         if (other_job != job) {
-            ctx = other_job->aio_context;
-            aio_context_acquire(ctx);
             /*
              * This is a transaction: If one job failed, no result will matter.
              * Therefore, pass force=true to terminate all other jobs as quickly
              * as possible.
              */
             job_cancel_async_locked(other_job, true);
-            aio_context_release(ctx);
         }
     }
     while (!QLIST_EMPTY(&txn->jobs)) {
         other_job = QLIST_FIRST(&txn->jobs);
-        /*
-         * The job's AioContext may change, so store it in @ctx so we
-         * release the same context that we have acquired before.
-         */
-        ctx = other_job->aio_context;
-        aio_context_acquire(ctx);
         if (!job_is_completed_locked(other_job)) {
             assert(job_cancel_requested_locked(other_job));
             job_finish_sync_locked(other_job, NULL, NULL);
         }
         job_finalize_single_locked(other_job);
-        aio_context_release(ctx);
     }
 
-    /*
-     * Use job_ref()/job_unref() so we can read the AioContext here
-     * even if the job went away during job_finalize_single().
-     */
-    aio_context_acquire(job->aio_context);
     job_unref_locked(job);
-
     job_txn_unref_locked(txn);
 }
 
@@ -1073,15 +1037,20 @@ static void job_completed_txn_abort_locked(Job *job)
 static int job_prepare_locked(Job *job)
 {
     int ret;
+    AioContext *ctx = job->aio_context;
 
     GLOBAL_STATE_CODE();
+
     if (job->ret == 0 && job->driver->prepare) {
         job_unlock();
+        aio_context_acquire(ctx);
         ret = job->driver->prepare(job);
+        aio_context_release(ctx);
         job_lock();
         job->ret = ret;
         job_update_rc_locked(job);
     }
+
     return job->ret;
 }
 
@@ -1186,11 +1155,8 @@ static void job_completed_locked(Job *job)
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
-    AioContext *ctx;
     JOB_LOCK_GUARD();
-
     job_ref_locked(job);
-    aio_context_acquire(job->aio_context);
 
     /* This is a lie, we're not quiescent, but still doing the completion
      * callbacks. However, completion callbacks tend to involve operations that
@@ -1200,16 +1166,7 @@ static void job_exit(void *opaque)
     job_event_idle_locked(job);
 
     job_completed_locked(job);
-
-    /*
-     * Note that calling job_completed can move the job to a different
-     * aio_context, so we cannot cache from above. job_txn_apply takes care of
-     * acquiring the new lock, and we ref/unref to avoid job_completed freeing
-     * the job underneath us.
-     */
-    ctx = job->aio_context;
     job_unref_locked(job);
-    aio_context_release(ctx);
 }
 
 /**
@@ -1337,14 +1294,10 @@ int job_cancel_sync(Job *job, bool force)
 void job_cancel_sync_all(void)
 {
     Job *job;
-    AioContext *aio_context;
     JOB_LOCK_GUARD();
 
     while ((job = job_next_locked(NULL))) {
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         job_cancel_sync_locked(job, true);
-        aio_context_release(aio_context);
     }
 }
 
@@ -1404,8 +1357,8 @@ int job_finish_sync_locked(Job *job,
     }
 
     job_unlock();
-    AIO_WAIT_WHILE(job->aio_context,
-                   (job_enter(job), !job_is_completed(job)));
+    AIO_WAIT_WHILE_UNLOCKED(job->aio_context,
+                            (job_enter(job), !job_is_completed(job)));
     job_lock();
 
     ret = (job_is_cancelled_locked(job) && job->ret == 0)
diff --git a/qemu-img.c b/qemu-img.c
index e0a30b1f4c..ace3adf8ae 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -911,7 +911,6 @@ static void run_block_job(BlockJob *job, Error **errp)
     AioContext *aio_context = block_job_get_aio_context(job);
     int ret = 0;
 
-    aio_context_acquire(aio_context);
     job_lock();
     job_ref_locked(&job->job);
     do {
@@ -936,7 +935,6 @@ static void run_block_job(BlockJob *job, Error **errp)
     }
     job_unref_locked(&job->job);
     job_unlock();
-    aio_context_release(aio_context);
 
     /* publish completion progress only when success */
     if (!ret) {
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 0db056ea63..4924ceb562 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -930,9 +930,9 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         tjob->prepare_ret = -EIO;
         break;
     }
+    aio_context_release(ctx);
 
     job_start(&job->job);
-    aio_context_release(ctx);
 
     if (use_iothread) {
         /* job_co_entry() is run in the I/O thread, wait for the actual job
@@ -1016,12 +1016,12 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
     }
 
-    aio_context_acquire(ctx);
     WITH_JOB_LOCK_GUARD() {
         ret = job_complete_sync_locked(&job->job, &error_abort);
     }
 
+    aio_context_acquire(ctx);
     if (use_iothread) {
         blk_set_aio_context(blk_src, qemu_get_aio_context(), &error_abort);
         assert(blk_get_aio_context(blk_target) == qemu_get_aio_context());
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index 96fd21c00a..def0709b2b 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -582,10 +582,10 @@ static void test_attach_blockjob(void)
         aio_poll(qemu_get_aio_context(), false);
     }
 
-    aio_context_acquire(ctx);
     WITH_JOB_LOCK_GUARD() {
         job_complete_sync_locked(&tjob->common.job, &error_abort);
     }
+    aio_context_acquire(ctx);
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);
 
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index e4f126bb6d..f88e10e356 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -228,10 +228,7 @@ static void cancel_common(CancelJob *s)
     BlockJob *job = &s->common;
     BlockBackend *blk = s->blk;
     JobStatus sts = job->job.status;
-    AioContext *ctx;
-
-    ctx = job->job.aio_context;
-    aio_context_acquire(ctx);
+    AioContext *ctx = job->job.aio_context;
 
     job_cancel_sync(&job->job, true);
     WITH_JOB_LOCK_GUARD() {
@@ -242,9 +239,11 @@ static void cancel_common(CancelJob *s)
         assert(job->job.status == JOB_STATUS_NULL);
         job_unref_locked(&job->job);
     }
-    destroy_blk(blk);
 
+    aio_context_acquire(ctx);
+    destroy_blk(blk);
     aio_context_release(ctx);
+
 }
 
 static void test_cancel_created(void)
@@ -384,12 +383,10 @@ static void test_cancel_concluded(void)
     aio_poll(qemu_get_aio_context(), true);
     assert_job_status_is(job, JOB_STATUS_PENDING);
 
-    aio_context_acquire(job->aio_context);
     WITH_JOB_LOCK_GUARD() {
         job_finalize_locked(job, &error_abort);
+        assert(job->status == JOB_STATUS_CONCLUDED);
     }
-    aio_context_release(job->aio_context);
-    assert_job_status_is(job, JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
 }
@@ -481,13 +478,11 @@ static void test_complete_in_standby(void)
 
     /* Wait for the job to become READY */
     job_start(job);
-    aio_context_acquire(ctx);
     /*
      * Here we are waiting for the status to change, so don't bother
      * protecting the read every time.
      */
-    AIO_WAIT_WHILE(ctx, job->status != JOB_STATUS_READY);
-    aio_context_release(ctx);
+    AIO_WAIT_WHILE_UNLOCKED(ctx, job->status != JOB_STATUS_READY);
 
     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
@@ -497,6 +492,7 @@ static void test_complete_in_standby(void)
     aio_context_acquire(ctx);
     /* This will schedule the job to resume it */
     bdrv_drain_all_end();
+    aio_context_release(ctx);
 
     WITH_JOB_LOCK_GUARD() {
         /* But the job cannot run, so it will remain on standby */
@@ -515,6 +511,7 @@ static void test_complete_in_standby(void)
         job_dismiss_locked(&job, &error_abort);
     }
 
+    aio_context_acquire(ctx);
     destroy_blk(blk);
     aio_context_release(ctx);
     iothread_join(iothread);
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 19/21] block_job_query: remove atomic read
Date: Mon, 26 Sep 2022 05:32:12 -0400
Message-Id: <20220926093214.506243-20-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

The atomic read serves no purpose here: job.busy is protected by the
job lock, and the whole function is called under job_mutex, so just
remove the atomic.
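[For illustration only, not part of the patch: a minimal standalone
sketch in plain C with pthreads, using hypothetical names, of the
invariant this change relies on. A field that is only ever read and
written while a mutex is held needs no atomic accessors, because the
mutex already serializes and orders the accesses.]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool busy;   /* protected by job_mutex, like job.busy */

static void *worker(void *arg)
{
    pthread_mutex_lock(&job_mutex);
    busy = true;    /* plain store: the lock is held */
    pthread_mutex_unlock(&job_mutex);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    pthread_mutex_lock(&job_mutex);
    printf("busy = %d\n", busy);  /* plain load: no qatomic_read() needed */
    pthread_mutex_unlock(&job_mutex);
    return 0;
}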
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
---
 blockjob.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/blockjob.c b/blockjob.c
index d04f804001..120c1b7ead 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -338,7 +338,7 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
     info = g_new0(BlockJobInfo, 1);
     info->type = g_strdup(job_type_str(&job->job));
     info->device = g_strdup(job->job.id);
-    info->busy = qatomic_read(&job->job.busy);
+    info->busy = job->job.busy;
     info->paused = job->job.pause_count > 0;
     info->offset = progress_current;
     info->len = progress_total;
-- 
2.31.1

From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 20/21] blockjob: remove unused functions
Date: Mon, 26 Sep 2022 05:32:13 -0400
Message-Id: <20220926093214.506243-21-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

These public functions are not used anywhere, thus can be dropped.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 blockjob.c               | 16 ++--------------
 include/block/blockjob.h | 31 ++++++++++++-------------------
 2 files changed, 14 insertions(+), 33 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 120c1b7ead..bdf20a0e35 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -56,12 +56,6 @@ BlockJob *block_job_next_locked(BlockJob *bjob)
     return job ? container_of(job, BlockJob, job) : NULL;
 }
 
-BlockJob *block_job_next(BlockJob *bjob)
-{
-    JOB_LOCK_GUARD();
-    return block_job_next_locked(bjob);
-}
-
 BlockJob *block_job_get_locked(const char *id)
 {
     Job *job = job_get_locked(id);
@@ -308,7 +302,7 @@ bool block_job_set_speed_locked(BlockJob *job, int64_t speed, Error **errp)
     return true;
 }
 
-bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
+static bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
     JOB_LOCK_GUARD();
     return block_job_set_speed_locked(job, speed, errp);
@@ -357,12 +351,6 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
     return info;
 }
 
-BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    return block_job_query_locked(job, errp);
-}
-
 /* Called with job lock held */
 static void block_job_iostatus_set_err_locked(BlockJob *job, int error)
 {
@@ -525,7 +513,7 @@ void block_job_iostatus_reset_locked(BlockJob *job)
     job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
 }
 
-void block_job_iostatus_reset(BlockJob *job)
+static void block_job_iostatus_reset(BlockJob *job)
 {
     JOB_LOCK_GUARD();
     block_job_iostatus_reset_locked(job);
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 10c24e240a..03032b2eca 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -102,17 +102,15 @@ typedef struct BlockJob {
  */
 
 /**
- * block_job_next:
+ * block_job_next_locked:
  * @job: A block job, or %NULL.
  *
  * Get the next element from the list of block jobs after @job, or the
  * first one if @job is %NULL.
  *
  * Returns the requested job, or %NULL if there are no more jobs left.
+ * Called with job lock held.
  */
-BlockJob *block_job_next(BlockJob *job);
-
-/* Same as block_job_next(), but called with job lock held. */
 BlockJob *block_job_next_locked(BlockJob *job);
 
 /**
@@ -122,6 +120,7 @@ BlockJob *block_job_next_locked(BlockJob *job);
  * Get the block job identified by @id (which must not be %NULL).
  *
  * Returns the requested job, or %NULL if it doesn't exist.
+ * Called with job lock *not* held.
  */
 BlockJob *block_job_get(const char *id);
 
@@ -161,43 +160,37 @@ void block_job_remove_all_bdrv(BlockJob *job);
 bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs);
 
 /**
- * block_job_set_speed:
+ * block_job_set_speed_locked:
  * @job: The job to set the speed for.
  * @speed: The new value
  * @errp: Error object.
  *
  * Set a rate-limiting parameter for the job; the actual meaning may
  * vary depending on the job type.
- */
-bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp);
-
-/*
- * Same as block_job_set_speed(), but called with job lock held.
- * Might release the lock temporarily.
+ *
+ * Called with job lock held, but might release it temporarily.
  */
 bool block_job_set_speed_locked(BlockJob *job, int64_t speed, Error **errp);
 
 /**
- * block_job_query:
+ * block_job_query_locked:
  * @job: The job to get information about.
  *
  * Return information about a job.
+ *
+ * Called with job lock held.
  */
-BlockJobInfo *block_job_query(BlockJob *job, Error **errp);
-
-/* Same as block_job_query(), but called with job lock held. */
 BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp);
 
 /**
- * block_job_iostatus_reset:
+ * block_job_iostatus_reset_locked:
  * @job: The job whose I/O status should be reset.
  *
  * Reset I/O status on @job and on BlockDriverState objects it uses,
  * other than job->blk.
+ *
+ * Called with job lock held.
  */
-void block_job_iostatus_reset(BlockJob *job);
-
-/* Same as block_job_iostatus_reset(), but called with job lock held. */
 void block_job_iostatus_reset_locked(BlockJob *job);
 
 /*
-- 
2.31.1
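[As an aside, not part of the series: a standalone sketch in plain C
with pthreads, using hypothetical names, of the calling convention the
series converges on once the unlocked wrappers are gone. Only *_locked
variants remain, so every caller makes the locking explicit at the
call site instead of relying on a wrapper to take the lock.]

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static int job_count;   /* protected by job_mutex */

/* Called with job_mutex held. */
static int job_count_query_locked(void)
{
    return job_count;
}

int main(void)
{
    /* The caller, not the helper, takes the lock. */
    pthread_mutex_lock(&job_mutex);
    printf("jobs: %d\n", job_count_query_locked());
    pthread_mutex_unlock(&job_mutex);
    return 0;
}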
From nobody Wed May 15 16:20:35 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng, qemu-devel@nongnu.org
Subject: [PATCH v12 21/21] job: remove unused functions
Date: Mon, 26 Sep 2022 05:32:14 -0400
Message-Id: <20220926093214.506243-22-eesposit@redhat.com>
In-Reply-To: <20220926093214.506243-1-eesposit@redhat.com>
References: <20220926093214.506243-1-eesposit@redhat.com>

These public functions are not used anywhere, thus can be dropped.

Also, since this is the final job API that doesn't use the AioContext
lock and replaces it with job_lock, adjust all remaining function
documentation to clearly specify whether the job lock is taken or not.

Also document the locking requirements for a few functions where the
second version is not removed.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Kevin Wolf
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 include/qemu/job.h         | 110 +++++++++++++------------------------
 job.c                      | 107 ++----------------------------------
 tests/unit/test-blockjob.c |   4 +-
 3 files changed, 46 insertions(+), 175 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 8c8c58dada..32ad5622e1 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -384,6 +384,8 @@ JobTxn *job_txn_new(void);
 /**
  * Release a reference that was previously acquired with job_txn_add_job or
  * job_txn_new. If it's the last reference to the object, it will be freed.
+ *
+ * Called with job lock *not* held.
  */
 void job_txn_unref(JobTxn *txn);
 
@@ -413,21 +415,18 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
 /**
  * Add a reference to Job refcnt, it will be decreased with job_unref, and then
  * be freed if it comes to be the last reference.
+ *
+ * Called with job lock held.
  */
-void job_ref(Job *job);
-
-/* Same as job_ref(), but called with job lock held. */
 void job_ref_locked(Job *job);
 
 /**
- * Release a reference that was previously acquired with job_ref() or
+ * Release a reference that was previously acquired with job_ref_locked() or
  * job_create(). If it's the last reference to the object, it will be freed.
  *
  * Takes AioContext lock internally to invoke a job->driver callback.
+ * Called with job lock held.
  */
-void job_unref(Job *job);
-
-/* Same as job_unref(), but called with job lock held. */
 void job_unref_locked(Job *job);
 
 /**
@@ -473,12 +472,8 @@ void job_progress_increase_remaining(Job *job, uint64_t delta);
  * Conditionally enter the job coroutine if the job is ready to run, not
  * already busy and fn() returns true. fn() is called while under the job_lock
  * critical section.
- */
-void job_enter_cond(Job *job, bool(*fn)(Job *job));
-
-/*
- * Same as job_enter_cond(), but called with job lock held.
- * Might release the lock temporarily.
+ *
+ * Called with job lock held, but might release it temporarily.
  */
 void job_enter_cond_locked(Job *job, bool(*fn)(Job *job));
 
@@ -557,11 +552,8 @@ bool job_cancel_requested(Job *job);
 
 /**
  * Returns whether the job is in a completed state.
- * Called with job_mutex *not* held.
+ * Called with job lock held.
  */
-bool job_is_completed(Job *job);
-
-/* Same as job_is_completed(), but called with job lock held. */
 bool job_is_completed_locked(Job *job);
 
 /**
@@ -576,14 +568,16 @@ bool job_is_ready_locked(Job *job);
 /**
  * Request @job to pause at the next pause point. Must be paired with
  * job_resume(). If the job is supposed to be resumed by user action, call
- * job_user_pause() instead.
+ * job_user_pause_locked() instead.
+ *
+ * Called with job lock *not* held.
  */
 void job_pause(Job *job);
 
 /* Same as job_pause(), but called with job lock held. */
 void job_pause_locked(Job *job);
 
-/** Resumes a @job paused with job_pause. */
+/** Resumes a @job paused with job_pause. Called with job lock *not* held. */
 void job_resume(Job *job);
 
 /*
@@ -595,27 +589,20 @@ void job_resume_locked(Job *job);
 /**
  * Asynchronously pause the specified @job.
  * Do not allow a resume until a matching call to job_user_resume.
+ * Called with job lock held.
  */
-void job_user_pause(Job *job, Error **errp);
-
-/* Same as job_user_pause(), but called with job lock held. */
 void job_user_pause_locked(Job *job, Error **errp);
 
-/** Returns true if the job is user-paused. */
-bool job_user_paused(Job *job);
-
-/* Same as job_user_paused(), but called with job lock held. */
+/**
+ * Returns true if the job is user-paused.
+ * Called with job lock held.
+ */
 bool job_user_paused_locked(Job *job);
 
 /**
  * Resume the specified @job.
- * Must be paired with a preceding job_user_pause.
- */
-void job_user_resume(Job *job, Error **errp);
-
-/*
- * Same as job_user_resume(), but called with job lock held.
- * Might release the lock temporarily.
+ * Must be paired with a preceding job_user_pause_locked.
+ * Called with job lock held, but might release it temporarily.
  */
 void job_user_resume_locked(Job *job, Error **errp);
 
@@ -624,6 +611,7 @@ void job_user_resume_locked(Job *job, Error **errp);
  * first one if @job is %NULL.
  *
  * Returns the requested job, or %NULL if there are no more jobs left.
+ * Called with job lock *not* held.
  */
 Job *job_next(Job *job);
 
@@ -634,20 +622,17 @@ Job *job_next_locked(Job *job);
  * Get the job identified by @id (which must not be %NULL).
  *
  * Returns the requested job, or %NULL if it doesn't exist.
+ * Called with job lock held.
  */
-Job *job_get(const char *id);
-
-/* Same as job_get(), but called with job lock held. */
 Job *job_get_locked(const char *id);
 
 /**
  * Check whether the verb @verb can be applied to @job in its current state.
  * Returns 0 if the verb can be applied; otherwise errp is set and -EPERM
  * returned.
+ *
+ * Called with job lock held.
  */
-int job_apply_verb(Job *job, JobVerb verb, Error **errp);
-
-/* Same as job_apply_verb, but called with job lock held. */
 int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp);
 
 /**
@@ -662,31 +647,24 @@ void job_early_fail(Job *job);
  */
 void job_transition_to_ready(Job *job);
 
-/** Asynchronously complete the specified @job. */
-void job_complete(Job *job, Error **errp);
-
-/*
- * Same as job_complete(), but called with job lock held.
- * Might release the lock temporarily.
+/**
+ * Asynchronously complete the specified @job.
+ * Called with job lock held, but might release it temporarily.
  */
 void job_complete_locked(Job *job, Error **errp);
 
 /**
  * Asynchronously cancel the specified @job. If @force is true, the job should
  * be cancelled immediately without waiting for a consistent state.
+ * Called with job lock held.
  */
-void job_cancel(Job *job, bool force);
-
-/* Same as job_cancel(), but called with job lock held. */
 void job_cancel_locked(Job *job, bool force);
 
 /**
- * Cancels the specified job like job_cancel(), but may refuse to do so if the
- * operation isn't meaningful in the current state of the job.
+ * Cancels the specified job like job_cancel_locked(), but may refuse
+ * to do so if the operation isn't meaningful in the current state of the job.
+ * Called with job lock held.
  */
-void job_user_cancel(Job *job, bool force, Error **errp);
-
-/* Same as job_user_cancel(), but called with job lock held. */
 void job_user_cancel_locked(Job *job, bool force, Error **errp);
 
 /**
@@ -714,7 +692,7 @@ void job_cancel_sync_all(void);
 
 /**
  * @job: The job to be completed.
- * @errp: Error object which may be set by job_complete(); this is not
+ * @errp: Error object which may be set by job_complete_locked(); this is not
  *        necessarily set on every error, the job return value has to be
  *        checked as well.
  *
@@ -723,11 +701,8 @@ void job_cancel_sync_all(void);
  * function).
  *
  * Returns the return value from the job.
- * Called with job_lock *not* held.
+ * Called with job_lock held.
  */
-int job_complete_sync(Job *job, Error **errp);
-
-/* Same as job_complete_sync, but called with job lock held. */
 int job_complete_sync_locked(Job *job, Error **errp);
 
 /**
@@ -737,19 +712,17 @@ int job_complete_sync_locked(Job *job, Error **errp);
  * FIXME: Make the below statement universally true:
  * For jobs that support the manual workflow mode, all graph changes that occur
  * as a result will occur after this command and before a successful reply.
+ *
+ * Called with job lock held.
  */
-void job_finalize(Job *job, Error **errp);
-
-/* Same as job_finalize(), but called with job lock held. */
 void job_finalize_locked(Job *job, Error **errp);
 
 /**
  * Remove the concluded @job from the query list and resets the passed pointer
  * to %NULL. Returns an error if the job is not actually concluded.
+ *
+ * Called with job lock held.
  */
-void job_dismiss(Job **job, Error **errp);
-
-/* Same as job_dismiss(), but called with job lock held. */
 void job_dismiss_locked(Job **job, Error **errp);
 
 /**
@@ -759,14 +732,7 @@ void job_dismiss_locked(Job **job, Error **errp);
  * Returns 0 if the job is successfully completed, -ECANCELED if the job was
  * cancelled before completing, and -errno in other error cases.
  *
- * Called with job_lock *not* held.
- */
-int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp),
-                    Error **errp);
-
-/*
- * Same as job_finish_sync(), but called with job lock held.
- * Might release the lock temporarily.
+ * Called with job_lock held, but might release it temporarily.
  */
 int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
                            Error **errp);
diff --git a/job.c b/job.c
index 59c7bf5aa9..15b25f421f 100644
--- a/job.c
+++ b/job.c
@@ -233,12 +233,6 @@ int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp)
     return -EPERM;
 }
 
-int job_apply_verb(Job *job, JobVerb verb, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    return job_apply_verb_locked(job, verb, errp);
-}
-
 JobType job_type(const Job *job)
 {
     return job->driver->job_type;
@@ -324,7 +318,7 @@ bool job_is_completed_locked(Job *job)
     return false;
 }
 
-bool job_is_completed(Job *job)
+static bool job_is_completed(Job *job)
 {
     JOB_LOCK_GUARD();
     return job_is_completed_locked(job);
@@ -368,12 +362,6 @@ Job *job_get_locked(const char *id)
     return NULL;
 }
 
-Job *job_get(const char *id)
-{
-    JOB_LOCK_GUARD();
-    return job_get_locked(id);
-}
-
 void job_set_aio_context(Job *job, AioContext *ctx)
 {
     /* protect against read in job_finish_sync_locked and job_start */
@@ -465,12 +453,6 @@ void job_ref_locked(Job *job)
     ++job->refcnt;
 }
 
-void job_ref(Job *job)
-{
-    JOB_LOCK_GUARD();
-    job_ref_locked(job);
-}
-
 void job_unref_locked(Job *job)
 {
     GLOBAL_STATE_CODE();
@@ -499,12 +481,6 @@ void job_unref_locked(Job *job)
     }
 }
 
-void job_unref(Job *job)
-{
-    JOB_LOCK_GUARD();
-    job_unref_locked(job);
-}
-
 void job_progress_update(Job *job, uint64_t done)
 {
     progress_work_done(&job->progress, done);
@@ -581,12 +557,6 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
     job_lock();
 }
 
-void job_enter_cond(Job *job, bool(*fn)(Job *job))
-{
-    JOB_LOCK_GUARD();
-    job_enter_cond_locked(job, fn);
-}
-
 void job_enter(Job *job)
 {
     JOB_LOCK_GUARD();
@@ -674,8 +644,9 @@ void coroutine_fn job_pause_point(Job *job)
     job_pause_point_locked(job);
 }
 
-static void job_yield_locked(Job *job)
+void job_yield(Job *job)
 {
+    JOB_LOCK_GUARD();
     assert(job->busy);
 
     /* Check cancellation *before* setting busy = false, too! */
@@ -690,12 +661,6 @@ static void job_yield_locked(Job *job)
     job_pause_point_locked(job);
 }
 
-void job_yield(Job *job)
-{
-    JOB_LOCK_GUARD();
-    job_yield_locked(job);
-}
-
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns)
 {
     JOB_LOCK_GUARD();
@@ -764,23 +729,11 @@ void job_user_pause_locked(Job *job, Error **errp)
     job_pause_locked(job);
 }
 
-void job_user_pause(Job *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_user_pause_locked(job, errp);
-}
-
 bool job_user_paused_locked(Job *job)
 {
     return job->user_paused;
 }
 
-bool job_user_paused(Job *job)
-{
-    JOB_LOCK_GUARD();
-    return job_user_paused_locked(job);
-}
-
 void job_user_resume_locked(Job *job, Error **errp)
 {
     assert(job);
@@ -801,12 +754,6 @@ void job_user_resume_locked(Job *job, Error **errp)
     job_resume_locked(job);
 }
 
-void job_user_resume(Job *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_user_resume_locked(job, errp);
-}
-
 /* Called with job_mutex held, but releases it temporarily. */
 static void job_do_dismiss_locked(Job *job)
 {
@@ -834,12 +781,6 @@ void job_dismiss_locked(Job **jobptr, Error **errp)
     *jobptr = NULL;
 }
 
-void job_dismiss(Job **jobptr, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_dismiss_locked(jobptr, errp);
-}
-
 void job_early_fail(Job *job)
 {
     JOB_LOCK_GUARD();
@@ -1084,12 +1025,6 @@ void job_finalize_locked(Job *job, Error **errp)
     job_do_finalize_locked(job);
 }
 
-void job_finalize(Job *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_finalize_locked(job, errp);
-}
-
 /* Called with job_mutex held. */
 static int job_transition_to_pending_locked(Job *job)
 {
@@ -1236,12 +1171,6 @@ void job_cancel_locked(Job *job, bool force)
     }
 }
 
-void job_cancel(Job *job, bool force)
-{
-    JOB_LOCK_GUARD();
-    job_cancel_locked(job, force);
-}
-
 void job_user_cancel_locked(Job *job, bool force, Error **errp)
 {
     if (job_apply_verb_locked(job, JOB_VERB_CANCEL, errp)) {
@@ -1250,15 +1179,9 @@ void job_user_cancel_locked(Job *job, bool force, Error **errp)
     job_cancel_locked(job, force);
 }
 
-void job_user_cancel(Job *job, bool force, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_user_cancel_locked(job, force, errp);
-}
-
-/* A wrapper around job_cancel() taking an Error ** parameter so it may be
- * used with job_finish_sync() without the need for (rather nasty) function
- * pointer casts there.
+/* A wrapper around job_cancel_locked() taking an Error ** parameter so it may
+ * be used with job_finish_sync_locked() without the need for (rather nasty)
+ * function pointer casts there.
  *
  * Called with job_mutex held.
  */
@@ -1306,12 +1229,6 @@ int job_complete_sync_locked(Job *job, Error **errp)
     return job_finish_sync_locked(job, job_complete_locked, errp);
 }
 
-int job_complete_sync(Job *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    return job_complete_sync_locked(job, errp);
-}
-
 void job_complete_locked(Job *job, Error **errp)
 {
     /* Should not be reachable via external interface for internal jobs */
@@ -1331,12 +1248,6 @@ void job_complete_locked(Job *job, Error **errp)
     job_lock();
 }
 
-void job_complete(Job *job, Error **errp)
-{
-    JOB_LOCK_GUARD();
-    job_complete_locked(job, errp);
-}
-
 int job_finish_sync_locked(Job *job,
                            void (*finish)(Job *, Error **errp),
                            Error **errp)
@@ -1366,9 +1277,3 @@ int job_finish_sync_locked(Job *job,
     job_unref_locked(job);
     return ret;
 }
-
-int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp)
-{
-    JOB_LOCK_GUARD();
-    return job_finish_sync_locked(job, finish, errp);
-}
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index f88e10e356..c0426bd10c 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -432,7 +432,7 @@ static const BlockJobDriver test_yielding_driver = {
 };
 
 /*
- * Test that job_complete() works even on jobs that are in a paused
+ * Test that job_complete_locked() works even on jobs that are in a paused
  * state (i.e., STANDBY).
  *
  * To do this, run YieldingJob in an IO thread, get it into the READY
@@ -440,7 +440,7 @@ static const BlockJobDriver test_yielding_driver = {
  * acquire the context so the job will not be entered and will thus
  * remain on STANDBY.
  *
- * job_complete() should still work without error.
+ * job_complete_locked() should still work without error.
  *
  * Note that on the QMP interface, it is impossible to lock an IO
  * thread before a drained section ends. In practice, the
-- 
2.31.1
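[To make the documented convention concrete: a standalone sketch in
plain C with pthreads, using hypothetical names and not QEMU code, of
the "called with lock held, but releases it temporarily" pattern that
several of the remaining *_locked functions follow. The lock is dropped
around a callback that must run unlocked, then re-taken before
returning, so the caller's locking invariant still holds on return.]

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

static void blocking_callback(void)
{
    /* Must not run under job_mutex (think job->driver->prepare()). */
    puts("callback runs unlocked");
}

/* Called with job_mutex held, but releases it temporarily. */
static void finish_sync_locked(void)
{
    pthread_mutex_unlock(&job_mutex);
    blocking_callback();
    pthread_mutex_lock(&job_mutex);
}

int main(void)
{
    pthread_mutex_lock(&job_mutex);
    finish_sync_locked();   /* the lock is held again on return */
    pthread_mutex_unlock(&job_mutex);
    return 0;
}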