From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [PATCH v3 01/16] job.c: make job_mutex and job_lock/unlock() public
Date: Wed, 5 Jan 2022 09:01:53 -0500
Message-Id: <20220105140208.365608-2-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow

The job mutex will be used to protect the job struct elements and list,
replacing the AioContext locks.

For now, use a shared lock for all jobs, in order to keep things simple.
Once the AioContext lock is gone, we can introduce per-job locks.

To simplify the switch from the AioContext lock to the job lock, introduce
*nop* lock/unlock functions and macros. Once every access is protected by
the job lock, we can make the mutex real and drop the AioContext lock.

Since the name job_mutex is already in use, add static real_job_{lock/unlock}.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/job.h | 24 ++++++++++++++++++++++++
 job.c              | 35 +++++++++++++++++++++++------------
 2 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 915ceff425..8d0d370dda 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -312,6 +312,30 @@ typedef enum JobCreateFlags {
     JOB_MANUAL_DISMISS = 0x04,
 } JobCreateFlags;
 
+extern QemuMutex job_mutex;
+
+#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+
+#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+
+/**
+ * job_lock:
+ *
+ * Take the mutex protecting the list of jobs and their status.
+ * Most functions called by the monitor need to call job_lock
+ * and job_unlock manually. On the other hand, functions called
+ * by the block jobs themselves and by the block layer will take the
+ * lock for you.
+ */
+void job_lock(void);
+
+/**
+ * job_unlock:
+ *
+ * Release the mutex protecting the list of jobs and their status.
+ */
+void job_unlock(void);
+
 /**
  * Allocate and return a new job transaction. Jobs can be added to the
  * transaction using job_txn_add_job().
diff --git a/job.c b/job.c
index e048037099..ccf737a179 100644
--- a/job.c
+++ b/job.c
@@ -32,6 +32,12 @@
 #include "trace/trace-root.h"
 #include "qapi/qapi-events-job.h"
 
+/*
+ * job_mutex protects the jobs list, but also makes the
+ * struct job fields thread-safe.
+ */
+QemuMutex job_mutex;
+
 static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
 
 /* Job State Transition Table */
@@ -74,17 +80,22 @@ struct JobTxn {
     int refcnt;
 };
 
-/* Right now, this mutex is only needed to synchronize accesses to job->busy
- * and job->sleep_timer, such as concurrent calls to job_do_yield and
- * job_enter. */
-static QemuMutex job_mutex;
+void job_lock(void)
+{
+    /* nop */
+}
+
+void job_unlock(void)
+{
+    /* nop */
+}
 
-static void job_lock(void)
+static void real_job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void job_unlock(void)
+static void real_job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -449,21 +460,21 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
         return;
     }
 
-    job_lock();
+    real_job_lock();
     if (job->busy) {
-        job_unlock();
+        real_job_unlock();
         return;
     }
 
     if (fn && !fn(job)) {
-        job_unlock();
+        real_job_unlock();
         return;
     }
 
     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    job_unlock();
+    real_job_unlock();
     aio_co_enter(job->aio_context, job->co);
 }
 
@@ -480,13 +491,13 @@ void job_enter(Job *job)
  * called explicitly.
  */
 static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
 {
-    job_lock();
+    real_job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
     }
     job->busy = false;
     job_event_idle(job);
-    job_unlock();
+    real_job_unlock();
     qemu_coroutine_yield();
 
     /* Set by job_enter_cond() before re-entering the coroutine. */
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [PATCH v3 02/16] job.h: categorize fields in struct Job
Date: Wed, 5 Jan 2022 09:01:54 -0500
Message-Id: <20220105140208.365608-3-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0

Categorize the fields in struct Job to understand which ones
need to be protected by the job mutex and which don't.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/job.h | 63 +++++++++++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 26 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 8d0d370dda..0d348ff186 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -40,27 +40,52 @@ typedef struct JobTxn JobTxn;
  * Long-running operation.
  */
 typedef struct Job {
+
+    /* Fields set at initialization (job_create), and never modified */
+
     /** The ID of the job. May be NULL for internal jobs. */
     char *id;
 
-    /** The type of this job. */
+    /**
+     * The type of this job.
+     * All callbacks are called with job_mutex *not* held.
+     */
     const JobDriver *driver;
 
-    /** Reference count of the block job */
-    int refcnt;
-
-    /** Current state; See @JobStatus for details. */
-    JobStatus status;
-
-    /** AioContext to run the job coroutine in */
-    AioContext *aio_context;
-
     /**
      * The coroutine that executes the job. If not NULL, it is reentered when
      * busy is false and the job is cancelled.
+     * Initialized in job_start()
      */
     Coroutine *co;
 
+    /** True if this job should automatically finalize itself */
+    bool auto_finalize;
+
+    /** True if this job should automatically dismiss itself */
+    bool auto_dismiss;
+
+    /** The completion function that will be called when the job completes. */
+    BlockCompletionFunc *cb;
+
+    /** The opaque value that is passed to the completion function. */
+    void *opaque;
+
+    /* ProgressMeter API is thread-safe */
+    ProgressMeter progress;
+
+
+    /** Protected by job_mutex */
+
+    /** AioContext to run the job coroutine in */
+    AioContext *aio_context;
+
+    /** Reference count of the block job */
+    int refcnt;
+
+    /** Current state; See @JobStatus for details. */
+    JobStatus status;
+
     /**
      * Timer that is used by @job_sleep_ns. Accessed under job_mutex (in
      * job.c).
@@ -76,7 +101,7 @@ typedef struct Job {
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by job_enter(). There may still be I/O or event loop activity
-     * pending. Accessed under block_job_mutex (in blockjob.c).
+     * pending. Accessed under job_mutex.
      *
      * When the job is deferred to the main loop, busy is true as long as the
      * bottom half is still pending.
@@ -112,14 +137,6 @@ typedef struct Job {
     /** Set to true when the job has deferred work to the main loop. */
     bool deferred_to_main_loop;
 
-    /** True if this job should automatically finalize itself */
-    bool auto_finalize;
-
-    /** True if this job should automatically dismiss itself */
-    bool auto_dismiss;
-
-    ProgressMeter progress;
-
     /**
      * Return code from @run and/or @prepare callback(s).
      * Not final until the job has reached the CONCLUDED status.
@@ -134,12 +151,6 @@ typedef struct Job {
      */
     Error *err;
 
-    /** The completion function that will be called when the job completes. */
-    BlockCompletionFunc *cb;
-
-    /** The opaque value that is passed to the completion function. */
-    void *opaque;
-
     /** Notifiers called when a cancelled job is finalised */
     NotifierList on_finalize_cancelled;
 
@@ -167,6 +178,7 @@ typedef struct Job {
 
 /**
  * Callbacks and other information about a Job driver.
+ * All callbacks are invoked with job_mutex *not* held.
  */
 struct JobDriver {
 
@@ -481,7 +493,6 @@ void job_yield(Job *job);
  */
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns);
 
-
 /** Returns the JobType of a given Job. */
 JobType job_type(const Job *job);
 
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [PATCH v3 03/16] job.h: define locked functions
Date: Wed, 5 Jan 2022 09:01:55 -0500
Message-Id: <20220105140208.365608-4-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0

These functions assume that the job lock is held by the caller,
to avoid TOC/TOU conditions. Therefore, their names must end
with _locked.

Also introduce additional helpers that define _locked functions
(useful when the job_mutex is globally applied).

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nops*.
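The `_locked` convention described above can be sketched outside of QEMU as a pair of functions: one that documents (and relies on) the caller holding the lock, and a convenience wrapper that takes the lock itself. This is a minimal illustrative sketch using pthreads, not QEMU code; the names `job_refcnt_get` and the bare counter are assumptions for the example.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static int job_refcnt = 1;

/*
 * Caller must hold job_mutex: the _locked suffix documents the
 * contract, so a caller can check state and then act on it without
 * a check-then-act (TOC/TOU) race in between.
 */
static void job_ref_locked(void)
{
    job_refcnt++;
}

/* Convenience wrapper for callers that do not hold the lock. */
void job_ref(void)
{
    pthread_mutex_lock(&job_mutex);
    job_ref_locked();
    pthread_mutex_unlock(&job_mutex);
}

/* Illustrative accessor, also taking the lock internally. */
int job_refcnt_get(void)
{
    pthread_mutex_lock(&job_mutex);
    int n = job_refcnt;
    pthread_mutex_unlock(&job_mutex);
    return n;
}
```

A caller already inside a `job_lock()`/`job_unlock()` critical section calls `job_ref_locked()` directly; everyone else uses the wrapper, and the suffix makes it impossible to confuse the two at a call site.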
Signed-off-by: Emanuele Giuseppe Esposito --- block.c | 2 +- block/replication.c | 4 +- blockdev.c | 32 +++---- blockjob.c | 16 ++-- include/qemu/job.h | 153 +++++++++++++++++++++--------- job-qmp.c | 26 +++--- job.c | 155 +++++++++++++++++-------------- qemu-img.c | 10 +- tests/unit/test-bdrv-drain.c | 2 +- tests/unit/test-block-iothread.c | 4 +- tests/unit/test-blockjob-txn.c | 14 +-- tests/unit/test-blockjob.c | 30 +++--- 12 files changed, 263 insertions(+), 185 deletions(-) diff --git a/block.c b/block.c index ca70bcc807..8fcd525fa0 100644 --- a/block.c +++ b/block.c @@ -4976,7 +4976,7 @@ static void bdrv_close(BlockDriverState *bs) =20 void bdrv_close_all(void) { - assert(job_next(NULL) =3D=3D NULL); + assert(job_next_locked(NULL) =3D=3D NULL); assert(qemu_in_main_thread()); =20 /* Drop references from requests still in flight, such as canceled blo= ck diff --git a/block/replication.c b/block/replication.c index 55c8f894aa..5215c328c1 100644 --- a/block/replication.c +++ b/block/replication.c @@ -149,7 +149,7 @@ static void replication_close(BlockDriverState *bs) if (s->stage =3D=3D BLOCK_REPLICATION_FAILOVER) { commit_job =3D &s->commit_job->job; assert(commit_job->aio_context =3D=3D qemu_get_current_aio_context= ()); - job_cancel_sync(commit_job, false); + job_cancel_sync_locked(commit_job, false); } =20 if (s->mode =3D=3D REPLICATION_MODE_SECONDARY) { @@ -726,7 +726,7 @@ static void replication_stop(ReplicationState *rs, bool= failover, Error **errp) * disk, secondary disk in backup_job_completed(). 
*/ if (s->backup_job) { - job_cancel_sync(&s->backup_job->job, true); + job_cancel_sync_locked(&s->backup_job->job, true); } =20 if (!failover) { diff --git a/blockdev.c b/blockdev.c index a3b9aeb3c2..11fd651bde 100644 --- a/blockdev.c +++ b/blockdev.c @@ -160,7 +160,7 @@ void blockdev_mark_auto_del(BlockBackend *blk) AioContext *aio_context =3D job->job.aio_context; aio_context_acquire(aio_context); =20 - job_cancel(&job->job, false); + job_cancel_locked(&job->job, false); =20 aio_context_release(aio_context); } @@ -1832,7 +1832,7 @@ static void drive_backup_abort(BlkActionState *common) aio_context =3D bdrv_get_aio_context(state->bs); aio_context_acquire(aio_context); =20 - job_cancel_sync(&state->job->job, true); + job_cancel_sync_locked(&state->job->job, true); =20 aio_context_release(aio_context); } @@ -1933,7 +1933,7 @@ static void blockdev_backup_abort(BlkActionState *com= mon) aio_context =3D bdrv_get_aio_context(state->bs); aio_context_acquire(aio_context); =20 - job_cancel_sync(&state->job->job, true); + job_cancel_sync_locked(&state->job->job, true); =20 aio_context_release(aio_context); } @@ -2382,7 +2382,7 @@ exit: if (!has_props) { qapi_free_TransactionProperties(props); } - job_txn_unref(block_job_txn); + job_txn_unref_locked(block_job_txn); } =20 BlockDirtyBitmapSha256 *qmp_x_debug_block_dirty_bitmap_sha256(const char *= node, @@ -3347,14 +3347,14 @@ void qmp_block_job_cancel(const char *device, force =3D false; } =20 - if (job_user_paused(&job->job) && !force) { + if (job_user_paused_locked(&job->job) && !force) { error_setg(errp, "The block job for device '%s' is currently pause= d", device); goto out; } =20 trace_qmp_block_job_cancel(job); - job_user_cancel(&job->job, force, errp); + job_user_cancel_locked(&job->job, force, errp); out: aio_context_release(aio_context); } @@ -3369,7 +3369,7 @@ void qmp_block_job_pause(const char *device, Error **= errp) } =20 trace_qmp_block_job_pause(job); - job_user_pause(&job->job, errp); + 
job_user_pause_locked(&job->job, errp); aio_context_release(aio_context); } =20 @@ -3383,7 +3383,7 @@ void qmp_block_job_resume(const char *device, Error *= *errp) } =20 trace_qmp_block_job_resume(job); - job_user_resume(&job->job, errp); + job_user_resume_locked(&job->job, errp); aio_context_release(aio_context); } =20 @@ -3397,7 +3397,7 @@ void qmp_block_job_complete(const char *device, Error= **errp) } =20 trace_qmp_block_job_complete(job); - job_complete(&job->job, errp); + job_complete_locked(&job->job, errp); aio_context_release(aio_context); } =20 @@ -3411,16 +3411,16 @@ void qmp_block_job_finalize(const char *id, Error *= *errp) } =20 trace_qmp_block_job_finalize(job); - job_ref(&job->job); - job_finalize(&job->job, errp); + job_ref_locked(&job->job); + job_finalize_locked(&job->job, errp); =20 /* - * Job's context might have changed via job_finalize (and job_txn_apply - * automatically acquires the new one), so make sure we release the co= rrect - * one. + * Job's context might have changed via job_finalize_locked + * (and job_txn_apply automatically acquires the new one), + * so make sure we release the correct one. */ aio_context =3D blk_get_aio_context(job->blk); - job_unref(&job->job); + job_unref_locked(&job->job); aio_context_release(aio_context); } =20 @@ -3436,7 +3436,7 @@ void qmp_block_job_dismiss(const char *id, Error **er= rp) =20 trace_qmp_block_job_dismiss(bjob); job =3D &bjob->job; - job_dismiss(&job, errp); + job_dismiss_locked(&job, errp); aio_context_release(aio_context); } =20 diff --git a/blockjob.c b/blockjob.c index 74476af473..5b5d7f26b3 100644 --- a/blockjob.c +++ b/blockjob.c @@ -65,7 +65,7 @@ BlockJob *block_job_next(BlockJob *bjob) assert(qemu_in_main_thread()); =20 do { - job =3D job_next(job); + job =3D job_next_locked(job); } while (job && !is_block_job(job)); =20 return job ? 
container_of(job, BlockJob, job) : NULL; @@ -73,7 +73,7 @@ BlockJob *block_job_next(BlockJob *bjob) =20 BlockJob *block_job_get(const char *id) { - Job *job =3D job_get(id); + Job *job =3D job_get_locked(id); assert(qemu_in_main_thread()); =20 if (job && is_block_job(job)) { @@ -103,7 +103,7 @@ static char *child_job_get_parent_desc(BdrvChild *c) static void child_job_drained_begin(BdrvChild *c) { BlockJob *job =3D c->opaque; - job_pause(&job->job); + job_pause_locked(&job->job); } =20 static bool child_job_drained_poll(BdrvChild *c) @@ -115,7 +115,7 @@ static bool child_job_drained_poll(BdrvChild *c) /* An inactive or completed job doesn't have any pending requests. Jobs * with !job->busy are either already paused or have a pause point aft= er * being reentered, so no job driver code will run before they pause. = */ - if (!job->busy || job_is_completed(job)) { + if (!job->busy || job_is_completed_locked(job)) { return false; } =20 @@ -131,7 +131,7 @@ static bool child_job_drained_poll(BdrvChild *c) static void child_job_drained_end(BdrvChild *c, int *drained_end_counter) { BlockJob *job =3D c->opaque; - job_resume(&job->job); + job_resume_locked(&job->job); } =20 static bool child_job_can_set_aio_ctx(BdrvChild *c, AioContext *ctx, @@ -279,7 +279,7 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, = Error **errp) =20 assert(qemu_in_main_thread()); =20 - if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp) < 0) { + if (job_apply_verb_locked(&job->job, JOB_VERB_SET_SPEED, errp) < 0) { return false; } if (speed < 0) { @@ -301,7 +301,7 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, = Error **errp) } =20 /* kick only if a timer is pending */ - job_enter_cond(&job->job, job_timer_pending); + job_enter_cond_locked(&job->job, job_timer_pending); =20 return true; } @@ -553,7 +553,7 @@ BlockErrorAction block_job_error_action(BlockJob *job, = BlockdevOnError on_err, } if (action =3D=3D BLOCK_ERROR_ACTION_STOP) { if (!job->job.user_paused) { - 
job_pause(&job->job); + job_pause_locked(&job->job); /* make the pause user visible, which will be resumed from QMP= . */ job->job.user_paused =3D true; } diff --git a/include/qemu/job.h b/include/qemu/job.h index 0d348ff186..0d1c4d1bb1 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -350,7 +350,7 @@ void job_unlock(void); =20 /** * Allocate and return a new job transaction. Jobs can be added to the - * transaction using job_txn_add_job(). + * transaction using job_txn_add_job_locked(). * * The transaction is automatically freed when the last job completes or is * cancelled. @@ -362,22 +362,25 @@ void job_unlock(void); JobTxn *job_txn_new(void); =20 /** - * Release a reference that was previously acquired with job_txn_add_job or - * job_txn_new. If it's the last reference to the object, it will be freed. + * Release a reference that was previously acquired with + * job_txn_add_job_locked or job_txn_new. + * If it's the last reference to the object, it will be freed. */ -void job_txn_unref(JobTxn *txn); +void job_txn_unref_locked(JobTxn *txn); =20 /** * @txn: The transaction (may be NULL) * @job: Job to add to the transaction * * Add @job to the transaction. The @job must not already be in a transac= tion. - * The caller must call either job_txn_unref() or job_completed() to relea= se - * the reference that is automatically grabbed here. + * The caller must call either job_txn_unref_locked() or job_completed() + * to release the reference that is automatically grabbed here. * * If @txn is NULL, the function does nothing. + * + * Called between job_lock and job_unlock. */ -void job_txn_add_job(JobTxn *txn, Job *job); +void job_txn_add_job_locked(JobTxn *txn, Job *job); =20 /** * Create a new long-running job and return it. 
@@ -396,16 +399,20 @@ void *job_create(const char *job_id, const JobDriver = *driver, JobTxn *txn, void *opaque, Error **errp); =20 /** - * Add a reference to Job refcnt, it will be decreased with job_unref, and= then - * be freed if it comes to be the last reference. + * Add a reference to Job refcnt, it will be decreased with job_unref_lock= ed, + * and then be freed if it comes to be the last reference. + * + * Called between job_lock and job_unlock. */ -void job_ref(Job *job); +void job_ref_locked(Job *job); =20 /** - * Release a reference that was previously acquired with job_ref() or + * Release a reference that was previously acquired with job_ref_locked() = or * job_create(). If it's the last reference to the object, it will be free= d. + * + * Called between job_lock and job_unlock, but might release it temporarly. */ -void job_unref(Job *job); +void job_unref_locked(Job *job); =20 /** * @job: The job that has made progress @@ -450,8 +457,10 @@ void job_event_completed(Job *job); * Conditionally enter the job coroutine if the job is ready to run, not * already busy and fn() returns true. fn() is called while under the job_= lock * critical section. + * + * Called between job_lock and job_unlock, but it releases the lock tempor= arly. */ -void job_enter_cond(Job *job, bool(*fn)(Job *job)); +void job_enter_cond_locked(Job *job, bool(*fn)(Job *job)); =20 /** * @job: A job that has not yet been started. @@ -471,8 +480,9 @@ void job_enter(Job *job); /** * @job: The job that is ready to pause. * - * Pause now if job_pause() has been called. Jobs that perform lots of I/O - * must call this between requests so that the job can be paused. + * Pause now if job_pause_locked() has been called. + * Jobs that perform lots of I/O must call this between + * requests so that the job can be paused. 
*/ void coroutine_fn job_pause_point(Job *job); =20 @@ -511,79 +521,117 @@ bool job_is_cancelled(Job *job); */ bool job_cancel_requested(Job *job); =20 -/** Returns whether the job is in a completed state. */ -bool job_is_completed(Job *job); +/** + * Returns whether the job is in a completed state. + * Called between job_lock and job_unlock. + */ +bool job_is_completed_locked(Job *job); =20 -/** Returns whether the job is ready to be completed. */ +/** + * Returns whether the job is ready to be completed. + * Called with job_mutex *not* held. + */ bool job_is_ready(Job *job); =20 +/** Same as job_is_ready(), but assumes job_lock is held. */ +bool job_is_ready_locked(Job *job); + /** * Request @job to pause at the next pause point. Must be paired with - * job_resume(). If the job is supposed to be resumed by user action, call - * job_user_pause() instead. + * job_resume_locked(). If the job is supposed to be resumed by user actio= n, + * call job_user_pause_locked() instead. + * + * Called between job_lock and job_unlock. */ -void job_pause(Job *job); +void job_pause_locked(Job *job); =20 -/** Resumes a @job paused with job_pause. */ -void job_resume(Job *job); +/** + * Resumes a @job paused with job_pause_locked. + * Called between job_lock and job_unlock. + */ +void job_resume_locked(Job *job); =20 /** * Asynchronously pause the specified @job. - * Do not allow a resume until a matching call to job_user_resume. + * Do not allow a resume until a matching call to job_user_resume_locked. + * + * Called between job_lock and job_unlock. */ -void job_user_pause(Job *job, Error **errp); +void job_user_pause_locked(Job *job, Error **errp); =20 -/** Returns true if the job is user-paused. */ -bool job_user_paused(Job *job); +/** + * Returns true if the job is user-paused. + * Called between job_lock and job_unlock. + */ +bool job_user_paused_locked(Job *job); =20 /** * Resume the specified @job. - * Must be paired with a preceding job_user_pause. 
+ * Must be paired with a preceding job_user_pause_locked. + * + * Called between job_lock and job_unlock, but might release it temporarily. */ -void job_user_resume(Job *job, Error **errp); +void job_user_resume_locked(Job *job, Error **errp); /** * Get the next element from the list of block jobs after @job, or the * first one if @job is %NULL. * * Returns the requested job, or %NULL if there are no more jobs left. + * + * Called between job_lock and job_unlock. */ -Job *job_next(Job *job); +Job *job_next_locked(Job *job); /** * Get the job identified by @id (which must not be %NULL). * * Returns the requested job, or %NULL if it doesn't exist. + * + * Called between job_lock and job_unlock. */ -Job *job_get(const char *id); +Job *job_get_locked(const char *id); /** * Check whether the verb @verb can be applied to @job in its current state. * Returns 0 if the verb can be applied; otherwise errp is set and -EPERM * returned. + * + * Called between job_lock and job_unlock. */ -int job_apply_verb(Job *job, JobVerb verb, Error **errp); +int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp); /** The @job could not be started, free it. */ void job_early_fail(Job *job); +/** Same as job_early_fail(), but assumes job_lock is held. */ +void job_early_fail_locked(Job *job); + /** Moves the @job from RUNNING to READY */ void job_transition_to_ready(Job *job); -/** Asynchronously complete the specified @job. */ -void job_complete(Job *job, Error **errp); +/** + * Asynchronously complete the specified @job. + * Called between job_lock and job_unlock, but it releases the lock temporarily. + */ +void job_complete_locked(Job *job, Error **errp); /** * Asynchronously cancel the specified @job. If @force is true, the job should * be cancelled immediately without waiting for a consistent state. + * + * Called between job_lock and job_unlock.
*/ -void job_cancel(Job *job, bool force); +void job_cancel_locked(Job *job, bool force); /** - * Cancels the specified job like job_cancel(), but may refuse to do so if the - * operation isn't meaningful in the current state of the job. + * Cancels the specified job like job_cancel_locked(), + * but may refuse to do so if the operation isn't meaningful + * in the current state of the job. + * + * Called between job_lock and job_unlock. */ -void job_user_cancel(Job *job, bool force, Error **errp); +void job_user_cancel_locked(Job *job, bool force, Error **errp); /** * Synchronously cancel the @job. The completion callback is called @@ -596,14 +644,20 @@ void job_user_cancel(Job *job, bool force, Error **errp); * * Callers must hold the AioContext lock of job->aio_context. */ -int job_cancel_sync(Job *job, bool force); +int job_cancel_sync_locked(Job *job, bool force); -/** Synchronously force-cancels all jobs using job_cancel_sync(). */ +/** + * Synchronously force-cancels all jobs using job_cancel_sync_locked(). + * + * Called with job_lock *not* held, unlike most other APIs consumed + * by the monitor! This is primarily to avoid adding unnecessary lock-unlock + * patterns in the caller. + */ void job_cancel_sync_all(void); /** * @job: The job to be completed. - * @errp: Error object which may be set by job_complete(); this is not + * @errp: Error object which may be set by job_complete_locked(); this is not * necessarily set on every error, the job return value has to be * checked as well. * * @@ -614,8 +668,10 @@ void job_cancel_sync_all(void); * Returns the return value from the job. * * Callers must hold the AioContext lock of job->aio_context. + * + * Called between job_lock and job_unlock.
*/ -int job_complete_sync(Job *job, Error **errp); +int job_complete_sync_locked(Job *job, Error **errp); /** * For a @job that has finished its work and is pending awaiting explicit @@ -624,14 +680,18 @@ int job_complete_sync(Job *job, Error **errp); * FIXME: Make the below statement universally true: * For jobs that support the manual workflow mode, all graph changes that occur * as a result will occur after this command and before a successful reply. + * + * Called between job_lock and job_unlock. */ -void job_finalize(Job *job, Error **errp); +void job_finalize_locked(Job *job, Error **errp); /** * Remove the concluded @job from the query list and resets the passed pointer * to %NULL. Returns an error if the job is not actually concluded. + * + * Called between job_lock and job_unlock. */ -void job_dismiss(Job **job, Error **errp); +void job_dismiss_locked(Job **job, Error **errp); /** * Synchronously finishes the given @job. If @finish is given, it is called to @@ -641,7 +701,10 @@ void job_dismiss(Job **job, Error **errp); * cancelled before completing, and -errno in other error cases. * * Callers must hold the AioContext lock of job->aio_context. + * + * Called between job_lock and job_unlock.
*/ -int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp); +int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp), + Error **errp); =20 #endif diff --git a/job-qmp.c b/job-qmp.c index 829a28aa70..de4120a1d4 100644 --- a/job-qmp.c +++ b/job-qmp.c @@ -36,7 +36,7 @@ static Job *find_job(const char *id, AioContext **aio_con= text, Error **errp) =20 *aio_context =3D NULL; =20 - job =3D job_get(id); + job =3D job_get_locked(id); if (!job) { error_setg(errp, "Job not found"); return NULL; @@ -58,7 +58,7 @@ void qmp_job_cancel(const char *id, Error **errp) } =20 trace_qmp_job_cancel(job); - job_user_cancel(job, true, errp); + job_user_cancel_locked(job, true, errp); aio_context_release(aio_context); } =20 @@ -72,7 +72,7 @@ void qmp_job_pause(const char *id, Error **errp) } =20 trace_qmp_job_pause(job); - job_user_pause(job, errp); + job_user_pause_locked(job, errp); aio_context_release(aio_context); } =20 @@ -86,7 +86,7 @@ void qmp_job_resume(const char *id, Error **errp) } =20 trace_qmp_job_resume(job); - job_user_resume(job, errp); + job_user_resume_locked(job, errp); aio_context_release(aio_context); } =20 @@ -100,7 +100,7 @@ void qmp_job_complete(const char *id, Error **errp) } =20 trace_qmp_job_complete(job); - job_complete(job, errp); + job_complete_locked(job, errp); aio_context_release(aio_context); } =20 @@ -114,16 +114,16 @@ void qmp_job_finalize(const char *id, Error **errp) } =20 trace_qmp_job_finalize(job); - job_ref(job); - job_finalize(job, errp); + job_ref_locked(job); + job_finalize_locked(job, errp); =20 /* - * Job's context might have changed via job_finalize (and job_txn_apply - * automatically acquires the new one), so make sure we release the co= rrect - * one. + * Job's context might have changed via job_finalize_locked + * (and job_txn_apply automatically acquires the new one), + * so make sure we release the correct one. 
*/ aio_context =3D job->aio_context; - job_unref(job); + job_unref_locked(job); aio_context_release(aio_context); } =20 @@ -137,7 +137,7 @@ void qmp_job_dismiss(const char *id, Error **errp) } =20 trace_qmp_job_dismiss(job); - job_dismiss(&job, errp); + job_dismiss_locked(&job, errp); aio_context_release(aio_context); } =20 @@ -171,7 +171,7 @@ JobInfoList *qmp_query_jobs(Error **errp) JobInfoList *head =3D NULL, **tail =3D &head; Job *job; =20 - for (job =3D job_next(NULL); job; job =3D job_next(job)) { + for (job =3D job_next_locked(NULL); job; job =3D job_next_locked(job))= { JobInfo *value; AioContext *aio_context; =20 diff --git a/job.c b/job.c index ccf737a179..bb6ca2940c 100644 --- a/job.c +++ b/job.c @@ -118,14 +118,14 @@ static void job_txn_ref(JobTxn *txn) txn->refcnt++; } =20 -void job_txn_unref(JobTxn *txn) +void job_txn_unref_locked(JobTxn *txn) { if (txn && --txn->refcnt =3D=3D 0) { g_free(txn); } } =20 -void job_txn_add_job(JobTxn *txn, Job *job) +void job_txn_add_job_locked(JobTxn *txn, Job *job) { if (!txn) { return; @@ -142,7 +142,7 @@ static void job_txn_del_job(Job *job) { if (job->txn) { QLIST_REMOVE(job, txn_list); - job_txn_unref(job->txn); + job_txn_unref_locked(job->txn); job->txn =3D NULL; } } @@ -160,7 +160,7 @@ static int job_txn_apply(Job *job, int fn(Job *)) * we need to release it here to avoid holding the lock twice - which = would * break AIO_WAIT_WHILE from within fn. */ - job_ref(job); + job_ref_locked(job); aio_context_release(job->aio_context); =20 QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) { @@ -178,7 +178,7 @@ static int job_txn_apply(Job *job, int fn(Job *)) * can't use a local variable to cache it. 
*/ aio_context_acquire(job->aio_context); - job_unref(job); + job_unref_locked(job); return rc; } =20 @@ -202,7 +202,7 @@ static void job_state_transition(Job *job, JobStatus s1) } } =20 -int job_apply_verb(Job *job, JobVerb verb, Error **errp) +int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp) { JobStatus s0 =3D job->status; assert(verb >=3D 0 && verb < JOB_VERB__MAX); @@ -238,7 +238,7 @@ bool job_cancel_requested(Job *job) return job->cancelled; } =20 -bool job_is_ready(Job *job) +bool job_is_ready_locked(Job *job) { switch (job->status) { case JOB_STATUS_UNDEFINED: @@ -260,7 +260,13 @@ bool job_is_ready(Job *job) return false; } =20 -bool job_is_completed(Job *job) +bool job_is_ready(Job *job) +{ + JOB_LOCK_GUARD(); + return job_is_ready_locked(job); +} + +bool job_is_completed_locked(Job *job) { switch (job->status) { case JOB_STATUS_UNDEFINED: @@ -292,7 +298,7 @@ static bool job_should_pause(Job *job) return job->pause_count > 0; } =20 -Job *job_next(Job *job) +Job *job_next_locked(Job *job) { if (!job) { return QLIST_FIRST(&jobs); @@ -300,7 +306,7 @@ Job *job_next(Job *job) return QLIST_NEXT(job, job_list); } =20 -Job *job_get(const char *id) +Job *job_get_locked(const char *id) { Job *job; =20 @@ -335,7 +341,7 @@ void *job_create(const char *job_id, const JobDriver *d= river, JobTxn *txn, error_setg(errp, "Invalid job ID '%s'", job_id); return NULL; } - if (job_get(job_id)) { + if (job_get_locked(job_id)) { error_setg(errp, "Job ID '%s' already in use", job_id); return NULL; } @@ -375,21 +381,21 @@ void *job_create(const char *job_id, const JobDriver = *driver, JobTxn *txn, * consolidating the job management logic */ if (!txn) { txn =3D job_txn_new(); - job_txn_add_job(txn, job); - job_txn_unref(txn); + job_txn_add_job_locked(txn, job); + job_txn_unref_locked(txn); } else { - job_txn_add_job(txn, job); + job_txn_add_job_locked(txn, job); } =20 return job; } =20 -void job_ref(Job *job) +void job_ref_locked(Job *job) { ++job->refcnt; } =20 -void 
job_unref(Job *job) +void job_unref_locked(Job *job) { assert(qemu_in_main_thread()); =20 @@ -451,7 +457,7 @@ static void job_event_idle(Job *job) notifier_list_notify(&job->on_idle, job); } =20 -void job_enter_cond(Job *job, bool(*fn)(Job *job)) +void job_enter_cond_locked(Job *job, bool(*fn)(Job *job)) { if (!job_started(job)) { return; @@ -480,7 +486,7 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job)) =20 void job_enter(Job *job) { - job_enter_cond(job, NULL); + job_enter_cond_locked(job, NULL); } =20 /* Yield, and schedule a timer to reenter the coroutine after @ns nanoseco= nds. @@ -500,7 +506,7 @@ static void coroutine_fn job_do_yield(Job *job, uint64_= t ns) real_job_unlock(); qemu_coroutine_yield(); =20 - /* Set by job_enter_cond() before re-entering the coroutine. */ + /* Set by job_enter_cond_locked() before re-entering the coroutine. */ assert(job->busy); } =20 @@ -573,7 +579,7 @@ static bool job_timer_not_pending(Job *job) return !timer_pending(&job->sleep_timer); } =20 -void job_pause(Job *job) +void job_pause_locked(Job *job) { job->pause_count++; if (!job->paused) { @@ -581,7 +587,7 @@ void job_pause(Job *job) } } =20 -void job_resume(Job *job) +void job_resume_locked(Job *job) { assert(job->pause_count > 0); job->pause_count--; @@ -590,12 +596,12 @@ void job_resume(Job *job) } =20 /* kick only if no timer is pending */ - job_enter_cond(job, job_timer_not_pending); + job_enter_cond_locked(job, job_timer_not_pending); } =20 -void job_user_pause(Job *job, Error **errp) +void job_user_pause_locked(Job *job, Error **errp) { - if (job_apply_verb(job, JOB_VERB_PAUSE, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_PAUSE, errp)) { return; } if (job->user_paused) { @@ -603,15 +609,15 @@ void job_user_pause(Job *job, Error **errp) return; } job->user_paused =3D true; - job_pause(job); + job_pause_locked(job); } =20 -bool job_user_paused(Job *job) +bool job_user_paused_locked(Job *job) { return job->user_paused; } =20 -void job_user_resume(Job *job, 
Error **errp) +void job_user_resume_locked(Job *job, Error **errp) { assert(job); assert(qemu_in_main_thread()); @@ -619,14 +625,14 @@ void job_user_resume(Job *job, Error **errp) error_setg(errp, "Can't resume a job that was not paused"); return; } - if (job_apply_verb(job, JOB_VERB_RESUME, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_RESUME, errp)) { return; } if (job->driver->user_resume) { job->driver->user_resume(job); } job->user_paused =3D false; - job_resume(job); + job_resume_locked(job); } =20 static void job_do_dismiss(Job *job) @@ -639,15 +645,15 @@ static void job_do_dismiss(Job *job) job_txn_del_job(job); =20 job_state_transition(job, JOB_STATUS_NULL); - job_unref(job); + job_unref_locked(job); } =20 -void job_dismiss(Job **jobptr, Error **errp) +void job_dismiss_locked(Job **jobptr, Error **errp) { Job *job =3D *jobptr; /* similarly to _complete, this is QMP-interface only. */ assert(job->id); - if (job_apply_verb(job, JOB_VERB_DISMISS, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_DISMISS, errp)) { return; } =20 @@ -655,12 +661,18 @@ void job_dismiss(Job **jobptr, Error **errp) *jobptr =3D NULL; } =20 -void job_early_fail(Job *job) +void job_early_fail_locked(Job *job) { assert(job->status =3D=3D JOB_STATUS_CREATED); job_do_dismiss(job); } =20 +void job_early_fail(Job *job) +{ + JOB_LOCK_GUARD(); + job_early_fail_locked(job); +} + static void job_conclude(Job *job) { job_state_transition(job, JOB_STATUS_CONCLUDED); @@ -710,7 +722,7 @@ static void job_clean(Job *job) =20 static int job_finalize_single(Job *job) { - assert(job_is_completed(job)); + assert(job_is_completed_locked(job)); =20 /* Ensure abort is called for late-transactional failures */ job_update_rc(job); @@ -795,7 +807,7 @@ static void job_completed_txn_abort(Job *job) * calls of AIO_WAIT_WHILE(), which could deadlock otherwise. * Note that the job's AioContext may change when it is finalized. 
*/ - job_ref(job); + job_ref_locked(job); aio_context_release(job->aio_context); =20 /* Other jobs are effectively cancelled by us, set the status for @@ -822,22 +834,22 @@ static void job_completed_txn_abort(Job *job) */ ctx =3D other_job->aio_context; aio_context_acquire(ctx); - if (!job_is_completed(other_job)) { + if (!job_is_completed_locked(other_job)) { assert(job_cancel_requested(other_job)); - job_finish_sync(other_job, NULL, NULL); + job_finish_sync_locked(other_job, NULL, NULL); } job_finalize_single(other_job); aio_context_release(ctx); } =20 /* - * Use job_ref()/job_unref() so we can read the AioContext here - * even if the job went away during job_finalize_single(). + * Use job_ref_locked()/job_unref_locked() so we can read the AioConte= xt + * here even if the job went away during job_finalize_single(). */ aio_context_acquire(job->aio_context); - job_unref(job); + job_unref_locked(job); =20 - job_txn_unref(txn); + job_txn_unref_locked(txn); } =20 static int job_prepare(Job *job) @@ -869,10 +881,10 @@ static void job_do_finalize(Job *job) } } =20 -void job_finalize(Job *job, Error **errp) +void job_finalize_locked(Job *job, Error **errp) { assert(job && job->id); - if (job_apply_verb(job, JOB_VERB_FINALIZE, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_FINALIZE, errp)) { return; } job_do_finalize(job); @@ -905,7 +917,7 @@ static void job_completed_txn_success(Job *job) * txn. 
*/ QLIST_FOREACH(other_job, &txn->jobs, txn_list) { - if (!job_is_completed(other_job)) { + if (!job_is_completed_locked(other_job)) { return; } assert(other_job->ret =3D=3D 0); @@ -921,7 +933,7 @@ static void job_completed_txn_success(Job *job) =20 static void job_completed(Job *job) { - assert(job && job->txn && !job_is_completed(job)); + assert(job && job->txn && !job_is_completed_locked(job)); =20 job_update_rc(job); trace_job_completed(job, job->ret); @@ -938,7 +950,7 @@ static void job_exit(void *opaque) Job *job =3D (Job *)opaque; AioContext *ctx; =20 - job_ref(job); + job_ref_locked(job); aio_context_acquire(job->aio_context); =20 /* This is a lie, we're not quiescent, but still doing the completion @@ -957,7 +969,7 @@ static void job_exit(void *opaque) * the job underneath us. */ ctx =3D job->aio_context; - job_unref(job); + job_unref_locked(job); aio_context_release(ctx); } =20 @@ -1003,7 +1015,7 @@ void job_start(Job *job) aio_co_enter(job->aio_context, job->co); } =20 -void job_cancel(Job *job, bool force) +void job_cancel_locked(Job *job, bool force) { if (job->status =3D=3D JOB_STATUS_CONCLUDED) { job_do_dismiss(job); @@ -1031,20 +1043,22 @@ void job_cancel(Job *job, bool force) } } =20 -void job_user_cancel(Job *job, bool force, Error **errp) +void job_user_cancel_locked(Job *job, bool force, Error **errp) { - if (job_apply_verb(job, JOB_VERB_CANCEL, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_CANCEL, errp)) { return; } - job_cancel(job, force); + job_cancel_locked(job, force); } =20 -/* A wrapper around job_cancel() taking an Error ** parameter so it may be - * used with job_finish_sync() without the need for (rather nasty) function - * pointer casts there. */ +/* + * A wrapper around job_cancel_locked() taking an Error ** parameter so + * it may be used with job_finish_sync_locked() without the + * need for (rather nasty) function pointer casts there. 
+ */ static void job_cancel_err(Job *job, Error **errp) { - job_cancel(job, false); + job_cancel_locked(job, false); } =20 /** @@ -1052,15 +1066,15 @@ static void job_cancel_err(Job *job, Error **errp) */ static void job_force_cancel_err(Job *job, Error **errp) { - job_cancel(job, true); + job_cancel_locked(job, true); } =20 -int job_cancel_sync(Job *job, bool force) +int job_cancel_sync_locked(Job *job, bool force) { if (force) { - return job_finish_sync(job, &job_force_cancel_err, NULL); + return job_finish_sync_locked(job, &job_force_cancel_err, NULL); } else { - return job_finish_sync(job, &job_cancel_err, NULL); + return job_finish_sync_locked(job, &job_cancel_err, NULL); } } =20 @@ -1069,25 +1083,25 @@ void job_cancel_sync_all(void) Job *job; AioContext *aio_context; =20 - while ((job =3D job_next(NULL))) { + while ((job =3D job_next_locked(NULL))) { aio_context =3D job->aio_context; aio_context_acquire(aio_context); - job_cancel_sync(job, true); + job_cancel_sync_locked(job, true); aio_context_release(aio_context); } } =20 -int job_complete_sync(Job *job, Error **errp) +int job_complete_sync_locked(Job *job, Error **errp) { - return job_finish_sync(job, job_complete, errp); + return job_finish_sync_locked(job, job_complete_locked, errp); } =20 -void job_complete(Job *job, Error **errp) +void job_complete_locked(Job *job, Error **errp) { /* Should not be reachable via external interface for internal jobs */ assert(job->id); assert(qemu_in_main_thread()); - if (job_apply_verb(job, JOB_VERB_COMPLETE, errp)) { + if (job_apply_verb_locked(job, JOB_VERB_COMPLETE, errp)) { return; } if (job_cancel_requested(job) || !job->driver->complete) { @@ -1099,26 +1113,27 @@ void job_complete(Job *job, Error **errp) job->driver->complete(job, errp); } =20 -int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp) +int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp), + Error **errp) { Error *local_err =3D NULL; int ret; =20 - 
job_ref(job); + job_ref_locked(job); =20 if (finish) { finish(job, &local_err); } if (local_err) { error_propagate(errp, local_err); - job_unref(job); + job_unref_locked(job); return -EBUSY; } =20 AIO_WAIT_WHILE(job->aio_context, - (job_enter(job), !job_is_completed(job))); + (job_enter(job), !job_is_completed_locked(job))); =20 ret =3D (job_is_cancelled(job) && job->ret =3D=3D 0) ? -ECANCELED : jo= b->ret; - job_unref(job); + job_unref_locked(job); return ret; } diff --git a/qemu-img.c b/qemu-img.c index f036a1d428..09f3b11eab 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -906,7 +906,7 @@ static void run_block_job(BlockJob *job, Error **errp) int ret =3D 0; =20 aio_context_acquire(aio_context); - job_ref(&job->job); + job_ref_locked(&job->job); do { float progress =3D 0.0f; aio_poll(aio_context, true); @@ -917,14 +917,14 @@ static void run_block_job(BlockJob *job, Error **errp) progress =3D (float)progress_current / progress_total * 100.f; } qemu_progress_print(progress, 0); - } while (!job_is_ready(&job->job) && !job_is_completed(&job->job)); + } while (!job_is_ready(&job->job) && !job_is_completed_locked(&job->jo= b)); =20 - if (!job_is_completed(&job->job)) { - ret =3D job_complete_sync(&job->job, errp); + if (!job_is_completed_locked(&job->job)) { + ret =3D job_complete_sync_locked(&job->job, errp); } else { ret =3D job->job.ret; } - job_unref(&job->job); + job_unref_locked(&job->job); aio_context_release(aio_context); =20 /* publish completion progress only when success */ diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c index 2d3c17e566..3f344a0d0d 100644 --- a/tests/unit/test-bdrv-drain.c +++ b/tests/unit/test-bdrv-drain.c @@ -995,7 +995,7 @@ static void test_blockjob_common_drain_node(enum drain_= type drain_type, g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */ =20 aio_context_acquire(ctx); - ret =3D job_complete_sync(&job->job, &error_abort); + ret =3D job_complete_sync_locked(&job->job, &error_abort); 
g_assert_cmpint(ret, =3D=3D, (result =3D=3D TEST_JOB_SUCCESS ? 0 : -EI= O)); =20 if (use_iothread) { diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothr= ead.c index aea660aeed..7e1b521d61 100644 --- a/tests/unit/test-block-iothread.c +++ b/tests/unit/test-block-iothread.c @@ -456,7 +456,7 @@ static void test_attach_blockjob(void) } =20 aio_context_acquire(ctx); - job_complete_sync(&tjob->common.job, &error_abort); + job_complete_sync_locked(&tjob->common.job, &error_abort); blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort); aio_context_release(ctx); =20 @@ -630,7 +630,7 @@ static void test_propagate_mirror(void) BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT, false, "filter_node", MIRROR_COPY_MODE_BACKGROUND, &error_abort); - job =3D job_get("job0"); + job =3D job_get_locked("job0"); filter =3D bdrv_find_node("filter_node"); =20 /* Change the AioContext of src */ diff --git a/tests/unit/test-blockjob-txn.c b/tests/unit/test-blockjob-txn.c index 8bd13b9949..5396fcef10 100644 --- a/tests/unit/test-blockjob-txn.c +++ b/tests/unit/test-blockjob-txn.c @@ -125,7 +125,7 @@ static void test_single_job(int expected) job_start(&job->job); =20 if (expected =3D=3D -ECANCELED) { - job_cancel(&job->job, false); + job_cancel_locked(&job->job, false); } =20 while (result =3D=3D -EINPROGRESS) { @@ -133,7 +133,7 @@ static void test_single_job(int expected) } g_assert_cmpint(result, =3D=3D, expected); =20 - job_txn_unref(txn); + job_txn_unref_locked(txn); } =20 static void test_single_job_success(void) @@ -168,13 +168,13 @@ static void test_pair_jobs(int expected1, int expecte= d2) /* Release our reference now to trigger as many nice * use-after-free bugs as possible. 
*/ - job_txn_unref(txn); + job_txn_unref_locked(txn); =20 if (expected1 =3D=3D -ECANCELED) { - job_cancel(&job1->job, false); + job_cancel_locked(&job1->job, false); } if (expected2 =3D=3D -ECANCELED) { - job_cancel(&job2->job, false); + job_cancel_locked(&job2->job, false); } =20 while (result1 =3D=3D -EINPROGRESS || result2 =3D=3D -EINPROGRESS) { @@ -227,7 +227,7 @@ static void test_pair_jobs_fail_cancel_race(void) job_start(&job1->job); job_start(&job2->job); =20 - job_cancel(&job1->job, false); + job_cancel_locked(&job1->job, false); =20 /* Now make job2 finish before the main loop kicks jobs. This simulat= es * the race between a pending kick and another job completing. @@ -242,7 +242,7 @@ static void test_pair_jobs_fail_cancel_race(void) g_assert_cmpint(result1, =3D=3D, -ECANCELED); g_assert_cmpint(result2, =3D=3D, -ECANCELED); =20 - job_txn_unref(txn); + job_txn_unref_locked(txn); } =20 int main(int argc, char **argv) diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c index 4c9e1bf1e5..2beed3623e 100644 --- a/tests/unit/test-blockjob.c +++ b/tests/unit/test-blockjob.c @@ -211,7 +211,7 @@ static CancelJob *create_common(Job **pjob) bjob =3D mk_job(blk, "Steve", &test_cancel_driver, true, JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS); job =3D &bjob->job; - job_ref(job); + job_ref_locked(job); assert(job->status =3D=3D JOB_STATUS_CREATED); s =3D container_of(bjob, CancelJob, common); s->blk =3D blk; @@ -230,13 +230,13 @@ static void cancel_common(CancelJob *s) ctx =3D job->job.aio_context; aio_context_acquire(ctx); =20 - job_cancel_sync(&job->job, true); + job_cancel_sync_locked(&job->job, true); if (sts !=3D JOB_STATUS_CREATED && sts !=3D JOB_STATUS_CONCLUDED) { Job *dummy =3D &job->job; - job_dismiss(&dummy, &error_abort); + job_dismiss_locked(&dummy, &error_abort); } assert(job->job.status =3D=3D JOB_STATUS_NULL); - job_unref(&job->job); + job_unref_locked(&job->job); destroy_blk(blk); =20 aio_context_release(ctx); @@ -274,7 +274,7 @@ static 
void test_cancel_paused(void) job_start(job); assert(job->status =3D=3D JOB_STATUS_RUNNING); =20 - job_user_pause(job, &error_abort); + job_user_pause_locked(job, &error_abort); job_enter(job); assert(job->status =3D=3D JOB_STATUS_PAUSED); =20 @@ -312,7 +312,7 @@ static void test_cancel_standby(void) job_enter(job); assert(job->status =3D=3D JOB_STATUS_READY); =20 - job_user_pause(job, &error_abort); + job_user_pause_locked(job, &error_abort); job_enter(job); assert(job->status =3D=3D JOB_STATUS_STANDBY); =20 @@ -333,7 +333,7 @@ static void test_cancel_pending(void) job_enter(job); assert(job->status =3D=3D JOB_STATUS_READY); =20 - job_complete(job, &error_abort); + job_complete_locked(job, &error_abort); job_enter(job); while (!job->deferred_to_main_loop) { aio_poll(qemu_get_aio_context(), true); @@ -359,7 +359,7 @@ static void test_cancel_concluded(void) job_enter(job); assert(job->status =3D=3D JOB_STATUS_READY); =20 - job_complete(job, &error_abort); + job_complete_locked(job, &error_abort); job_enter(job); while (!job->deferred_to_main_loop) { aio_poll(qemu_get_aio_context(), true); @@ -369,7 +369,7 @@ static void test_cancel_concluded(void) assert(job->status =3D=3D JOB_STATUS_PENDING); =20 aio_context_acquire(job->aio_context); - job_finalize(job, &error_abort); + job_finalize_locked(job, &error_abort); aio_context_release(job->aio_context); assert(job->status =3D=3D JOB_STATUS_CONCLUDED); =20 @@ -417,7 +417,7 @@ static const BlockJobDriver test_yielding_driver =3D { }; =20 /* - * Test that job_complete() works even on jobs that are in a paused + * Test that job_complete_locked() works even on jobs that are in a paused * state (i.e., STANDBY). * * To do this, run YieldingJob in an IO thread, get it into the READY @@ -425,7 +425,7 @@ static const BlockJobDriver test_yielding_driver =3D { * acquire the context so the job will not be entered and will thus * remain on STANDBY. * - * job_complete() should still work without error. 
+ * job_complete_locked() should still work without error. * * Note that on the QMP interface, it is impossible to lock an IO * thread before a drained section ends. In practice, the @@ -479,16 +479,16 @@ static void test_complete_in_standby(void) assert(job->status == JOB_STATUS_STANDBY); /* Even though the job is on standby, this should work */ - job_complete(job, &error_abort); + job_complete_locked(job, &error_abort); /* The test is done now, clean up. */ - job_finish_sync(job, NULL, &error_abort); + job_finish_sync_locked(job, NULL, &error_abort); assert(job->status == JOB_STATUS_PENDING); - job_finalize(job, &error_abort); + job_finalize_locked(job, &error_abort); assert(job->status == JOB_STATUS_CONCLUDED); - job_dismiss(&job, &error_abort); + job_dismiss_locked(&job, &error_abort); destroy_blk(blk); aio_context_release(ctx); -- 2.31.1 From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow Subject: [PATCH v3 04/16] job.h: define unlocked functions Date: Wed, 5 Jan 2022 09:01:56 -0500 Message-Id: <20220105140208.365608-5-eesposit@redhat.com> In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com> References: <20220105140208.365608-1-eesposit@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8"

All these functions assume that the lock is not held, and acquire it internally. These functions will be useful once job_lock is applied globally, as they will allow callers to access the job struct fields without worrying about the job lock. Also update the comments in blockjob.c (and move them into job.c). Note: at this stage, job_{lock/unlock} and the job lock guard macros are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito --- blockjob.c | 20 ------------- include/qemu/job.h | 68 ++++++++++++++++++++++++++++++++++++++++++-- job.c | 70 ++++++++++++++++++++++++++++++++++++++++++++-- 3 files changed, 133 insertions(+), 25 deletions(-) diff --git a/blockjob.c b/blockjob.c index 5b5d7f26b3..ce356be51e 100644 --- a/blockjob.c +++ b/blockjob.c @@ -36,21 +36,6 @@ #include "qemu/main-loop.h" #include "qemu/timer.h" =20 -/* - * The block job API is composed of two categories of functions. - * - * The first includes functions used by the monitor. The monitor is - * peculiar in that it accesses the block job list with block_job_get, and - * therefore needs consistency across block_job_get and the actual operati= on - * (e.g. block_job_set_speed). The consistency is achieved with - * aio_context_acquire/release. These functions are declared in blockjob.= h. - * - * The second includes functions used by the block job drivers and sometim= es - * by the core block layer. These do not care about locking, because the - * whole coroutine runs under the AioContext lock, and are declared in - * blockjob_int.h. - */ - static bool is_block_job(Job *job) { return job_type(job) =3D=3D JOB_TYPE_BACKUP || @@ -433,11 +418,6 @@ static void block_job_event_ready(Notifier *n, void *o= paque) } =20 =20 -/* - * API for block job drivers and the block layer. These functions are - * declared in blockjob_int.h. - */ - void *block_job_create(const char *job_id, const BlockJobDriver *driver, JobTxn *txn, BlockDriverState *bs, uint64_t perm, uint64_t shared_perm, int64_t speed, int flags, diff --git a/include/qemu/job.h b/include/qemu/job.h index 0d1c4d1bb1..f800b0b881 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -384,6 +384,7 @@ void job_txn_add_job_locked(JobTxn *txn, Job *job); =20 /** * Create a new long-running job and return it. + * Called with job_mutex *not* held. 
* * @job_id: The id of the newly-created job, or %NULL for internal jobs * @driver: The class object for the newly-created job. @@ -419,6 +420,8 @@ void job_unref_locked(Job *job); * @done: How much progress the job made since the last call * * Updates the progress counter of the job. + * + * Progress API is thread safe. */ void job_progress_update(Job *job, uint64_t done); =20 @@ -429,6 +432,8 @@ void job_progress_update(Job *job, uint64_t done); * * Sets the expected end value of the progress counter of a job so that a * completion percentage can be calculated when the progress is updated. + * + * Progress API is thread safe. */ void job_progress_set_remaining(Job *job, uint64_t remaining); =20 @@ -444,6 +449,8 @@ void job_progress_set_remaining(Job *job, uint64_t rema= ining); * length before, and job_progress_update() afterwards. * (So the operation acts as a parenthesis in regards to the main job * operation running in background.) + * + * Progress API is thread safe. */ void job_progress_increase_remaining(Job *job, uint64_t delta); =20 @@ -467,13 +474,17 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *j= ob)); * * Begins execution of a job. * Takes ownership of one reference to the job object. + * + * Called with job_mutex *not* held. */ void job_start(Job *job); =20 /** * @job: The job to enter. + * Called with job_mutex *not* held. * * Continue the specified job by entering the coroutine. + * Called with job_mutex lock *not* held. */ void job_enter(Job *job); =20 @@ -483,6 +494,9 @@ void job_enter(Job *job); * Pause now if job_pause_locked() has been called. * Jobs that perform lots of I/O must call this between * requests so that the job can be paused. + * + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). */ void coroutine_fn job_pause_point(Job *job); =20 @@ -490,6 +504,8 @@ void coroutine_fn job_pause_point(Job *job); * @job: The job that calls the function. * * Yield the job coroutine. 
+ * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). */ void job_yield(Job *job); =20 @@ -500,6 +516,9 @@ void job_yield(Job *job); * Put the job to sleep (assuming that it wasn't canceled) for @ns * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will immediately * interrupt the wait. + * + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). */ void coroutine_fn job_sleep_ns(Job *job, int64_t ns); =20 @@ -512,12 +531,19 @@ const char *job_type_str(const Job *job); /** Returns true if the job should not be visible to the management layer.= */ bool job_is_internal(Job *job); =20 -/** Returns whether the job is being cancelled. */ +/** + * Returns whether the job is being cancelled. + * Called with job_mutex *not* held. + */ bool job_is_cancelled(Job *job); =20 +/** Just like job_is_cancelled, but called between job_lock and job_unlock= */ +bool job_is_cancelled_locked(Job *job); + /** * Returns whether the job is scheduled for cancellation (at an * indefinite point). + * Called with job_mutex *not* held. */ bool job_cancel_requested(Job *job); =20 @@ -601,13 +627,19 @@ Job *job_get_locked(const char *id); */ int job_apply_verb_locked(Job *job, JobVerb verb, Error **errp); =20 -/** The @job could not be started, free it. */ +/** + * The @job could not be started, free it. + * Called with job_mutex *not* held. + */ void job_early_fail(Job *job); =20 /** Same as job_early_fail(), but assumes job_lock is held. */ void job_early_fail_locked(Job *job); =20 -/** Moves the @job from RUNNING to READY */ +/** + * Moves the @job from RUNNING to READY. + * Called with job_mutex *not* held. + */ void job_transition_to_ready(Job *job); =20 /** @@ -707,4 +739,34 @@ void job_dismiss_locked(Job **job, Error **errp); int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp), Error **errp); =20 +/** + * Returns the @job->status. + * Called with job_mutex *not* held. 
+ */ +JobStatus job_get_status(Job *job); + +/** + * Returns the @job->pause_count. + * Called with job_mutex *not* held. + */ +int job_get_pause_count(Job *job); + +/** + * Returns @job->paused. + * Called with job_mutex *not* held. + */ +bool job_get_paused(Job *job); + +/** + * Returns @job->busy. + * Called with job_mutex *not* held. + */ +bool job_get_busy(Job *job); + +/** + * Returns @job->aio_context. + * Called with job_mutex *not* held. + */ +AioContext *job_get_aio_context(Job *job); + #endif diff --git a/job.c b/job.c index bb6ca2940c..f4e1a56705 100644 --- a/job.c +++ b/job.c @@ -32,6 +32,22 @@ #include "trace/trace-root.h" #include "qapi/qapi-events-job.h" =20 +/* + * The job API is composed of two categories of functions. + * + * The first includes functions used by the monitor. The monitor is + * peculiar in that it accesses the block job list with job_get, and + * therefore needs consistency across job_get and the actual operation + * (e.g. job_user_cancel). To achieve this consistency, the caller + * calls job_lock/job_unlock itself around the whole operation. + * These functions are declared in job-monitor.h. + * + * + * The second includes functions used by the block job drivers and sometim= es + * by the core block layer. These delegate the locking to the callee inste= ad, + * and are declared in job-driver.h. + */ + /* * job_mutex protects the jobs list, but also makes the * struct job fields thread-safe. 
@@ -226,18 +242,61 @@ const char *job_type_str(const Job *job) return JobType_str(job_type(job)); } =20 -bool job_is_cancelled(Job *job) +JobStatus job_get_status(Job *job) +{ + JOB_LOCK_GUARD(); + return job->status; +} + +int job_get_pause_count(Job *job) +{ + JOB_LOCK_GUARD(); + return job->pause_count; +} + +bool job_get_paused(Job *job) +{ + JOB_LOCK_GUARD(); + return job->paused; +} + +bool job_get_busy(Job *job) +{ + JOB_LOCK_GUARD(); + return job->busy; +} + +AioContext *job_get_aio_context(Job *job) +{ + JOB_LOCK_GUARD(); + return job->aio_context; +} + +bool job_is_cancelled_locked(Job *job) { /* force_cancel may be true only if cancelled is true, too */ assert(job->cancelled || !job->force_cancel); return job->force_cancel; } =20 -bool job_cancel_requested(Job *job) +bool job_is_cancelled(Job *job) +{ + JOB_LOCK_GUARD(); + return job_is_cancelled_locked(job); +} + +/* Called with job_mutex held. */ +static bool job_cancel_requested_locked(Job *job) { return job->cancelled; } =20 +bool job_cancel_requested(Job *job) +{ + JOB_LOCK_GUARD(); + return job_cancel_requested_locked(job); +} + bool job_is_ready_locked(Job *job) { switch (job->status) { @@ -288,6 +347,13 @@ bool job_is_completed_locked(Job *job) return false; } =20 +/* Called with job_mutex lock *not* held */ +static bool job_is_completed(Job *job) +{ + JOB_LOCK_GUARD(); + return job_is_completed_locked(job); +} + static bool job_started(Job *job) { return job->co; --=20 2.31.1 From nobody Fri May 3 03:08:45 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 164139283937348.79386220253309; Wed, 5 Jan 2022 06:27:19 -0800 (PST) Received: from localhost 
([::1]:57316 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1n57G2-0008Br-Eg for importer@patchew.org; Wed, 05 Jan 2022 09:27:18 -0500 Received: from eggs.gnu.org ([209.51.188.92]:52898) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sW-0008Tj-7k for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:00 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]:53752) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sU-0007Du-8b for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:02:59 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-86-h-uyjKypMBOGgx803tfYiQ-1; Wed, 05 Jan 2022 09:02:54 -0500 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7D283190A7BD; Wed, 5 Jan 2022 14:02:47 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 44C6C2B5AC; Wed, 5 Jan 2022 14:02:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1641391377; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rZ/SOkR+4Hg9csWNBf100L1EOqqRsNqhmnwutq86Fmk=; b=GfrmHqNh08Arlc7TD0ybICMjnLmE9aZuUBTilEfbJD0gYsxxwyrEn7JPX3ezT4vyD+/9OZ fPXCTOpqJdSssF4kH20jxc3cx+5+rCA59mMel3fZAVJ9cWakt3dB1F/1CwQ/1cP7+uCZfU 7bHkDta6yHDSkSfmNpv63w9tWOU/cKc= X-MC-Unique: 
h-uyjKypMBOGgx803tfYiQ-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [PATCH v3 05/16] block/mirror.c: use of job helpers in drivers to avoid TOC/TOU Date: Wed, 5 Jan 2022 09:01:57 -0500 Message-Id: <20220105140208.365608-6-eesposit@redhat.com> In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com> References: <20220105140208.365608-1-eesposit@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eesposit@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -31 X-Spam_score: -3.2 X-Spam_bar: --- X-Spam_report: (-3.2 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.372, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1641392840281100001 Content-Type: text/plain; charset="utf-8" Once job lock is used and 
aiocontext is removed, mirror has to perform job operations under the same critical section, using the helpers prepared in previous commit. Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*. Signed-off-by: Emanuele Giuseppe Esposito --- block/mirror.c | 19 ++++++++++++++----- 1 file changed, 14 insertions(+), 5 deletions(-) diff --git a/block/mirror.c b/block/mirror.c index 00089e519b..41450df55c 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -653,9 +653,13 @@ static int mirror_exit_common(Job *job) BlockDriverState *target_bs; BlockDriverState *mirror_top_bs; Error *local_err =3D NULL; - bool abort =3D job->ret < 0; + bool abort; int ret =3D 0; =20 + WITH_JOB_LOCK_GUARD() { + abort =3D job->ret < 0; + } + if (s->prepared) { return 0; } @@ -1161,8 +1165,10 @@ static void mirror_complete(Job *job, Error **errp) s->should_complete =3D true; =20 /* If the job is paused, it will be re-entered when it is resumed */ - if (!job->paused) { - job_enter(job); + WITH_JOB_LOCK_GUARD() { + if (!job->paused) { + job_enter_cond_locked(job, NULL); + } } } =20 @@ -1182,8 +1188,11 @@ static bool mirror_drained_poll(BlockJob *job) * from one of our own drain sections, to avoid a deadlock waiting for * ourselves. 
*/ - if (!s->common.job.paused && !job_is_cancelled(&job->job) && !s->in_dr= ain) { - return true; + WITH_JOB_LOCK_GUARD() { + if (!s->common.job.paused && !job_is_cancelled_locked(&job->job) + && !s->in_drain) { + return true; + } } =20 return !!s->in_flight; --=20 2.31.1 From nobody Fri May 3 03:08:45 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1641393102559862.8764104565611; Wed, 5 Jan 2022 06:31:42 -0800 (PST) Received: from localhost ([::1]:38508 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1n57KG-0006BA-Qe for importer@patchew.org; Wed, 05 Jan 2022 09:31:40 -0500 Received: from eggs.gnu.org ([209.51.188.92]:53214) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sk-0000Hz-F1 for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:14 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]:41702) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sc-0007Nb-AD for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:12 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-307-4NQczIrfMXalxsdKdkvh3g-1; Wed, 05 Jan 2022 09:02:55 -0500 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8B4AF1083F83; Wed, 5 Jan 2022 
14:02:48 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8863F2B4CF; Wed, 5 Jan 2022 14:02:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1641391384; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=lwE/ply5dtgUjbq/zwhy+8uN2X4Pg7ArB8KuS+5rnt8=; b=e0JGxfh6joZaRFrrpr3MiEN3UpGljLmhdgkttXzMVFwNTyNVHEX822hkQ9uRHfDesZEBiZ +ObrrHGPykQgl4NHkI2RKZTGuSrGmNWnj4Y6K9WXeK4DcxmKP7jU02TNmXC5+aUs+WtjwV rRLLRjx/XhrXRuPE+P97Blwq+NzgVYU= X-MC-Unique: 4NQczIrfMXalxsdKdkvh3g-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [PATCH v3 06/16] job.c: make job_event_* functions static Date: Wed, 5 Jan 2022 09:01:58 -0500 Message-Id: <20220105140208.365608-7-eesposit@redhat.com> In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com> References: <20220105140208.365608-1-eesposit@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eesposit@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -31 X-Spam_score: -3.2 X-Spam_bar: --- X-Spam_report: (-3.2 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.372, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, 
RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1641393104349100001 Content-Type: text/plain; charset="utf-8" job_event_* functions can all be static, as they are not used outside job.c. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- include/qemu/job.h | 6 ------ job.c | 12 ++++++++++-- 2 files changed, 10 insertions(+), 8 deletions(-) diff --git a/include/qemu/job.h b/include/qemu/job.h index f800b0b881..c95f9fa8d1 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -454,12 +454,6 @@ void job_progress_set_remaining(Job *job, uint64_t rem= aining); */ void job_progress_increase_remaining(Job *job, uint64_t delta); =20 -/** To be called when a cancelled job is finalised. */ -void job_event_cancelled(Job *job); - -/** To be called when a successfully completed job is finalised. */ -void job_event_completed(Job *job); - /** * Conditionally enter the job coroutine if the job is ready to run, not * already busy and fn() returns true. fn() is called while under the job_= lock diff --git a/job.c b/job.c index f4e1a56705..b0dba40728 100644 --- a/job.c +++ b/job.c @@ -498,12 +498,20 @@ void job_progress_increase_remaining(Job *job, uint64= _t delta) progress_increase_remaining(&job->progress, delta); } =20 -void job_event_cancelled(Job *job) +/** + * To be called when a cancelled job is finalised. 
+ * Called with job_mutex held. + */ +static void job_event_cancelled(Job *job) { notifier_list_notify(&job->on_finalize_cancelled, job); } =20 -void job_event_completed(Job *job) +/** + * To be called when a successfully completed job is finalised. + * Called with job_mutex held. + */ +static void job_event_completed(Job *job) { notifier_list_notify(&job->on_finalize_completed, job); } --=20 2.31.1 From nobody Fri May 3 03:08:45 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1641394544058533.8509620683366; Wed, 5 Jan 2022 06:55:44 -0800 (PST) Received: from localhost ([::1]:36376 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1n57hW-0002DB-Q3 for importer@patchew.org; Wed, 05 Jan 2022 09:55:42 -0500 Received: from eggs.gnu.org ([209.51.188.92]:52908) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sX-00004w-Hw for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:02 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]:36539) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sV-0007Kg-UB for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:01 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-613-LXbujE7bORiy89gI906jPQ-1; Wed, 05 Jan 2022 09:02:58 -0500 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA 
(256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CB12F81EE98; Wed, 5 Jan 2022 14:02:49 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id A4ACB2B4B3; Wed, 5 Jan 2022 14:02:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1641391379; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MU05fzFG14Paw6y7UQrLSY8usprd9sJ/d97Uf91xMcU=; b=KWPPF8tlZu7xdjYxSkjbyq7/ZSEBU4QKWIG9SIbV2H3ee+FITYZ8wJZX5EwbMmYR2fBF81 qlDKw5tMhxkgfhmHf1mZnlFx/PVUU50YaO210krSThr8QveBfOK+5koGyCLP/Jp/cjnDWx pJjTp0dwL/W4Q0y4EvBh5QP8W+2OQK4= X-MC-Unique: LXbujE7bORiy89gI906jPQ-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [PATCH v3 07/16] job.c: move inner aiocontext lock in callbacks Date: Wed, 5 Jan 2022 09:01:59 -0500 Message-Id: <20220105140208.365608-8-eesposit@redhat.com> In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com> References: <20220105140208.365608-1-eesposit@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eesposit@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -31 X-Spam_score: -3.2 X-Spam_bar: --- X-Spam_report: (-3.2 / 5.0 requ) 
BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.372, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1641394544891100001 Content-Type: text/plain; charset="utf-8" Instead of having the lock in job_tnx_apply, move it inside in the callback. This will be helpful for next commits, when we introduce job_lock/unlock pairs. job_transition_to_pending() and job_needs_finalize() do not need to be protected by the aiocontext lock. No functional change intended. 
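The refactoring described above — hoisting the per-job lock out of the transaction loop and into each callback — can be illustrated in isolation. This is not the patch itself: `Ctx` and `pthread_mutex_t` are hypothetical stand-ins for a job's AioContext and its lock, and `finalize_one` stands in for callbacks like `job_finalize_single()`.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical per-job context, standing in for Job + its AioContext. */
typedef struct Ctx {
    pthread_mutex_t lock;
    int ret;
} Ctx;

/* After the change, each callback acquires its own context lock... */
static int finalize_one(Ctx *ctx)
{
    pthread_mutex_lock(&ctx->lock);
    ctx->ret = 0;                /* the actual finalization work */
    pthread_mutex_unlock(&ctx->lock);
    return ctx->ret;
}

/* ...so the apply loop no longer juggles inner_ctx acquire/release pairs
 * around every fn(other_job) call. */
static int txn_apply(Ctx *jobs, int n, int (*fn)(Ctx *))
{
    for (int i = 0; i < n; i++) {
        int rc = fn(&jobs[i]);
        if (rc) {
            return rc;
        }
    }
    return 0;
}
```

The observable behavior is unchanged (each callback still runs under its context's lock); only the acquisition point moves, which is what makes the later job_lock/unlock introduction possible.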
Signed-off-by: Emanuele Giuseppe Esposito --- job.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/job.c b/job.c index b0dba40728..2ee7233763 100644 --- a/job.c +++ b/job.c @@ -165,7 +165,6 @@ static void job_txn_del_job(Job *job) =20 static int job_txn_apply(Job *job, int fn(Job *)) { - AioContext *inner_ctx; Job *other_job, *next; JobTxn *txn =3D job->txn; int rc =3D 0; @@ -180,10 +179,7 @@ static int job_txn_apply(Job *job, int fn(Job *)) aio_context_release(job->aio_context); =20 QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) { - inner_ctx =3D other_job->aio_context; - aio_context_acquire(inner_ctx); rc =3D fn(other_job); - aio_context_release(inner_ctx); if (rc) { break; } @@ -796,11 +792,15 @@ static void job_clean(Job *job) =20 static int job_finalize_single(Job *job) { + AioContext *ctx =3D job->aio_context; + assert(job_is_completed_locked(job)); =20 /* Ensure abort is called for late-transactional failures */ job_update_rc(job); =20 + aio_context_acquire(ctx); + if (!job->ret) { job_commit(job); } else { @@ -808,6 +808,8 @@ static int job_finalize_single(Job *job) } job_clean(job); =20 + aio_context_release(ctx); + if (job->cb) { job->cb(job->opaque, job->ret); } @@ -928,11 +930,16 @@ static void job_completed_txn_abort(Job *job) =20 static int job_prepare(Job *job) { + AioContext *ctx =3D job->aio_context; assert(qemu_in_main_thread()); + if (job->ret =3D=3D 0 && job->driver->prepare) { + aio_context_acquire(ctx); job->ret =3D job->driver->prepare(job); + aio_context_release(ctx); job_update_rc(job); } + return job->ret; } =20 --=20 2.31.1 From nobody Fri May 3 03:08:45 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org 
(lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1641393789420949.1087653088725; Wed, 5 Jan 2022 06:43:09 -0800 (PST) Received: from localhost ([::1]:38168 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1n57VL-0008Lf-Le for importer@patchew.org; Wed, 05 Jan 2022 09:43:08 -0500 Received: from eggs.gnu.org ([209.51.188.92]:53286) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sq-0000Xx-6y for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:20 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]:31043) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1n56sZ-0007NC-Ta for qemu-devel@nongnu.org; Wed, 05 Jan 2022 09:03:19 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-103-kXvQ09SvOv6Titr6dKBdIQ-1; Wed, 05 Jan 2022 09:03:00 -0500 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0E56583DCCA; Wed, 5 Jan 2022 14:02:51 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id E3DEC2C24E; Wed, 5 Jan 2022 14:02:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1641391383; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2GFPYF3rd7vEl2S21B2p7zNKVat/lv/1EGml81GNS7s=; 
b=bWdZ8ZZaSvU8gTE1rY8x17POdDLJdvO9sLveG8mDHb4EOv1qFo1hLFJGa3o7Zju3+Ja31t fDJsrQmffFJL5At8v3DJ+DQUboVALZQxJVZq1InmGTXEJvTDL94G0xjSwUGFRrflZxmKH9 ei5MZ826HZ6+yn5ZjYhIX8nHCrUJsP8= X-MC-Unique: kXvQ09SvOv6Titr6dKBdIQ-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [PATCH v3 08/16] aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED Date: Wed, 5 Jan 2022 09:02:00 -0500 Message-Id: <20220105140208.365608-9-eesposit@redhat.com> In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com> References: <20220105140208.365608-1-eesposit@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eesposit@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -24 X-Spam_score: -2.5 X-Spam_bar: -- X-Spam_report: (-2.5 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.372, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) 
Same as the AIO_WAIT_WHILE macro, but if we are in the main loop it does
not release and then re-acquire ctx_'s AioContext.

Once all AioContext locks go away, this macro will replace AIO_WAIT_WHILE.

Signed-off-by: Emanuele Giuseppe Esposito
---
 include/block/aio-wait.h | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index b39eefb38d..ff27fe4eab 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -59,10 +59,11 @@ typedef struct {
 extern AioWait global_aio_wait;
 
 /**
- * AIO_WAIT_WHILE:
+ * _AIO_WAIT_WHILE:
  * @ctx: the aio context, or NULL if multiple aio contexts (for which the
  *       caller does not hold a lock) are involved in the polling condition.
  * @cond: wait while this conditional expression is true
+ * @unlock: whether to unlock and then lock again @ctx
  *
  * Wait while a condition is true.  Use this to implement synchronous
  * operations that require event loop activity.
@@ -75,7 +76,7 @@ extern AioWait global_aio_wait;
  * wait on conditions between two IOThreads since that could lead to deadlock,
  * go via the main loop instead.
  */
-#define AIO_WAIT_WHILE(ctx, cond) ({                               \
+#define _AIO_WAIT_WHILE(ctx, cond, unlock) ({                      \
     bool waited_ = false;                                          \
     AioWait *wait_ = &global_aio_wait;                             \
     AioContext *ctx_ = (ctx);                                      \
@@ -90,11 +91,11 @@ extern AioWait global_aio_wait;
         assert(qemu_get_current_aio_context() ==                   \
                qemu_get_aio_context());                            \
         while ((cond)) {                                           \
-            if (ctx_) {                                            \
+            if (unlock && ctx_) {                                  \
                 aio_context_release(ctx_);                         \
             }                                                      \
             aio_poll(qemu_get_aio_context(), true);                \
-            if (ctx_) {                                            \
+            if (unlock && ctx_) {                                  \
                 aio_context_acquire(ctx_);                         \
             }                                                      \
             waited_ = true;                                        \
@@ -103,6 +104,12 @@ extern AioWait global_aio_wait;
     qatomic_dec(&wait_->num_waiters);                              \
     waited_; })
 
+#define AIO_WAIT_WHILE(ctx, cond)                                  \
+    _AIO_WAIT_WHILE(ctx, cond, true)
+
+#define AIO_WAIT_WHILE_UNLOCKED(ctx, cond)                         \
+    _AIO_WAIT_WHILE(ctx, cond, false)
+
 /**
  * aio_wait_kick:
  * Wake up the main thread if it is waiting on AIO_WAIT_WHILE().  During
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 09/16] jobs: remove aiocontext locks since the functions are under BQL
Date: Wed, 5 Jan 2022 09:02:01 -0500
Message-Id: <20220105140208.365608-10-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow

In preparation for the job_lock/unlock patch, remove these AioContext
locks. The main reason these two locks are removed here is that they sit
inside a loop iterating over the jobs list. Once job_lock is added, it
will have to protect the whole loop, which would also wrap the AioContext
acquire/release. We don't want this: to avoid deadlocks, job_lock may only
be *wrapped by* the AioContext lock, never vice versa.
Signed-off-by: Emanuele Giuseppe Esposito
---
 blockdev.c | 4 ----
 job-qmp.c  | 4 ----
 2 files changed, 8 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 11fd651bde..ee35aff13a 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3707,15 +3707,11 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
 
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         BlockJobInfo *value;
-        AioContext *aio_context;
 
         if (block_job_is_internal(job)) {
             continue;
         }
-        aio_context = blk_get_aio_context(job->blk);
-        aio_context_acquire(aio_context);
         value = block_job_query(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_BlockJobInfoList(head);
             return NULL;
diff --git a/job-qmp.c b/job-qmp.c
index de4120a1d4..f6f9840436 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -173,15 +173,11 @@ JobInfoList *qmp_query_jobs(Error **errp)
 
     for (job = job_next_locked(NULL); job; job = job_next_locked(job)) {
         JobInfo *value;
-        AioContext *aio_context;
 
         if (job_is_internal(job)) {
             continue;
         }
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         value = job_query_single(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_JobInfoList(head);
             return NULL;
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 10/16] jobs: protect jobs with job_lock/unlock
Date: Wed, 5 Jan 2022 09:02:02 -0500
Message-Id:
 <20220105140208.365608-11-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow

Introduce the job locking mechanism through the whole job API,
following the comments and requirements of job-monitor (assume
lock is held) and job-driver (lock is not held).

job_{lock/unlock} is independent from real_job_{lock/unlock}.
Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*.

Signed-off-by: Emanuele Giuseppe Esposito
---
 block.c             |  18 +++---
 block/replication.c |   8 ++-
 blockdev.c          |  17 +++++-
 blockjob.c          |  64 ++++++++++++++-------
 job-qmp.c           |   2 +
 job.c               | 132 +++++++++++++++++++++++++++++++-------------
 monitor/qmp-cmds.c  |   6 +-
 qemu-img.c          |  41 ++++++++------
 8 files changed, 199 insertions(+), 89 deletions(-)

diff --git a/block.c b/block.c
index 8fcd525fa0..fac0759422 100644
--- a/block.c
+++ b/block.c
@@ -4976,7 +4976,9 @@ static void bdrv_close(BlockDriverState *bs)
 
 void bdrv_close_all(void)
 {
-    assert(job_next_locked(NULL) == NULL);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job_next_locked(NULL) == NULL);
+    }
     assert(qemu_in_main_thread());
 
     /* Drop references from requests still in flight, such as canceled block
@@ -6154,13 +6156,15 @@ XDbgBlockGraph *bdrv_get_xdbg_block_graph(Error **errp)
         }
     }
 
-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
-        GSList *el;
+    WITH_JOB_LOCK_GUARD() {
+        for (job = block_job_next(NULL); job; job = block_job_next(job)) {
+            GSList *el;
 
-        xdbg_graph_add_node(gr, job, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_JOB,
-                            job->job.id);
-        for (el = job->nodes; el; el = el->next) {
-            xdbg_graph_add_edge(gr, job, (BdrvChild *)el->data);
+            xdbg_graph_add_node(gr, job, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_JOB,
+                                job->job.id);
+            for (el = job->nodes; el; el = el->next) {
+                xdbg_graph_add_edge(gr, job, (BdrvChild *)el->data);
+            }
         }
     }
 
diff --git a/block/replication.c b/block/replication.c
index 5215c328c1..50ea778937 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -149,7 +149,9 @@ static void replication_close(BlockDriverState *bs)
     if (s->stage == BLOCK_REPLICATION_FAILOVER) {
         commit_job = &s->commit_job->job;
         assert(commit_job->aio_context == qemu_get_current_aio_context());
-        job_cancel_sync_locked(commit_job, false);
+        WITH_JOB_LOCK_GUARD() {
+            job_cancel_sync_locked(commit_job, false);
+        }
     }
 
     if (s->mode == REPLICATION_MODE_SECONDARY) {
@@ -726,7 +728,9 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
      * disk, secondary disk in backup_job_completed().
      */
     if (s->backup_job) {
-        job_cancel_sync_locked(&s->backup_job->job, true);
+        WITH_JOB_LOCK_GUARD() {
+            job_cancel_sync_locked(&s->backup_job->job, true);
+        }
     }
 
     if (!failover) {
diff --git a/blockdev.c b/blockdev.c
index ee35aff13a..099d57e0d2 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -155,6 +155,8 @@ void blockdev_mark_auto_del(BlockBackend *blk)
         return;
     }
 
+    JOB_LOCK_GUARD();
+
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
             AioContext *aio_context = job->job.aio_context;
@@ -1832,7 +1834,9 @@ static void drive_backup_abort(BlkActionState *common)
         aio_context = bdrv_get_aio_context(state->bs);
         aio_context_acquire(aio_context);
 
-        job_cancel_sync_locked(&state->job->job, true);
+        WITH_JOB_LOCK_GUARD() {
+            job_cancel_sync_locked(&state->job->job, true);
+        }
 
         aio_context_release(aio_context);
     }
@@ -1933,7 +1937,9 @@ static void blockdev_backup_abort(BlkActionState *common)
         aio_context = bdrv_get_aio_context(state->bs);
         aio_context_acquire(aio_context);
 
-        job_cancel_sync_locked(&state->job->job, true);
+        WITH_JOB_LOCK_GUARD() {
+            job_cancel_sync_locked(&state->job->job, true);
+        }
 
         aio_context_release(aio_context);
     }
@@ -2382,7 +2388,10 @@ exit:
     if (!has_props) {
         qapi_free_TransactionProperties(props);
     }
-    job_txn_unref_locked(block_job_txn);
+
+    WITH_JOB_LOCK_GUARD() {
+        job_txn_unref_locked(block_job_txn);
+    }
 }
 
 BlockDirtyBitmapSha256 *qmp_x_debug_block_dirty_bitmap_sha256(const char *node,
@@ -3705,6 +3714,8 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
     BlockJobInfoList *head = NULL, **tail = &head;
     BlockJob *job;
 
+    JOB_LOCK_GUARD();
+
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         BlockJobInfo *value;
 
diff --git a/blockjob.c b/blockjob.c
index ce356be51e..e00c8d31d5 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -88,7 +88,9 @@ static char *child_job_get_parent_desc(BdrvChild *c)
 static void child_job_drained_begin(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
-    job_pause_locked(&job->job);
+    WITH_JOB_LOCK_GUARD() {
+        job_pause_locked(&job->job);
+    }
 }
 
 static bool child_job_drained_poll(BdrvChild *c)
@@ -100,8 +102,10 @@ static bool child_job_drained_poll(BdrvChild *c)
     /* An inactive or completed job doesn't have any pending requests. Jobs
      * with !job->busy are either already paused or have a pause point after
      * being reentered, so no job driver code will run before they pause. */
-    if (!job->busy || job_is_completed_locked(job)) {
-        return false;
+    WITH_JOB_LOCK_GUARD() {
+        if (!job->busy || job_is_completed_locked(job)) {
+            return false;
+        }
     }
 
     /* Otherwise, assume that it isn't fully stopped yet, but allow the job to
@@ -116,7 +120,9 @@ static bool child_job_drained_poll(BdrvChild *c)
 static void child_job_drained_end(BdrvChild *c, int *drained_end_counter)
 {
     BlockJob *job = c->opaque;
-    job_resume_locked(&job->job);
+    WITH_JOB_LOCK_GUARD() {
+        job_resume_locked(&job->job);
+    }
 }
 
 static bool child_job_can_set_aio_ctx(BdrvChild *c, AioContext *ctx,
@@ -238,7 +244,13 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
 
 static void block_job_on_idle(Notifier *n, void *opaque)
 {
+    /*
+     * we can't kick with job_mutex held, but we also want
+     * to protect the notifier list.
+     */
+    job_unlock();
     aio_wait_kick();
+    job_lock();
 }
 
 bool block_job_is_internal(BlockJob *job)
@@ -278,7 +290,9 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     job->speed = speed;
 
     if (drv->set_speed) {
+        job_unlock();
         drv->set_speed(job, speed);
+        job_lock();
     }
 
     if (speed && speed <= old_speed) {
@@ -458,13 +472,15 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job->ready_notifier.notify = block_job_event_ready;
     job->idle_notifier.notify = block_job_on_idle;
 
-    notifier_list_add(&job->job.on_finalize_cancelled,
-                      &job->finalize_cancelled_notifier);
-    notifier_list_add(&job->job.on_finalize_completed,
-                      &job->finalize_completed_notifier);
-    notifier_list_add(&job->job.on_pending, &job->pending_notifier);
-    notifier_list_add(&job->job.on_ready, &job->ready_notifier);
-    notifier_list_add(&job->job.on_idle, &job->idle_notifier);
+    WITH_JOB_LOCK_GUARD() {
+        notifier_list_add(&job->job.on_finalize_cancelled,
+                          &job->finalize_cancelled_notifier);
+        notifier_list_add(&job->job.on_finalize_completed,
+                          &job->finalize_completed_notifier);
+        notifier_list_add(&job->job.on_pending, &job->pending_notifier);
+        notifier_list_add(&job->job.on_ready, &job->ready_notifier);
+        notifier_list_add(&job->job.on_idle, &job->idle_notifier);
+    }
 
     error_setg(&job->blocker, "block device is in use by block job: %s",
                job_type_str(&job->job));
@@ -477,11 +493,14 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     blk_set_disable_request_queuing(blk, true);
     blk_set_allow_aio_context_change(blk, true);
 
-    if (!block_job_set_speed(job, speed, errp)) {
-        job_early_fail(&job->job);
-        return NULL;
+    WITH_JOB_LOCK_GUARD() {
+        if (!block_job_set_speed(job, speed, errp)) {
+            job_early_fail_locked(&job->job);
+            return NULL;
+        }
     }
 
+
     return job;
 }
 
@@ -499,7 +518,9 @@ void block_job_user_resume(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
     assert(qemu_in_main_thread());
-    block_job_iostatus_reset(bjob);
+    WITH_JOB_LOCK_GUARD() {
+        block_job_iostatus_reset(bjob);
+    }
 }
 
 BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
@@ -532,10 +553,15 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
                                         action);
     }
     if (action == BLOCK_ERROR_ACTION_STOP) {
-        if (!job->job.user_paused) {
-            job_pause_locked(&job->job);
-            /* make the pause user visible, which will be resumed from QMP. */
-            job->job.user_paused = true;
+        WITH_JOB_LOCK_GUARD() {
+            if (!job->job.user_paused) {
+                job_pause_locked(&job->job);
+                /*
+                 * make the pause user visible, which will be
+                 * resumed from QMP.
+                 */
+                job->job.user_paused = true;
+            }
         }
         block_job_iostatus_set_err(job, error);
     }
diff --git a/job-qmp.c b/job-qmp.c
index f6f9840436..9fa14bf761 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -171,6 +171,8 @@ JobInfoList *qmp_query_jobs(Error **errp)
     JobInfoList *head = NULL, **tail = &head;
     Job *job;
 
+    JOB_LOCK_GUARD();
+
     for (job = job_next_locked(NULL); job; job = job_next_locked(job)) {
         JobInfo *value;
 
diff --git a/job.c b/job.c
index 2ee7233763..56722a5043 100644
--- a/job.c
+++ b/job.c
@@ -394,6 +394,8 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
 {
     Job *job;
 
+    JOB_LOCK_GUARD();
+
     if (job_id) {
         if (flags & JOB_INTERNAL) {
             error_setg(errp, "Cannot specify job ID for internal job");
@@ -467,7 +469,9 @@ void job_unref_locked(Job *job)
         assert(!job->txn);
 
         if (job->driver->free) {
+            job_unlock();
             job->driver->free(job);
+            job_lock();
         }
 
         QLIST_REMOVE(job, job_list);
@@ -551,11 +555,14 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
     timer_del(&job->sleep_timer);
     job->busy = true;
     real_job_unlock();
+    job_unlock();
     aio_co_enter(job->aio_context, job->co);
+    job_lock();
 }
 
 void job_enter(Job *job)
 {
+    JOB_LOCK_GUARD();
     job_enter_cond_locked(job, NULL);
 }
 
@@ -574,7 +581,9 @@ static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
     job->busy = false;
     job_event_idle(job);
     real_job_unlock();
+    job_unlock();
     qemu_coroutine_yield();
+    job_lock();
 
     /* Set by job_enter_cond_locked() before re-entering the coroutine. */
     assert(job->busy);
@@ -584,18 +593,23 @@ void coroutine_fn job_pause_point(Job *job)
 {
     assert(job && job_started(job));
 
+    job_lock();
     if (!job_should_pause(job)) {
+        job_unlock();
         return;
     }
-    if (job_is_cancelled(job)) {
+    if (job_is_cancelled_locked(job)) {
+        job_unlock();
         return;
     }
 
     if (job->driver->pause) {
+        job_unlock();
         job->driver->pause(job);
+        job_lock();
     }
 
-    if (job_should_pause(job) && !job_is_cancelled(job)) {
+    if (job_should_pause(job) && !job_is_cancelled_locked(job)) {
         JobStatus status = job->status;
         job_state_transition(job, status == JOB_STATUS_READY
                                   ? JOB_STATUS_STANDBY
@@ -605,6 +619,7 @@ void coroutine_fn job_pause_point(Job *job)
         job->paused = false;
         job_state_transition(job, status);
     }
+    job_unlock();
 
     if (job->driver->resume) {
         job->driver->resume(job);
@@ -613,15 +628,17 @@ void coroutine_fn job_pause_point(Job *job)
 
 void job_yield(Job *job)
 {
-    assert(job->busy);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->busy);
 
-    /* Check cancellation *before* setting busy = false, too! */
-    if (job_is_cancelled(job)) {
-        return;
-    }
+        /* Check cancellation *before* setting busy = false, too! */
+        if (job_is_cancelled_locked(job)) {
+            return;
+        }
 
-    if (!job_should_pause(job)) {
-        job_do_yield(job, -1);
+        if (!job_should_pause(job)) {
+            job_do_yield(job, -1);
+        }
     }
 
     job_pause_point(job);
@@ -629,21 +646,23 @@ void job_yield(Job *job)
 
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns)
 {
-    assert(job->busy);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job->busy);
 
-    /* Check cancellation *before* setting busy = false, too! */
-    if (job_is_cancelled(job)) {
-        return;
-    }
+        /* Check cancellation *before* setting busy = false, too! */
+        if (job_is_cancelled_locked(job)) {
+            return;
+        }
 
-    if (!job_should_pause(job)) {
-        job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);
+        if (!job_should_pause(job)) {
+            job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);
+        }
     }
 
     job_pause_point(job);
 }
 
-/* Assumes the block_job_mutex is held */
+/* Assumes the job_mutex is held */
 static bool job_timer_not_pending(Job *job)
 {
     return !timer_pending(&job->sleep_timer);
@@ -653,7 +672,7 @@ void job_pause_locked(Job *job)
 {
     job->pause_count++;
     if (!job->paused) {
-        job_enter(job);
+        job_enter_cond_locked(job, NULL);
     }
 }
 
@@ -699,7 +718,9 @@ void job_user_resume_locked(Job *job, Error **errp)
         return;
     }
     if (job->driver->user_resume) {
+        job_unlock();
         job->driver->user_resume(job);
+        job_lock();
     }
     job->user_paused = false;
     job_resume_locked(job);
@@ -753,7 +774,7 @@ static void job_conclude(Job *job)
 
 static void job_update_rc(Job *job)
 {
-    if (!job->ret && job_is_cancelled(job)) {
+    if (!job->ret && job_is_cancelled_locked(job)) {
         job->ret = -ECANCELED;
     }
     if (job->ret) {
@@ -769,7 +790,9 @@ static void job_commit(Job *job)
     assert(!job->ret);
     assert(qemu_in_main_thread());
     if (job->driver->commit) {
+        job_unlock();
         job->driver->commit(job);
+        job_lock();
     }
 }
 
@@ -778,7 +801,9 @@ static void job_abort(Job *job)
     assert(job->ret);
     assert(qemu_in_main_thread());
     if (job->driver->abort) {
+        job_unlock();
         job->driver->abort(job);
+        job_lock();
     }
 }
 
@@ -786,12 +811,15 @@ static void job_clean(Job *job)
 {
     assert(qemu_in_main_thread());
     if (job->driver->clean) {
+        job_unlock();
         job->driver->clean(job);
+        job_lock();
     }
 }
 
 static int job_finalize_single(Job *job)
 {
+    int job_ret;
     AioContext *ctx = job->aio_context;
 
     assert(job_is_completed_locked(job));
@@ -811,12 +839,15 @@ static int job_finalize_single(Job *job)
     aio_context_release(ctx);
 
     if (job->cb) {
-        job->cb(job->opaque, job->ret);
+        job_ret = job->ret;
+        job_unlock();
+        job->cb(job->opaque, job_ret);
+        job_lock();
     }
 
     /* Emit events only if we actually started */
     if (job_started(job)) {
-        if (job_is_cancelled(job)) {
+        if (job_is_cancelled_locked(job)) {
             job_event_cancelled(job);
         } else {
             job_event_completed(job);
@@ -832,7 +863,9 @@ static void job_cancel_async(Job *job, bool force)
 {
     assert(qemu_in_main_thread());
     if (job->driver->cancel) {
+        job_unlock();
         force = job->driver->cancel(job, force);
+        job_lock();
     } else {
         /* No .cancel() means the job will behave as if force-cancelled */
         force = true;
@@ -841,7 +874,9 @@ static void job_cancel_async(Job *job, bool force)
     if (job->user_paused) {
         /* Do not call job_enter here, the caller will handle it. */
         if (job->driver->user_resume) {
+            job_unlock();
             job->driver->user_resume(job);
+            job_lock();
         }
         job->user_paused = false;
         assert(job->pause_count > 0);
@@ -911,7 +946,7 @@ static void job_completed_txn_abort(Job *job)
         ctx = other_job->aio_context;
         aio_context_acquire(ctx);
         if (!job_is_completed_locked(other_job)) {
-            assert(job_cancel_requested(other_job));
+            assert(job_cancel_requested_locked(other_job));
             job_finish_sync_locked(other_job, NULL, NULL);
         }
         job_finalize_single(other_job);
@@ -930,13 +965,17 @@ static void job_completed_txn_abort(Job *job)
 
 static int job_prepare(Job *job)
 {
+    int ret;
     AioContext *ctx = job->aio_context;
     assert(qemu_in_main_thread());
 
     if (job->ret == 0 && job->driver->prepare) {
+        job_unlock();
         aio_context_acquire(ctx);
-        job->ret = job->driver->prepare(job);
+        ret = job->driver->prepare(job);
         aio_context_release(ctx);
+        job_lock();
+        job->ret = ret;
         job_update_rc(job);
     }
 
@@ -982,6 +1021,7 @@ static int job_transition_to_pending(Job *job)
 
 void job_transition_to_ready(Job *job)
 {
+    JOB_LOCK_GUARD();
     job_state_transition(job, JOB_STATUS_READY);
     job_event_ready(job);
 }
@@ -1031,6 +1071,7 @@ static void job_exit(void *opaque)
     Job *job = (Job *)opaque;
     AioContext *ctx;
 
+    JOB_LOCK_GUARD();
    job_ref_locked(job);
     aio_context_acquire(job->aio_context);
 
@@ -1061,13 +1102,17 @@ static void job_exit(void *opaque)
 static void coroutine_fn job_co_entry(void *opaque)
 {
     Job *job = opaque;
+    int ret;
 
     assert(job->aio_context == qemu_get_current_aio_context());
     assert(job && job->driver && job->driver->run);
     job_pause_point(job);
-    job->ret = job->driver->run(job, &job->err);
-    job->deferred_to_main_loop = true;
-    job->busy = true;
+    ret = job->driver->run(job, &job->err);
+    WITH_JOB_LOCK_GUARD() {
+        job->ret = ret;
+        job->deferred_to_main_loop = true;
+        job->busy = true;
+    }
     aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job);
 }
 
@@ -1083,16 +1128,20 @@ static int job_pre_run(Job *job)
 
 void job_start(Job *job)
 {
-    assert(job && !job_started(job) && job->paused &&
-           job->driver && job->driver->run);
-    job->co = qemu_coroutine_create(job_co_entry, job);
+    WITH_JOB_LOCK_GUARD() {
+        assert(job && !job_started(job) && job->paused &&
+               job->driver && job->driver->run);
+        job->co = qemu_coroutine_create(job_co_entry, job);
+    }
     if (job_pre_run(job)) {
         return;
     }
-    job->pause_count--;
-    job->busy = true;
-    job->paused = false;
-    job_state_transition(job, JOB_STATUS_RUNNING);
+    WITH_JOB_LOCK_GUARD() {
+        job->pause_count--;
+        job->busy = true;
+        job->paused = false;
+        job_state_transition(job, JOB_STATUS_RUNNING);
+    }
     aio_co_enter(job->aio_context, job->co);
 }
 
@@ -1116,11 +1165,11 @@ void job_cancel_locked(Job *job, bool force)
          * choose to call job_is_cancelled() to show that we invoke
          * job_completed_txn_abort() only for force-cancelled jobs.)
          */
-        if (job_is_cancelled(job)) {
+        if (job_is_cancelled_locked(job)) {
             job_completed_txn_abort(job);
         }
     } else {
-        job_enter(job);
+        job_enter_cond_locked(job, NULL);
     }
 }
 
@@ -1164,6 +1213,7 @@ void job_cancel_sync_all(void)
     Job *job;
     AioContext *aio_context;
 
+    JOB_LOCK_GUARD();
     while ((job = job_next_locked(NULL))) {
         aio_context = job->aio_context;
         aio_context_acquire(aio_context);
@@ -1185,13 +1235,15 @@ void job_complete_locked(Job *job, Error **errp)
     if (job_apply_verb_locked(job, JOB_VERB_COMPLETE, errp)) {
         return;
     }
-    if (job_cancel_requested(job) || !job->driver->complete) {
+    if (job_cancel_requested_locked(job) || !job->driver->complete) {
         error_setg(errp, "The active block job '%s' cannot be completed",
                    job->id);
         return;
     }
 
+    job_unlock();
     job->driver->complete(job, errp);
+    job_lock();
 }
 
 int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
@@ -1211,10 +1263,12 @@ int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
         return -EBUSY;
     }
 
-    AIO_WAIT_WHILE(job->aio_context,
-                   (job_enter(job), !job_is_completed_locked(job)));
+    job_unlock();
+    AIO_WAIT_WHILE(job->aio_context, (job_enter(job), !job_is_completed(job)));
+    job_lock();
 
-    ret = (job_is_cancelled(job) && job->ret == 0) ? -ECANCELED : job->ret;
+    ret = (job_is_cancelled_locked(job) && job->ret == 0) ?
+          -ECANCELED : job->ret;
     job_unref_locked(job);
     return ret;
 }
diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c
index 343353e27a..2f11d086a6 100644
--- a/monitor/qmp-cmds.c
+++ b/monitor/qmp-cmds.c
@@ -133,8 +133,10 @@ void qmp_cont(Error **errp)
         blk_iostatus_reset(blk);
     }
 
-    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
-        block_job_iostatus_reset(job);
+    WITH_JOB_LOCK_GUARD() {
+        for (job = block_job_next(NULL); job; job = block_job_next(job)) {
+            block_job_iostatus_reset(job);
+        }
     }
 
     /* Continuing after completed migration. Images have been inactivated to
diff --git a/qemu-img.c b/qemu-img.c
index 09f3b11eab..95e2e33e61 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -906,25 +906,30 @@ static void run_block_job(BlockJob *job, Error **errp)
     int ret = 0;
 
     aio_context_acquire(aio_context);
-    job_ref_locked(&job->job);
-    do {
-        float progress = 0.0f;
-        aio_poll(aio_context, true);
+    WITH_JOB_LOCK_GUARD() {
+        job_ref_locked(&job->job);
+        do {
+            float progress = 0.0f;
+            job_unlock();
+            aio_poll(aio_context, true);
+
+            progress_get_snapshot(&job->job.progress, &progress_current,
+                                  &progress_total);
+            if (progress_total) {
+                progress = (float)progress_current / progress_total * 100.f;
+            }
+            qemu_progress_print(progress, 0);
+            job_lock();
+        } while (!job_is_ready_locked(&job->job) &&
+                 !job_is_completed_locked(&job->job));
 
-        progress_get_snapshot(&job->job.progress, &progress_current,
-                              &progress_total);
-        if (progress_total) {
-            progress = (float)progress_current / progress_total * 100.f;
+        if (!job_is_completed_locked(&job->job)) {
+            ret = job_complete_sync_locked(&job->job, errp);
+        } else {
+            ret = job->job.ret;
         }
-        qemu_progress_print(progress, 0);
-    } while (!job_is_ready(&job->job) && !job_is_completed_locked(&job->job));
-
-    if (!job_is_completed_locked(&job->job)) {
-        ret = job_complete_sync_locked(&job->job, errp);
-    } else {
-        ret = job->job.ret;
+        job_unref_locked(&job->job);
     }
-    job_unref_locked(&job->job);
     aio_context_release(aio_context);
 
     /* publish completion progress only when success */
@@ -1077,7 +1082,9 @@ static int img_commit(int argc, char **argv)
         bdrv_ref(bs);
     }
 
-    job = block_job_get("commit");
+    WITH_JOB_LOCK_GUARD() {
+        job = block_job_get("commit");
+    }
     assert(job);
     run_block_job(job, &local_err);
     if (local_err) {
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 11/16] jobs: document all static functions and add _locked() suffix
Date: Wed, 5 Jan 2022 09:02:03 -0500
Message-Id: <20220105140208.365608-12-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
Now that we added the job_lock/unlock pairs, we can also rename all
static functions in job.c that are called with the job mutex held as
_locked(), and add a little comment on top. No functional change intended.

Signed-off-by: Emanuele Giuseppe Esposito
---
 blockjob.c |   8 ++
 job.c      | 243 +++++++++++++++++++++++++++++++----------------------
 2 files changed, 149 insertions(+), 102 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index e00c8d31d5..cf1f49f6c2 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -242,6 +242,7 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
     return 0;
 }
 
+/* Called with job_mutex lock held. */
 static void block_job_on_idle(Notifier *n, void *opaque)
 {
     /*
@@ -269,6 +270,7 @@ static bool job_timer_pending(Job *job)
     return timer_pending(&job->sleep_timer);
 }
 
+/* Called with job_mutex held. May temporarly release the lock. */
 bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
     const BlockJobDriver *drv = block_job_driver(job);
@@ -310,6 +312,7 @@ int64_t block_job_ratelimit_get_delay(BlockJob *job, uint64_t n)
     return ratelimit_calculate_delay(&job->limit, n);
 }
 
+/* Called with job_mutex lock held. */
 BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
 {
     BlockJobInfo *info;
@@ -355,6 +358,7 @@ static void block_job_iostatus_set_err(BlockJob *job, int error)
     }
 }
 
+/* Called with job_mutex lock held. */
 static void block_job_event_cancelled(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
@@ -374,6 +378,7 @@ static void block_job_event_cancelled(Notifier *n, void *opaque)
         job->speed);
 }
 
+/* Called with job_mutex lock held.
*/ static void block_job_event_completed(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -400,6 +405,7 @@ static void block_job_event_completed(Notifier *n, void= *opaque) msg); } =20 +/* Called with job_mutex lock held. */ static void block_job_event_pending(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -412,6 +418,7 @@ static void block_job_event_pending(Notifier *n, void *= opaque) job->job.id); } =20 +/* Called with job_mutex lock held. */ static void block_job_event_ready(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -504,6 +511,7 @@ void *block_job_create(const char *job_id, const BlockJ= obDriver *driver, return job; } =20 +/* Called with job_mutex lock held. */ void block_job_iostatus_reset(BlockJob *job) { assert(qemu_in_main_thread()); diff --git a/job.c b/job.c index 56722a5043..f16a4ef542 100644 --- a/job.c +++ b/job.c @@ -54,6 +54,7 @@ */ QemuMutex job_mutex; =20 +/* Protected by job_mutex */ static QLIST_HEAD(, Job) jobs =3D QLIST_HEAD_INITIALIZER(jobs); =20 /* Job State Transition Table */ @@ -129,7 +130,8 @@ JobTxn *job_txn_new(void) return txn; } =20 -static void job_txn_ref(JobTxn *txn) +/* Called with job_mutex held. */ +static void job_txn_ref_locked(JobTxn *txn) { txn->refcnt++; } @@ -151,10 +153,11 @@ void job_txn_add_job_locked(JobTxn *txn, Job *job) job->txn =3D txn; =20 QLIST_INSERT_HEAD(&txn->jobs, job, txn_list); - job_txn_ref(txn); + job_txn_ref_locked(txn); } =20 -static void job_txn_del_job(Job *job) +/* Called with job_mutex held. */ +static void job_txn_del_job_locked(Job *job) { if (job->txn) { QLIST_REMOVE(job, txn_list); @@ -163,17 +166,18 @@ static void job_txn_del_job(Job *job) } } =20 -static int job_txn_apply(Job *job, int fn(Job *)) +/* Called with job_mutex held. 
*/ +static int job_txn_apply_locked(Job *job, int fn(Job *)) { Job *other_job, *next; JobTxn *txn =3D job->txn; int rc =3D 0; =20 /* - * Similar to job_completed_txn_abort, we take each job's lock before - * applying fn, but since we assume that outer_ctx is held by the call= er, - * we need to release it here to avoid holding the lock twice - which = would - * break AIO_WAIT_WHILE from within fn. + * Similar to job_completed_txn_abort_locked, we take each job's lock + * before applying fn, but since we assume that outer_ctx is held by t= he + * caller, we need to release it here to avoid holding the lock + * twice - which would break AIO_WAIT_WHILE from within fn. */ job_ref_locked(job); aio_context_release(job->aio_context); @@ -199,7 +203,8 @@ bool job_is_internal(Job *job) return (job->id =3D=3D NULL); } =20 -static void job_state_transition(Job *job, JobStatus s1) +/* Called with job_mutex held. */ +static void job_state_transition_locked(Job *job, JobStatus s1) { JobStatus s0 =3D job->status; assert(s1 >=3D 0 && s1 < JOB_STATUS__MAX); @@ -355,7 +360,8 @@ static bool job_started(Job *job) return job->co; } =20 -static bool job_should_pause(Job *job) +/* Called with job_mutex held. */ +static bool job_should_pause_locked(Job *job) { return job->pause_count > 0; } @@ -381,6 +387,7 @@ Job *job_get_locked(const char *id) return NULL; } =20 +/* Called with job_mutex *not* held. 
*/ static void job_sleep_timer_cb(void *opaque) { Job *job =3D opaque; @@ -434,7 +441,7 @@ void *job_create(const char *job_id, const JobDriver *d= river, JobTxn *txn, notifier_list_init(&job->on_pending); notifier_list_init(&job->on_ready); =20 - job_state_transition(job, JOB_STATUS_CREATED); + job_state_transition_locked(job, JOB_STATUS_CREATED); aio_timer_init(qemu_get_aio_context(), &job->sleep_timer, QEMU_CLOCK_REALTIME, SCALE_NS, job_sleep_timer_cb, job); @@ -502,7 +509,7 @@ void job_progress_increase_remaining(Job *job, uint64_t= delta) * To be called when a cancelled job is finalised. * Called with job_mutex held. */ -static void job_event_cancelled(Job *job) +static void job_event_cancelled_locked(Job *job) { notifier_list_notify(&job->on_finalize_cancelled, job); } @@ -511,22 +518,25 @@ static void job_event_cancelled(Job *job) * To be called when a successfully completed job is finalised. * Called with job_mutex held. */ -static void job_event_completed(Job *job) +static void job_event_completed_locked(Job *job) { notifier_list_notify(&job->on_finalize_completed, job); } =20 -static void job_event_pending(Job *job) +/* Called with job_mutex held. */ +static void job_event_pending_locked(Job *job) { notifier_list_notify(&job->on_pending, job); } =20 -static void job_event_ready(Job *job) +/* Called with job_mutex held. */ +static void job_event_ready_locked(Job *job) { notifier_list_notify(&job->on_ready, job); } =20 -static void job_event_idle(Job *job) +/* Called with job_mutex held. */ +static void job_event_idle_locked(Job *job) { notifier_list_notify(&job->on_idle, job); } @@ -571,15 +581,18 @@ void job_enter(Job *job) * is allowed and cancels the timer. * * If @ns is (uint64_t) -1, no timer is scheduled and job_enter() must be - * called explicitly. */ -static void coroutine_fn job_do_yield(Job *job, uint64_t ns) + * called explicitly. + * + * Called with job_mutex held, but releases it temporarly. 
+ */ +static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns) { real_job_lock(); if (ns !=3D -1) { timer_mod(&job->sleep_timer, ns); } job->busy =3D false; - job_event_idle(job); + job_event_idle_locked(job); real_job_unlock(); job_unlock(); qemu_coroutine_yield(); @@ -594,7 +607,7 @@ void coroutine_fn job_pause_point(Job *job) assert(job && job_started(job)); =20 job_lock(); - if (!job_should_pause(job)) { + if (!job_should_pause_locked(job)) { job_unlock(); return; } @@ -609,15 +622,15 @@ void coroutine_fn job_pause_point(Job *job) job_lock(); } =20 - if (job_should_pause(job) && !job_is_cancelled_locked(job)) { + if (job_should_pause_locked(job) && !job_is_cancelled_locked(job)) { JobStatus status =3D job->status; - job_state_transition(job, status =3D=3D JOB_STATUS_READY + job_state_transition_locked(job, status =3D=3D JOB_STATUS_READY ? JOB_STATUS_STANDBY : JOB_STATUS_PAUSED); job->paused =3D true; - job_do_yield(job, -1); + job_do_yield_locked(job, -1); job->paused =3D false; - job_state_transition(job, status); + job_state_transition_locked(job, status); } job_unlock(); =20 @@ -636,8 +649,8 @@ void job_yield(Job *job) return; } =20 - if (!job_should_pause(job)) { - job_do_yield(job, -1); + if (!job_should_pause_locked(job)) { + job_do_yield_locked(job, -1); } } =20 @@ -654,8 +667,9 @@ void coroutine_fn job_sleep_ns(Job *job, int64_t ns) return; } =20 - if (!job_should_pause(job)) { - job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns); + if (!job_should_pause_locked(job)) { + job_do_yield_locked(job, + qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + n= s); } } =20 @@ -726,16 +740,17 @@ void job_user_resume_locked(Job *job, Error **errp) job_resume_locked(job); } =20 -static void job_do_dismiss(Job *job) +/* Called with job_mutex held. 
*/ +static void job_do_dismiss_locked(Job *job) { assert(job); job->busy =3D false; job->paused =3D false; job->deferred_to_main_loop =3D true; =20 - job_txn_del_job(job); + job_txn_del_job_locked(job); =20 - job_state_transition(job, JOB_STATUS_NULL); + job_state_transition_locked(job, JOB_STATUS_NULL); job_unref_locked(job); } =20 @@ -748,14 +763,14 @@ void job_dismiss_locked(Job **jobptr, Error **errp) return; } =20 - job_do_dismiss(job); + job_do_dismiss_locked(job); *jobptr =3D NULL; } =20 void job_early_fail_locked(Job *job) { assert(job->status =3D=3D JOB_STATUS_CREATED); - job_do_dismiss(job); + job_do_dismiss_locked(job); } =20 void job_early_fail(Job *job) @@ -764,15 +779,17 @@ void job_early_fail(Job *job) job_early_fail_locked(job); } =20 -static void job_conclude(Job *job) +/* Called with job_mutex held. */ +static void job_conclude_locked(Job *job) { - job_state_transition(job, JOB_STATUS_CONCLUDED); + job_state_transition_locked(job, JOB_STATUS_CONCLUDED); if (job->auto_dismiss || !job_started(job)) { - job_do_dismiss(job); + job_do_dismiss_locked(job); } } =20 -static void job_update_rc(Job *job) +/* Called with job_mutex held. 
*/ +static void job_update_rc_locked(Job *job) { if (!job->ret && job_is_cancelled_locked(job)) { job->ret =3D -ECANCELED; @@ -781,11 +798,12 @@ static void job_update_rc(Job *job) if (!job->err) { error_setg(&job->err, "%s", strerror(-job->ret)); } - job_state_transition(job, JOB_STATUS_ABORTING); + job_state_transition_locked(job, JOB_STATUS_ABORTING); } } =20 -static void job_commit(Job *job) +/* Called with job_mutex held, but releases it temporarly */ +static void job_commit_locked(Job *job) { assert(!job->ret); assert(qemu_in_main_thread()); @@ -796,7 +814,8 @@ static void job_commit(Job *job) } } =20 -static void job_abort(Job *job) +/* Called with job_mutex held, but releases it temporarly */ +static void job_abort_locked(Job *job) { assert(job->ret); assert(qemu_in_main_thread()); @@ -807,7 +826,8 @@ static void job_abort(Job *job) } } =20 -static void job_clean(Job *job) +/* Called with job_mutex held, but releases it temporarly */ +static void job_clean_locked(Job *job) { assert(qemu_in_main_thread()); if (job->driver->clean) { @@ -817,7 +837,8 @@ static void job_clean(Job *job) } } =20 -static int job_finalize_single(Job *job) +/* Called with job_mutex held, but releases it temporarly. 
*/ +static int job_finalize_single_locked(Job *job) { int job_ret; AioContext *ctx =3D job->aio_context; @@ -825,16 +846,16 @@ static int job_finalize_single(Job *job) assert(job_is_completed_locked(job)); =20 /* Ensure abort is called for late-transactional failures */ - job_update_rc(job); + job_update_rc_locked(job); =20 aio_context_acquire(ctx); =20 if (!job->ret) { - job_commit(job); + job_commit_locked(job); } else { - job_abort(job); + job_abort_locked(job); } - job_clean(job); + job_clean_locked(job); =20 aio_context_release(ctx); =20 @@ -848,18 +869,19 @@ static int job_finalize_single(Job *job) /* Emit events only if we actually started */ if (job_started(job)) { if (job_is_cancelled_locked(job)) { - job_event_cancelled(job); + job_event_cancelled_locked(job); } else { - job_event_completed(job); + job_event_completed_locked(job); } } =20 - job_txn_del_job(job); - job_conclude(job); + job_txn_del_job_locked(job); + job_conclude_locked(job); return 0; } =20 -static void job_cancel_async(Job *job, bool force) +/* Called with job_mutex held, but releases it temporarly. */ +static void job_cancel_async_locked(Job *job, bool force) { assert(qemu_in_main_thread()); if (job->driver->cancel) { @@ -897,7 +919,8 @@ static void job_cancel_async(Job *job, bool force) } } =20 -static void job_completed_txn_abort(Job *job) +/* Called with job_mutex held. */ +static void job_completed_txn_abort_locked(Job *job) { AioContext *ctx; JobTxn *txn =3D job->txn; @@ -910,12 +933,12 @@ static void job_completed_txn_abort(Job *job) return; } txn->aborting =3D true; - job_txn_ref(txn); + job_txn_ref_locked(txn); =20 /* * We can only hold the single job's AioContext lock while calling - * job_finalize_single() because the finalization callbacks can involve - * calls of AIO_WAIT_WHILE(), which could deadlock otherwise. + * job_finalize_single_locked() because the finalization callbacks can + * involve calls of AIO_WAIT_WHILE(), which could deadlock otherwise. 
* Note that the job's AioContext may change when it is finalized. */ job_ref_locked(job); @@ -930,10 +953,10 @@ static void job_completed_txn_abort(Job *job) aio_context_acquire(ctx); /* * This is a transaction: If one job failed, no result will ma= tter. - * Therefore, pass force=3Dtrue to terminate all other jobs as= quickly - * as possible. + * Therefore, pass force=3Dtrue to terminate all other jobs as + * quickly as possible. */ - job_cancel_async(other_job, true); + job_cancel_async_locked(other_job, true); aio_context_release(ctx); } } @@ -949,13 +972,13 @@ static void job_completed_txn_abort(Job *job) assert(job_cancel_requested_locked(other_job)); job_finish_sync_locked(other_job, NULL, NULL); } - job_finalize_single(other_job); + job_finalize_single_locked(other_job); aio_context_release(ctx); } =20 /* * Use job_ref_locked()/job_unref_locked() so we can read the AioConte= xt - * here even if the job went away during job_finalize_single(). + * here even if the job went away during job_finalize_single_locked(). */ aio_context_acquire(job->aio_context); job_unref_locked(job); @@ -963,7 +986,8 @@ static void job_completed_txn_abort(Job *job) job_txn_unref_locked(txn); } =20 -static int job_prepare(Job *job) +/* Called with job_mutex held, but releases it temporarly. */ +static int job_prepare_locked(Job *job) { int ret; AioContext *ctx =3D job->aio_context; @@ -976,28 +1000,30 @@ static int job_prepare(Job *job) aio_context_release(ctx); job_lock(); job->ret =3D ret; - job_update_rc(job); + job_update_rc_locked(job); } =20 return job->ret; } =20 -static int job_needs_finalize(Job *job) +/* Called with job_mutex held. */ +static int job_needs_finalize_locked(Job *job) { return !job->auto_finalize; } =20 -static void job_do_finalize(Job *job) +/* Called with job_mutex held. 
*/ +static void job_do_finalize_locked(Job *job) { int rc; assert(job && job->txn); =20 /* prepare the transaction to complete */ - rc =3D job_txn_apply(job, job_prepare); + rc =3D job_txn_apply_locked(job, job_prepare_locked); if (rc) { - job_completed_txn_abort(job); + job_completed_txn_abort_locked(job); } else { - job_txn_apply(job, job_finalize_single); + job_txn_apply_locked(job, job_finalize_single_locked); } } =20 @@ -1007,14 +1033,15 @@ void job_finalize_locked(Job *job, Error **errp) if (job_apply_verb_locked(job, JOB_VERB_FINALIZE, errp)) { return; } - job_do_finalize(job); + job_do_finalize_locked(job); } =20 -static int job_transition_to_pending(Job *job) +/* Called with job_mutex held. */ +static int job_transition_to_pending_locked(Job *job) { - job_state_transition(job, JOB_STATUS_PENDING); + job_state_transition_locked(job, JOB_STATUS_PENDING); if (!job->auto_finalize) { - job_event_pending(job); + job_event_pending_locked(job); } return 0; } @@ -1022,16 +1049,17 @@ static int job_transition_to_pending(Job *job) void job_transition_to_ready(Job *job) { JOB_LOCK_GUARD(); - job_state_transition(job, JOB_STATUS_READY); - job_event_ready(job); + job_state_transition_locked(job, JOB_STATUS_READY); + job_event_ready_locked(job); } =20 -static void job_completed_txn_success(Job *job) +/* Called with job_mutex held. 
*/ +static void job_completed_txn_success_locked(Job *job) { JobTxn *txn =3D job->txn; Job *other_job; =20 - job_state_transition(job, JOB_STATUS_WAITING); + job_state_transition_locked(job, JOB_STATUS_WAITING); =20 /* * Successful completion, see if there are other running jobs in this @@ -1044,28 +1072,32 @@ static void job_completed_txn_success(Job *job) assert(other_job->ret =3D=3D 0); } =20 - job_txn_apply(job, job_transition_to_pending); + job_txn_apply_locked(job, job_transition_to_pending_locked); =20 /* If no jobs need manual finalization, automatically do so */ - if (job_txn_apply(job, job_needs_finalize) =3D=3D 0) { - job_do_finalize(job); + if (job_txn_apply_locked(job, job_needs_finalize_locked) =3D=3D 0) { + job_do_finalize_locked(job); } } =20 -static void job_completed(Job *job) +/* Called with job_mutex held. */ +static void job_completed_locked(Job *job) { assert(job && job->txn && !job_is_completed_locked(job)); =20 - job_update_rc(job); + job_update_rc_locked(job); trace_job_completed(job, job->ret); if (job->ret) { - job_completed_txn_abort(job); + job_completed_txn_abort_locked(job); } else { - job_completed_txn_success(job); + job_completed_txn_success_locked(job); } } =20 -/** Useful only as a type shim for aio_bh_schedule_oneshot. */ +/** + * Useful only as a type shim for aio_bh_schedule_oneshot. + * Called with job_mutex *not* held. + */ static void job_exit(void *opaque) { Job *job =3D (Job *)opaque; @@ -1080,15 +1112,15 @@ static void job_exit(void *opaque) * drain block nodes, and if .drained_poll still returned true, we wou= ld * deadlock. */ job->busy =3D false; - job_event_idle(job); + job_event_idle_locked(job); =20 - job_completed(job); + job_completed_locked(job); =20 /* - * Note that calling job_completed can move the job to a different - * aio_context, so we cannot cache from above. job_txn_apply takes car= e of - * acquiring the new lock, and we ref/unref to avoid job_completed fre= eing - * the job underneath us. 
+ * Note that calling job_completed_locked can move the job to a differ= ent + * aio_context, so we cannot cache from above. job_txn_apply_locked ta= kes + * care of acquiring the new lock, and we ref/unref to avoid + * job_completed_locked freeing the job underneath us. */ ctx =3D job->aio_context; job_unref_locked(job); @@ -1098,6 +1130,8 @@ static void job_exit(void *opaque) /** * All jobs must allow a pause point before entering their job proper. This * ensures that jobs can be paused prior to being started, then resumed la= ter. + * + * Called with job_mutex *not* held. */ static void coroutine_fn job_co_entry(void *opaque) { @@ -1116,6 +1150,7 @@ static void coroutine_fn job_co_entry(void *opaque) aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job); } =20 +/* Called with job_mutex *not* held. */ static int job_pre_run(Job *job) { assert(qemu_in_main_thread()); @@ -1140,7 +1175,7 @@ void job_start(Job *job) job->pause_count--; job->busy =3D true; job->paused =3D false; - job_state_transition(job, JOB_STATUS_RUNNING); + job_state_transition_locked(job, JOB_STATUS_RUNNING); } aio_co_enter(job->aio_context, job->co); } @@ -1148,25 +1183,25 @@ void job_start(Job *job) void job_cancel_locked(Job *job, bool force) { if (job->status =3D=3D JOB_STATUS_CONCLUDED) { - job_do_dismiss(job); + job_do_dismiss_locked(job); return; } - job_cancel_async(job, force); + job_cancel_async_locked(job, force); if (!job_started(job)) { - job_completed(job); + job_completed_locked(job); } else if (job->deferred_to_main_loop) { /* - * job_cancel_async() ignores soft-cancel requests for jobs + * job_cancel_async_locked() ignores soft-cancel requests for jobs * that are already done (i.e. deferred to the main loop). We * have to check again whether the job is really cancelled. 
* (job_cancel_requested() and job_is_cancelled() are equivalent - * here, because job_cancel_async() will make soft-cancel + * here, because job_cancel_async_locked() will make soft-cancel * requests no-ops when deferred_to_main_loop is true. We * choose to call job_is_cancelled() to show that we invoke - * job_completed_txn_abort() only for force-cancelled jobs.) + * job_completed_txn_abort_locked() only for force-cancelled jobs.) */ if (job_is_cancelled_locked(job)) { - job_completed_txn_abort(job); + job_completed_txn_abort_locked(job); } } else { job_enter_cond_locked(job, NULL); @@ -1185,16 +1220,20 @@ void job_user_cancel_locked(Job *job, bool force, E= rror **errp) * A wrapper around job_cancel_locked() taking an Error ** parameter so * it may be used with job_finish_sync_locked() without the * need for (rather nasty) function pointer casts there. + * + * Called with job_mutex held. */ -static void job_cancel_err(Job *job, Error **errp) +static void job_cancel_err_locked(Job *job, Error **errp) { job_cancel_locked(job, false); } =20 /** - * Same as job_cancel_err(), but force-cancel. + * Same as job_cancel_err_locked(), but force-cancel. + * + * Called with job_mutex held. 
  */
-static void job_force_cancel_err(Job *job, Error **errp)
+static void job_force_cancel_err_locked(Job *job, Error **errp)
 {
     job_cancel_locked(job, true);
 }
@@ -1202,9 +1241,9 @@ static void job_force_cancel_err(Job *job, Error **errp)
 int job_cancel_sync_locked(Job *job, bool force)
 {
     if (force) {
-        return job_finish_sync_locked(job, &job_force_cancel_err, NULL);
+        return job_finish_sync_locked(job, &job_force_cancel_err_locked, NULL);
     } else {
-        return job_finish_sync_locked(job, &job_cancel_err, NULL);
+        return job_finish_sync_locked(job, &job_cancel_err_locked, NULL);
     }
 }
 
-- 
2.31.1

From nobody Fri May  3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 12/16] jobs: use job locks and helpers also in the unit tests
Date: Wed, 5 Jan 2022 09:02:04 -0500
Message-Id: <20220105140208.365608-13-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
Add missing job synchronization in the unit tests, with both explicit
locks and helpers.

Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*.
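[Editor's note] The locking convention this series relies on — a static helper carrying a _locked() suffix that requires the caller to hold job_mutex, plus a public wrapper that takes the lock itself — can be sketched outside QEMU as follows. This is an illustrative sketch only, not QEMU code: plain pthreads stand in for QEMU's QemuMutex and the JOB_LOCK_GUARD()/WITH_JOB_LOCK_GUARD() macros, and the Job struct and function bodies are simplified stand-ins.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for QEMU's global job_mutex (a QemuMutex in the real code). */
static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Minimal stand-in for the real Job struct. */
typedef struct Job {
    int pause_count;   /* protected by job_mutex */
} Job;

/* _locked suffix: the caller must already hold job_mutex. */
static bool job_should_pause_locked(Job *job)
{
    return job->pause_count > 0;
}

/* Public wrapper: takes the lock itself, so the caller must NOT hold it. */
static bool job_should_pause(Job *job)
{
    pthread_mutex_lock(&job_mutex);
    bool ret = job_should_pause_locked(job);
    pthread_mutex_unlock(&job_mutex);
    return ret;
}
```

The split keeps lock acquisition at the API boundary: internal callers that already hold job_mutex call the _locked variant directly (documented by the "Called with job_mutex held" comments added in this patch), while external callers use the wrapper and can never double-lock.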
Signed-off-by: Emanuele Giuseppe Esposito
---
 tests/unit/test-bdrv-drain.c     | 40 +++++++++++-----------
 tests/unit/test-block-iothread.c |  4 +++
 tests/unit/test-blockjob-txn.c   | 10 ++++++
 tests/unit/test-blockjob.c       | 57 +++++++++++++++++++++-----------
 4 files changed, 72 insertions(+), 39 deletions(-)

diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 3f344a0d0d..c03560e63d 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -941,61 +941,63 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
     g_assert_true(tjob->running);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin_unlocked(drain_type, drain_bs);
 
     if (drain_type == BDRV_DRAIN_ALL) {
         /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 2);
     } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 1);
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */
+    g_assert_true(job_get_paused(&job->job));
+    g_assert_false(job_get_busy(&job->job)); /* The job is paused */
 
     do_drain_end_unlocked(drain_type, drain_bs);
 
     if (use_iothread) {
         /* paused is reset in the I/O thread, wait for it */
-        while (job->job.paused) {
+        while (job_get_paused(&job->job)) {
             aio_poll(qemu_get_aio_context(), false);
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin_unlocked(drain_type, target);
 
     if (drain_type == BDRV_DRAIN_ALL) {
         /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 2);
     } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 1);
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */
+    g_assert_true(job_get_paused(&job->job));
+    g_assert_false(job_get_busy(&job->job)); /* The job is paused */
 
     do_drain_end_unlocked(drain_type, target);
 
     if (use_iothread) {
         /* paused is reset in the I/O thread, wait for it */
-        while (job->job.paused) {
+        while (job_get_paused(&job->job)) {
             aio_poll(qemu_get_aio_context(), false);
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     aio_context_acquire(ctx);
+    job_lock();
     ret = job_complete_sync_locked(&job->job, &error_abort);
+    job_unlock();
     g_assert_cmpint(ret, ==, (result == TEST_JOB_SUCCESS ? 0 : -EIO));
 
     if (use_iothread) {
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index 7e1b521d61..b9309beec2 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -456,7 +456,9 @@ static void test_attach_blockjob(void)
     }
 
     aio_context_acquire(ctx);
+    job_lock();
     job_complete_sync_locked(&tjob->common.job, &error_abort);
+    job_unlock();
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);
 
@@ -630,7 +632,9 @@ static void test_propagate_mirror(void)
                  BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
                  false, "filter_node", MIRROR_COPY_MODE_BACKGROUND,
                  &error_abort);
+    job_lock();
     job = job_get_locked("job0");
+    job_unlock();
     filter = bdrv_find_node("filter_node");
 
     /* Change the AioContext of src */
diff --git a/tests/unit/test-blockjob-txn.c b/tests/unit/test-blockjob-txn.c
index 5396fcef10..bd69076300 100644
--- a/tests/unit/test-blockjob-txn.c
+++ b/tests/unit/test-blockjob-txn.c
@@ -124,16 +124,20 @@ static void test_single_job(int expected)
     job = test_block_job_start(1, true, expected, &result, txn);
     job_start(&job->job);
 
+    job_lock();
     if (expected == -ECANCELED) {
         job_cancel_locked(&job->job, false);
     }
+    job_unlock();
 
     while (result == -EINPROGRESS) {
         aio_poll(qemu_get_aio_context(), true);
     }
     g_assert_cmpint(result, ==, expected);
 
+    job_lock();
     job_txn_unref_locked(txn);
+    job_unlock();
 }
 
 static void test_single_job_success(void)
@@ -168,6 +172,7 @@ static void test_pair_jobs(int expected1, int expected2)
     /* Release our reference now to trigger as many nice
      * use-after-free bugs as possible.
      */
+    job_lock();
     job_txn_unref_locked(txn);
 
     if (expected1 == -ECANCELED) {
@@ -176,6 +181,7 @@ static void test_pair_jobs(int expected1, int expected2)
     if (expected2 == -ECANCELED) {
         job_cancel_locked(&job2->job, false);
     }
+    job_unlock();
 
     while (result1 == -EINPROGRESS || result2 == -EINPROGRESS) {
         aio_poll(qemu_get_aio_context(), true);
@@ -227,7 +233,9 @@ static void test_pair_jobs_fail_cancel_race(void)
     job_start(&job1->job);
     job_start(&job2->job);
 
+    job_lock();
     job_cancel_locked(&job1->job, false);
+    job_unlock();
 
     /* Now make job2 finish before the main loop kicks jobs.  This simulates
      * the race between a pending kick and another job completing.
@@ -242,7 +250,9 @@
     g_assert_cmpint(result1, ==, -ECANCELED);
     g_assert_cmpint(result2, ==, -ECANCELED);
 
+    job_lock();
     job_txn_unref_locked(txn);
+    job_unlock();
 }
 
 int main(int argc, char **argv)
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index 2beed3623e..ec9128dbb5 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -211,8 +211,11 @@ static CancelJob *create_common(Job **pjob)
     bjob = mk_job(blk, "Steve", &test_cancel_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
+    job_lock();
     job_ref_locked(job);
     assert(job->status == JOB_STATUS_CREATED);
+    job_unlock();
+
     s = container_of(bjob, CancelJob, common);
     s->blk = blk;
 
@@ -230,6 +233,7 @@ static void cancel_common(CancelJob *s)
     ctx = job->job.aio_context;
     aio_context_acquire(ctx);
 
+    job_lock();
     job_cancel_sync_locked(&job->job, true);
     if (sts != JOB_STATUS_CREATED && sts != JOB_STATUS_CONCLUDED) {
         Job *dummy = &job->job;
@@ -237,6 +241,7 @@ static void cancel_common(CancelJob *s)
     }
     assert(job->job.status == JOB_STATUS_NULL);
     job_unref_locked(&job->job);
+    job_unlock();
     destroy_blk(blk);
 
     aio_context_release(ctx);
@@ -259,7 +264,7 @@ static void test_cancel_running(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     cancel_common(s);
 }
@@ -272,11 +277,13 @@ static void test_cancel_paused(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
+    job_lock();
     job_user_pause_locked(job, &error_abort);
+    job_unlock();
     job_enter(job);
-    assert(job->status == JOB_STATUS_PAUSED);
+    assert(job_get_status(job) == JOB_STATUS_PAUSED);
 
     cancel_common(s);
 }
@@ -289,11 +296,11 @@ static void test_cancel_ready(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
     cancel_common(s);
 }
@@ -306,15 +313,17 @@ static void test_cancel_standby(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
     job_user_pause_locked(job, &error_abort);
+    job_unlock();
     job_enter(job);
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert(job_get_status(job) == JOB_STATUS_STANDBY);
 
     cancel_common(s);
 }
@@ -327,20 +336,22 @@ static void test_cancel_pending(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
     job_complete_locked(job, &error_abort);
+    job_unlock();
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert(job_get_status(job) == JOB_STATUS_PENDING);
 
     cancel_common(s);
 }
@@ -353,25 +364,29 @@ static void test_cancel_concluded(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
    job_complete_locked(job, &error_abort);
+    job_unlock();
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert(job_get_status(job) == JOB_STATUS_PENDING);
 
     aio_context_acquire(job->aio_context);
+    job_lock();
     job_finalize_locked(job, &error_abort);
+    job_unlock();
     aio_context_release(job->aio_context);
-    assert(job->status == JOB_STATUS_CONCLUDED);
+    assert(job_get_status(job) == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
 }
@@ -459,22 +474,23 @@ static void test_complete_in_standby(void)
     bjob = mk_job(blk, "job", &test_yielding_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
-    assert(job->status == JOB_STATUS_CREATED);
+    assert(job_get_status(job) == JOB_STATUS_CREATED);
 
     /* Wait for the job to become READY */
     job_start(job);
     aio_context_acquire(ctx);
-    AIO_WAIT_WHILE(ctx, job->status != JOB_STATUS_READY);
+    AIO_WAIT_WHILE(ctx, job_get_status(job) != JOB_STATUS_READY);
    aio_context_release(ctx);
 
     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert(job_get_status(job) == JOB_STATUS_STANDBY);
     /* Lock the IO thread to prevent the job from being run */
     aio_context_acquire(ctx);
     /* This will schedule the job to resume it */
     bdrv_drain_all_end();
 
+    job_lock();
     /* But the job cannot run, so it will remain on standby */
     assert(job->status == JOB_STATUS_STANDBY);
 
@@ -489,6 +505,7 @@ static void test_complete_in_standby(void)
     assert(job->status == JOB_STATUS_CONCLUDED);
 
     job_dismiss_locked(&job, &error_abort);
+    job_unlock();
 
     destroy_blk(blk);
     aio_context_release(ctx);
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 13/16] jobs: add job lock in find_* functions
Date: Wed, 5 Jan 2022 09:02:05 -0500
Message-Id: <20220105140208.365608-14-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

Both blockdev.c and job-qmp.c have TOC/TOU conditions, because they first search for the job and then perform an action on it. Therefore, we need to do the search + action under the same job mutex critical section.

Note: at this stage, job_{lock/unlock} and the job lock guard macros are *nop*.

Signed-off-by: Emanuele Giuseppe Esposito
---
 blockdev.c | 14 +++++++++++++-
 job-qmp.c  | 13 ++++++++++++-
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 099d57e0d2..1fbd9b9e04 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3305,7 +3305,10 @@ out:
     aio_context_release(aio_context);
 }
 
-/* Get a block job using its ID and acquire its AioContext */
+/*
+ * Get a block job using its ID and acquire its AioContext.
+ * Returns with job_lock held on success.
+ */
 static BlockJob *find_block_job(const char *id, AioContext **aio_context,
                                 Error **errp)
 {
@@ -3314,12 +3317,14 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
     assert(id != NULL);
 
     *aio_context = NULL;
+    job_lock();
 
     job = block_job_get(id);
 
     if (!job) {
         error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
                   "Block job '%s' not found", id);
+        job_unlock();
         return NULL;
     }
 
@@ -3340,6 +3345,7 @@ void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 
     block_job_set_speed(job, speed, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_cancel(const char *device,
@@ -3366,6 +3372,7 @@ void qmp_block_job_cancel(const char *device,
     job_user_cancel_locked(&job->job, force, errp);
 out:
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_pause(const char *device, Error **errp)
@@ -3380,6 +3387,7 @@ void qmp_block_job_pause(const char *device, Error **errp)
     trace_qmp_block_job_pause(job);
     job_user_pause_locked(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
@@ -3394,6 +3402,7 @@ void qmp_block_job_resume(const char *device, Error **errp)
     trace_qmp_block_job_resume(job);
     job_user_resume_locked(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
@@ -3408,6 +3417,7 @@ void qmp_block_job_complete(const char *device, Error **errp)
     trace_qmp_block_job_complete(job);
     job_complete_locked(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
@@ -3431,6 +3441,7 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     aio_context = blk_get_aio_context(job->blk);
     job_unref_locked(&job->job);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
@@ -3447,6 +3458,7 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     job = &bjob->job;
     job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_change_backing_file(const char *device,
diff --git a/job-qmp.c b/job-qmp.c
index 9fa14bf761..615e056fc4 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -29,16 +29,21 @@
 #include "qapi/error.h"
 #include "trace/trace-root.h"
 
-/* Get a job using its ID and acquire its AioContext */
+/*
+ * Get a block job using its ID and acquire its AioContext.
+ * Returns with job_lock held on success.
+ */
 static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
 {
     Job *job;
 
     *aio_context = NULL;
+    job_lock();
 
     job = job_get_locked(id);
     if (!job) {
         error_setg(errp, "Job not found");
+        job_unlock();
         return NULL;
     }
 
@@ -60,6 +65,7 @@ void qmp_job_cancel(const char *id, Error **errp)
     trace_qmp_job_cancel(job);
     job_user_cancel_locked(job, true, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_pause(const char *id, Error **errp)
@@ -74,6 +80,7 @@ void qmp_job_pause(const char *id, Error **errp)
     trace_qmp_job_pause(job);
     job_user_pause_locked(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_resume(const char *id, Error **errp)
@@ -88,6 +95,7 @@ void qmp_job_resume(const char *id, Error **errp)
     trace_qmp_job_resume(job);
     job_user_resume_locked(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_complete(const char *id, Error **errp)
@@ -102,6 +110,7 @@ void qmp_job_complete(const char *id, Error **errp)
     trace_qmp_job_complete(job);
     job_complete_locked(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
@@ -125,6 +134,7 @@ void qmp_job_finalize(const char *id, Error **errp)
     aio_context = job->aio_context;
     job_unref_locked(job);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
@@ -139,6 +149,7 @@ void qmp_job_dismiss(const char *id, Error **errp)
     trace_qmp_job_dismiss(job);
     job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 static JobInfo *job_query_single(Job *job, Error **errp)
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 14/16] job.c: use job_get_aio_context()
Date: Wed, 5 Jan 2022 09:02:06 -0500
Message-Id: <20220105140208.365608-15-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

If the job->aio_context is accessed under job_mutex, leave it as is. Otherwise use job_get_aio_context().

Signed-off-by: Emanuele Giuseppe Esposito
---
 block/commit.c                   |  4 ++--
 block/mirror.c                   |  2 +-
 block/replication.c              |  2 +-
 blockjob.c                       | 18 +++++++++++-------
 job.c                            |  8 ++++----
 tests/unit/test-block-iothread.c |  6 +++---
 6 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/block/commit.c b/block/commit.c
index f639eb49c5..961b57edf0 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -369,7 +369,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         goto fail;
     }
 
-    s->base = blk_new(s->common.job.aio_context,
+    s->base = blk_new(job_get_aio_context(&s->common.job),
                       base_perms,
                       BLK_PERM_CONSISTENT_READ
                       | BLK_PERM_GRAPH_MOD
@@ -382,7 +382,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     s->base_bs = base;
 
     /* Required permissions are already taken with block_job_add_bdrv() */
-    s->top = blk_new(s->common.job.aio_context, 0, BLK_PERM_ALL);
+    s->top = blk_new(job_get_aio_context(&s->common.job), 0, BLK_PERM_ALL);
     ret = blk_insert_bs(s->top, top, errp);
     if (ret < 0) {
         goto fail;
diff --git a/block/mirror.c b/block/mirror.c
index 41450df55c..72b4367b4e 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1743,7 +1743,7 @@ static BlockJob *mirror_start_job(
         target_perms |= BLK_PERM_GRAPH_MOD;
     }
 
-    s->target = blk_new(s->common.job.aio_context,
+    s->target = blk_new(job_get_aio_context(&s->common.job),
                         target_perms, target_shared_perms);
     ret = blk_insert_bs(s->target, target, errp);
     if (ret < 0) {
diff --git a/block/replication.c b/block/replication.c
index 50ea778937..68018948b9 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -148,8 +148,8 @@ static void replication_close(BlockDriverState *bs)
     }
     if (s->stage == BLOCK_REPLICATION_FAILOVER) {
         commit_job = &s->commit_job->job;
-        assert(commit_job->aio_context == qemu_get_current_aio_context());
         WITH_JOB_LOCK_GUARD() {
+            assert(commit_job->aio_context == qemu_get_current_aio_context());
             job_cancel_sync_locked(commit_job, false);
         }
     }
diff --git a/blockjob.c b/blockjob.c
index cf1f49f6c2..468ba735c5 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -155,14 +155,16 @@ static void child_job_set_aio_ctx(BdrvChild *c, AioContext *ctx,
         bdrv_set_aio_context_ignore(sibling->bs, ctx, ignore);
     }
 
-    job->job.aio_context = ctx;
+    WITH_JOB_LOCK_GUARD() {
+        job->job.aio_context = ctx;
+    }
 }
 
 static AioContext *child_job_get_parent_aio_context(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
 
-    return job->job.aio_context;
+    return job_get_aio_context(&job->job);
 }
 
 static const BdrvChildClass child_job = {
@@ -218,19 +220,21 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
 {
     BdrvChild *c;
     bool need_context_ops;
+    AioContext *job_aiocontext;
     assert(qemu_in_main_thread());
 
     bdrv_ref(bs);
 
-    need_context_ops = bdrv_get_aio_context(bs) != job->job.aio_context;
+    job_aiocontext = job_get_aio_context(&job->job);
+    need_context_ops = bdrv_get_aio_context(bs) != job_aiocontext;
 
-    if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
-        aio_context_release(job->job.aio_context);
+    if (need_context_ops && job_aiocontext != qemu_get_aio_context()) {
+        aio_context_release(job_aiocontext);
     }
     c = bdrv_root_attach_child(bs, name, &child_job, 0, perm, shared_perm, job, errp);
-    if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
-        aio_context_acquire(job->job.aio_context);
+    if (need_context_ops && job_aiocontext != qemu_get_aio_context()) {
+        aio_context_acquire(job_aiocontext);
     }
     if (c == NULL) {
         return -EPERM;
diff --git a/job.c b/job.c
index f16a4ef542..8a5b710d9b 100644
--- a/job.c
+++ b/job.c
@@ -566,7 +566,7 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
     job->busy = true;
     real_job_unlock();
     job_unlock();
-    aio_co_enter(job->aio_context, job->co);
+    aio_co_enter(job_get_aio_context(job), job->co);
     job_lock();
 }
 
@@ -1138,7 +1138,6 @@ static void coroutine_fn job_co_entry(void *opaque)
     Job *job = opaque;
     int ret;
 
-    assert(job->aio_context == qemu_get_current_aio_context());
     assert(job && job->driver && job->driver->run);
     job_pause_point(job);
     ret = job->driver->run(job, &job->err);
@@ -1177,7 +1176,7 @@ void job_start(Job *job)
         job->paused = false;
         job_state_transition_locked(job, JOB_STATUS_RUNNING);
     }
-    aio_co_enter(job->aio_context, job->co);
+    aio_co_enter(job_get_aio_context(job), job->co);
 }
 
 void job_cancel_locked(Job *job, bool force)
@@ -1303,7 +1302,8 @@ int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
     }
 
     job_unlock();
-    AIO_WAIT_WHILE(job->aio_context, (job_enter(job), !job_is_completed(job)));
+    AIO_WAIT_WHILE(job_get_aio_context(job),
+                   (job_enter(job), !job_is_completed(job)));
     job_lock();
 
     ret = (job_is_cancelled_locked(job) && job->ret == 0) ?
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index b9309beec2..addcb5846b 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -379,7 +379,7 @@ static int coroutine_fn test_job_run(Job *job, Error **errp)
     job_transition_to_ready(&s->common.job);
     while (!s->should_complete) {
         s->n++;
-        g_assert(qemu_get_current_aio_context() == job->aio_context);
+        g_assert(qemu_get_current_aio_context() == job_get_aio_context(job));
 
         /* Avoid job_sleep_ns() because it marks the job as !busy.  We want to
          * emulate some actual activity (probably some I/O) here so that the
@@ -390,7 +390,7 @@ static int coroutine_fn test_job_run(Job *job, Error **errp)
         job_pause_point(&s->common.job);
     }
 
-    g_assert(qemu_get_current_aio_context() == job->aio_context);
+    g_assert(qemu_get_current_aio_context() == job_get_aio_context(job));
     return 0;
 }
 
@@ -642,7 +642,7 @@ static void test_propagate_mirror(void)
     g_assert(bdrv_get_aio_context(src) == ctx);
     g_assert(bdrv_get_aio_context(target) == ctx);
     g_assert(bdrv_get_aio_context(filter) == ctx);
-    g_assert(job->aio_context == ctx);
+    g_assert(job_get_aio_context(job) == ctx);
 
     /* Change the AioContext of target */
     aio_context_acquire(ctx);
-- 
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 15/16] job.c: enable job lock/unlock and remove Aiocontext locks
Date: Wed, 5 Jan 2022 09:02:07 -0500
Message-Id: <20220105140208.365608-16-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
Content-Type: text/plain; charset="utf-8"

Change job_{lock/unlock} and the lock guard macros to use job_mutex. Now that they are no longer nops, remove the AioContext locks to avoid deadlocks.
Therefore:
- where possible, remove the AioContext lock/unlock pair entirely;
- where the lock also protects other code, shrink the locked section as
  much as possible, leaving the job API calls outside it.

There is only one JobDriver callback, ->free(), that assumes the
AioContext lock is held (because it calls bdrv_unref()), so for now keep
that one under the AioContext lock.

Also remove real_job_{lock/unlock}(), as they are replaced by the public
functions.

Signed-off-by: Emanuele Giuseppe Esposito
---
 blockdev.c                       | 65 ++++-----------------------
 include/qemu/job.h               | 11 +----
 job-qmp.c                        | 41 ++++-------------
 job.c                            | 76 +++-----------------------------
 tests/unit/test-bdrv-drain.c     |  4 +-
 tests/unit/test-block-iothread.c |  2 +-
 tests/unit/test-blockjob.c       | 13 ++----
 7 files changed, 31 insertions(+), 181 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 1fbd9b9e04..ebc14daa86 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -159,12 +159,7 @@ void blockdev_mark_auto_del(BlockBackend *blk)

     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
-            AioContext *aio_context = job->job.aio_context;
-            aio_context_acquire(aio_context);
-
             job_cancel_locked(&job->job, false);
-
-            aio_context_release(aio_context);
         }
     }

@@ -1829,16 +1824,9 @@ static void drive_backup_abort(BlkActionState *common)
    DriveBackupState *state = DO_UPCAST(DriveBackupState, common, common);

    if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
        WITH_JOB_LOCK_GUARD() {
            job_cancel_sync_locked(&state->job->job, true);
        }
-
-        aio_context_release(aio_context);
    }
 }

@@ -1932,16 +1920,9 @@ static void blockdev_backup_abort(BlkActionState *common)
    BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common);

    if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
        WITH_JOB_LOCK_GUARD() {
            job_cancel_sync_locked(&state->job->job, true);
        }
-
-        aio_context_release(aio_context);
    }
 }

@@ -3305,18 +3286,13 @@ out:
     aio_context_release(aio_context);
 }

-/*
- * Get a block job using its ID and acquire its AioContext.
- * Returns with job_lock held on success.
- */
-static BlockJob *find_block_job(const char *id, AioContext **aio_context,
-                                Error **errp)
+/* Get a block job using its ID. Returns with job_lock held on success */
+static BlockJob *find_block_job(const char *id, Error **errp)
 {
     BlockJob *job;

     assert(id != NULL);

-    *aio_context = NULL;
     job_lock();

     job = block_job_get(id);
@@ -3328,31 +3304,25 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
         return NULL;
     }

-    *aio_context = blk_get_aio_context(job->blk);
-    aio_context_acquire(*aio_context);
-
     return job;
 }

 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);

     if (!job) {
         return;
     }

     block_job_set_speed(job, speed, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_cancel(const char *device, bool has_force, bool force,
                           Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);

     if (!job) {
         return;
@@ -3371,14 +3341,12 @@ void qmp_block_job_cancel(const char *device,
     trace_qmp_block_job_cancel(job);
     job_user_cancel_locked(&job->job, force, errp);
 out:
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_pause(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);

     if (!job) {
         return;
@@ -3386,14 +3354,12 @@ void qmp_block_job_pause(const char *device, Error **errp)

     trace_qmp_block_job_pause(job);
     job_user_pause_locked(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_resume(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);

     if (!job) {
         return;
@@ -3401,14 +3367,12 @@ void qmp_block_job_resume(const char *device, Error **errp)

     trace_qmp_block_job_resume(job);
     job_user_resume_locked(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_complete(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);

     if (!job) {
         return;
@@ -3416,14 +3380,12 @@ void qmp_block_job_complete(const char *device, Error **errp)

     trace_qmp_block_job_complete(job);
     job_complete_locked(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(id, &aio_context, errp);
+    BlockJob *job = find_block_job(id, errp);

     if (!job) {
         return;
@@ -3433,21 +3395,13 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     job_ref_locked(&job->job);
     job_finalize_locked(&job->job, errp);

-    /*
-     * Job's context might have changed via job_finalize_locked
-     * (and job_txn_apply automatically acquires the new one),
-     * so make sure we release the correct one.
-     */
-    aio_context = blk_get_aio_context(job->blk);
     job_unref_locked(&job->job);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *bjob = find_block_job(id, &aio_context, errp);
+    BlockJob *bjob = find_block_job(id, errp);
     Job *job;

     if (!bjob) {
@@ -3457,7 +3411,6 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
     job_dismiss_locked(&job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

diff --git a/include/qemu/job.h b/include/qemu/job.h
index c95f9fa8d1..602ee56ae6 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -326,9 +326,9 @@ typedef enum JobCreateFlags {

 extern QemuMutex job_mutex;

-#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+#define JOB_LOCK_GUARD() QEMU_LOCK_GUARD(&job_mutex)

-#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+#define WITH_JOB_LOCK_GUARD() WITH_QEMU_LOCK_GUARD(&job_mutex)

 /**
  * job_lock:
@@ -667,8 +667,6 @@ void job_user_cancel_locked(Job *job, bool force, Error **errp);
  *
  * Returns the return value from the job if the job actually completed
  * during the call, or -ECANCELED if it was canceled.
- *
- * Callers must hold the AioContext lock of job->aio_context.
  */
 int job_cancel_sync_locked(Job *job, bool force);

@@ -692,9 +690,6 @@ void job_cancel_sync_all(void);
  * function).
  *
  * Returns the return value from the job.
- *
- * Callers must hold the AioContext lock of job->aio_context.
- *
  * Called between job_lock and job_unlock.
  */
 int job_complete_sync_locked(Job *job, Error **errp);
@@ -726,8 +721,6 @@ void job_dismiss_locked(Job **job, Error **errp);
 * Returns 0 if the job is successfully completed, -ECANCELED if the job was
 * cancelled before completing, and -errno in other error cases.
 *
- * Callers must hold the AioContext lock of job->aio_context.
- *
  * Called between job_lock and job_unlock.
  */
 int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
diff --git a/job-qmp.c b/job-qmp.c
index 615e056fc4..858b3a28f5 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -29,15 +29,11 @@
 #include "qapi/error.h"
 #include "trace/trace-root.h"

-/*
- * Get a block job using its ID and acquire its AioContext.
- * Returns with job_lock held on success.
- */
-static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
+/* Get a job using its ID. Returns with job_lock held on success. */
+static Job *find_job(const char *id, Error **errp)
 {
     Job *job;

-    *aio_context = NULL;
     job_lock();

     job = job_get_locked(id);
@@ -47,16 +43,12 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
         return NULL;
     }

-    *aio_context = job->aio_context;
-    aio_context_acquire(*aio_context);
-
     return job;
 }

 void qmp_job_cancel(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -64,14 +56,12 @@ void qmp_job_cancel(const char *id, Error **errp)

     trace_qmp_job_cancel(job);
     job_user_cancel_locked(job, true, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_job_pause(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -79,14 +69,12 @@ void qmp_job_pause(const char *id, Error **errp)

     trace_qmp_job_pause(job);
     job_user_pause_locked(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_job_resume(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -94,14 +82,12 @@ void qmp_job_resume(const char *id, Error **errp)

     trace_qmp_job_resume(job);
     job_user_resume_locked(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_job_complete(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -109,14 +95,12 @@ void qmp_job_complete(const char *id, Error **errp)

     trace_qmp_job_complete(job);
     job_complete_locked(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -126,21 +110,13 @@ void qmp_job_finalize(const char *id, Error **errp)
     job_ref_locked(job);
     job_finalize_locked(job, errp);

-    /*
-     * Job's context might have changed via job_finalize_locked
-     * (and job_txn_apply automatically acquires the new one),
-     * so make sure we release the correct one.
-     */
-    aio_context = job->aio_context;
     job_unref_locked(job);
-    aio_context_release(aio_context);
     job_unlock();
 }

 void qmp_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);

     if (!job) {
         return;
@@ -148,7 +124,6 @@ void qmp_job_dismiss(const char *id, Error **errp)

     trace_qmp_job_dismiss(job);
     job_dismiss_locked(&job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }

diff --git a/job.c b/job.c
index 8a5b710d9b..9fa0f34565 100644
--- a/job.c
+++ b/job.c
@@ -98,21 +98,11 @@ struct JobTxn {
 };

 void job_lock(void)
-{
-    /* nop */
-}
-
-void job_unlock(void)
-{
-    /* nop */
-}
-
-static void real_job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }

-static void real_job_unlock(void)
+void job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -180,7 +170,6 @@ static int job_txn_apply_locked(Job *job, int fn(Job *))
      * twice - which would break AIO_WAIT_WHILE from within fn.
      */
     job_ref_locked(job);
-    aio_context_release(job->aio_context);

     QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
         rc = fn(other_job);
@@ -189,11 +178,6 @@ static int job_txn_apply_locked(Job *job, int fn(Job *))
         }
     }

-    /*
-     * Note that job->aio_context might have been changed by calling fn, so we
-     * can't use a local variable to cache it.
-     */
-    aio_context_acquire(job->aio_context);
     job_unref_locked(job);
     return rc;
 }
@@ -477,7 +461,10 @@ void job_unref_locked(Job *job)

         if (job->driver->free) {
             job_unlock();
+            /* FIXME: aiocontext lock is required because cb calls blk_unref */
+            aio_context_acquire(job_get_aio_context(job));
             job->driver->free(job);
+            aio_context_release(job_get_aio_context(job));
             job_lock();
         }

@@ -550,21 +537,17 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
         return;
     }

-    real_job_lock();
     if (job->busy) {
-        real_job_unlock();
         return;
     }

     if (fn && !fn(job)) {
-        real_job_unlock();
         return;
     }

     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    real_job_unlock();
     job_unlock();
     aio_co_enter(job_get_aio_context(job), job->co);
     job_lock();
@@ -587,13 +570,11 @@ void job_enter(Job *job)
  */
 static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
 {
-    real_job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
     }
     job->busy = false;
     job_event_idle_locked(job);
-    real_job_unlock();
     job_unlock();
     qemu_coroutine_yield();
     job_lock();
@@ -922,7 +903,6 @@ static void job_cancel_async_locked(Job *job, bool force)
 /* Called with job_mutex held.
  */
 static void job_completed_txn_abort_locked(Job *job)
 {
-    AioContext *ctx;
     JobTxn *txn = job->txn;
     Job *other_job;

@@ -935,54 +915,28 @@ static void job_completed_txn_abort_locked(Job *job)
     txn->aborting = true;
     job_txn_ref_locked(txn);

-    /*
-     * We can only hold the single job's AioContext lock while calling
-     * job_finalize_single_locked() because the finalization callbacks can
-     * involve calls of AIO_WAIT_WHILE(), which could deadlock otherwise.
-     * Note that the job's AioContext may change when it is finalized.
-     */
-    job_ref_locked(job);
-    aio_context_release(job->aio_context);
-
     /* Other jobs are effectively cancelled by us, set the status for
      * them; this job, however, may or may not be cancelled, depending
      * on the caller, so leave it. */
     QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
         if (other_job != job) {
-            ctx = other_job->aio_context;
-            aio_context_acquire(ctx);
             /*
              * This is a transaction: If one job failed, no result will matter.
              * Therefore, pass force=true to terminate all other jobs as
              * quickly as possible.
              */
             job_cancel_async_locked(other_job, true);
-            aio_context_release(ctx);
         }
     }
     while (!QLIST_EMPTY(&txn->jobs)) {
         other_job = QLIST_FIRST(&txn->jobs);
-        /*
-         * The job's AioContext may change, so store it in @ctx so we
-         * release the same context that we have acquired before.
-         */
-        ctx = other_job->aio_context;
-        aio_context_acquire(ctx);
         if (!job_is_completed_locked(other_job)) {
             assert(job_cancel_requested_locked(other_job));
             job_finish_sync_locked(other_job, NULL, NULL);
         }
         job_finalize_single_locked(other_job);
-        aio_context_release(ctx);
     }

-    /*
-     * Use job_ref_locked()/job_unref_locked() so we can read the AioContext
-     * here even if the job went away during job_finalize_single_locked().
-     */
-    aio_context_acquire(job->aio_context);
-    job_unref_locked(job);
-
     job_txn_unref_locked(txn);
 }

@@ -1101,11 +1055,7 @@ static void job_completed_locked(Job *job)
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
-    AioContext *ctx;
-
     JOB_LOCK_GUARD();
-    job_ref_locked(job);
-    aio_context_acquire(job->aio_context);

     /* This is a lie, we're not quiescent, but still doing the completion
      * callbacks. However, completion callbacks tend to involve operations that
@@ -1115,16 +1065,6 @@ static void job_exit(void *opaque)
     job_event_idle_locked(job);

     job_completed_locked(job);
-
-    /*
-     * Note that calling job_completed_locked can move the job to a different
-     * aio_context, so we cannot cache from above. job_txn_apply_locked takes
-     * care of acquiring the new lock, and we ref/unref to avoid
-     * job_completed_locked freeing the job underneath us.
-     */
-    ctx = job->aio_context;
-    job_unref_locked(job);
-    aio_context_release(ctx);
 }

 /**
@@ -1249,14 +1189,10 @@ int job_cancel_sync_locked(Job *job, bool force)
 void job_cancel_sync_all(void)
 {
     Job *job;
-    AioContext *aio_context;

     JOB_LOCK_GUARD();
     while ((job = job_next_locked(NULL))) {
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         job_cancel_sync_locked(job, true);
-        aio_context_release(aio_context);
     }
 }

@@ -1302,8 +1238,8 @@ int job_finish_sync_locked(Job *job, void (*finish)(Job *, Error **errp),
     }

     job_unlock();
-    AIO_WAIT_WHILE(job_get_aio_context(job),
-                   (job_enter(job), !job_is_completed(job)));
+    AIO_WAIT_WHILE_UNLOCKED(job_get_aio_context(job),
+                            (job_enter(job), !job_is_completed(job)));
     job_lock();

     ret = (job_is_cancelled_locked(job) && job->ret == 0) ?
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index c03560e63d..dae207e24e 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -928,9 +928,9 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         tjob->prepare_ret = -EIO;
         break;
     }
+    aio_context_release(ctx);

     job_start(&job->job);
-    aio_context_release(ctx);

     if (use_iothread) {
         /* job_co_entry() is run in the I/O thread, wait for the actual job
@@ -994,12 +994,12 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
     g_assert_false(job_get_paused(&job->job));
     g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */

-    aio_context_acquire(ctx);
     job_lock();
     ret = job_complete_sync_locked(&job->job, &error_abort);
     job_unlock();
     g_assert_cmpint(ret, ==, (result == TEST_JOB_SUCCESS ? 0 : -EIO));

+    aio_context_acquire(ctx);
     if (use_iothread) {
         blk_set_aio_context(blk_src, qemu_get_aio_context(), &error_abort);
         assert(blk_get_aio_context(blk_target) == qemu_get_aio_context());
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index addcb5846b..e09e440342 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -455,10 +455,10 @@ static void test_attach_blockjob(void)
         aio_poll(qemu_get_aio_context(), false);
     }

-    aio_context_acquire(ctx);
     job_lock();
     job_complete_sync_locked(&tjob->common.job, &error_abort);
     job_unlock();
+    aio_context_acquire(ctx);
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);

diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index ec9128dbb5..c926db7b5d 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -228,10 +228,6 @@ static void cancel_common(CancelJob *s)
     BlockJob *job = &s->common;
     BlockBackend *blk = s->blk;
     JobStatus sts = job->job.status;
-    AioContext *ctx;
-
-    ctx = job->job.aio_context;
-    aio_context_acquire(ctx);

     job_lock();
     job_cancel_sync_locked(&job->job, true);
@@ -244,7 +240,6 @@ static void cancel_common(CancelJob *s)
     job_unlock();
     destroy_blk(blk);

-    aio_context_release(ctx);
 }

 static void test_cancel_created(void)
@@ -381,11 +376,9 @@ static void test_cancel_concluded(void)
     aio_poll(qemu_get_aio_context(), true);
     assert(job_get_status(job) == JOB_STATUS_PENDING);

-    aio_context_acquire(job->aio_context);
     job_lock();
     job_finalize_locked(job, &error_abort);
     job_unlock();
-    aio_context_release(job->aio_context);
     assert(job_get_status(job) == JOB_STATUS_CONCLUDED);

     cancel_common(s);
@@ -478,9 +471,7 @@ static void test_complete_in_standby(void)

     /* Wait for the job to become READY */
     job_start(job);
-    aio_context_acquire(ctx);
-    AIO_WAIT_WHILE(ctx, job_get_status(job) != JOB_STATUS_READY);
-    aio_context_release(ctx);
+    AIO_WAIT_WHILE_UNLOCKED(ctx, job_get_status(job) != JOB_STATUS_READY);

     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
@@ -498,6 +489,7 @@ static void test_complete_in_standby(void)
     job_complete_locked(job, &error_abort);

     /* The test is done now, clean up.
      */
+    aio_context_release(ctx);
     job_finish_sync_locked(job, NULL, &error_abort);
     assert(job->status == JOB_STATUS_PENDING);

@@ -507,6 +499,7 @@ static void test_complete_in_standby(void)
     job_dismiss_locked(&job, &error_abort);
     job_unlock();

+    aio_context_acquire(ctx);
     destroy_blk(blk);
     aio_context_release(ctx);
     iothread_join(iothread);
--
2.31.1

From nobody Fri May 3 03:08:45 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [PATCH v3 16/16] block_job_query: remove atomic read
Date: Wed, 5 Jan 2022 09:02:08 -0500
Message-Id: <20220105140208.365608-17-eesposit@redhat.com>
In-Reply-To: <20220105140208.365608-1-eesposit@redhat.com>
References: <20220105140208.365608-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang,
 Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster,
 qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini,
 John Snow
Content-Type: text/plain; charset="utf-8"

The atomic read here had no clear purpose: job.busy is protected by the
job lock, and the whole function is called with job_mutex held, so just
drop the atomic access.

Signed-off-by: Emanuele Giuseppe Esposito
---
 blockjob.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 468ba735c5..d1d8808a56 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -335,13 +335,13 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     info = g_new0(BlockJobInfo, 1);
     info->type = g_strdup(job_type_str(&job->job));
     info->device = g_strdup(job->job.id);
-    info->busy = qatomic_read(&job->job.busy);
+    info->busy = job->job.busy;
     info->paused = job->job.pause_count > 0;
     info->offset = progress_current;
     info->len = progress_total;
     info->speed = job->speed;
     info->io_status = job->iostatus;
-    info->ready = job_is_ready(&job->job),
+    info->ready = job_is_ready_locked(&job->job),
     info->status = job->job.status;
     info->auto_finalize = job->job.auto_finalize;
     info->auto_dismiss = job->job.auto_dismiss;
--
2.31.1