From nobody Sun May 19 14:40:13 2024
Delivered-To: importer@patchew.org
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 01/14] job.c: make job_lock/unlock public
Date: Thu, 4 Nov 2021 10:53:21 -0400
Message-Id: <20211104145334.1346363-2-eesposit@redhat.com>
In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com>
References: <20211104145334.1346363-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf,
 Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
 Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org,
 Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Content-Type: text/plain; charset="utf-8"

The job mutex will be used to protect the job struct elements and the
job list, replacing the AioContext locks.

Right now, use a single shared lock for all jobs in order to keep
things simple.  Once the AioContext lock is gone, we can introduce
per-job locks.

To simplify the switch from the AioContext lock to the job lock,
introduce *nop* lock/unlock functions and macros.  Once everything is
protected by the job lock, we can add the mutex and remove the
AioContext lock.

Since the name job_mutex is already in use, add the static helpers
_job_lock/_job_unlock.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
 include/qemu/job.h | 18 ++++++++++++++++++
 job.c              | 39 +++++++++++++++++++++++++++------------
 2 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 7e9e59f4b8..ccf7826426 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -297,6 +297,24 @@ typedef enum JobCreateFlags {
     JOB_MANUAL_DISMISS = 0x04,
 } JobCreateFlags;
 
+/**
+ * job_lock:
+ *
+ * Take the mutex protecting the list of jobs and their status.
+ * Most functions called by the monitor need to call job_lock
+ * and job_unlock manually.  On the other hand, functions called
+ * by the block jobs themselves and by the block layer will take the
+ * lock for you.
+ */
+void job_lock(void);
+
+/**
+ * job_unlock:
+ *
+ * Release the mutex protecting the list of jobs and their status.
+ */
+void job_unlock(void);
+
 /**
  * Allocate and return a new job transaction. Jobs can be added to the
  * transaction using job_txn_add_job().
diff --git a/job.c b/job.c
index 94b142684f..0e4dacf028 100644
--- a/job.c
+++ b/job.c
@@ -32,6 +32,12 @@
 #include "trace/trace-root.h"
 #include "qapi/qapi-events-job.h"
 
+/*
+ * job_mutex protects the jobs list, but also makes the
+ * struct job fields thread-safe.
+ */
+static QemuMutex job_mutex;
+
 static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
 
 /* Job State Transition Table */
@@ -74,17 +80,26 @@ struct JobTxn {
     int refcnt;
 };
 
-/* Right now, this mutex is only needed to synchronize accesses to job->busy
- * and job->sleep_timer, such as concurrent calls to job_do_yield and
- * job_enter. */
-static QemuMutex job_mutex;
+#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+
+#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+
+void job_lock(void)
+{
+    /* nop */
+}
+
+void job_unlock(void)
+{
+    /* nop */
+}
 
-static void job_lock(void)
+static void _job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void job_unlock(void)
+static void _job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -449,21 +464,21 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
         return;
     }
 
-    job_lock();
+    _job_lock();
     if (job->busy) {
-        job_unlock();
+        _job_unlock();
         return;
     }
 
     if (fn && !fn(job)) {
-        job_unlock();
+        _job_unlock();
         return;
     }
 
     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    job_unlock();
+    _job_unlock();
     aio_co_enter(job->aio_context, job->co);
 }
 
@@ -480,13 +495,13 @@ void job_enter(Job *job)
  * called explicitly. */
 static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
 {
-    job_lock();
+    _job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
     }
     job->busy = false;
     job_event_idle(job);
-    job_unlock();
+    _job_unlock();
     qemu_coroutine_yield();
 
     /* Set by job_enter_cond() before re-entering the coroutine.
     */

-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
Delivered-To: importer@patchew.org
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 02/14] job.h: categorize fields in struct Job
Date: Thu, 4 Nov 2021 10:53:22 -0400
Message-Id: <20211104145334.1346363-3-eesposit@redhat.com>
In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com>
References: <20211104145334.1346363-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng,
 Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
 Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org,
 Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Content-Type: text/plain; charset="utf-8"

Categorize the fields in struct Job to indicate which ones need to be
protected by the job mutex and which don't.

Signed-off-by: Emanuele Giuseppe Esposito
---
 include/qemu/job.h | 57 +++++++++++++++++++++++++++-------------------
 1 file changed, 34 insertions(+), 23 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index ccf7826426..f7036ac6b3 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -40,27 +40,52 @@ typedef struct JobTxn JobTxn;
  * Long-running operation.
  */
 typedef struct Job {
+
+    /* Fields set at initialization (job_create), and never modified */
+
     /** The ID of the job. May be NULL for internal jobs. */
     char *id;
 
-    /** The type of this job. */
+    /**
+     * The type of this job.
+     * All callbacks are called with job_mutex *not* held.
+     */
     const JobDriver *driver;
 
-    /** Reference count of the block job */
-    int refcnt;
-
-    /** Current state; See @JobStatus for details. */
-    JobStatus status;
-
     /** AioContext to run the job coroutine in */
     AioContext *aio_context;
 
     /**
      * The coroutine that executes the job.  If not NULL, it is reentered when
      * busy is false and the job is cancelled.
+     * Initialized in job_start()
      */
     Coroutine *co;
 
+    /** True if this job should automatically finalize itself */
+    bool auto_finalize;
+
+    /** True if this job should automatically dismiss itself */
+    bool auto_dismiss;
+
+    /** The completion function that will be called when the job completes. */
+    BlockCompletionFunc *cb;
+
+    /** The opaque value that is passed to the completion function.
     */
+    void *opaque;
+
+    /* ProgressMeter API is thread-safe */
+    ProgressMeter progress;
+
+
+    /** Protected by job_mutex */
+
+    /** Reference count of the block job */
+    int refcnt;
+
+    /** Current state; See @JobStatus for details. */
+    JobStatus status;
+
     /**
      * Timer that is used by @job_sleep_ns. Accessed under job_mutex (in
      * job.c).
@@ -76,7 +101,7 @@ typedef struct Job {
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by job_enter(). There may still be I/O or event loop activity
-     * pending. Accessed under block_job_mutex (in blockjob.c).
+     * pending. Accessed under job_mutex.
      *
      * When the job is deferred to the main loop, busy is true as long as the
      * bottom half is still pending.
@@ -112,14 +137,6 @@ typedef struct Job {
     /** Set to true when the job has deferred work to the main loop. */
     bool deferred_to_main_loop;
 
-    /** True if this job should automatically finalize itself */
-    bool auto_finalize;
-
-    /** True if this job should automatically dismiss itself */
-    bool auto_dismiss;
-
-    ProgressMeter progress;
-
     /**
      * Return code from @run and/or @prepare callback(s).
      * Not final until the job has reached the CONCLUDED status.
@@ -134,12 +151,6 @@ typedef struct Job {
      */
     Error *err;
 
-    /** The completion function that will be called when the job completes. */
-    BlockCompletionFunc *cb;
-
-    /** The opaque value that is passed to the completion function. */
-    void *opaque;
-
     /** Notifiers called when a cancelled job is finalised */
     NotifierList on_finalize_cancelled;
 
@@ -167,6 +178,7 @@ typedef struct Job {
 
 /**
  * Callbacks and other information about a Job driver.
+ * All callbacks are invoked with job_mutex *not* held.
  */
 struct JobDriver {
 
@@ -460,7 +472,6 @@ void job_yield(Job *job);
  */
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns);
 
-
 /** Returns the JobType of a given Job.
 */
JobType job_type(const Job *job);

-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
Delivered-To: importer@patchew.org
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 03/14] job.h: define locked functions
Date: Thu, 4 Nov 2021 10:53:23 -0400
Message-Id: <20211104145334.1346363-4-eesposit@redhat.com>
In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com>
References: <20211104145334.1346363-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang,
 Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster,
 qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini,
 John Snow
Content-Type: text/plain; charset="utf-8"

These functions assume that the job lock is held by the caller, to
avoid TOCTOU (time-of-check/time-of-use) races.

Also introduce additional _locked helpers, which will be useful once
the job_mutex is applied globally.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*s.

Signed-off-by: Emanuele Giuseppe Esposito
---
 include/qemu/job.h | 66 ++++++++++++++++++++++++++++++++++++++++++----
 job.c              | 18 +++++++++++--
 2 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index f7036ac6b3..c7a6bcad1b 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -355,6 +355,8 @@ void job_txn_unref(JobTxn *txn);
  * the reference that is automatically grabbed here.
  *
  * If @txn is NULL, the function does nothing.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_txn_add_job(JobTxn *txn, Job *job);
 
@@ -377,12 +379,16 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
 /**
  * Add a reference to Job refcnt, it will be decreased with job_unref, and then
  * be freed if it comes to be the last reference.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_ref(Job *job);
 
 /**
  * Release a reference that was previously acquired with job_ref() or
  * job_create(). If it's the last reference to the object, it will be freed.
+ *
+ * Called between job_lock and job_unlock.
 */
 void job_unref(Job *job);
 
@@ -429,6 +435,8 @@ void job_event_completed(Job *job);
  * Conditionally enter the job coroutine if the job is ready to run, not
  * already busy and fn() returns true. fn() is called while under the job_lock
  * critical section.
+ *
+ * Called between job_lock and job_unlock, but it releases the lock temporarily.
  */
 void job_enter_cond(Job *job, bool(*fn)(Job *job));
 
@@ -490,34 +498,52 @@ bool job_is_cancelled(Job *job);
  */
 bool job_cancel_requested(Job *job);
 
-/** Returns whether the job is in a completed state. */
+/**
+ * Returns whether the job is in a completed state.
+ * Called between job_lock and job_unlock.
+ */
 bool job_is_completed(Job *job);
 
 /** Returns whether the job is ready to be completed. */
 bool job_is_ready(Job *job);
 
+/** Same as job_is_ready(), but assumes job_lock is held. */
+bool job_is_ready_locked(Job *job);
+
 /**
  * Request @job to pause at the next pause point. Must be paired with
  * job_resume(). If the job is supposed to be resumed by user action, call
  * job_user_pause() instead.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_pause(Job *job);
 
-/** Resumes a @job paused with job_pause. */
+/**
+ * Resumes a @job paused with job_pause.
+ * Called between job_lock and job_unlock.
+ */
 void job_resume(Job *job);
 
 /**
  * Asynchronously pause the specified @job.
  * Do not allow a resume until a matching call to job_user_resume.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_user_pause(Job *job, Error **errp);
 
-/** Returns true if the job is user-paused. */
+/**
+ * Returns true if the job is user-paused.
+ * Called between job_lock and job_unlock.
+ */
 bool job_user_paused(Job *job);
 
 /**
  * Resume the specified @job.
  * Must be paired with a preceding job_user_pause.
+ *
+ * Called between job_lock and job_unlock.
 */
 void job_user_resume(Job *job, Error **errp);
 
@@ -526,6 +552,8 @@ void job_user_resume(Job *job, Error **errp);
  * first one if @job is %NULL.
  *
  * Returns the requested job, or %NULL if there are no more jobs left.
+ *
+ * Called between job_lock and job_unlock.
  */
 Job *job_next(Job *job);
 
@@ -533,6 +561,8 @@ Job *job_next(Job *job);
  * Get the job identified by @id (which must not be %NULL).
  *
  * Returns the requested job, or %NULL if it doesn't exist.
+ *
+ * Called between job_lock and job_unlock.
  */
 Job *job_get(const char *id);
 
@@ -540,27 +570,39 @@ Job *job_get(const char *id);
 * Check whether the verb @verb can be applied to @job in its current state.
 * Returns 0 if the verb can be applied; otherwise errp is set and -EPERM
 * returned.
+ *
+ * Called between job_lock and job_unlock.
 */
 int job_apply_verb(Job *job, JobVerb verb, Error **errp);
 
 /** The @job could not be started, free it. */
 void job_early_fail(Job *job);
 
+/** Same as job_early_fail(), but assumes job_lock is held. */
+void job_early_fail_locked(Job *job);
+
 /** Moves the @job from RUNNING to READY */
 void job_transition_to_ready(Job *job);
 
-/** Asynchronously complete the specified @job. */
+/**
+ * Asynchronously complete the specified @job.
+ * Called between job_lock and job_unlock, but it releases the lock temporarily.
+ */
 void job_complete(Job *job, Error **errp);
 
 /**
  * Asynchronously cancel the specified @job. If @force is true, the job should
  * be cancelled immediately without waiting for a consistent state.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_cancel(Job *job, bool force);
 
 /**
  * Cancels the specified job like job_cancel(), but may refuse to do so if the
  * operation isn't meaningful in the current state of the job.
+ *
+ * Called between job_lock and job_unlock.
 */
 void job_user_cancel(Job *job, bool force, Error **errp);
 
@@ -577,7 +619,13 @@ void job_user_cancel(Job *job, bool force, Error **errp);
  */
 int job_cancel_sync(Job *job, bool force);
 
-/** Synchronously force-cancels all jobs using job_cancel_sync(). */
+/**
+ * Synchronously force-cancels all jobs using job_cancel_sync().
+ *
+ * Called with job_lock *not* held, unlike most other APIs consumed
+ * by the monitor!  This is primarily to avoid adding unnecessary lock-unlock
+ * patterns in the caller.
+ */
 void job_cancel_sync_all(void);
 
 /**
@@ -593,6 +641,8 @@ void job_cancel_sync_all(void);
  * Returns the return value from the job.
  *
  * Callers must hold the AioContext lock of job->aio_context.
+ *
+ * Called between job_lock and job_unlock.
  */
 int job_complete_sync(Job *job, Error **errp);
 
@@ -603,12 +653,16 @@ int job_complete_sync(Job *job, Error **errp);
  * FIXME: Make the below statement universally true:
  * For jobs that support the manual workflow mode, all graph changes that occur
  * as a result will occur after this command and before a successful reply.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_finalize(Job *job, Error **errp);
 
 /**
  * Remove the concluded @job from the query list and resets the passed pointer
  * to %NULL. Returns an error if the job is not actually concluded.
+ *
+ * Called between job_lock and job_unlock.
  */
 void job_dismiss(Job **job, Error **errp);
 
@@ -620,6 +674,8 @@ void job_dismiss(Job **job, Error **errp);
 * cancelled before completing, and -errno in other error cases.
 *
 * Callers must hold the AioContext lock of job->aio_context.
+ *
+ * Called between job_lock and job_unlock.
 */
 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp);
 
diff --git a/job.c b/job.c
index 0e4dacf028..e393c1222f 100644
--- a/job.c
+++ b/job.c
@@ -242,7 +242,8 @@ bool job_cancel_requested(Job *job)
     return job->cancelled;
 }
 
-bool job_is_ready(Job *job)
+/* Called with job_mutex held. */
+bool job_is_ready_locked(Job *job)
 {
     switch (job->status) {
     case JOB_STATUS_UNDEFINED:
@@ -264,6 +265,13 @@
     return false;
 }
 
+/* Called with job_mutex lock *not* held */
+bool job_is_ready(Job *job)
+{
+    JOB_LOCK_GUARD();
+    return job_is_ready_locked(job);
+}
+
 bool job_is_completed(Job *job)
 {
     switch (job->status) {
@@ -659,12 +667,18 @@ void job_dismiss(Job **jobptr, Error **errp)
     *jobptr = NULL;
 }
 
-void job_early_fail(Job *job)
+void job_early_fail_locked(Job *job)
 {
     assert(job->status == JOB_STATUS_CREATED);
     job_do_dismiss(job);
 }
 
+void job_early_fail(Job *job)
+{
+    JOB_LOCK_GUARD();
+    job_early_fail_locked(job);
+}
+
 static void job_conclude(Job *job)
 {
     job_state_transition(job, JOB_STATUS_CONCLUDED);

-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
Delivered-To: importer@patchew.org
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 04/14] job.h: define unlocked functions
Date: Thu, 4 Nov 2021 10:53:24 -0400
Message-Id: <20211104145334.1346363-5-eesposit@redhat.com>
In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com>
References: <20211104145334.1346363-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang,
 Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster,
 qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini,
 John Snow
Content-Type: text/plain; charset="utf-8"

All these functions assume that the lock is not held, and acquire it
internally.

They will be useful once job_lock is applied globally, as they allow
callers to access job struct fields without worrying about the job
lock.

Also update the comments in blockjob.c (and move them into job.c).

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are *nop*s.
Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- include/qemu/job.h | 21 +++++++++++ blockjob.c | 20 ----------- job.c | 88 ++++++++++++++++++++++++++++++++++++++++++++-- 3 files changed, 107 insertions(+), 22 deletions(-) diff --git a/include/qemu/job.h b/include/qemu/job.h index c7a6bcad1b..d34c55dad0 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -679,4 +679,25 @@ void job_dismiss(Job **job, Error **errp); */ int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp); =20 +/** Enters the @job if it is not paused */ +void job_enter_not_paused(Job *job); + +/** returns @job->ret */ +bool job_has_failed(Job *job); + +/** Returns the @job->status */ +JobStatus job_get_status(Job *job); + +/** Returns the @job->pause_count */ +int job_get_pause_count(Job *job); + +/** Returns @job->paused */ +bool job_get_paused(Job *job); + +/** Returns @job->busy */ +bool job_get_busy(Job *job); + +/** Return true if @job not paused and not cancelled */ +bool job_not_paused_nor_cancelled(Job *job); + #endif diff --git a/blockjob.c b/blockjob.c index 4982f6a2b5..53c1e9c406 100644 --- a/blockjob.c +++ b/blockjob.c @@ -36,21 +36,6 @@ #include "qemu/main-loop.h" #include "qemu/timer.h" =20 -/* - * The block job API is composed of two categories of functions. - * - * The first includes functions used by the monitor. The monitor is - * peculiar in that it accesses the block job list with block_job_get, and - * therefore needs consistency across block_job_get and the actual operati= on - * (e.g. block_job_set_speed). The consistency is achieved with - * aio_context_acquire/release. These functions are declared in blockjob.= h. - * - * The second includes functions used by the block job drivers and sometim= es - * by the core block layer. These do not care about locking, because the - * whole coroutine runs under the AioContext lock, and are declared in - * blockjob_int.h. 
- */ - static bool is_block_job(Job *job) { return job_type(job) =3D=3D JOB_TYPE_BACKUP || @@ -433,11 +418,6 @@ static void block_job_event_ready(Notifier *n, void *o= paque) } =20 =20 -/* - * API for block job drivers and the block layer. These functions are - * declared in blockjob_int.h. - */ - void *block_job_create(const char *job_id, const BlockJobDriver *driver, JobTxn *txn, BlockDriverState *bs, uint64_t perm, uint64_t shared_perm, int64_t speed, int flags, diff --git a/job.c b/job.c index e393c1222f..bd36207021 100644 --- a/job.c +++ b/job.c @@ -32,6 +32,23 @@ #include "trace/trace-root.h" #include "qapi/qapi-events-job.h" =20 +/* + * The job API is composed of two categories of functions. + * + * The first includes functions used by the monitor. The monitor is + * peculiar in that it accesses the block job list with job_get, and + * therefore needs consistency across job_get and the actual operation + * (e.g. job_user_cancel). To achieve this consistency, the caller + * calls job_lock/job_unlock itself around the whole operation. + * These functions are declared in job-monitor.h. + * + * + * The second includes functions used by the block job drivers and sometim= es + * by the core block layer. These delegate the locking to the callee inste= ad, + * and are declared in job-driver.h. + */ + + /* * job_mutex protects the jobs list, but also makes the * struct job fields thread-safe. @@ -230,18 +247,70 @@ const char *job_type_str(const Job *job) return JobType_str(job_type(job)); } =20 -bool job_is_cancelled(Job *job) +JobStatus job_get_status(Job *job) +{ + JOB_LOCK_GUARD(); + return job->status; +} + +int job_get_pause_count(Job *job) +{ + JOB_LOCK_GUARD(); + return job->pause_count; +} + +bool job_get_paused(Job *job) +{ + JOB_LOCK_GUARD(); + return job->paused; +} + +bool job_get_busy(Job *job) +{ + JOB_LOCK_GUARD(); + return job->busy; +} + +bool job_has_failed(Job *job) +{ + JOB_LOCK_GUARD(); + return job->ret < 0; +} + +/* Called with job_mutex held. 
*/ +static bool job_is_cancelled_locked(Job *job) { /* force_cancel may be true only if cancelled is true, too */ assert(job->cancelled || !job->force_cancel); return job->force_cancel; } =20 -bool job_cancel_requested(Job *job) +/* Called with job_mutex *not* held. */ +bool job_is_cancelled(Job *job) +{ + JOB_LOCK_GUARD(); + return job_is_cancelled_locked(job); +} + +bool job_not_paused_nor_cancelled(Job *job) +{ + JOB_LOCK_GUARD(); + return !job->paused && !job_is_cancelled_locked(job); +} + +/* Called with job_mutex held. */ +static bool job_cancel_requested_locked(Job *job) { return job->cancelled; } =20 +/* Called with job_mutex *not* held. */ +bool job_cancel_requested(Job *job) +{ + JOB_LOCK_GUARD(); + return job_cancel_requested_locked(job); +} + /* Called with job_mutex held. */ bool job_is_ready_locked(Job *job) { @@ -294,6 +363,13 @@ bool job_is_completed(Job *job) return false; } =20 +/* Called with job_mutex lock *not* held */ +static bool job_is_completed_unlocked(Job *job) +{ + JOB_LOCK_GUARD(); + return job_is_completed(job); +} + static bool job_started(Job *job) { return job->co; @@ -593,6 +669,14 @@ void job_pause(Job *job) } } =20 +void job_enter_not_paused(Job *job) +{ + JOB_LOCK_GUARD(); + if (!job->paused) { + job_enter_cond(job, NULL); + } +} + void job_resume(Job *job) { assert(job->pause_count > 0); --=20 2.27.0 From nobody Sun May 19 14:40:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1636038022906994.9081771211069; Thu, 4 Nov 2021 08:00:22 -0700 (PDT) Received: from localhost ([::1]:60594 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 05/14] block/mirror.c: use of job helpers in
drivers to avoid TOC/TOU Date: Thu, 4 Nov 2021 10:53:25 -0400 Message-Id: <20211104145334.1346363-6-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0 Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow Content-Type: text/plain; charset="utf-8"

Once the job lock is used and the AioContext lock is removed, mirror has to perform job operations under the same critical section, using the helpers prepared in the previous commit.
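The TOC/TOU (time-of-check/time-of-use) problem the subject line refers to can be shown with a minimal POSIX-threads sketch; `FakeJob` and its `entered` counter are hypothetical stand-ins, not QEMU code. Checking `paused` under the lock and acting afterwards leaves a window for another thread, while a combined helper keeps check and action in one critical section.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

typedef struct FakeJob {
    bool paused;
    int entered;   /* counts how many times the job was (re-)entered */
} FakeJob;

/* Racy: the state can change between time of check and time of use. */
void job_enter_racy(FakeJob *job)
{
    pthread_mutex_lock(&job_mutex);
    bool paused = job->paused;          /* time of check */
    pthread_mutex_unlock(&job_mutex);
    /* another thread may pause the job right here */
    if (!paused) {
        job->entered++;                 /* time of use */
    }
}

/* Safe: check and action share one critical section, which is what a
 * combined helper such as job_enter_not_paused() provides. */
void job_enter_not_paused(FakeJob *job)
{
    pthread_mutex_lock(&job_mutex);
    if (!job->paused) {
        job->entered++;
    }
    pthread_mutex_unlock(&job_mutex);
}
```

This is why mirror switches from open-coded `!job->paused` checks to the combined helpers introduced by the previous patch.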
Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*. Signed-off-by: Emanuele Giuseppe Esposito --- block/mirror.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/block/mirror.c b/block/mirror.c index 00089e519b..f22fa7da6e 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -653,7 +653,7 @@ static int mirror_exit_common(Job *job) BlockDriverState *target_bs; BlockDriverState *mirror_top_bs; Error *local_err =3D NULL; - bool abort =3D job->ret < 0; + bool abort =3D job_has_failed(job); int ret =3D 0; =20 if (s->prepared) { @@ -1161,9 +1161,7 @@ static void mirror_complete(Job *job, Error **errp) s->should_complete =3D true; =20 /* If the job is paused, it will be re-entered when it is resumed */ - if (!job->paused) { - job_enter(job); - } + job_enter_not_paused(job); } =20 static void coroutine_fn mirror_pause(Job *job) @@ -1182,7 +1180,7 @@ static bool mirror_drained_poll(BlockJob *job) * from one of our own drain sections, to avoid a deadlock waiting for * ourselves. 
*/ - if (!s->common.job.paused && !job_is_cancelled(&job->job) && !s->in_dr= ain) { + if (job_not_paused_nor_cancelled(&s->common.job) && !s->in_drain) { return true; } =20 --=20 2.27.0 From nobody Sun May 19 14:40:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1636038190045453.4918285901771; Thu, 4 Nov 2021 08:03:10 -0700 (PDT) Received: from localhost ([::1]:38066 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1mieGj-0001gc-4e for importer@patchew.org; Thu, 04 Nov 2021 11:03:09 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:51324) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie87-0003Iw-ND for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:15 -0400 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:36185) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie84-0006tY-RT for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:15 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-427-VYRLpRfPMSquru5KwyAIGw-1; Thu, 04 Nov 2021 10:54:09 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D3194100730B; Thu, 4 Nov 2021 14:54:07 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by 
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 06/14] job.c: make job_event_* functions static Date: Thu, 4 Nov 2021 10:53:26 -0400 Message-Id: <20211104145334.1346363-7-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0
X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1636038191981100003 Content-Type: text/plain; charset="utf-8" job_event_* functions can all be static, as they are not used outside job.c. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- include/qemu/job.h | 6 ------ job.c | 12 ++++++++++-- 2 files changed, 10 insertions(+), 8 deletions(-) diff --git a/include/qemu/job.h b/include/qemu/job.h index d34c55dad0..58b3af47e3 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -425,12 +425,6 @@ void job_progress_set_remaining(Job *job, uint64_t rem= aining); */ void job_progress_increase_remaining(Job *job, uint64_t delta); =20 -/** To be called when a cancelled job is finalised. */ -void job_event_cancelled(Job *job); - -/** To be called when a successfully completed job is finalised. */ -void job_event_completed(Job *job); - /** * Conditionally enter the job coroutine if the job is ready to run, not * already busy and fn() returns true. fn() is called while under the job_= lock diff --git a/job.c b/job.c index bd36207021..ce5066522f 100644 --- a/job.c +++ b/job.c @@ -514,12 +514,20 @@ void job_progress_increase_remaining(Job *job, uint64= _t delta) progress_increase_remaining(&job->progress, delta); } =20 -void job_event_cancelled(Job *job) +/** + * To be called when a cancelled job is finalised. + * Called with job_mutex held. 
+ */ +static void job_event_cancelled(Job *job) { notifier_list_notify(&job->on_finalize_cancelled, job); } =20 -void job_event_completed(Job *job) +/** + * To be called when a successfully completed job is finalised. + * Called with job_mutex held. + */ +static void job_event_completed(Job *job) { notifier_list_notify(&job->on_finalize_completed, job); } --=20 2.27.0 From nobody Sun May 19 14:40:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1636038279028979.0249892130679; Thu, 4 Nov 2021 08:04:39 -0700 (PDT) Received: from localhost ([::1]:42688 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1mieIA-0004kU-3L for importer@patchew.org; Thu, 04 Nov 2021 11:04:38 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:51370) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie89-0003OP-CT for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:17 -0400 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:22727) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie87-0006to-55 for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:17 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-507-7zH0_Q7RPyW8gTLSuLqV_Q-1; Thu, 04 Nov 2021 10:54:10 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with 
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 07/14] job.c: move inner aiocontext lock in callbacks Date: Thu, 4 Nov 2021 10:53:27 -0400 Message-Id: <20211104145334.1346363-8-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0 Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow Content-Type: text/plain; charset="utf-8"

Instead of having the lock in job_txn_apply, move it inside the callback. This will be helpful for the next commits, when we introduce job_lock/unlock pairs. job_transition_to_pending() and job_needs_finalize() do not need to be protected by the aiocontext lock.

No functional change intended.
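The refactoring can be pictured with a small standalone sketch (not the real `job_txn_apply`: `FakeJob`, `ctx_lock`, and the callback below are simplified stand-ins using POSIX threads). Instead of the transaction loop acquiring each job's AioContext lock around `fn()`, each callback acquires the lock it needs itself, leaving the loop body lock-free.

```c
#include <assert.h>
#include <pthread.h>

typedef struct FakeJob {
    pthread_mutex_t ctx_lock;   /* stand-in for the job's AioContext lock */
    int ret;
} FakeJob;

/* After the change, the callback takes its own context lock... */
static int job_prepare_cb(FakeJob *job)
{
    pthread_mutex_lock(&job->ctx_lock);
    /* the real code would run the driver's prepare() callback here */
    int rc = job->ret;
    pthread_mutex_unlock(&job->ctx_lock);
    return rc;
}

/* ...so the transaction loop no longer acquires/releases a lock on each
 * iteration, and the whole loop can later be guarded by job_lock alone. */
static int txn_apply(FakeJob *jobs, int n, int (*fn)(FakeJob *))
{
    int rc = 0;
    for (int i = 0; i < n; i++) {
        rc = fn(&jobs[i]);
        if (rc) {
            break;
        }
    }
    return rc;
}
```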
Signed-off-by: Emanuele Giuseppe Esposito --- job.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/job.c b/job.c index ce5066522f..7856fa734b 100644 --- a/job.c +++ b/job.c @@ -170,7 +170,6 @@ static void job_txn_del_job(Job *job) =20 static int job_txn_apply(Job *job, int fn(Job *)) { - AioContext *inner_ctx; Job *other_job, *next; JobTxn *txn =3D job->txn; int rc =3D 0; @@ -185,10 +184,7 @@ static int job_txn_apply(Job *job, int fn(Job *)) aio_context_release(job->aio_context); =20 QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) { - inner_ctx =3D other_job->aio_context; - aio_context_acquire(inner_ctx); rc =3D fn(other_job); - aio_context_release(inner_ctx); if (rc) { break; } @@ -820,11 +816,15 @@ static void job_clean(Job *job) =20 static int job_finalize_single(Job *job) { + AioContext *ctx =3D job->aio_context; + assert(job_is_completed(job)); =20 /* Ensure abort is called for late-transactional failures */ job_update_rc(job); =20 + aio_context_acquire(ctx); + if (!job->ret) { job_commit(job); } else { @@ -832,6 +832,8 @@ static int job_finalize_single(Job *job) } job_clean(job); =20 + aio_context_release(ctx); + if (job->cb) { job->cb(job->opaque, job->ret); } @@ -952,11 +954,16 @@ static void job_completed_txn_abort(Job *job) =20 static int job_prepare(Job *job) { + AioContext *ctx =3D job->aio_context; assert(qemu_in_main_thread()); + if (job->ret =3D=3D 0 && job->driver->prepare) { + aio_context_acquire(ctx); job->ret =3D job->driver->prepare(job); + aio_context_release(ctx); job_update_rc(job); } + return job->ret; } =20 --=20 2.27.0 From nobody Sun May 19 14:40:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org 
(lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1636037861727953.6425706698697; Thu, 4 Nov 2021 07:57:41 -0700 (PDT) Received: from localhost ([::1]:51604 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1mieBQ-00007O-PH for importer@patchew.org; Thu, 04 Nov 2021 10:57:40 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:51380) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie8A-0003RM-55 for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:18 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]:40923) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie86-0006uF-OA for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:17 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-594-A40Yh6BfMOqtRZsbJ_554Q-1; Thu, 04 Nov 2021 10:54:11 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0D52A87180C; Thu, 4 Nov 2021 14:54:10 +0000 (UTC) Received: from virtlab701.virt.lab.eng.bos.redhat.com (virtlab701.virt.lab.eng.bos.redhat.com [10.19.152.228]) by smtp.corp.redhat.com (Postfix) with ESMTP id 067716418A; Thu, 4 Nov 2021 14:54:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1636037654; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hx3r2KUL5QHtcRGIVt9AjkqkTKKz9o4kc1FoO1ja1LY=; b=QmBo3uGF6xKkWmDQbLMSm0QHQ8l9hKsoSl0o5fCY9wgg2rwITJXFq3ry8Ich/V7ajUZ4nu 
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 08/14] aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED Date: Thu, 4 Nov 2021 10:53:28 -0400 Message-Id: <20211104145334.1346363-9-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0 X-ZM-MESSAGEID:
1636037862201100007 Content-Type: text/plain; charset="utf-8" Same as AIO_WAIT_WHILE macro, but if we are in the Main loop do not release and then acquire ctx_ 's aiocontext. Once all Aiocontext locks go away, this macro will replace AIO_WAIT_WHILE. Signed-off-by: Emanuele Giuseppe Esposito --- include/block/aio-wait.h | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h index b39eefb38d..ff27fe4eab 100644 --- a/include/block/aio-wait.h +++ b/include/block/aio-wait.h @@ -59,10 +59,11 @@ typedef struct { extern AioWait global_aio_wait; =20 /** - * AIO_WAIT_WHILE: + * _AIO_WAIT_WHILE: * @ctx: the aio context, or NULL if multiple aio contexts (for which the * caller does not hold a lock) are involved in the polling conditio= n. * @cond: wait while this conditional expression is true + * @unlock: whether to unlock and then lock again @ctx * * Wait while a condition is true. Use this to implement synchronous * operations that require event loop activity. @@ -75,7 +76,7 @@ extern AioWait global_aio_wait; * wait on conditions between two IOThreads since that could lead to deadl= ock, * go via the main loop instead. 
*/ -#define AIO_WAIT_WHILE(ctx, cond) ({ \ +#define _AIO_WAIT_WHILE(ctx, cond, unlock) ({ \ bool waited_ =3D false; \ AioWait *wait_ =3D &global_aio_wait; \ AioContext *ctx_ =3D (ctx); \ @@ -90,11 +91,11 @@ extern AioWait global_aio_wait; assert(qemu_get_current_aio_context() =3D=3D \ qemu_get_aio_context()); \ while ((cond)) { \ - if (ctx_) { \ + if (unlock && ctx_) { \ aio_context_release(ctx_); \ } \ aio_poll(qemu_get_aio_context(), true); \ - if (ctx_) { \ + if (unlock && ctx_) { \ aio_context_acquire(ctx_); \ } \ waited_ =3D true; \ @@ -103,6 +104,12 @@ extern AioWait global_aio_wait; qatomic_dec(&wait_->num_waiters); \ waited_; }) =20 +#define AIO_WAIT_WHILE(ctx, cond) \ + _AIO_WAIT_WHILE(ctx, cond, true) + +#define AIO_WAIT_WHILE_UNLOCKED(ctx, cond) \ + _AIO_WAIT_WHILE(ctx, cond, false) + /** * aio_wait_kick: * Wake up the main thread if it is waiting on AIO_WAIT_WHILE(). During --=20 2.27.0 From nobody Sun May 19 14:40:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1636038400630204.52978579906187; Thu, 4 Nov 2021 08:06:40 -0700 (PDT) Received: from localhost ([::1]:46970 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1mieK6-0007gX-4N for importer@patchew.org; Thu, 04 Nov 2021 11:06:38 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:51374) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mie89-0003Pt-PD for qemu-devel@nongnu.org; Thu, 04 Nov 2021 10:54:17 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]:54813) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) 
From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 09/14] jobs: remove aiocontext locks since the functions are under BQL Date: Thu, 4 Nov 2021 10:53:29 -0400 Message-Id: <20211104145334.1346363-10-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0 X-Mimecast-Originator:
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Content-Type: text/plain; charset="utf-8"

In preparation for the job_lock/unlock patch, remove these AioContext locks. The main reason to remove these two locks here is that they sit inside a loop iterating over the jobs list. Once job_lock is added, it will have to protect the whole loop, also wrapping the AioContext acquire/release. We don't want that: to avoid deadlocks, job_lock may only be *wrapped by* the AioContext lock, never vice versa.
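The ordering rule above can be sketched with plain pthread mutexes. This is a hypothetical stand-in, not QEMU's actual primitives: the names `aio_context_lock`, `job_mutex` and `query_one_job` are illustrative only, but the nesting discipline is the one the commit message describes.

```c
#include <pthread.h>

/* Illustrative stand-ins for QEMU's aio_context_acquire/release and
 * job_lock/unlock; the real primitives differ, the ordering rule does not. */
static pthread_mutex_t aio_context_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t job_mutex        = PTHREAD_MUTEX_INITIALIZER;

/* Correct nesting: job_mutex is taken strictly *inside* the AioContext
 * lock, so no thread can ever hold job_mutex while waiting for the
 * AioContext lock -- the inversion that would make a deadlock possible. */
int query_one_job(void)
{
    pthread_mutex_lock(&aio_context_lock);  /* outer lock first */
    pthread_mutex_lock(&job_mutex);         /* inner lock second */

    /* ... inspect job state here ... */

    pthread_mutex_unlock(&job_mutex);       /* release in reverse order */
    pthread_mutex_unlock(&aio_context_lock);
    return 0;
}
```

If job_lock instead had to wrap the whole loop (and therefore the AioContext acquire/release), the nesting would invert in some call paths, which is exactly what the patch avoids by dropping the per-iteration AioContext locks.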
Signed-off-by: Emanuele Giuseppe Esposito
---
 blockdev.c | 4 ----
 job-qmp.c  | 4 ----
 2 files changed, 8 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index d9bf842a81..67b55eec11 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3719,15 +3719,11 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp)
 
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         BlockJobInfo *value;
-        AioContext *aio_context;
 
         if (block_job_is_internal(job)) {
             continue;
         }
-        aio_context = blk_get_aio_context(job->blk);
-        aio_context_acquire(aio_context);
         value = block_job_query(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_BlockJobInfoList(head);
             return NULL;
diff --git a/job-qmp.c b/job-qmp.c
index 829a28aa70..a6774aaaa5 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -173,15 +173,11 @@ JobInfoList *qmp_query_jobs(Error **errp)
 
     for (job = job_next(NULL); job; job = job_next(job)) {
         JobInfo *value;
-        AioContext *aio_context;
 
         if (job_is_internal(job)) {
             continue;
         }
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         value = job_query_single(job, errp);
-        aio_context_release(aio_context);
         if (!value) {
             qapi_free_JobInfoList(head);
             return NULL;
-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 10/14] jobs: protect jobs with job_lock/unlock
Date: Thu, 4 Nov 2021 10:53:30 -0400
Message-Id: <20211104145334.1346363-11-eesposit@redhat.com>
In-Reply-To:
<20211104145334.1346363-1-eesposit@redhat.com>
References: <20211104145334.1346363-1-eesposit@redhat.com>
MIME-Version: 1.0
Cc: Kevin Wolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong, Emanuele Giuseppe Esposito, Markus Armbruster, qemu-devel@nongnu.org, Hanna Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Content-Type: text/plain; charset="utf-8"

Introduce the job locking mechanism throughout the whole job API, following the comments and requirements of job-monitor (the lock is assumed to be held) and job-driver (the lock is not held). job_{lock/unlock} is independent of _job_{lock/unlock}.

Note: at this stage, job_{lock/unlock} and the job lock guard macros are no-ops.
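The "no-op first" approach can be illustrated with a minimal sketch. This is a hypothetical simplified shape, not the actual QEMU code: the point is that every call site can be annotated with lock/unlock (or a guard) before the real mutex is switched on, without changing behavior.

```c
#include <stdio.h>

/* Hypothetical simplified wrappers: at this stage of the series they are
 * intentionally empty (no-ops), so the whole API can be annotated first
 * and the real mutex enabled later in one step. */
static void job_lock(void)   { /* later: qemu_mutex_lock(&job_mutex) */ }
static void job_unlock(void) { /* later: qemu_mutex_unlock(&job_mutex) */ }

/* A caller can already be written in its final, locked form: */
int query_jobs_demo(void)
{
    int count = 0;

    job_lock();
    /* ... iterate over the job list here; one fake job for the demo ... */
    count++;
    job_unlock();

    return count;
}
```

Because the wrappers are no-ops, this refactoring is bisectable: each converted call site compiles and runs identically until the final patch makes the lock real.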
Signed-off-by: Emanuele Giuseppe Esposito --- block.c | 6 ++ block/replication.c | 4 + blockdev.c | 13 +++ blockjob.c | 37 +++++++- job-qmp.c | 4 + job.c | 201 ++++++++++++++++++++++++++++++++++---------- monitor/qmp-cmds.c | 2 + qemu-img.c | 8 +- 8 files changed, 229 insertions(+), 46 deletions(-) diff --git a/block.c b/block.c index da80e89ad4..a6dcd9eb36 100644 --- a/block.c +++ b/block.c @@ -4826,7 +4826,9 @@ static void bdrv_close(BlockDriverState *bs) =20 void bdrv_close_all(void) { + job_lock(); assert(job_next(NULL) =3D=3D NULL); + job_unlock(); assert(qemu_in_main_thread()); =20 /* Drop references from requests still in flight, such as canceled blo= ck @@ -5965,6 +5967,8 @@ XDbgBlockGraph *bdrv_get_xdbg_block_graph(Error **err= p) } } =20 + job_lock(); + for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { GSList *el; =20 @@ -5975,6 +5979,8 @@ XDbgBlockGraph *bdrv_get_xdbg_block_graph(Error **err= p) } } =20 + job_unlock(); + QTAILQ_FOREACH(bs, &graph_bdrv_states, node_list) { xdbg_graph_add_node(gr, bs, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_DRIV= ER, bs->node_name); diff --git a/block/replication.c b/block/replication.c index 55c8f894aa..0f487cc215 100644 --- a/block/replication.c +++ b/block/replication.c @@ -149,7 +149,9 @@ static void replication_close(BlockDriverState *bs) if (s->stage =3D=3D BLOCK_REPLICATION_FAILOVER) { commit_job =3D &s->commit_job->job; assert(commit_job->aio_context =3D=3D qemu_get_current_aio_context= ()); + job_lock(); job_cancel_sync(commit_job, false); + job_unlock(); } =20 if (s->mode =3D=3D REPLICATION_MODE_SECONDARY) { @@ -726,7 +728,9 @@ static void replication_stop(ReplicationState *rs, bool= failover, Error **errp) * disk, secondary disk in backup_job_completed(). 
*/ if (s->backup_job) { + job_lock(); job_cancel_sync(&s->backup_job->job, true); + job_unlock(); } =20 if (!failover) { diff --git a/blockdev.c b/blockdev.c index 67b55eec11..c5a835d9ed 100644 --- a/blockdev.c +++ b/blockdev.c @@ -150,6 +150,8 @@ void blockdev_mark_auto_del(BlockBackend *blk) return; } =20 + job_lock(); + for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { if (block_job_has_bdrv(job, blk_bs(blk))) { AioContext *aio_context =3D job->job.aio_context; @@ -161,6 +163,8 @@ void blockdev_mark_auto_del(BlockBackend *blk) } } =20 + job_unlock(); + dinfo->auto_del =3D 1; } =20 @@ -1844,7 +1848,9 @@ static void drive_backup_abort(BlkActionState *common) aio_context =3D bdrv_get_aio_context(state->bs); aio_context_acquire(aio_context); =20 + job_lock(); job_cancel_sync(&state->job->job, true); + job_unlock(); =20 aio_context_release(aio_context); } @@ -1945,7 +1951,9 @@ static void blockdev_backup_abort(BlkActionState *com= mon) aio_context =3D bdrv_get_aio_context(state->bs); aio_context_acquire(aio_context); =20 + job_lock(); job_cancel_sync(&state->job->job, true); + job_unlock(); =20 aio_context_release(aio_context); } @@ -2394,7 +2402,9 @@ exit: if (!has_props) { qapi_free_TransactionProperties(props); } + job_lock(); job_txn_unref(block_job_txn); + job_unlock(); } =20 BlockDirtyBitmapSha256 *qmp_x_debug_block_dirty_bitmap_sha256(const char *= node, @@ -3717,6 +3727,7 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp) BlockJobInfoList *head =3D NULL, **tail =3D &head; BlockJob *job; =20 + job_lock(); for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { BlockJobInfo *value; =20 @@ -3726,10 +3737,12 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp) value =3D block_job_query(job, errp); if (!value) { qapi_free_BlockJobInfoList(head); + job_unlock(); return NULL; } QAPI_LIST_APPEND(tail, value); } + job_unlock(); =20 return head; } diff --git a/blockjob.c b/blockjob.c index 53c1e9c406..dcc13dc336 100644 --- 
a/blockjob.c +++ b/blockjob.c @@ -88,19 +88,25 @@ static char *child_job_get_parent_desc(BdrvChild *c) static void child_job_drained_begin(BdrvChild *c) { BlockJob *job =3D c->opaque; + job_lock(); job_pause(&job->job); + job_unlock(); } =20 static bool child_job_drained_poll(BdrvChild *c) { BlockJob *bjob =3D c->opaque; Job *job =3D &bjob->job; + bool inactive_incomplete; const BlockJobDriver *drv =3D block_job_driver(bjob); =20 /* An inactive or completed job doesn't have any pending requests. Jobs * with !job->busy are either already paused or have a pause point aft= er * being reentered, so no job driver code will run before they pause. = */ - if (!job->busy || job_is_completed(job)) { + job_lock(); + inactive_incomplete =3D !job->busy || job_is_completed(job); + job_unlock(); + if (inactive_incomplete) { return false; } =20 @@ -116,7 +122,9 @@ static bool child_job_drained_poll(BdrvChild *c) static void child_job_drained_end(BdrvChild *c, int *drained_end_counter) { BlockJob *job =3D c->opaque; + job_lock(); job_resume(&job->job); + job_unlock(); } =20 static bool child_job_can_set_aio_ctx(BdrvChild *c, AioContext *ctx, @@ -236,9 +244,16 @@ int block_job_add_bdrv(BlockJob *job, const char *name= , BlockDriverState *bs, return 0; } =20 +/* Called with job_mutex lock held. */ static void block_job_on_idle(Notifier *n, void *opaque) { + /* + * we can't kick with job_mutex held, but we also want + * to protect the notifier list. + */ + job_unlock(); aio_wait_kick(); + job_lock(); } =20 bool block_job_is_internal(BlockJob *job) @@ -257,6 +272,7 @@ static bool job_timer_pending(Job *job) return timer_pending(&job->sleep_timer); } =20 +/* Called with job_mutex held. May temporarly release the lock. 
*/ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp) { const BlockJobDriver *drv =3D block_job_driver(job); @@ -278,7 +294,9 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, = Error **errp) job->speed =3D speed; =20 if (drv->set_speed) { + job_unlock(); drv->set_speed(job, speed); + job_lock(); } =20 if (speed && speed <=3D old_speed) { @@ -341,6 +359,7 @@ static void block_job_iostatus_set_err(BlockJob *job, i= nt error) } } =20 +/* Called with job_mutex lock held. */ static void block_job_event_cancelled(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -360,6 +379,7 @@ static void block_job_event_cancelled(Notifier *n, void= *opaque) job->speed); } =20 +/* Called with job_mutex lock held. */ static void block_job_event_completed(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -386,6 +406,7 @@ static void block_job_event_completed(Notifier *n, void= *opaque) msg); } =20 +/* Called with job_mutex lock held. */ static void block_job_event_pending(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -398,6 +419,7 @@ static void block_job_event_pending(Notifier *n, void *= opaque) job->job.id); } =20 +/* Called with job_mutex lock held. 
*/ static void block_job_event_ready(Notifier *n, void *opaque) { BlockJob *job =3D opaque; @@ -458,6 +480,7 @@ void *block_job_create(const char *job_id, const BlockJ= obDriver *driver, job->ready_notifier.notify =3D block_job_event_ready; job->idle_notifier.notify =3D block_job_on_idle; =20 + job_lock(); notifier_list_add(&job->job.on_finalize_cancelled, &job->finalize_cancelled_notifier); notifier_list_add(&job->job.on_finalize_completed, @@ -465,6 +488,7 @@ void *block_job_create(const char *job_id, const BlockJ= obDriver *driver, notifier_list_add(&job->job.on_pending, &job->pending_notifier); notifier_list_add(&job->job.on_ready, &job->ready_notifier); notifier_list_add(&job->job.on_idle, &job->idle_notifier); + job_unlock(); =20 error_setg(&job->blocker, "block device is in use by block job: %s", job_type_str(&job->job)); @@ -477,14 +501,19 @@ void *block_job_create(const char *job_id, const Bloc= kJobDriver *driver, blk_set_disable_request_queuing(blk, true); blk_set_allow_aio_context_change(blk, true); =20 + job_lock(); if (!block_job_set_speed(job, speed, errp)) { - job_early_fail(&job->job); + job_early_fail_locked(&job->job); + job_unlock(); return NULL; } + job_unlock(); + =20 return job; } =20 +/* Called with job_mutex lock held. */ void block_job_iostatus_reset(BlockJob *job) { assert(qemu_in_main_thread()); @@ -499,7 +528,9 @@ void block_job_user_resume(Job *job) { assert(qemu_in_main_thread()); BlockJob *bjob =3D container_of(job, BlockJob, job); + job_lock(); block_job_iostatus_reset(bjob); + job_unlock(); } =20 BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_= err, @@ -532,11 +563,13 @@ BlockErrorAction block_job_error_action(BlockJob *job= , BlockdevOnError on_err, action); } if (action =3D=3D BLOCK_ERROR_ACTION_STOP) { + job_lock(); if (!job->job.user_paused) { job_pause(&job->job); /* make the pause user visible, which will be resumed from QMP= . 
*/ job->job.user_paused =3D true; } + job_unlock(); block_job_iostatus_set_err(job, error); } return action; diff --git a/job-qmp.c b/job-qmp.c index a6774aaaa5..a355dc2954 100644 --- a/job-qmp.c +++ b/job-qmp.c @@ -171,6 +171,8 @@ JobInfoList *qmp_query_jobs(Error **errp) JobInfoList *head =3D NULL, **tail =3D &head; Job *job; =20 + job_lock(); + for (job =3D job_next(NULL); job; job =3D job_next(job)) { JobInfo *value; =20 @@ -180,10 +182,12 @@ JobInfoList *qmp_query_jobs(Error **errp) value =3D job_query_single(job, errp); if (!value) { qapi_free_JobInfoList(head); + job_unlock(); return NULL; } QAPI_LIST_APPEND(tail, value); } + job_unlock(); =20 return head; } diff --git a/job.c b/job.c index 7856fa734b..5efbf38a72 100644 --- a/job.c +++ b/job.c @@ -55,6 +55,7 @@ */ static QemuMutex job_mutex; =20 +/* Protected by job_mutex */ static QLIST_HEAD(, Job) jobs =3D QLIST_HEAD_INITIALIZER(jobs); =20 /* Job State Transition Table */ @@ -134,6 +135,7 @@ JobTxn *job_txn_new(void) return txn; } =20 +/* Called with job_mutex held. */ static void job_txn_ref(JobTxn *txn) { txn->refcnt++; @@ -159,6 +161,7 @@ void job_txn_add_job(JobTxn *txn, Job *job) job_txn_ref(txn); } =20 +/* Called with job_mutex held. */ static void job_txn_del_job(Job *job) { if (job->txn) { @@ -168,6 +171,7 @@ static void job_txn_del_job(Job *job) } } =20 +/* Called with job_mutex held. */ static int job_txn_apply(Job *job, int fn(Job *)) { Job *other_job, *next; @@ -204,6 +208,7 @@ bool job_is_internal(Job *job) return (job->id =3D=3D NULL); } =20 +/* Called with job_mutex held. */ static void job_state_transition(Job *job, JobStatus s1) { JobStatus s0 =3D job->status; @@ -371,6 +376,7 @@ static bool job_started(Job *job) return job->co; } =20 +/* Called with job_mutex held. */ static bool job_should_pause(Job *job) { return job->pause_count > 0; @@ -397,6 +403,7 @@ Job *job_get(const char *id) return NULL; } =20 +/* Called with job_mutex *not* held. 
*/ static void job_sleep_timer_cb(void *opaque) { Job *job =3D opaque; @@ -404,12 +411,15 @@ static void job_sleep_timer_cb(void *opaque) job_enter(job); } =20 +/* Called with job_mutex *not* held. */ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn, AioContext *ctx, int flags, BlockCompletionFunc *cb, void *opaque, Error **errp) { Job *job; =20 + JOB_LOCK_GUARD(); + if (job_id) { if (flags & JOB_INTERNAL) { error_setg(errp, "Cannot specify job ID for internal job"); @@ -483,7 +493,9 @@ void job_unref(Job *job) assert(!job->txn); =20 if (job->driver->free) { + job_unlock(); job->driver->free(job); + job_lock(); } =20 QLIST_REMOVE(job, job_list); @@ -495,16 +507,19 @@ void job_unref(Job *job) } } =20 +/* Progress API is thread safe */ void job_progress_update(Job *job, uint64_t done) { progress_work_done(&job->progress, done); } =20 +/* Progress API is thread safe */ void job_progress_set_remaining(Job *job, uint64_t remaining) { progress_set_remaining(&job->progress, remaining); } =20 +/* Progress API is thread safe */ void job_progress_increase_remaining(Job *job, uint64_t delta) { progress_increase_remaining(&job->progress, delta); @@ -528,16 +543,19 @@ static void job_event_completed(Job *job) notifier_list_notify(&job->on_finalize_completed, job); } =20 +/* Called with job_mutex held. */ static void job_event_pending(Job *job) { notifier_list_notify(&job->on_pending, job); } =20 +/* Called with job_mutex held. */ static void job_event_ready(Job *job) { notifier_list_notify(&job->on_ready, job); } =20 +/* Called with job_mutex held. */ static void job_event_idle(Job *job) { notifier_list_notify(&job->on_idle, job); @@ -567,11 +585,15 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job)) timer_del(&job->sleep_timer); job->busy =3D true; _job_unlock(); + job_unlock(); aio_co_enter(job->aio_context, job->co); + job_lock(); } =20 +/* Called with job_mutex *not* held. 
*/ void job_enter(Job *job) { + JOB_LOCK_GUARD(); job_enter_cond(job, NULL); } =20 @@ -580,7 +602,10 @@ void job_enter(Job *job) * is allowed and cancels the timer. * * If @ns is (uint64_t) -1, no timer is scheduled and job_enter() must be - * called explicitly. */ + * called explicitly. + * + * Called with job_mutex held, but releases it temporarly. + */ static void coroutine_fn job_do_yield(Job *job, uint64_t ns) { _job_lock(); @@ -590,28 +615,39 @@ static void coroutine_fn job_do_yield(Job *job, uint6= 4_t ns) job->busy =3D false; job_event_idle(job); _job_unlock(); + job_unlock(); qemu_coroutine_yield(); + job_lock(); =20 /* Set by job_enter_cond() before re-entering the coroutine. */ assert(job->busy); } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ void coroutine_fn job_pause_point(Job *job) { assert(job && job_started(job)); =20 + job_lock(); if (!job_should_pause(job)) { + job_unlock(); return; } - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { + job_unlock(); return; } =20 if (job->driver->pause) { + job_unlock(); job->driver->pause(job); + job_lock(); } =20 - if (job_should_pause(job) && !job_is_cancelled(job)) { + if (job_should_pause(job) && !job_is_cancelled_locked(job)) { JobStatus status =3D job->status; job_state_transition(job, status =3D=3D JOB_STATUS_READY ? JOB_STATUS_STANDBY @@ -621,45 +657,58 @@ void coroutine_fn job_pause_point(Job *job) job->paused =3D false; job_state_transition(job, status); } + job_unlock(); =20 if (job->driver->resume) { job->driver->resume(job); } } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ void job_yield(Job *job) { - assert(job->busy); + WITH_JOB_LOCK_GUARD() { + assert(job->busy); =20 - /* Check cancellation *before* setting busy =3D false, too! */ - if (job_is_cancelled(job)) { - return; - } + /* Check cancellation *before* setting busy =3D false, too! 
*/ + if (job_is_cancelled_locked(job)) { + return; + } =20 - if (!job_should_pause(job)) { - job_do_yield(job, -1); + if (!job_should_pause(job)) { + job_do_yield(job, -1); + } } =20 job_pause_point(job); } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ void coroutine_fn job_sleep_ns(Job *job, int64_t ns) { - assert(job->busy); + WITH_JOB_LOCK_GUARD() { + assert(job->busy); =20 - /* Check cancellation *before* setting busy =3D false, too! */ - if (job_is_cancelled(job)) { - return; - } + /* Check cancellation *before* setting busy =3D false, too! */ + if (job_is_cancelled_locked(job)) { + return; + } =20 - if (!job_should_pause(job)) { - job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns); + if (!job_should_pause(job)) { + job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns); + } } =20 job_pause_point(job); } =20 -/* Assumes the block_job_mutex is held */ +/* Assumes the job_mutex is held */ static bool job_timer_not_pending(Job *job) { return !timer_pending(&job->sleep_timer); @@ -669,7 +718,7 @@ void job_pause(Job *job) { job->pause_count++; if (!job->paused) { - job_enter(job); + job_enter_cond(job, NULL); } } =20 @@ -723,12 +772,15 @@ void job_user_resume(Job *job, Error **errp) return; } if (job->driver->user_resume) { + job_unlock(); job->driver->user_resume(job); + job_lock(); } job->user_paused =3D false; job_resume(job); } =20 +/* Called with job_mutex held. */ static void job_do_dismiss(Job *job) { assert(job); @@ -767,6 +819,7 @@ void job_early_fail(Job *job) job_early_fail_locked(job); } =20 +/* Called with job_mutex held. */ static void job_conclude(Job *job) { job_state_transition(job, JOB_STATUS_CONCLUDED); @@ -775,9 +828,10 @@ static void job_conclude(Job *job) } } =20 +/* Called with job_mutex held. 
*/ static void job_update_rc(Job *job) { - if (!job->ret && job_is_cancelled(job)) { + if (!job->ret && job_is_cancelled_locked(job)) { job->ret =3D -ECANCELED; } if (job->ret) { @@ -788,34 +842,45 @@ static void job_update_rc(Job *job) } } =20 +/* Called with job_mutex held, but releases it temporarly */ static void job_commit(Job *job) { assert(!job->ret); assert(qemu_in_main_thread()); if (job->driver->commit) { + job_unlock(); job->driver->commit(job); + job_lock(); } } =20 +/* Called with job_mutex held, but releases it temporarly */ static void job_abort(Job *job) { assert(job->ret); assert(qemu_in_main_thread()); if (job->driver->abort) { + job_unlock(); job->driver->abort(job); + job_lock(); } } =20 +/* Called with job_mutex held, but releases it temporarly */ static void job_clean(Job *job) { assert(qemu_in_main_thread()); if (job->driver->clean) { + job_unlock(); job->driver->clean(job); + job_lock(); } } =20 +/* Called with job_mutex held, but releases it temporarly. */ static int job_finalize_single(Job *job) { + int job_ret; AioContext *ctx =3D job->aio_context; =20 assert(job_is_completed(job)); @@ -835,12 +900,15 @@ static int job_finalize_single(Job *job) aio_context_release(ctx); =20 if (job->cb) { - job->cb(job->opaque, job->ret); + job_ret =3D job->ret; + job_unlock(); + job->cb(job->opaque, job_ret); + job_lock(); } =20 /* Emit events only if we actually started */ if (job_started(job)) { - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { job_event_cancelled(job); } else { job_event_completed(job); @@ -852,11 +920,14 @@ static int job_finalize_single(Job *job) return 0; } =20 +/* Called with job_mutex held, but releases it temporarly. 
*/ static void job_cancel_async(Job *job, bool force) { assert(qemu_in_main_thread()); if (job->driver->cancel) { + job_unlock(); force =3D job->driver->cancel(job, force); + job_lock(); } else { /* No .cancel() means the job will behave as if force-cancelled */ force =3D true; @@ -865,7 +936,9 @@ static void job_cancel_async(Job *job, bool force) if (job->user_paused) { /* Do not call job_enter here, the caller will handle it. */ if (job->driver->user_resume) { + job_unlock(); job->driver->user_resume(job); + job_lock(); } job->user_paused =3D false; assert(job->pause_count > 0); @@ -886,6 +959,7 @@ static void job_cancel_async(Job *job, bool force) } } =20 +/* Called with job_mutex held. */ static void job_completed_txn_abort(Job *job) { AioContext *ctx; @@ -935,7 +1009,7 @@ static void job_completed_txn_abort(Job *job) ctx =3D other_job->aio_context; aio_context_acquire(ctx); if (!job_is_completed(other_job)) { - assert(job_cancel_requested(other_job)); + assert(job_cancel_requested_locked(other_job)); job_finish_sync(other_job, NULL, NULL); } job_finalize_single(other_job); @@ -952,26 +1026,33 @@ static void job_completed_txn_abort(Job *job) job_txn_unref(txn); } =20 +/* Called with job_mutex held, but releases it temporarly. */ static int job_prepare(Job *job) { + int ret; AioContext *ctx =3D job->aio_context; assert(qemu_in_main_thread()); =20 if (job->ret =3D=3D 0 && job->driver->prepare) { + job_unlock(); aio_context_acquire(ctx); - job->ret =3D job->driver->prepare(job); + ret =3D job->driver->prepare(job); aio_context_release(ctx); + job_lock(); + job->ret =3D ret; job_update_rc(job); } =20 return job->ret; } =20 +/* Called with job_mutex held. */ static int job_needs_finalize(Job *job) { return !job->auto_finalize; } =20 +/* Called with job_mutex held. */ static void job_do_finalize(Job *job) { int rc; @@ -995,6 +1076,7 @@ void job_finalize(Job *job, Error **errp) job_do_finalize(job); } =20 +/* Called with job_mutex held. 
*/ static int job_transition_to_pending(Job *job) { job_state_transition(job, JOB_STATUS_PENDING); @@ -1004,12 +1086,15 @@ static int job_transition_to_pending(Job *job) return 0; } =20 +/* Called with job_mutex *not* held. */ void job_transition_to_ready(Job *job) { + JOB_LOCK_GUARD(); job_state_transition(job, JOB_STATUS_READY); job_event_ready(job); } =20 +/* Called with job_mutex held. */ static void job_completed_txn_success(Job *job) { JobTxn *txn =3D job->txn; @@ -1036,6 +1121,7 @@ static void job_completed_txn_success(Job *job) } } =20 +/* Called with job_mutex held. */ static void job_completed(Job *job) { assert(job && job->txn && !job_is_completed(job)); @@ -1049,12 +1135,16 @@ static void job_completed(Job *job) } } =20 -/** Useful only as a type shim for aio_bh_schedule_oneshot. */ +/** + * Useful only as a type shim for aio_bh_schedule_oneshot. + * Called with job_mutex *not* held. + */ static void job_exit(void *opaque) { Job *job =3D (Job *)opaque; AioContext *ctx; =20 + JOB_LOCK_GUARD(); job_ref(job); aio_context_acquire(job->aio_context); =20 @@ -1081,28 +1171,36 @@ static void job_exit(void *opaque) /** * All jobs must allow a pause point before entering their job proper. This * ensures that jobs can be paused prior to being started, then resumed la= ter. + * + * Called with job_mutex *not* held. */ static void coroutine_fn job_co_entry(void *opaque) { Job *job =3D opaque; - + int ret; assert(job && job->driver && job->driver->run); job_pause_point(job); - job->ret =3D job->driver->run(job, &job->err); - job->deferred_to_main_loop =3D true; - job->busy =3D true; + ret =3D job->driver->run(job, &job->err); + WITH_JOB_LOCK_GUARD() { + job->ret =3D ret; + job->deferred_to_main_loop =3D true; + job->busy =3D true; + } aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job); } =20 +/* Called with job_mutex *not* held. 
*/ void job_start(Job *job) { - assert(job && !job_started(job) && job->paused && - job->driver && job->driver->run); - job->co =3D qemu_coroutine_create(job_co_entry, job); - job->pause_count--; - job->busy =3D true; - job->paused =3D false; - job_state_transition(job, JOB_STATUS_RUNNING); + WITH_JOB_LOCK_GUARD() { + assert(job && !job_started(job) && job->paused && + job->driver && job->driver->run); + job->co =3D qemu_coroutine_create(job_co_entry, job); + job->pause_count--; + job->busy =3D true; + job->paused =3D false; + job_state_transition(job, JOB_STATUS_RUNNING); + } aio_co_enter(job->aio_context, job->co); } =20 @@ -1126,11 +1224,11 @@ void job_cancel(Job *job, bool force) * choose to call job_is_cancelled() to show that we invoke * job_completed_txn_abort() only for force-cancelled jobs.) */ - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { job_completed_txn_abort(job); } } else { - job_enter(job); + job_enter_cond(job, NULL); } } =20 @@ -1142,9 +1240,13 @@ void job_user_cancel(Job *job, bool force, Error **e= rrp) job_cancel(job, force); } =20 -/* A wrapper around job_cancel() taking an Error ** parameter so it may be +/* + * A wrapper around job_cancel() taking an Error ** parameter so it may be * used with job_finish_sync() without the need for (rather nasty) function - * pointer casts there. */ + * pointer casts there. + * + * Called with job_mutex held. + */ static void job_cancel_err(Job *job, Error **errp) { job_cancel(job, false); @@ -1152,6 +1254,8 @@ static void job_cancel_err(Job *job, Error **errp) =20 /** * Same as job_cancel_err(), but force-cancel. + * + * Called with job_mutex held. */ static void job_force_cancel_err(Job *job, Error **errp) { @@ -1167,11 +1271,17 @@ int job_cancel_sync(Job *job, bool force) } } =20 +/* + * Called with job_lock *not* held, unlike most other APIs consumed + * by the monitor! This is primarly to avoid adding lock-unlock + * patterns in the caller. 
+ */ void job_cancel_sync_all(void) { Job *job; AioContext *aio_context; =20 + JOB_LOCK_GUARD(); while ((job =3D job_next(NULL))) { aio_context =3D job->aio_context; aio_context_acquire(aio_context); @@ -1193,13 +1303,15 @@ void job_complete(Job *job, Error **errp) if (job_apply_verb(job, JOB_VERB_COMPLETE, errp)) { return; } - if (job_cancel_requested(job) || !job->driver->complete) { + if (job_cancel_requested_locked(job) || !job->driver->complete) { error_setg(errp, "The active block job '%s' cannot be completed", job->id); return; } =20 + job_unlock(); job->driver->complete(job, errp); + job_lock(); } =20 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp) @@ -1218,10 +1330,13 @@ int job_finish_sync(Job *job, void (*finish)(Job *,= Error **errp), Error **errp) return -EBUSY; } =20 + job_unlock(); AIO_WAIT_WHILE(job->aio_context, - (job_enter(job), !job_is_completed(job))); + (job_enter(job), !job_is_completed_unlocked(job))); + job_lock(); =20 - ret =3D (job_is_cancelled(job) && job->ret =3D=3D 0) ? -ECANCELED : jo= b->ret; + ret =3D (job_is_cancelled_locked(job) && job->ret =3D=3D 0) ? + -ECANCELED : job->ret; job_unref(job); return ret; } diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c index 5c0d5e116b..a0b023cac1 100644 --- a/monitor/qmp-cmds.c +++ b/monitor/qmp-cmds.c @@ -129,9 +129,11 @@ void qmp_cont(Error **errp) blk_iostatus_reset(blk); } =20 + job_lock(); for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { block_job_iostatus_reset(job); } + job_unlock(); =20 /* Continuing after completed migration. Images have been inactivated = to * allow the destination to take control. Need to get control back now. 
diff --git a/qemu-img.c b/qemu-img.c
index f036a1d428..170c65b1b7 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -906,9 +906,11 @@ static void run_block_job(BlockJob *job, Error **errp)
     int ret = 0;
 
     aio_context_acquire(aio_context);
+    job_lock();
     job_ref(&job->job);
     do {
         float progress = 0.0f;
+        job_unlock();
         aio_poll(aio_context, true);
 
         progress_get_snapshot(&job->job.progress, &progress_current,
@@ -917,7 +919,8 @@ static void run_block_job(BlockJob *job, Error **errp)
             progress = (float)progress_current / progress_total * 100.f;
         }
         qemu_progress_print(progress, 0);
-    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));
+        job_lock();
+    } while (!job_is_ready_locked(&job->job) && !job_is_completed(&job->job));
 
     if (!job_is_completed(&job->job)) {
         ret = job_complete_sync(&job->job, errp);
@@ -925,6 +928,7 @@ static void run_block_job(BlockJob *job, Error **errp)
         ret = job->job.ret;
     }
     job_unref(&job->job);
+    job_unlock();
     aio_context_release(aio_context);
 
     /* publish completion progress only when success */
@@ -1077,7 +1081,9 @@ static int img_commit(int argc, char **argv)
         bdrv_ref(bs);
     }
 
+    job_lock();
     job = block_job_get("commit");
+    job_unlock();
     assert(job);
     run_block_job(job, &local_err);
     if (local_err) {
-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 11/14] block_job_query: remove atomic read
Date: Thu, 4 Nov 2021 10:53:31 -0400
Message-Id: <20211104145334.1346363-12-eesposit@redhat.com>

Not sure what the atomic here was supposed to do, since job.busy
is protected by the job lock. Since the whole function will be
called under job_mutex, just remove the atomic.
Signed-off-by: Emanuele Giuseppe Esposito
---
 blockjob.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index dcc13dc336..426dcddcc1 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -314,6 +314,7 @@ int64_t block_job_ratelimit_get_delay(BlockJob *job, uint64_t n)
     return ratelimit_calculate_delay(&job->limit, n);
 }
 
+/* Called with job_mutex held */
 BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
 {
     BlockJobInfo *info;
@@ -332,13 +333,13 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     info = g_new0(BlockJobInfo, 1);
     info->type = g_strdup(job_type_str(&job->job));
     info->device = g_strdup(job->job.id);
-    info->busy = qatomic_read(&job->job.busy);
+    info->busy = job->job.busy;
     info->paused = job->job.pause_count > 0;
     info->offset = progress_current;
     info->len = progress_total;
     info->speed = job->speed;
     info->io_status = job->iostatus;
-    info->ready = job_is_ready(&job->job),
+    info->ready = job_is_ready_locked(&job->job),
     info->status = job->job.status;
     info->auto_finalize = job->job.auto_finalize;
     info->auto_dismiss = job->job.auto_dismiss;
-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 12/14] jobs: use job locks and helpers also in the unit tests
Date: Thu, 4 Nov 2021 10:53:32 -0400
Message-Id: <20211104145334.1346363-13-eesposit@redhat.com>

Add missing job synchronization in the unit tests, with
both explicit locks and helpers.

Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
 tests/unit/test-bdrv-drain.c     | 40 +++++++++++-----------
 tests/unit/test-block-iothread.c |  4 +++
 tests/unit/test-blockjob-txn.c   | 10 ++++++
 tests/unit/test-blockjob.c       | 57 +++++++++++++++++++++-----------
 4 files changed, 72 insertions(+), 39 deletions(-)

diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 2d3c17e566..535c39b5a8 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -941,61 +941,63 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
     g_assert_true(tjob->running);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin_unlocked(drain_type, drain_bs);
 
     if (drain_type == BDRV_DRAIN_ALL) {
         /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 2);
     } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 1);
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */
+    g_assert_true(job_get_paused(&job->job));
+    g_assert_false(job_get_busy(&job->job)); /* The job is paused */
 
     do_drain_end_unlocked(drain_type, drain_bs);
 
     if (use_iothread) {
         /* paused is reset in the I/O thread, wait for it */
-        while (job->job.paused) {
+        while (job_get_paused(&job->job)) {
            aio_poll(qemu_get_aio_context(), false);
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     do_drain_begin_unlocked(drain_type, target);
 
     if (drain_type == BDRV_DRAIN_ALL) {
         /* bdrv_drain_all() drains both src and target */
-        g_assert_cmpint(job->job.pause_count, ==, 2);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 2);
     } else {
-        g_assert_cmpint(job->job.pause_count, ==, 1);
+        g_assert_cmpint(job_get_pause_count(&job->job), ==, 1);
     }
-    g_assert_true(job->job.paused);
-    g_assert_false(job->job.busy); /* The job is paused */
+    g_assert_true(job_get_paused(&job->job));
+    g_assert_false(job_get_busy(&job->job)); /* The job is paused */
 
     do_drain_end_unlocked(drain_type, target);
 
     if (use_iothread) {
         /* paused is reset in the I/O thread, wait for it */
-        while (job->job.paused) {
+        while (job_get_paused(&job->job)) {
             aio_poll(qemu_get_aio_context(), false);
         }
     }
 
-    g_assert_cmpint(job->job.pause_count, ==, 0);
-    g_assert_false(job->job.paused);
-    g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
+    g_assert_cmpint(job_get_pause_count(&job->job), ==, 0);
+    g_assert_false(job_get_paused(&job->job));
+    g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
     aio_context_acquire(ctx);
+    job_lock();
     ret = job_complete_sync(&job->job, &error_abort);
+    job_unlock();
     g_assert_cmpint(ret, ==, (result == TEST_JOB_SUCCESS ? 0 : -EIO));
 
     if (use_iothread) {
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index aea660aeed..f39cb8b7ef 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -456,7 +456,9 @@ static void test_attach_blockjob(void)
     }
 
     aio_context_acquire(ctx);
+    job_lock();
     job_complete_sync(&tjob->common.job, &error_abort);
+    job_unlock();
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);
 
@@ -630,7 +632,9 @@ static void test_propagate_mirror(void)
                  BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
                  false, "filter_node", MIRROR_COPY_MODE_BACKGROUND,
                  &error_abort);
+    job_lock();
     job = job_get("job0");
+    job_unlock();
     filter = bdrv_find_node("filter_node");
 
     /* Change the AioContext of src */
diff --git a/tests/unit/test-blockjob-txn.c b/tests/unit/test-blockjob-txn.c
index 8bd13b9949..1ae3a9d443 100644
--- a/tests/unit/test-blockjob-txn.c
+++ b/tests/unit/test-blockjob-txn.c
@@ -124,16 +124,20 @@ static void test_single_job(int expected)
     job = test_block_job_start(1, true, expected, &result, txn);
     job_start(&job->job);
 
+    job_lock();
     if (expected == -ECANCELED) {
         job_cancel(&job->job, false);
     }
+    job_unlock();
 
     while (result == -EINPROGRESS) {
         aio_poll(qemu_get_aio_context(), true);
     }
     g_assert_cmpint(result, ==, expected);
 
+    job_lock();
     job_txn_unref(txn);
+    job_unlock();
 }
 
 static void test_single_job_success(void)
@@ -168,6 +172,7 @@ static void test_pair_jobs(int expected1, int expected2)
     /* Release our reference now to trigger as many nice
      * use-after-free bugs as possible.
      */
+    job_lock();
     job_txn_unref(txn);
 
     if (expected1 == -ECANCELED) {
@@ -176,6 +181,7 @@ static void test_pair_jobs(int expected1, int expected2)
     if (expected2 == -ECANCELED) {
         job_cancel(&job2->job, false);
     }
+    job_unlock();
 
     while (result1 == -EINPROGRESS || result2 == -EINPROGRESS) {
         aio_poll(qemu_get_aio_context(), true);
@@ -227,7 +233,9 @@ static void test_pair_jobs_fail_cancel_race(void)
     job_start(&job1->job);
     job_start(&job2->job);
 
+    job_lock();
     job_cancel(&job1->job, false);
+    job_unlock();
 
     /* Now make job2 finish before the main loop kicks jobs.  This simulates
      * the race between a pending kick and another job completing.
@@ -242,7 +250,9 @@ static void test_pair_jobs_fail_cancel_race(void)
     g_assert_cmpint(result1, ==, -ECANCELED);
     g_assert_cmpint(result2, ==, -ECANCELED);
 
+    job_lock();
     job_txn_unref(txn);
+    job_unlock();
 }
 
 int main(int argc, char **argv)
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index 4c9e1bf1e5..b94e1510c9 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -211,8 +211,11 @@ static CancelJob *create_common(Job **pjob)
     bjob = mk_job(blk, "Steve", &test_cancel_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
+    job_lock();
     job_ref(job);
     assert(job->status == JOB_STATUS_CREATED);
+    job_unlock();
+
     s = container_of(bjob, CancelJob, common);
     s->blk = blk;
 
@@ -230,6 +233,7 @@ static void cancel_common(CancelJob *s)
     ctx = job->job.aio_context;
     aio_context_acquire(ctx);
 
+    job_lock();
     job_cancel_sync(&job->job, true);
     if (sts != JOB_STATUS_CREATED && sts != JOB_STATUS_CONCLUDED) {
         Job *dummy = &job->job;
@@ -237,6 +241,7 @@ static void cancel_common(CancelJob *s)
     }
     assert(job->job.status == JOB_STATUS_NULL);
     job_unref(&job->job);
+    job_unlock();
     destroy_blk(blk);
 
     aio_context_release(ctx);
@@ -259,7 +264,7 @@ static void test_cancel_running(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     cancel_common(s);
 }
@@ -272,11 +277,13 @@ static void test_cancel_paused(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
+    job_lock();
     job_user_pause(job, &error_abort);
+    job_unlock();
     job_enter(job);
-    assert(job->status == JOB_STATUS_PAUSED);
+    assert(job_get_status(job) == JOB_STATUS_PAUSED);
 
     cancel_common(s);
 }
@@ -289,11 +296,11 @@ static void test_cancel_ready(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
     cancel_common(s);
 }
@@ -306,15 +313,17 @@ static void test_cancel_standby(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
     job_user_pause(job, &error_abort);
+    job_unlock();
     job_enter(job);
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert(job_get_status(job) == JOB_STATUS_STANDBY);
 
     cancel_common(s);
 }
@@ -327,20 +336,22 @@ static void test_cancel_pending(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
     job_complete(job, &error_abort);
+    job_unlock();
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert(job_get_status(job) == JOB_STATUS_PENDING);
 
     cancel_common(s);
 }
@@ -353,25 +364,29 @@ static void test_cancel_concluded(void)
     s = create_common(&job);
 
     job_start(job);
-    assert(job->status == JOB_STATUS_RUNNING);
+    assert(job_get_status(job) == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
     job_enter(job);
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
 
+    job_lock();
     job_complete(job, &error_abort);
+    job_unlock();
     job_enter(job);
     while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->status == JOB_STATUS_READY);
+    assert(job_get_status(job) == JOB_STATUS_READY);
     aio_poll(qemu_get_aio_context(), true);
-    assert(job->status == JOB_STATUS_PENDING);
+    assert(job_get_status(job) == JOB_STATUS_PENDING);
 
     aio_context_acquire(job->aio_context);
+    job_lock();
     job_finalize(job, &error_abort);
+    job_unlock();
     aio_context_release(job->aio_context);
-    assert(job->status == JOB_STATUS_CONCLUDED);
+    assert(job_get_status(job) == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
 }
@@ -459,22 +474,23 @@ static void test_complete_in_standby(void)
     bjob = mk_job(blk, "job", &test_yielding_driver, true,
                   JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
     job = &bjob->job;
-    assert(job->status == JOB_STATUS_CREATED);
+    assert(job_get_status(job) == JOB_STATUS_CREATED);
 
     /* Wait for the job to become READY */
     job_start(job);
     aio_context_acquire(ctx);
-    AIO_WAIT_WHILE(ctx, job->status != JOB_STATUS_READY);
+    AIO_WAIT_WHILE(ctx, job_get_status(job) != JOB_STATUS_READY);
     aio_context_release(ctx);
 
     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
-    assert(job->status == JOB_STATUS_STANDBY);
+    assert(job_get_status(job) == JOB_STATUS_STANDBY);
     /* Lock the IO thread to prevent the job from being run */
     aio_context_acquire(ctx);
     /* This will schedule the job to resume it */
     bdrv_drain_all_end();
 
+    job_lock();
     /* But the job cannot run, so it will remain on standby */
     assert(job->status == JOB_STATUS_STANDBY);
 
@@ -489,6 +505,7 @@ static void test_complete_in_standby(void)
     assert(job->status == JOB_STATUS_CONCLUDED);
 
     job_dismiss(&job, &error_abort);
+    job_unlock();
 
     destroy_blk(blk);
     aio_context_release(ctx);
-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH v2 13/14] jobs: add job lock in find_* functions
Date: Thu, 4 Nov 2021 10:53:33 -0400
Message-Id: <20211104145334.1346363-14-eesposit@redhat.com>

Both blockdev.c and job-qmp.c have TOC/TOU conditions, because
they first search for the job and then perform an action on it.
Therefore, we need to do the search + action under the same
job mutex critical section.

Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
 blockdev.c | 9 +++++++++
 job-qmp.c  | 8 ++++++++
 2 files changed, 17 insertions(+)

diff --git a/blockdev.c b/blockdev.c
index c5a835d9ed..0bd79757fc 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3327,12 +3327,14 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
     assert(id != NULL);
 
     *aio_context = NULL;
+    job_lock();
 
     job = block_job_get(id);
 
     if (!job) {
         error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
                   "Block job '%s' not found", id);
+        job_unlock();
         return NULL;
     }
 
@@ -3353,6 +3355,7 @@ void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 
     block_job_set_speed(job, speed, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_cancel(const char *device,
@@ -3379,6 +3382,7 @@ void qmp_block_job_cancel(const char *device,
     job_user_cancel(&job->job, force, errp);
 out:
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_pause(const char *device, Error **errp)
@@ -3393,6 +3397,7 @@ void qmp_block_job_pause(const char *device, Error **errp)
     trace_qmp_block_job_pause(job);
     job_user_pause(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
@@ -3407,6 +3412,7 @@ void qmp_block_job_resume(const char *device, Error **errp)
     trace_qmp_block_job_resume(job);
     job_user_resume(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
@@ -3421,6 +3427,7 @@ void qmp_block_job_complete(const char *device, Error **errp)
     trace_qmp_block_job_complete(job);
     job_complete(&job->job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
@@ -3444,6 +3451,7 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     aio_context = blk_get_aio_context(job->blk);
     job_unref(&job->job);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
@@ -3460,6 +3468,7 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     job = &bjob->job;
     job_dismiss(&job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_change_backing_file(const char *device,
diff --git a/job-qmp.c b/job-qmp.c
index a355dc2954..8f07c51db8 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -35,10 +35,12 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
     Job *job;
 
     *aio_context = NULL;
+    job_lock();
 
     job = job_get(id);
     if (!job) {
         error_setg(errp, "Job not found");
+        job_unlock();
         return NULL;
     }
 
@@ -60,6 +62,7 @@ void qmp_job_cancel(const char *id, Error **errp)
     trace_qmp_job_cancel(job);
     job_user_cancel(job, true, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_pause(const char *id, Error **errp)
@@ -74,6 +77,7 @@ void qmp_job_pause(const char *id, Error **errp)
     trace_qmp_job_pause(job);
     job_user_pause(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_resume(const char *id, Error **errp)
@@ -88,6 +92,7 @@ void qmp_job_resume(const char *id, Error **errp)
     trace_qmp_job_resume(job);
     job_user_resume(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_complete(const char *id, Error **errp)
@@ -102,6 +107,7 @@ void qmp_job_complete(const char *id, Error **errp)
     trace_qmp_job_complete(job);
     job_complete(job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
@@ -125,6 +131,7 @@ void qmp_job_finalize(const char *id, Error **errp)
     aio_context = job->aio_context;
     job_unref(job);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
@@ -139,6 +146,7 @@ void qmp_job_dismiss(const char *id, Error **errp)
     trace_qmp_job_dismiss(job);
     job_dismiss(&job, errp);
     aio_context_release(aio_context);
+    job_unlock();
 }
 
 static JobInfo *job_query_single(Job *job, Error **errp)
-- 
2.27.0

From nobody Sun May 19 14:40:13 2024
14:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1636037659; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ov/mehPNL/8B9RqNk/asIlDyOsWU3ie8LdI2BcgQhBk=; b=KV1QPb8t+EFrcWPGQ2LOVopFHjhTAGmvtRYkDXezG3nwO0JRy7DZLEv0biq5hrtxsf3ads rw+F96Ut0Ip6mzJl/EU/sI5ItJqiHX2/z8rC7okz8dMwTLPvzf9YypYgYgR84iZEUXTrgS yR4upBJeLIzAgiSXxqo+unSlkPXLlhE= X-MC-Unique: t92zUXJjOL-3F05zt3hWsA-1 From: Emanuele Giuseppe Esposito To: qemu-block@nongnu.org Subject: [RFC PATCH v2 14/14] job.c: enable job lock/unlock and remove Aiocontext locks Date: Thu, 4 Nov 2021 10:53:34 -0400 Message-Id: <20211104145334.1346363-15-eesposit@redhat.com> In-Reply-To: <20211104145334.1346363-1-eesposit@redhat.com> References: <20211104145334.1346363-1-eesposit@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eesposit@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=eesposit@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -33 X-Spam_score: -3.4 X-Spam_bar: --- X-Spam_report: (-3.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.648, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H2=-0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 
Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , Vladimir Sementsov-Ogievskiy , Wen Congyang , Xie Changlong , Emanuele Giuseppe Esposito , Markus Armbruster , qemu-devel@nongnu.org, Hanna Reitz , Stefan Hajnoczi , Paolo Bonzini , John Snow Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1636038323341100001 Content-Type: text/plain; charset="utf-8" Change the job_{lock/unlock} and macros to use job_mutex. Now that they are not nop anymore, remove the aiocontext to avoid deadlocks. Therefore: - when possible, remove completely the aiocontext lock/unlock pair - if it is used by some other functions too, reduce the locking section as much as possible, leaving the job API outside. There is only one JobDriver callback, ->free() that assumes that the aiocontext lock is held (because it calls bdrv_unref), so for now keep that under aiocontext lock. Also remove _job_{lock/unlock}, as they are replaced by the public functions. Signed-off-by: Emanuele Giuseppe Esposito --- include/qemu/job.h | 7 --- block/replication.c | 2 + blockdev.c | 62 ++++-------------------- job-qmp.c | 38 ++++----------- job.c | 81 ++++---------------------------- tests/unit/test-bdrv-drain.c | 4 +- tests/unit/test-block-iothread.c | 2 +- tests/unit/test-blockjob.c | 13 ++--- 8 files changed, 34 insertions(+), 175 deletions(-) diff --git a/include/qemu/job.h b/include/qemu/job.h index 58b3af47e3..d417e1b601 100644 --- a/include/qemu/job.h +++ b/include/qemu/job.h @@ -608,8 +608,6 @@ void job_user_cancel(Job *job, bool force, Error **errp= ); * * Returns the return value from the job if the job actually completed * during the call, or -ECANCELED if it was canceled. - * - * Callers must hold the AioContext lock of job->aio_context. 
  */
 int job_cancel_sync(Job *job, bool force);
 
@@ -633,9 +631,6 @@ void job_cancel_sync_all(void);
  * function).
  *
  * Returns the return value from the job.
- *
- * Callers must hold the AioContext lock of job->aio_context.
- *
  * Called between job_lock and job_unlock.
  */
 int job_complete_sync(Job *job, Error **errp);
@@ -667,8 +662,6 @@ void job_dismiss(Job **job, Error **errp);
  * Returns 0 if the job is successfully completed, -ECANCELED if the job was
  * cancelled before completing, and -errno in other error cases.
  *
- * Callers must hold the AioContext lock of job->aio_context.
- *
  * Called between job_lock and job_unlock.
  */
 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp);
diff --git a/block/replication.c b/block/replication.c
index 0f487cc215..6a60c1af1a 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -728,9 +728,11 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
      * disk, secondary disk in backup_job_completed().
      */
     if (s->backup_job) {
+        aio_context_release(aio_context);
         job_lock();
         job_cancel_sync(&s->backup_job->job, true);
         job_unlock();
+        aio_context_acquire(aio_context);
     }
 
     if (!failover) {
diff --git a/blockdev.c b/blockdev.c
index 0bd79757fc..dfc73ef1bf 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -154,12 +154,7 @@ void blockdev_mark_auto_del(BlockBackend *blk)
 
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
-            AioContext *aio_context = job->job.aio_context;
-            aio_context_acquire(aio_context);
-
             job_cancel(&job->job, false);
-
-            aio_context_release(aio_context);
         }
     }
 
@@ -1843,16 +1838,9 @@ static void drive_backup_abort(BlkActionState *common)
     DriveBackupState *state = DO_UPCAST(DriveBackupState, common, common);
 
     if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
         job_lock();
         job_cancel_sync(&state->job->job, true);
         job_unlock();
-
-        aio_context_release(aio_context);
     }
 }
 
@@ -1946,16 +1934,9 @@ static void blockdev_backup_abort(BlkActionState *common)
     BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common);
 
     if (state->job) {
-        AioContext *aio_context;
-
-        aio_context = bdrv_get_aio_context(state->bs);
-        aio_context_acquire(aio_context);
-
         job_lock();
         job_cancel_sync(&state->job->job, true);
         job_unlock();
-
-        aio_context_release(aio_context);
     }
 }
 
@@ -3318,15 +3299,13 @@ out:
     aio_context_release(aio_context);
 }
 
-/* Get a block job using its ID and acquire its AioContext */
-static BlockJob *find_block_job(const char *id, AioContext **aio_context,
-                                Error **errp)
+/* Get a block job using its ID. Returns with job_lock held on success */
+static BlockJob *find_block_job(const char *id, Error **errp)
 {
     BlockJob *job;
 
     assert(id != NULL);
 
-    *aio_context = NULL;
     job_lock();
 
     job = block_job_get(id);
@@ -3338,31 +3317,25 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
         return NULL;
     }
 
-    *aio_context = blk_get_aio_context(job->blk);
-    aio_context_acquire(*aio_context);
-
     return job;
 }
 
 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
     }
 
     block_job_set_speed(job, speed, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_cancel(const char *device,
                           bool has_force, bool force, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3381,14 +3354,12 @@ void qmp_block_job_cancel(const char *device,
     trace_qmp_block_job_cancel(job);
     job_user_cancel(&job->job, force, errp);
 out:
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_pause(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3396,14 +3367,12 @@ void qmp_block_job_pause(const char *device, Error **errp)
 
     trace_qmp_block_job_pause(job);
     job_user_pause(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3411,14 +3380,12 @@ void qmp_block_job_resume(const char *device, Error **errp)
 
     trace_qmp_block_job_resume(job);
     job_user_resume(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3426,14 +3393,12 @@ void qmp_block_job_complete(const char *device, Error **errp)
 
     trace_qmp_block_job_complete(job);
     job_complete(&job->job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(id, &aio_context, errp);
+    BlockJob *job = find_block_job(id, errp);
 
     if (!job) {
         return;
@@ -3443,21 +3408,13 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     job_ref(&job->job);
     job_finalize(&job->job, errp);
 
-    /*
-     * Job's context might have changed via job_finalize (and job_txn_apply
-     * automatically acquires the new one), so make sure we release the correct
-     * one.
-     */
-    aio_context = blk_get_aio_context(job->blk);
     job_unref(&job->job);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *bjob = find_block_job(id, &aio_context, errp);
+    BlockJob *bjob = find_block_job(id, errp);
     Job *job;
 
     if (!bjob) {
@@ -3467,7 +3424,6 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
     job_dismiss(&job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
diff --git a/job-qmp.c b/job-qmp.c
index 8f07c51db8..d592780953 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -29,12 +29,11 @@
 #include "qapi/error.h"
 #include "trace/trace-root.h"
 
-/* Get a job using its ID and acquire its AioContext */
-static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
+/* Get a job using its ID. Returns with job_lock held on success. */
+static Job *find_job(const char *id, Error **errp)
 {
     Job *job;
 
-    *aio_context = NULL;
     job_lock();
 
     job = job_get(id);
@@ -44,16 +43,12 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
         return NULL;
     }
 
-    *aio_context = job->aio_context;
-    aio_context_acquire(*aio_context);
-
     return job;
 }
 
 void qmp_job_cancel(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -61,14 +56,12 @@ void qmp_job_cancel(const char *id, Error **errp)
 
     trace_qmp_job_cancel(job);
     job_user_cancel(job, true, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_job_pause(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -76,14 +69,12 @@ void qmp_job_pause(const char *id, Error **errp)
 
     trace_qmp_job_pause(job);
     job_user_pause(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_job_resume(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -91,14 +82,12 @@ void qmp_job_resume(const char *id, Error **errp)
 
     trace_qmp_job_resume(job);
     job_user_resume(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_job_complete(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -106,14 +95,12 @@ void qmp_job_complete(const char *id, Error **errp)
 
     trace_qmp_job_complete(job);
     job_complete(job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -123,21 +110,13 @@ void qmp_job_finalize(const char *id, Error **errp)
     job_ref(job);
     job_finalize(job, errp);
 
-    /*
-     * Job's context might have changed via job_finalize (and job_txn_apply
-     * automatically acquires the new one), so make sure we release the correct
-     * one.
-     */
-    aio_context = job->aio_context;
     job_unref(job);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job = find_job(id, errp);
 
     if (!job) {
         return;
@@ -145,7 +124,6 @@ void qmp_job_dismiss(const char *id, Error **errp)
 
     trace_qmp_job_dismiss(job);
     job_dismiss(&job, errp);
-    aio_context_release(aio_context);
     job_unlock();
 }
 
diff --git a/job.c b/job.c
index 5efbf38a72..f225e770f5 100644
--- a/job.c
+++ b/job.c
@@ -98,26 +98,16 @@ struct JobTxn {
     int refcnt;
 };
 
-#define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */
+#define JOB_LOCK_GUARD() QEMU_LOCK_GUARD(&job_mutex)
 
-#define WITH_JOB_LOCK_GUARD() /* WITH_QEMU_LOCK_GUARD(&job_mutex) */
+#define WITH_JOB_LOCK_GUARD() WITH_QEMU_LOCK_GUARD(&job_mutex)
 
 void job_lock(void)
-{
-    /* nop */
-}
-
-void job_unlock(void)
-{
-    /* nop */
-}
-
-static void _job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void _job_unlock(void)
+void job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -185,7 +175,6 @@ static int job_txn_apply(Job *job, int fn(Job *))
      * break AIO_WAIT_WHILE from within fn.
      */
     job_ref(job);
-    aio_context_release(job->aio_context);
 
     QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
         rc = fn(other_job);
@@ -194,11 +183,6 @@ static int job_txn_apply(Job *job, int fn(Job *))
         }
     }
 
-    /*
-     * Note that job->aio_context might have been changed by calling fn, so we
-     * can't use a local variable to cache it.
-     */
-    aio_context_acquire(job->aio_context);
     job_unref(job);
     return rc;
 }
@@ -494,7 +478,10 @@ void job_unref(Job *job)
 
         if (job->driver->free) {
             job_unlock();
+            /* FIXME: aiocontext lock is required because cb calls blk_unref */
+            aio_context_acquire(job->aio_context);
             job->driver->free(job);
+            aio_context_release(job->aio_context);
             job_lock();
         }
 
@@ -570,21 +557,17 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
         return;
     }
 
-    _job_lock();
     if (job->busy) {
-        _job_unlock();
         return;
     }
 
     if (fn && !fn(job)) {
-        _job_unlock();
         return;
     }
 
     assert(!job->deferred_to_main_loop);
     timer_del(&job->sleep_timer);
     job->busy = true;
-    _job_unlock();
     job_unlock();
     aio_co_enter(job->aio_context, job->co);
     job_lock();
@@ -608,13 +591,11 @@ void job_enter(Job *job)
  */
 static void coroutine_fn job_do_yield(Job *job, uint64_t ns)
 {
-    _job_lock();
     if (ns != -1) {
         timer_mod(&job->sleep_timer, ns);
     }
     job->busy = false;
     job_event_idle(job);
-    _job_unlock();
     job_unlock();
     qemu_coroutine_yield();
     job_lock();
@@ -962,7 +943,6 @@ static void job_cancel_async(Job *job, bool force)
 /* Called with job_mutex held. */
 static void job_completed_txn_abort(Job *job)
 {
-    AioContext *ctx;
     JobTxn *txn = job->txn;
     Job *other_job;
 
@@ -975,54 +955,28 @@ static void job_completed_txn_abort(Job *job)
     txn->aborting = true;
     job_txn_ref(txn);
 
-    /*
-     * We can only hold the single job's AioContext lock while calling
-     * job_finalize_single() because the finalization callbacks can involve
-     * calls of AIO_WAIT_WHILE(), which could deadlock otherwise.
-     * Note that the job's AioContext may change when it is finalized.
-     */
-    job_ref(job);
-    aio_context_release(job->aio_context);
-
     /* Other jobs are effectively cancelled by us, set the status for
      * them; this job, however, may or may not be cancelled, depending
      * on the caller, so leave it. */
     QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
         if (other_job != job) {
-            ctx = other_job->aio_context;
-            aio_context_acquire(ctx);
             /*
              * This is a transaction: If one job failed, no result will matter.
              * Therefore, pass force=true to terminate all other jobs as quickly
              * as possible.
              */
             job_cancel_async(other_job, true);
-            aio_context_release(ctx);
         }
     }
     while (!QLIST_EMPTY(&txn->jobs)) {
         other_job = QLIST_FIRST(&txn->jobs);
-        /*
-         * The job's AioContext may change, so store it in @ctx so we
-         * release the same context that we have acquired before.
-         */
-        ctx = other_job->aio_context;
-        aio_context_acquire(ctx);
         if (!job_is_completed(other_job)) {
             assert(job_cancel_requested_locked(other_job));
             job_finish_sync(other_job, NULL, NULL);
         }
         job_finalize_single(other_job);
-        aio_context_release(ctx);
     }
 
-    /*
-     * Use job_ref()/job_unref() so we can read the AioContext here
-     * even if the job went away during job_finalize_single().
-     */
-    aio_context_acquire(job->aio_context);
-    job_unref(job);
-
     job_txn_unref(txn);
 }
 
@@ -1142,11 +1096,7 @@ static void job_completed(Job *job)
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
-    AioContext *ctx;
-
     JOB_LOCK_GUARD();
-    job_ref(job);
-    aio_context_acquire(job->aio_context);
 
     /* This is a lie, we're not quiescent, but still doing the completion
      * callbacks. However, completion callbacks tend to involve operations that
@@ -1156,16 +1106,6 @@ static void job_exit(void *opaque)
     job_event_idle(job);
 
     job_completed(job);
-
-    /*
-     * Note that calling job_completed can move the job to a different
-     * aio_context, so we cannot cache from above. job_txn_apply takes care of
-     * acquiring the new lock, and we ref/unref to avoid job_completed freeing
-     * the job underneath us.
-     */
-    ctx = job->aio_context;
-    job_unref(job);
-    aio_context_release(ctx);
 }
 
 /**
@@ -1279,14 +1219,10 @@ int job_cancel_sync(Job *job, bool force)
 void job_cancel_sync_all(void)
 {
     Job *job;
-    AioContext *aio_context;
 
     JOB_LOCK_GUARD();
     while ((job = job_next(NULL))) {
-        aio_context = job->aio_context;
-        aio_context_acquire(aio_context);
         job_cancel_sync(job, true);
-        aio_context_release(aio_context);
     }
 }
 
@@ -1331,8 +1267,9 @@ int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp)
     }
 
     job_unlock();
-    AIO_WAIT_WHILE(job->aio_context,
-                   (job_enter(job), !job_is_completed_unlocked(job)));
+    AIO_WAIT_WHILE_UNLOCKED(job->aio_context,
+                            (job_enter(job),
+                             !job_is_completed_unlocked(job)));
     job_lock();
 
     ret = (job_is_cancelled_locked(job) && job->ret == 0) ?
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 535c39b5a8..83485a33aa 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -928,9 +928,9 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
         tjob->prepare_ret = -EIO;
         break;
     }
+    aio_context_release(ctx);
 
     job_start(&job->job);
-    aio_context_release(ctx);
 
     if (use_iothread) {
         /* job_co_entry() is run in the I/O thread, wait for the actual job
@@ -994,12 +994,12 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
     g_assert_false(job_get_paused(&job->job));
     g_assert_true(job_get_busy(&job->job)); /* We're in qemu_co_sleep_ns() */
 
-    aio_context_acquire(ctx);
     job_lock();
     ret = job_complete_sync(&job->job, &error_abort);
     job_unlock();
     g_assert_cmpint(ret, ==, (result == TEST_JOB_SUCCESS ? 0 : -EIO));
 
+    aio_context_acquire(ctx);
     if (use_iothread) {
         blk_set_aio_context(blk_src, qemu_get_aio_context(), &error_abort);
         assert(blk_get_aio_context(blk_target) == qemu_get_aio_context());
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index f39cb8b7ef..cf197347b7 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -455,10 +455,10 @@ static void test_attach_blockjob(void)
         aio_poll(qemu_get_aio_context(), false);
     }
 
-    aio_context_acquire(ctx);
     job_lock();
     job_complete_sync(&tjob->common.job, &error_abort);
     job_unlock();
+    aio_context_acquire(ctx);
     blk_set_aio_context(blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(ctx);
 
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index b94e1510c9..11cff70a6b 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -228,10 +228,6 @@ static void cancel_common(CancelJob *s)
     BlockJob *job = &s->common;
     BlockBackend *blk = s->blk;
     JobStatus sts = job->job.status;
-    AioContext *ctx;
-
-    ctx = job->job.aio_context;
-    aio_context_acquire(ctx);
 
     job_lock();
     job_cancel_sync(&job->job, true);
@@ -244,7 +240,6 @@ static void cancel_common(CancelJob *s)
     job_unlock();
     destroy_blk(blk);
 
-    aio_context_release(ctx);
 }
 
 static void test_cancel_created(void)
@@ -381,11 +376,9 @@ static void test_cancel_concluded(void)
     aio_poll(qemu_get_aio_context(), true);
     assert(job_get_status(job) == JOB_STATUS_PENDING);
 
-    aio_context_acquire(job->aio_context);
     job_lock();
     job_finalize(job, &error_abort);
     job_unlock();
-    aio_context_release(job->aio_context);
     assert(job_get_status(job) == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
@@ -478,9 +471,7 @@ static void test_complete_in_standby(void)
 
     /* Wait for the job to become READY */
     job_start(job);
-    aio_context_acquire(ctx);
-    AIO_WAIT_WHILE(ctx, job_get_status(job) != JOB_STATUS_READY);
-    aio_context_release(ctx);
+    AIO_WAIT_WHILE_UNLOCKED(ctx, job_get_status(job) != JOB_STATUS_READY);
 
     /* Begin the drained section, pausing the job */
     bdrv_drain_all_begin();
@@ -498,6 +489,7 @@ static void test_complete_in_standby(void)
     job_complete(job, &error_abort);
 
     /* The test is done now, clean up. */
+    aio_context_release(ctx);
     job_finish_sync(job, NULL, &error_abort);
     assert(job->status == JOB_STATUS_PENDING);
 
@@ -507,6 +499,7 @@ static void test_complete_in_standby(void)
     job_dismiss(&job, &error_abort);
     job_unlock();
 
+    aio_context_acquire(ctx);
     destroy_blk(blk);
     aio_context_release(ctx);
     iothread_join(iothread);
-- 
2.27.0