From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH 1/6] job: use getter/setters instead of accessing the Job fields directly
Date: Wed, 7 Jul 2021 18:58:08 +0200
Message-Id: <20210707165813.55361-2-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow

Using getters/setters gives us stricter control over the struct Job
fields. The struct remains public, because it is also used as a base
class for BlockJobs and others, but all direct accesses to the fields
we want to protect are replaced with getters/setters.

This is in preparation for the locking patches.

No functional change intended.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/job.h  | 33 +++++++++++++++++++
 block.c             |  2 +-
 block/commit.c      |  4 +--
 block/mirror.c      | 17 +++++-----
 block/replication.c |  3 +-
 blockdev.c          |  2 +-
 blockjob.c          | 78 ++++++++++++++++++++++++---------------------
 job-qmp.c           | 16 ++++++----
 job.c               | 52 +++++++++++++++++++++++++++++-
 qemu-img.c          |  2 +-
 10 files changed, 151 insertions(+), 58 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 41162ed494..72c7d0f69d 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -303,6 +303,39 @@ void job_txn_unref(JobTxn *txn);
  */
 void job_txn_add_job(JobTxn *txn, Job *job);
 
+/** Returns the @ret field of a given Job. */
+int job_get_ret(Job *job);
+
+/** Returns the AioContext of a given Job. */
+AioContext *job_get_aiocontext(Job *job);
+
+/** Sets the AioContext of a given Job. */
+void job_set_aiocontext(Job *job, AioContext *aio);
+
+/** Returns if a given Job is busy. */
+bool job_is_busy(Job *job);
+
+/** Returns the Error of a given Job. */
+Error *job_get_err(Job *job);
+
+/** Returns if a Job has a pause_count > 0. */
+bool job_should_pause(Job *job);
+
+/** Sets the user_paused flag of a given Job to true. */
+void job_set_user_paused(Job *job);
+
+/** Sets the cancelled flag of a given Job. */
+void job_set_cancelled(Job *job, bool cancel);
+
+/** Returns if a given Job is paused. */
+bool job_is_paused(Job *job);
+
+/** Returns if a given Job is force cancelled. */
+bool job_is_force_cancel(Job *job);
+
+/** Returns the status of a given Job. */
+JobStatus job_get_status(Job *job);
+
 /**
  * Create a new long-running job and return it.
  *
diff --git a/block.c b/block.c
index acd35cb0cb..1628db2550 100644
--- a/block.c
+++ b/block.c
@@ -5721,7 +5721,7 @@ XDbgBlockGraph *bdrv_get_xdbg_block_graph(Error **errp)
         GSList *el;
 
         xdbg_graph_add_node(gr, job, X_DBG_BLOCK_GRAPH_NODE_TYPE_BLOCK_JOB,
-                           job->job.id);
+                            job->job.id);
         for (el = job->nodes; el; el = el->next) {
             xdbg_graph_add_edge(gr, job, (BdrvChild *)el->data);
         }
diff --git a/block/commit.c b/block/commit.c
index 42792b4556..087865953e 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -367,7 +367,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         goto fail;
     }
 
-    s->base = blk_new(s->common.job.aio_context,
+    s->base = blk_new(job_get_aiocontext(&s->common.job),
                       base_perms,
                       BLK_PERM_CONSISTENT_READ
                       | BLK_PERM_GRAPH_MOD
@@ -380,7 +380,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     s->base_bs = base;
 
     /* Required permissions are already taken with block_job_add_bdrv() */
-    s->top = blk_new(s->common.job.aio_context, 0, BLK_PERM_ALL);
+    s->top = blk_new(job_get_aiocontext(&s->common.job), 0, BLK_PERM_ALL);
     ret = blk_insert_bs(s->top, top, errp);
     if (ret < 0) {
         goto fail;
diff --git a/block/mirror.c b/block/mirror.c
index 019f6deaa5..49aaaafffa 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -636,7 +636,7 @@ static int mirror_exit_common(Job *job)
     BlockDriverState *target_bs;
     BlockDriverState *mirror_top_bs;
     Error *local_err = NULL;
-    bool abort = job->ret < 0;
+    bool abort = job_get_ret(job) < 0;
     int ret = 0;
 
     if (s->prepared) {
@@ -930,7 +930,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
         while (!job_is_cancelled(&s->common.job) && !s->should_complete) {
             job_yield(&s->common.job);
         }
-        s->common.job.cancelled = false;
+        job_set_cancelled(&s->common.job, false);
         goto immediate_exit;
     }
 
@@ -1065,7 +1065,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
                  * completion.
                  */
                 assert(QLIST_EMPTY(&bs->tracked_requests));
-                s->common.job.cancelled = false;
+                job_set_cancelled(&s->common.job, false);
                 need_drain = false;
                 break;
             }
@@ -1079,7 +1079,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
         trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
         job_sleep_ns(&s->common.job, delay_ns);
         if (job_is_cancelled(&s->common.job) &&
-            (!s->synced || s->common.job.force_cancel))
+            (!s->synced || job_is_force_cancel(&s->common.job)))
         {
             break;
         }
@@ -1092,8 +1092,8 @@ immediate_exit:
          * or it was cancelled prematurely so that we do not guarantee that
          * the target is a copy of the source.
          */
-        assert(ret < 0 || ((s->common.job.force_cancel || !s->synced) &&
-               job_is_cancelled(&s->common.job)));
+        assert(ret < 0 || ((job_is_force_cancel(&s->common.job) || !s->synced)
+               && job_is_cancelled(&s->common.job)));
         assert(need_drain);
         mirror_wait_for_all_io(s);
     }
@@ -1150,7 +1150,7 @@ static void mirror_complete(Job *job, Error **errp)
     s->should_complete = true;
 
     /* If the job is paused, it will be re-entered when it is resumed */
-    if (!job->paused) {
+    if (!job_is_paused(job)) {
         job_enter(job);
     }
 }
@@ -1171,7 +1171,8 @@ static bool mirror_drained_poll(BlockJob *job)
      * from one of our own drain sections, to avoid a deadlock waiting for
      * ourselves.
      */
-    if (!s->common.job.paused && !s->common.job.cancelled && !s->in_drain) {
+    if (!job_is_paused(&s->common.job) && !job_is_cancelled(&s->common.job) &&
+        !s->in_drain) {
         return true;
     }
 
diff --git a/block/replication.c b/block/replication.c
index 52163f2d1f..3923761a54 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -149,7 +149,8 @@ static void replication_close(BlockDriverState *bs)
     }
     if (s->stage == BLOCK_REPLICATION_FAILOVER) {
         commit_job = &s->commit_job->job;
-        assert(commit_job->aio_context == qemu_get_current_aio_context());
+        assert(job_get_aiocontext(commit_job) ==
+               qemu_get_current_aio_context());
         job_cancel_sync(commit_job);
     }
 
diff --git a/blockdev.c b/blockdev.c
index f08192deda..8e2c15370e 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -147,7 +147,7 @@ void blockdev_mark_auto_del(BlockBackend *blk)
 
     for (job = block_job_next(NULL); job; job = block_job_next(job)) {
         if (block_job_has_bdrv(job, blk_bs(blk))) {
-            AioContext *aio_context = job->job.aio_context;
+            AioContext *aio_context = job_get_aiocontext(&job->job);
             aio_context_acquire(aio_context);
 
             job_cancel(&job->job, false);
diff --git a/blockjob.c b/blockjob.c
index 4bad1408cb..7f49f03ec7 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -112,7 +112,7 @@ static bool child_job_drained_poll(BdrvChild *c)
     /* An inactive or completed job doesn't have any pending requests. Jobs
      * with !job->busy are either already paused or have a pause point after
      * being reentered, so no job driver code will run before they pause. */
-    if (!job->busy || job_is_completed(job)) {
+    if (!job_is_busy(job) || job_is_completed(job)) {
         return false;
     }
 
@@ -161,14 +161,14 @@ static void child_job_set_aio_ctx(BdrvChild *c, AioContext *ctx,
         bdrv_set_aio_context_ignore(sibling->bs, ctx, ignore);
     }
 
-    job->job.aio_context = ctx;
+    job_set_aiocontext(&job->job, ctx);
 }
 
 static AioContext *child_job_get_parent_aio_context(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
 
-    return job->job.aio_context;
+    return job_get_aiocontext(&job->job);
 }
 
 static const BdrvChildClass child_job = {
@@ -222,18 +222,19 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
 {
     BdrvChild *c;
     bool need_context_ops;
+    AioContext *ctx = job_get_aiocontext(&job->job);
 
     bdrv_ref(bs);
 
-    need_context_ops = bdrv_get_aio_context(bs) != job->job.aio_context;
+    need_context_ops = bdrv_get_aio_context(bs) != ctx;
 
-    if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
-        aio_context_release(job->job.aio_context);
+    if (need_context_ops && ctx != qemu_get_aio_context()) {
+        aio_context_release(ctx);
     }
     c = bdrv_root_attach_child(bs, name, &child_job, 0, perm, shared_perm,
                                job, errp);
-    if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
-        aio_context_acquire(job->job.aio_context);
+    if (need_context_ops && ctx != qemu_get_aio_context()) {
+        aio_context_acquire(ctx);
     }
     if (c == NULL) {
         return -EPERM;
@@ -303,37 +304,41 @@ int64_t block_job_ratelimit_get_delay(BlockJob *job, uint64_t n)
     return ratelimit_calculate_delay(&job->limit, n);
 }
 
-BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
+BlockJobInfo *block_job_query(BlockJob *blkjob, Error **errp)
 {
     BlockJobInfo *info;
     uint64_t progress_current, progress_total;
+    int job_ret;
+    Job *job = &blkjob->job;
 
-    if (block_job_is_internal(job)) {
+    if (block_job_is_internal(blkjob)) {
         error_setg(errp, "Cannot query QEMU internal jobs");
        return NULL;
    }
 
-    progress_get_snapshot(&job->job.progress, &progress_current,
+    progress_get_snapshot(&job->progress, &progress_current,
                           &progress_total);
 
     info = g_new0(BlockJobInfo, 1);
-    info->type      = g_strdup(job_type_str(&job->job));
-    info->device    = g_strdup(job->job.id);
-    info->busy      = qatomic_read(&job->job.busy);
-    info->paused    = job->job.pause_count > 0;
+    info->type      = g_strdup(job_type_str(job));
+    info->device    = g_strdup(job->id);
+    info->busy      = job_is_busy(job);
+    info->paused    = job_should_pause(job);
     info->offset    = progress_current;
     info->len       = progress_total;
-    info->speed     = job->speed;
-    info->io_status = job->iostatus;
-    info->ready     = job_is_ready(&job->job),
-    info->status    = job->job.status;
-    info->auto_finalize = job->job.auto_finalize;
-    info->auto_dismiss  = job->job.auto_dismiss;
-    if (job->job.ret) {
+    info->speed     = blkjob->speed;
+    info->io_status = blkjob->iostatus;
+    info->ready     = job_is_ready(job);
+    info->status    = job_get_status(job);
+    info->auto_finalize = job->auto_finalize;
+    info->auto_dismiss  = job->auto_dismiss;
+    job_ret = job_get_ret(job);
+    if (job_ret) {
+        Error *job_err = job_get_err(job);
         info->has_error = true;
-        info->error = job->job.err ?
-                        g_strdup(error_get_pretty(job->job.err)) :
-                        g_strdup(strerror(-job->job.ret));
+        info->error = job_err ?
+                        g_strdup(error_get_pretty(job_err)) :
+                        g_strdup(strerror(-job_ret));
     }
     return info;
 }
@@ -367,26 +372,27 @@ static void block_job_event_cancelled(Notifier *n, void *opaque)
 
 static void block_job_event_completed(Notifier *n, void *opaque)
 {
-    BlockJob *job = opaque;
+    BlockJob *blkjob = opaque;
     const char *msg = NULL;
     uint64_t progress_current, progress_total;
+    Job *job = &blkjob->job;
 
-    if (block_job_is_internal(job)) {
+    if (block_job_is_internal(blkjob)) {
         return;
     }
 
-    if (job->job.ret < 0) {
-        msg = error_get_pretty(job->job.err);
+    if (job_get_ret(job) < 0) {
+        msg = error_get_pretty(job_get_err(job));
     }
 
-    progress_get_snapshot(&job->job.progress, &progress_current,
+    progress_get_snapshot(&job->progress, &progress_current,
                           &progress_total);
 
-    qapi_event_send_block_job_completed(job_type(&job->job),
-                                        job->job.id,
+    qapi_event_send_block_job_completed(job_type(job),
+                                        job->id,
                                         progress_total,
                                         progress_current,
-                                        job->speed,
+                                        blkjob->speed,
                                         !!msg,
                                         msg);
 }
@@ -498,7 +504,7 @@ void block_job_iostatus_reset(BlockJob *job)
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
         return;
     }
-    assert(job->job.user_paused && job->job.pause_count > 0);
+    assert(job_user_paused(&job->job) && job_should_pause(&job->job));
     job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
 }
 
@@ -538,10 +544,10 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
                         action);
     }
     if (action == BLOCK_ERROR_ACTION_STOP) {
-        if (!job->job.user_paused) {
+        if (!job_user_paused(&job->job)) {
             job_pause(&job->job);
             /* make the pause user visible, which will be resumed from QMP.
              */
-            job->job.user_paused = true;
+            job_set_user_paused(&job->job);
         }
         block_job_iostatus_set_err(job, error);
     }
diff --git a/job-qmp.c b/job-qmp.c
index 829a28aa70..12238a1643 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -42,7 +42,7 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
         return NULL;
     }
 
-    *aio_context = job->aio_context;
+    *aio_context = job_get_aiocontext(job);
     aio_context_acquire(*aio_context);
 
     return job;
@@ -122,7 +122,7 @@ void qmp_job_finalize(const char *id, Error **errp)
      * automatically acquires the new one), so make sure we release the correct
      * one.
      */
-    aio_context = job->aio_context;
+    aio_context = job_get_aiocontext(job);
     job_unref(job);
     aio_context_release(aio_context);
 }
@@ -146,21 +146,23 @@ static JobInfo *job_query_single(Job *job, Error **errp)
     JobInfo *info;
     uint64_t progress_current;
     uint64_t progress_total;
+    Error *job_err;
 
     assert(!job_is_internal(job));
     progress_get_snapshot(&job->progress, &progress_current,
                           &progress_total);
+    job_err = job_get_err(job);
 
     info = g_new(JobInfo, 1);
     *info = (JobInfo) {
         .id                 = g_strdup(job->id),
         .type               = job_type(job),
-        .status             = job->status,
+        .status             = job_get_status(job),
         .current_progress   = progress_current,
         .total_progress     = progress_total,
-        .has_error          = !!job->err,
-        .error              = job->err ? \
-                              g_strdup(error_get_pretty(job->err)) : NULL,
+        .has_error          = !!job_err,
+        .error              = job_err ? \
+                              g_strdup(error_get_pretty(job_err)) : NULL,
     };
 
     return info;
@@ -178,7 +180,7 @@ JobInfoList *qmp_query_jobs(Error **errp)
         if (job_is_internal(job)) {
             continue;
         }
-        aio_context = job->aio_context;
+        aio_context = job_get_aiocontext(job);
         aio_context_acquire(aio_context);
         value = job_query_single(job, errp);
         aio_context_release(aio_context);
diff --git a/job.c b/job.c
index e7a5d28854..872bbebb01 100644
--- a/job.c
+++ b/job.c
@@ -94,6 +94,46 @@ static void __attribute__((__constructor__)) job_init(void)
     qemu_mutex_init(&job_mutex);
 }
 
+AioContext *job_get_aiocontext(Job *job)
+{
+    return job->aio_context;
+}
+
+void job_set_aiocontext(Job *job, AioContext *aio)
+{
+    job->aio_context = aio;
+}
+
+bool job_is_busy(Job *job)
+{
+    return qatomic_read(&job->busy);
+}
+
+int job_get_ret(Job *job)
+{
+    return job->ret;
+}
+
+Error *job_get_err(Job *job)
+{
+    return job->err;
+}
+
+JobStatus job_get_status(Job *job)
+{
+    return job->status;
+}
+
+void job_set_cancelled(Job *job, bool cancel)
+{
+    job->cancelled = cancel;
+}
+
+bool job_is_force_cancel(Job *job)
+{
+    return job->force_cancel;
+}
+
 JobTxn *job_txn_new(void)
 {
     JobTxn *txn = g_new0(JobTxn, 1);
@@ -269,11 +309,16 @@ static bool job_started(Job *job)
     return job->co;
 }
 
-static bool job_should_pause(Job *job)
+bool job_should_pause(Job *job)
 {
     return job->pause_count > 0;
 }
 
+bool job_is_paused(Job *job)
+{
+    return job->paused;
+}
+
 Job *job_next(Job *job)
 {
     if (!job) {
@@ -591,6 +636,11 @@ bool job_user_paused(Job *job)
     return job->user_paused;
 }
 
+void job_set_user_paused(Job *job)
+{
+    job->user_paused = true;
+}
+
 void job_user_resume(Job *job, Error **errp)
 {
     assert(job);
diff --git a/qemu-img.c b/qemu-img.c
index 7956a89965..d16bd367d9 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -921,7 +921,7 @@ static void run_block_job(BlockJob *job, Error **errp)
         if (!job_is_completed(&job->job)) {
             ret = job_complete_sync(&job->job, errp);
         } else {
-            ret = job->job.ret;
+            ret = job_get_ret(&job->job);
        }
        job_unref(&job->job);
        aio_context_release(aio_context);
-- 
2.31.1
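
[Editor's note: a minimal sketch, not part of the patch, of how a caller
would use the accessors introduced above instead of touching the fields
directly; example_report_job() is hypothetical.]

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qemu/job.h"

    /* Hypothetical caller: inspect a job's state through the new
     * getters only, so that patch 5 can later interpose a mutex
     * inside them without touching this code. */
    static void example_report_job(Job *job)
    {
        if (job_is_paused(job)) {
            fprintf(stderr, "job %s: paused\n", job->id);
        } else if (job_get_ret(job) < 0) {
            Error *err = job_get_err(job);
            fprintf(stderr, "job %s: failed: %s\n", job->id,
                    err ? error_get_pretty(err) :
                          strerror(-job_get_ret(job)));
        }
    }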

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Subject: [RFC PATCH 2/6] job: _locked functions and public job_lock/unlock for next patch
Date: Wed, 7 Jul 2021 18:58:09 +0200
Message-Id: <20210707165813.55361-3-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow

Create _locked variants of the job functions, to make the next patch a
little smaller. Also make the locking functions public, so that they
can also be used by the structures that embed the Job struct.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/job.h | 23 +++++++++++++
 job.c              | 85 ++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 93 insertions(+), 15 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 72c7d0f69d..ba2f9b2660 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -305,6 +305,7 @@ void job_txn_add_job(JobTxn *txn, Job *job);
 
 /** Returns the @ret field of a given Job. */
 int job_get_ret(Job *job);
+int job_get_ret_locked(Job *job);
 
 /** Returns the AioContext of a given Job. */
 AioContext *job_get_aiocontext(Job *job);
@@ -336,6 +337,24 @@ bool job_is_force_cancel(Job *job);
 /** Returns the status of a given Job. */
 JobStatus job_get_status(Job *job);
 
+/**
+ * job_lock:
+ *
+ * Take the mutex protecting the list of jobs and their status.
+ * Most functions called by the monitor need to call job_lock
+ * and job_unlock manually. On the other hand, functions called
+ * by the block jobs themselves and by the block layer will take the
+ * lock for you.
+ */
+void job_lock(void);
+
+/**
+ * job_unlock:
+ *
+ * Release the mutex protecting the list of jobs and their status.
+ */
+void job_unlock(void);
+
 /**
  * Create a new long-running job and return it.
  *
@@ -424,6 +443,7 @@ void job_start(Job *job);
  * Continue the specified job by entering the coroutine.
  */
 void job_enter(Job *job);
+void job_enter_locked(Job *job);
 
 /**
  * @job: The job that is ready to pause.
@@ -462,12 +482,15 @@ bool job_is_internal(Job *job);
 
 /** Returns whether the job is scheduled for cancellation. */
 bool job_is_cancelled(Job *job);
+bool job_is_cancelled_locked(Job *job);
 
 /** Returns whether the job is in a completed state. */
 bool job_is_completed(Job *job);
+bool job_is_completed_locked(Job *job);
 
 /** Returns whether the job is ready to be completed. */
 bool job_is_ready(Job *job);
+bool job_is_ready_locked(Job *job);
 
 /**
  * Request @job to pause at the next pause point. Must be paired with
diff --git a/job.c b/job.c
index 872bbebb01..96fb8e9730 100644
--- a/job.c
+++ b/job.c
@@ -32,6 +32,10 @@
 #include "trace/trace-root.h"
 #include "qapi/qapi-events-job.h"
 
+/* job_mutex protects the jobs list, but also the job operations. */
+static QemuMutex job_mutex;
+
+/* Protected by job_mutex */
 static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
 
 /* Job State Transition Table */
@@ -64,27 +68,22 @@ bool JobVerbTable[JOB_VERB__MAX][JOB_STATUS__MAX] = {
 /* Transactional group of jobs */
 struct JobTxn {
 
-    /* Is this txn being cancelled? */
+    /* Is this txn being cancelled? Atomic. */
     bool aborting;
 
-    /* List of jobs */
+    /* List of jobs. Protected by job_mutex. */
     QLIST_HEAD(, Job) jobs;
 
-    /* Reference count */
+    /* Reference count. Atomic. */
     int refcnt;
 };
 
-/* Right now, this mutex is only needed to synchronize accesses to job->busy
- * and job->sleep_timer, such as concurrent calls to job_do_yield and
- * job_enter. */
-static QemuMutex job_mutex;
-
-static void job_lock(void)
+void job_lock(void)
 {
     qemu_mutex_lock(&job_mutex);
 }
 
-static void job_unlock(void)
+void job_unlock(void)
 {
     qemu_mutex_unlock(&job_mutex);
 }
@@ -109,11 +108,22 @@ bool job_is_busy(Job *job)
     return qatomic_read(&job->busy);
 }
 
-int job_get_ret(Job *job)
+/* Called with job_mutex held. */
+int job_get_ret_locked(Job *job)
 {
     return job->ret;
 }
 
+/* Called with job_mutex *not* held. */
+int job_get_ret(Job *job)
+{
+    int ret;
+    job_lock();
+    ret = job_get_ret_locked(job);
+    job_unlock();
+    return ret;
+}
+
 Error *job_get_err(Job *job)
 {
     return job->err;
@@ -255,12 +265,24 @@ const char *job_type_str(const Job *job)
     return JobType_str(job_type(job));
 }
 
-bool job_is_cancelled(Job *job)
+/* Called with job_mutex held. */
+bool job_is_cancelled_locked(Job *job)
 {
     return job->cancelled;
 }
 
-bool job_is_ready(Job *job)
+/* Called with job_mutex *not* held. */
+bool job_is_cancelled(Job *job)
+{
+    bool ret;
+    job_lock();
+    ret = job_is_cancelled_locked(job);
+    job_unlock();
+    return ret;
+}
+
+/* Called with job_mutex held. */
+bool job_is_ready_locked(Job *job)
 {
     switch (job->status) {
     case JOB_STATUS_UNDEFINED:
@@ -282,7 +304,18 @@ bool job_is_ready(Job *job)
     return false;
 }
 
-bool job_is_completed(Job *job)
+/* Called with job_mutex *not* held. */
+bool job_is_ready(Job *job)
+{
+    bool ret;
+    job_lock();
+    ret = job_is_ready_locked(job);
+    job_unlock();
+    return ret;
+}
+
+/* Called with job_mutex held. */
+bool job_is_completed_locked(Job *job)
 {
     switch (job->status) {
     case JOB_STATUS_UNDEFINED:
@@ -304,6 +337,17 @@ bool job_is_completed(Job *job)
     return false;
 }
 
+/* Called with job_mutex *not* held. */
+bool job_is_completed(Job *job)
+{
+    bool ret;
+    job_lock();
+    ret = job_is_completed_locked(job);
+    job_unlock();
+    return ret;
+}
+
+/* Does not need job_mutex. Value is never modified. */
 static bool job_started(Job *job)
 {
     return job->co;
@@ -503,11 +547,20 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
     aio_co_enter(job->aio_context, job->co);
 }
 
-void job_enter(Job *job)
+/* Called with job_mutex held. */
+void job_enter_locked(Job *job)
 {
     job_enter_cond(job, NULL);
 }
 
+/* Called with job_mutex *not* held. */
+void job_enter(Job *job)
+{
+    job_lock();
+    job_enter_locked(job);
+    job_unlock();
+}
+
 /* Yield, and schedule a timer to reenter the coroutine after @ns nanoseconds.
  * Reentering the job coroutine with job_enter() before the timer has expired
  * is allowed and cancels the timer.
@@ -684,12 +737,14 @@ void job_dismiss(Job **jobptr, Error **errp)
     *jobptr = NULL;
 }
 
+/* Called with job_mutex held. */
 void job_early_fail(Job *job)
 {
     assert(job->status == JOB_STATUS_CREATED);
     job_do_dismiss(job);
 }
 
+/* Called with job_mutex held. */
 static void job_conclude(Job *job)
 {
     job_state_transition(job, JOB_STATUS_CONCLUDED);
-- 
2.31.1
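
[Editor's note: a usage sketch, not part of the patch, of why
job_lock()/job_unlock() become public: a monitor-side caller must keep
the lookup and the subsequent read consistent under one critical
section; find_and_check_ready() is hypothetical.]

    #include "qemu/osdep.h"
    #include "qemu/job.h"

    /* Hypothetical monitor helper: the lookup and the status read
     * must see a consistent job list, so the caller holds job_mutex
     * across both and uses the _locked variant inside. */
    static bool find_and_check_ready(const char *id)
    {
        Job *job;
        bool ready = false;

        job_lock();
        job = job_get(id);
        if (job) {
            ready = job_is_ready_locked(job);
        }
        job_unlock();
        return ready;
    }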

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [RFC PATCH 3/6] job: minor changes to simplify locking
Date: Wed, 7 Jul 2021 18:58:10 +0200
Message-Id: <20210707165813.55361-4-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>

Make job_get() check for a NULL id, so that in the next patch we can
move job_get() inside a single critical section of job_create().

Also add the missing notifier_list_init() for the on_idle NotifierList,
which seems to have been forgotten.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 job.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/job.c b/job.c
index 96fb8e9730..48b304c3ff 100644
--- a/job.c
+++ b/job.c
@@ -375,6 +375,10 @@ Job *job_get(const char *id)
 {
     Job *job;
 
+    if (!id) {
+        return NULL;
+    }
+
     QLIST_FOREACH(job, &jobs, job_list) {
         if (job->id && !strcmp(id, job->id)) {
             return job;
@@ -406,15 +410,18 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
             error_setg(errp, "Invalid job ID '%s'", job_id);
             return NULL;
         }
-        if (job_get(job_id)) {
-            error_setg(errp, "Job ID '%s' already in use", job_id);
-            return NULL;
-        }
     } else if (!(flags & JOB_INTERNAL)) {
         error_setg(errp, "An explicit job ID is required");
         return NULL;
     }
 
+    job_lock();
+    if (job_get(job_id)) {
+        error_setg(errp, "Job ID '%s' already in use", job_id);
+        job_unlock();
+        return NULL;
+    }
+
     job = g_malloc0(driver->instance_size);
     job->driver        = driver;
     job->id            = g_strdup(job_id);
@@ -434,6 +441,7 @@ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn,
     notifier_list_init(&job->on_finalize_completed);
     notifier_list_init(&job->on_pending);
     notifier_list_init(&job->on_ready);
+    notifier_list_init(&job->on_idle);
 
     job_state_transition(job, JOB_STATUS_CREATED);
     aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
-- 
2.31.1
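
[Editor's note: a small sketch, not part of the patch, of the new
job_get() contract; test_job_get_null() is hypothetical.]

    #include "qemu/osdep.h"
    #include "qemu/job.h"

    /* Hypothetical check: with this patch, looking up a NULL id is
     * safe and simply finds nothing, so job_create() can call
     * job_get(job_id) unconditionally inside its critical section,
     * even for internal jobs that have no ID. */
    static void test_job_get_null(void)
    {
        job_lock();
        assert(job_get(NULL) == NULL);
        job_unlock();
    }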

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [RFC PATCH 4/6] job.h: categorize job fields
Date: Wed, 7 Jul 2021 18:58:11 +0200
Message-Id: <20210707165813.55361-5-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>

This makes it easier to understand what needs to be protected by a
lock and what doesn't.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/qemu/job.h | 101 ++++++++++++++++++++++++++++++++++++---------
 1 file changed, 82 insertions(+), 19 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index ba2f9b2660..4421d08d93 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -40,24 +40,40 @@ typedef struct JobTxn JobTxn;
  * Long-running operation.
  */
 typedef struct Job {
-    /** The ID of the job. May be NULL for internal jobs. */
+    /**
+     * The ID of the job. May be NULL for internal jobs.
+     * Set it in job_create and just read.
+     */
     char *id;
 
-    /** The type of this job. */
+    /**
+     * The type of this job.
+     * Set it in job_create and just read.
+     */
     const JobDriver *driver;
 
-    /** Reference count of the block job */
+    /**
+     * Reference count of the block job.
+     * Protected by job_mutex.
+     */
     int refcnt;
 
-    /** Current state; See @JobStatus for details. */
+    /**
+     * Current state; See @JobStatus for details.
+     * Protected by job_mutex.
+     */
     JobStatus status;
 
-    /** AioContext to run the job coroutine in */
+    /**
+     * AioContext to run the job coroutine in.
+     * Atomic.
+     */
     AioContext *aio_context;
 
     /**
      * The coroutine that executes the job.  If not NULL, it is reentered when
      * busy is false and the job is cancelled.
+     * Set it in job_create and just read.
      */
     Coroutine *co;
 
@@ -70,13 +86,15 @@ typedef struct Job {
     /**
      * Counter for pause request. If non-zero, the block job is either paused,
      * or if busy == true will pause itself as soon as possible.
+     * Protected by job_mutex.
      */
     int pause_count;
 
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by job_enter(). There may still be I/O or event loop activity
-     * pending. Accessed under block_job_mutex (in blockjob.c).
+     * pending.
+     * Protected by job_mutex.
      *
      * When the job is deferred to the main loop, busy is true as long as the
      * bottom half is still pending.
@@ -86,12 +104,14 @@ typedef struct Job {
     /**
      * Set to true by the job while it is in a quiescent state, where
      * no I/O or event loop activity is pending.
+     * Protected by job_mutex.
      */
     bool paused;
 
     /**
      * Set to true if the job is paused by user.  Can be unpaused with the
      * block-job-resume QMP command.
+     * Protected by job_mutex.
      */
     bool user_paused;
 
@@ -100,22 +120,33 @@ typedef struct Job {
      * always be tested just before toggling the busy flag from false
      * to true.  After a job has been cancelled, it should only yield
      * if #aio_poll will ("sooner or later") reenter the coroutine.
+     * Protected by job_mutex.
      */
     bool cancelled;
 
     /**
      * Set to true if the job should abort immediately without waiting
      * for data to be in sync.
+     * Protected by job_mutex.
      */
     bool force_cancel;
 
-    /** Set to true when the job has deferred work to the main loop. */
+    /**
+     * Set to true when the job has deferred work to the main loop.
+     * Protected by job_mutex.
+     */
     bool deferred_to_main_loop;
 
-    /** True if this job should automatically finalize itself */
+    /**
+     * True if this job should automatically finalize itself.
+     * Set it in job_create and just read.
+     */
     bool auto_finalize;
 
-    /** True if this job should automatically dismiss itself */
+    /**
+     * True if this job should automatically dismiss itself.
+     * Set it in job_create and just read.
+     */
     bool auto_dismiss;
 
     ProgressMeter progress;
@@ -124,6 +155,7 @@ typedef struct Job {
      * Return code from @run and/or @prepare callback(s).
      * Not final until the job has reached the CONCLUDED status.
      * 0 on success, -errno on failure.
+     * Protected by job_mutex.
      */
     int ret;
 
     /**
      * Error object for a failed job.
      * If job->ret is nonzero and an error object was not set, it will be set
      * to strerror(-job->ret) during job_completed.
+     * Protected by job_mutex.
      */
     Error *err;
 
-    /** The completion function that will be called when the job completes. */
+    /**
+     * The completion function that will be called when the job completes.
+     * Set it in job_create and just read.
+     */
     BlockCompletionFunc *cb;
 
-    /** The opaque value that is passed to the completion function. */
+    /**
+     * The opaque value that is passed to the completion function.
+     * Set it in job_create and just read.
+     */
     void *opaque;
 
-    /** Notifiers called when a cancelled job is finalised */
+    /**
+     * Notifiers called when a cancelled job is finalised.
+     * Protected by job_mutex.
+     */
     NotifierList on_finalize_cancelled;
 
-    /** Notifiers called when a successfully completed job is finalised */
+    /**
+     * Notifiers called when a successfully completed job is finalised.
+     * Protected by job_mutex.
+     */
     NotifierList on_finalize_completed;
 
-    /** Notifiers called when the job transitions to PENDING */
+    /**
+     * Notifiers called when the job transitions to PENDING.
+     * Protected by job_mutex.
+     */
     NotifierList on_pending;
 
-    /** Notifiers called when the job transitions to READY */
+    /**
+     * Notifiers called when the job transitions to READY.
+     * Protected by job_mutex.
+     */
     NotifierList on_ready;
 
-    /** Notifiers called when the job coroutine yields or terminates */
+    /**
+     * Notifiers called when the job coroutine yields or terminates.
+     * Protected by job_mutex.
+     */
     NotifierList on_idle;
 
-    /** Element of the list of jobs */
+    /**
+     * Element of the list of jobs.
+     * Protected by job_mutex.
+     */
     QLIST_ENTRY(Job) job_list;
 
-    /** Transaction this job is part of */
+    /**
+     * Transaction this job is part of.
+     * Protected by job_mutex.
+     */
     JobTxn *txn;
 
-    /** Element of the list of jobs in a job transaction */
+    /**
+     * Element of the list of jobs in a job transaction.
+     * Protected by job_mutex.
+     */
     QLIST_ENTRY(Job) txn_list;
 } Job;
 
-- 
2.31.1
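
[Editor's note: the comments above encode three access categories. A
minimal sketch, not part of the patch, of what each category means for
code that reads the struct; example_field_access() is hypothetical, and
the qatomic_read() on aio_context assumes the atomic-access convention
the comment documents.]

    #include "qemu/osdep.h"
    #include "qemu/atomic.h"
    #include "qemu/job.h"

    /* Hypothetical reader, one access per documented category. */
    static void example_field_access(Job *job)
    {
        /* "Set it in job_create and just read": immutable afterwards. */
        const char *id = job->id;

        /* "Atomic": accessed with qatomic_* helpers, no lock needed. */
        AioContext *ctx = qatomic_read(&job->aio_context);

        /* "Protected by job_mutex": hold the lock around the access. */
        job_lock();
        bool paused = job->paused;
        job_unlock();

        fprintf(stderr, "job %s in ctx %p, paused=%d\n",
                id, (void *)ctx, paused);
    }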

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [RFC PATCH 5/6] job: use global job_mutex to protect struct Job
Date: Wed, 7 Jul 2021 18:58:12 +0200
Message-Id: <20210707165813.55361-6-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>

This lock is going to replace most of the AioContext locks in the job
and blockjob code, so that a Job can run in an arbitrary AioContext.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/block/blockjob_int.h |   1 +
 include/qemu/job.h           |   2 +
 block/backup.c               |   4 +
 block/mirror.c               |  11 +-
 blockdev.c                   |  62 ++++----
 blockjob.c                   |  67 +++++++--
 job-qmp.c                    |  55 +++----
 job.c                        | 284 +++++++++++++++++++++++++++--------
 qemu-img.c                   |  15 +-
 9 files changed, 350 insertions(+), 151 deletions(-)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index 6633d83da2..8b91126506 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -53,6 +53,7 @@ struct BlockJobDriver {
      */
     void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
 
+    /* Called with job mutex *not* held. */
     void (*set_speed)(BlockJob *job, int64_t speed);
 };
 
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 4421d08d93..359f4e6b3a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -49,6 +49,8 @@ typedef struct Job {
     /**
      * The type of this job.
      * Set it in job_create and just read.
+     * All calls to the driver functions must be made without holding
+     * job_mutex, to avoid deadlocks.
      */
     const JobDriver *driver;
 
diff --git a/block/backup.c b/block/backup.c
index bd3614ce70..80ce956299 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -315,6 +315,10 @@ static void coroutine_fn backup_pause(Job *job)
     }
 }
 
+/*
+ * Called with job mutex *not* held (we don't want to call block_copy_kick
+ * with the lock held!)
+ */
 static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
 {
     BackupBlockJob *s = container_of(job, BackupBlockJob, common);
diff --git a/block/mirror.c b/block/mirror.c
index 49aaaafffa..deefaa6a39 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1150,9 +1150,11 @@ static void mirror_complete(Job *job, Error **errp)
     s->should_complete = true;
 
     /* If the job is paused, it will be re-entered when it is resumed */
+    job_lock();
     if (!job_is_paused(job)) {
-        job_enter(job);
+        job_enter_locked(job);
     }
+    job_unlock();
 }
 
 static void coroutine_fn mirror_pause(Job *job)
@@ -1171,10 +1173,13 @@ static bool mirror_drained_poll(BlockJob *job)
      * from one of our own drain sections, to avoid a deadlock waiting for
      * ourselves.
      */
-    if (!job_is_paused(&s->common.job) && !job_is_cancelled(&s->common.job) &&
-        !s->in_drain) {
+    job_lock();
+    if (!job_is_paused(&s->common.job) &&
+        !job_is_cancelled_locked(&s->common.job) && !s->in_drain) {
+        job_unlock();
         return true;
     }
+    job_unlock();
 
     return !!s->in_flight;
 }
diff --git a/blockdev.c b/blockdev.c
index 8e2c15370e..9255aea6a2 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -150,9 +150,11 @@ void blockdev_mark_auto_del(BlockBackend *blk)
             AioContext *aio_context = job_get_aiocontext(&job->job);
             aio_context_acquire(aio_context);
 
+            job_lock();
             job_cancel(&job->job, false);
 
             aio_context_release(aio_context);
+            job_unlock();
         }
     }
 
@@ -3309,48 +3311,44 @@ out:
     aio_context_release(aio_context);
 }
 
-/* Get a block job using its ID and acquire its AioContext */
-static BlockJob *find_block_job(const char *id, AioContext **aio_context,
-                                Error **errp)
+/* Get a block job using its ID and acquire its job_lock */
+static BlockJob *find_block_job(const char *id, Error **errp)
 {
     BlockJob *job;
 
     assert(id != NULL);
 
-    *aio_context = NULL;
-
+    job_lock();
     job = block_job_get(id);
 
     if (!job) {
         error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
                   "Block job '%s' not found", id);
+        job_unlock();
         return NULL;
     }
 
-    *aio_context = blk_get_aio_context(job->blk);
-    aio_context_acquire(*aio_context);
-
     return job;
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
     }
 
     block_job_set_speed(job, speed, errp);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_cancel(const char *device,
                           bool has_force, bool force, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3369,13 +3367,13 @@ void qmp_block_job_cancel(const char *device,
     trace_qmp_block_job_cancel(job);
     job_user_cancel(&job->job, force, errp);
 out:
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_pause(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3383,13 +3381,13 @@ void qmp_block_job_pause(const char *device, Error **errp)
 
     trace_qmp_block_job_pause(job);
     job_user_pause(&job->job, errp);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_resume(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3397,13 +3395,13 @@ void qmp_block_job_resume(const char *device, Error **errp)
 
     trace_qmp_block_job_resume(job);
     job_user_resume(&job->job, errp);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_complete(const char *device, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job = find_block_job(device, errp);
 
     if (!job) {
         return;
@@ -3411,13 +3409,13 @@ void qmp_block_job_complete(const char *device, Error **errp)
 
     trace_qmp_block_job_complete(job);
     job_complete(&job->job, errp);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_finalize(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *job = find_block_job(id, &aio_context, errp);
+    BlockJob *job = find_block_job(id, errp);
 
     if (!job) {
         return;
@@ -3427,20 +3425,14 @@ void qmp_block_job_finalize(const char *id, Error **errp)
     job_ref(&job->job);
     job_finalize(&job->job, errp);
 
-    /*
-     * Job's context might have changed via job_finalize (and job_txn_apply
-     * automatically acquires the new one), so make sure we release the correct
-     * one.
-     */
-    aio_context = blk_get_aio_context(job->blk);
     job_unref(&job->job);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
-    AioContext *aio_context;
-    BlockJob *bjob = find_block_job(id, &aio_context, errp);
+    BlockJob *bjob = find_block_job(id, errp);
     Job *job;
 
     if (!bjob) {
@@ -3450,7 +3442,7 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
     job_dismiss(&job, errp);
-    aio_context_release(aio_context);
+    job_unlock();
 }
 
 void qmp_change_backing_file(const char *device,
diff --git a/blockjob.c b/blockjob.c
index 7f49f03ec7..e7b289089b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -42,15 +42,16 @@
  * The first includes functions used by the monitor.  The monitor is
  * peculiar in that it accesses the block job list with block_job_get, and
  * therefore needs consistency across block_job_get and the actual operation
- * (e.g. block_job_set_speed).  The consistency is achieved with
- * aio_context_acquire/release.  These functions are declared in blockjob.h.
+ * (e.g. block_job_set_speed).  To achieve this consistency, the caller
+ * calls block_job_lock/block_job_unlock itself around the whole operation.
+ * These functions are declared in blockjob.h.
  *
  * The second includes functions used by the block job drivers and sometimes
- * by the core block layer.  These do not care about locking, because the
- * whole coroutine runs under the AioContext lock, and are declared in
- * blockjob_int.h.
+ * by the core block layer.  These delegate the locking to the callee instead,
+ * and are declared in blockjob_int.h.
 */
 
+/* Does not need job_mutex. Value is never modified. */
 static bool is_block_job(Job *job)
 {
     return job_type(job) == JOB_TYPE_BACKUP ||
@@ -59,6 +60,7 @@ static bool is_block_job(Job *job)
            job_type(job) == JOB_TYPE_STREAM;
 }
 
+/* Called with job_mutex *not* held. */
 BlockJob *block_job_next(BlockJob *bjob)
 {
     Job *job = bjob ? &bjob->job : NULL;
@@ -70,6 +72,7 @@ BlockJob *block_job_next(BlockJob *bjob)
     return job ? container_of(job, BlockJob, job) : NULL;
 }
 
+/* Called with job_mutex held. */
 BlockJob *block_job_get(const char *id)
 {
     Job *job = job_get(id);
@@ -97,24 +100,31 @@ static char *child_job_get_parent_desc(BdrvChild *c)
     return g_strdup_printf("%s job '%s'", job_type_str(&job->job), job->job.id);
 }
 
+/* Called with job_mutex *not* held. */
+ */
 static void child_job_drained_begin(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
+    job_lock();
     job_pause(&job->job);
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 static bool child_job_drained_poll(BdrvChild *c)
 {
     BlockJob *bjob = c->opaque;
     Job *job = &bjob->job;
     const BlockJobDriver *drv = block_job_driver(bjob);
 
+    job_lock();
     /* An inactive or completed job doesn't have any pending requests. Jobs
      * with !job->busy are either already paused or have a pause point after
      * being reentered, so no job driver code will run before they pause. */
-    if (!job_is_busy(job) || job_is_completed(job)) {
+    if (!job_is_busy(job) || job_is_completed_locked(job)) {
+        job_unlock();
         return false;
     }
+    job_unlock();
 
     /* Otherwise, assume that it isn't fully stopped yet, but allow the job to
      * override this assumption. */
@@ -125,10 +135,13 @@ static bool child_job_drained_poll(BdrvChild *c)
     }
 }
 
+/* Called with job_mutex *not* held. */
 static void child_job_drained_end(BdrvChild *c, int *drained_end_counter)
 {
     BlockJob *job = c->opaque;
+    job_lock();
     job_resume(&job->job);
+    job_unlock();
 }
 
 static bool child_job_can_set_aio_ctx(BdrvChild *c, AioContext *ctx,
@@ -246,11 +259,15 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
     return 0;
 }
 
+/* Called with job_mutex held. Temporarily releases the lock. */
 static void block_job_on_idle(Notifier *n, void *opaque)
 {
+    job_unlock();
     aio_wait_kick();
+    job_lock();
 }
 
+/* Does not need job_mutex. Value is never modified. */
 bool block_job_is_internal(BlockJob *job)
 {
     return (job->job.id == NULL);
@@ -267,6 +284,7 @@ static bool job_timer_pending(Job *job)
     return timer_pending(&job->sleep_timer);
 }
 
+/* Called with job_mutex held. May temporarily release the lock. */
 bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
     const BlockJobDriver *drv = block_job_driver(job);
@@ -286,7 +304,9 @@ bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     job->speed = speed;
 
     if (drv->set_speed) {
+        job_unlock();
         drv->set_speed(job, speed);
+        job_lock();
     }
 
     if (speed && speed <= old_speed) {
@@ -304,6 +324,7 @@ int64_t block_job_ratelimit_get_delay(BlockJob *job, uint64_t n)
     return ratelimit_calculate_delay(&job->limit, n);
 }
 
+/* Called with job_mutex *not* held. */
 BlockJobInfo *block_job_query(BlockJob *blkjob, Error **errp)
 {
     BlockJobInfo *info;
@@ -319,6 +340,7 @@ BlockJobInfo *block_job_query(BlockJob *blkjob, Error **errp)
     progress_get_snapshot(&job->progress, &progress_current,
                           &progress_total);
 
+    job_lock();
     info = g_new0(BlockJobInfo, 1);
     info->type = g_strdup(job_type_str(job));
     info->device = g_strdup(job->id);
@@ -328,11 +350,11 @@ BlockJobInfo *block_job_query(BlockJob *blkjob, Error **errp)
     info->len = progress_total;
     info->speed = blkjob->speed;
     info->io_status = blkjob->iostatus;
-    info->ready = job_is_ready(job);
+    info->ready = job_is_ready_locked(job);
     info->status = job_get_status(job);
     info->auto_finalize = job->auto_finalize;
     info->auto_dismiss = job->auto_dismiss;
-    job_ret = job_get_ret(job);
+    job_ret = job_get_ret_locked(job);
     if (job_ret) {
         Error *job_err = job_get_err(job);
         info->has_error = true;
@@ -340,9 +362,11 @@ BlockJobInfo *block_job_query(BlockJob *blkjob, Error **errp)
                          g_strdup(error_get_pretty(job_err)) :
                          g_strdup(strerror(-job_ret));
     }
+    job_unlock();
    return info;
 }
 
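[Editor's aside, not part of the patch: the recurring pattern in the hunks
above, most visibly in block_job_set_speed(), is to read and write protected
Job state under job_mutex but drop the mutex around any driver callback,
which may block or re-enter job code. A minimal self-contained sketch of
that pattern; the types and names (MiniJob, JobOps) are hypothetical
stand-ins, not the series' real ones:

    #include <pthread.h>
    #include <stdint.h>

    static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
    static void job_lock(void)   { pthread_mutex_lock(&job_mutex); }
    static void job_unlock(void) { pthread_mutex_unlock(&job_mutex); }

    typedef struct MiniJob MiniJob;
    typedef struct JobOps {
        /* may block or re-enter job code; never invoked with the lock */
        void (*set_speed)(MiniJob *job, int64_t speed);
    } JobOps;
    struct MiniJob {
        const JobOps *ops;
        int64_t speed;               /* protected by job_mutex */
    };

    /* Caller holds job_mutex, mirroring block_job_set_speed() above. */
    static void job_set_speed_locked(MiniJob *job, int64_t speed)
    {
        job->speed = speed;          /* protected field: lock is held */
        if (job->ops->set_speed) {
            job_unlock();            /* never run driver code locked */
            job->ops->set_speed(job, speed);
            job_lock();              /* re-take before touching state */
        }
    }
]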
+/* Called with job_mutex held. */
 static void block_job_iostatus_set_err(BlockJob *job, int error)
 {
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
@@ -351,6 +375,7 @@ static void block_job_iostatus_set_err(BlockJob *job, int error)
     }
 }
 
+/* Called with job_mutex held. */
 static void block_job_event_cancelled(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
@@ -370,6 +395,7 @@ static void block_job_event_cancelled(Notifier *n, void *opaque)
                                         job->speed);
 }
 
+/* Called with job_mutex held. */
 static void block_job_event_completed(Notifier *n, void *opaque)
 {
     BlockJob *blkjob = opaque;
@@ -381,7 +407,7 @@ static void block_job_event_completed(Notifier *n, void *opaque)
         return;
     }
 
-    if (job_get_ret(job) < 0) {
+    if (job_get_ret_locked(job) < 0) {
         msg = error_get_pretty(job_get_err(job));
     }
 
@@ -397,6 +423,7 @@ static void block_job_event_completed(Notifier *n, void *opaque)
                                        msg);
 }
 
+/* Called with job_mutex held. */
 static void block_job_event_pending(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
@@ -409,6 +436,7 @@ static void block_job_event_pending(Notifier *n, void *opaque)
                                       job->job.id);
 }
 
+/* Called with job_mutex held. */
 static void block_job_event_ready(Notifier *n, void *opaque)
 {
     BlockJob *job = opaque;
@@ -430,10 +458,11 @@ static void block_job_event_ready(Notifier *n, void *opaque)
 
 
 /*
- * API for block job drivers and the block layer. These functions are
- * declared in blockjob_int.h.
+ * API for block job drivers and the block layer, which do not know about
+ * job_mutex. These functions are declared in blockjob_int.h.
  */
 
+/* Called with job_mutex *not* held; takes and temporarily releases it. */
 void *block_job_create(const char *job_id, const BlockJobDriver *driver,
                        JobTxn *txn, BlockDriverState *bs, uint64_t perm,
                        uint64_t shared_perm, int64_t speed, int flags,
@@ -472,6 +501,8 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job->ready_notifier.notify = block_job_event_ready;
     job->idle_notifier.notify = block_job_on_idle;
 
+    job_lock();
+
     notifier_list_add(&job->job.on_finalize_cancelled,
                       &job->finalize_cancelled_notifier);
     notifier_list_add(&job->job.on_finalize_completed,
@@ -482,7 +513,11 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
 
     error_setg(&job->blocker, "block device is in use by block job: %s",
                job_type_str(&job->job));
+
+    job_unlock();
+    /* calls drain and friends, that already take the lock */
     block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, &error_abort);
+    job_lock();
 
     bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
 
@@ -493,27 +528,35 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
 
     if (!block_job_set_speed(job, speed, errp)) {
         job_early_fail(&job->job);
+        job_unlock();
         return NULL;
     }
 
+    job_unlock();
     return job;
 }
 
+/* Called with job_mutex *not* held. */
 void block_job_iostatus_reset(BlockJob *job)
 {
+    job_lock();
     if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
+        job_unlock();
         return;
     }
     assert(job_user_paused(&job->job) && job_should_pause(&job->job));
     job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
+    job_unlock();
 }
 
+/* Called with job_mutex *not* held. */
 void block_job_user_resume(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
     block_job_iostatus_reset(bjob);
 }
 
+/* Called with job_mutex *not* held.
*/ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_= err, int is_read, int error) { @@ -544,12 +587,14 @@ BlockErrorAction block_job_error_action(BlockJob *job= , BlockdevOnError on_err, action); } if (action =3D=3D BLOCK_ERROR_ACTION_STOP) { + job_lock(); if (!job_user_paused(&job->job)) { job_pause(&job->job); /* make the pause user visible, which will be resumed from QMP= . */ job_set_user_paused(&job->job); } block_job_iostatus_set_err(job, error); + job_unlock(); } return action; } diff --git a/job-qmp.c b/job-qmp.c index 12238a1643..03f3946490 100644 --- a/job-qmp.c +++ b/job-qmp.c @@ -29,29 +29,26 @@ #include "qapi/error.h" #include "trace/trace-root.h" =20 -/* Get a job using its ID and acquire its AioContext */ -static Job *find_job(const char *id, AioContext **aio_context, Error **err= p) +/* Get a job using its ID and acquire its job_lock */ +static Job *find_job(const char *id, Error **errp) { Job *job; =20 - *aio_context =3D NULL; + job_lock(); =20 job =3D job_get(id); if (!job) { error_setg(errp, "Job not found"); + job_unlock(); return NULL; } =20 - *aio_context =3D job_get_aiocontext(job); - aio_context_acquire(*aio_context); - return job; } =20 void qmp_job_cancel(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -59,13 +56,12 @@ void qmp_job_cancel(const char *id, Error **errp) =20 trace_qmp_job_cancel(job); job_user_cancel(job, true, errp); - aio_context_release(aio_context); + job_unlock(); } =20 void qmp_job_pause(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -73,13 +69,12 @@ void qmp_job_pause(const char *id, Error **errp) =20 trace_qmp_job_pause(job); job_user_pause(job, errp); - aio_context_release(aio_context); + job_unlock(); } =20 void qmp_job_resume(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -87,13 +82,12 @@ void qmp_job_resume(const char *id, Error **errp) =20 trace_qmp_job_resume(job); job_user_resume(job, errp); - aio_context_release(aio_context); + job_unlock(); } =20 void qmp_job_complete(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -101,13 +95,12 @@ void qmp_job_complete(const char *id, Error **errp) =20 trace_qmp_job_complete(job); job_complete(job, errp); - aio_context_release(aio_context); + job_unlock(); } =20 void qmp_job_finalize(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -117,20 +110,13 @@ void qmp_job_finalize(const char *id, Error **errp) job_ref(job); job_finalize(job, errp); =20 - /* - * Job's context might have changed via job_finalize (and job_txn_apply - * automatically acquires the new one), so make sure we release the co= rrect - * one. 
- */ - aio_context =3D job_get_aiocontext(job); job_unref(job); - aio_context_release(aio_context); + job_unlock(); } =20 void qmp_job_dismiss(const char *id, Error **errp) { - AioContext *aio_context; - Job *job =3D find_job(id, &aio_context, errp); + Job *job =3D find_job(id, errp); =20 if (!job) { return; @@ -138,9 +124,10 @@ void qmp_job_dismiss(const char *id, Error **errp) =20 trace_qmp_job_dismiss(job); job_dismiss(&job, errp); - aio_context_release(aio_context); + job_unlock(); } =20 +/* Called with job_mutex held. */ static JobInfo *job_query_single(Job *job, Error **errp) { JobInfo *info; @@ -175,15 +162,15 @@ JobInfoList *qmp_query_jobs(Error **errp) =20 for (job =3D job_next(NULL); job; job =3D job_next(job)) { JobInfo *value; - AioContext *aio_context; =20 if (job_is_internal(job)) { continue; } - aio_context =3D job_get_aiocontext(job); - aio_context_acquire(aio_context); + + job_lock(); value =3D job_query_single(job, errp); - aio_context_release(aio_context); + job_unlock(); + if (!value) { qapi_free_JobInfoList(head); return NULL; diff --git a/job.c b/job.c index 48b304c3ff..e2006532b5 100644 --- a/job.c +++ b/job.c @@ -93,19 +93,22 @@ static void __attribute__((__constructor__)) job_init(v= oid) qemu_mutex_init(&job_mutex); } =20 +/* Does not need job_mutex */ AioContext *job_get_aiocontext(Job *job) { - return job->aio_context; + return qatomic_read(&job->aio_context); } =20 +/* Does not need job_mutex */ void job_set_aiocontext(Job *job, AioContext *aio) { - job->aio_context =3D aio; + qatomic_set(&job->aio_context, aio); } =20 +/* Called with job_mutex held. */ bool job_is_busy(Job *job) { - return qatomic_read(&job->busy); + return job->busy; } =20 /* Called with job_mutex held. */ @@ -124,59 +127,75 @@ int job_get_ret(Job *job) return ret; } =20 +/* Called with job_mutex held. */ Error *job_get_err(Job *job) { return job->err; } =20 +/* Called with job_mutex held. */ JobStatus job_get_status(Job *job) { return job->status; } - +/* Called with job_mutex *not* held. */ void job_set_cancelled(Job *job, bool cancel) { + job_lock(); job->cancelled =3D cancel; + job_unlock(); } =20 +/* Called with job_mutex *not* held. */ bool job_is_force_cancel(Job *job) { - return job->force_cancel; + bool ret; + job_lock(); + ret =3D job->force_cancel; + job_unlock(); + return ret; } =20 +/* Does not need job_mutex */ JobTxn *job_txn_new(void) { JobTxn *txn =3D g_new0(JobTxn, 1); QLIST_INIT(&txn->jobs); - txn->refcnt =3D 1; + qatomic_set(&txn->refcnt, 1); return txn; } =20 +/* Does not need job_mutex */ static void job_txn_ref(JobTxn *txn) { - txn->refcnt++; + qatomic_inc(&txn->refcnt); } =20 +/* Does not need job_mutex */ void job_txn_unref(JobTxn *txn) { - if (txn && --txn->refcnt =3D=3D 0) { + if (txn && qatomic_dec_fetch(&txn->refcnt) =3D=3D 0) { g_free(txn); } } =20 +/* Called with job_mutex *not* held. */ void job_txn_add_job(JobTxn *txn, Job *job) { if (!txn) { return; } =20 + job_lock(); assert(!job->txn); job->txn =3D txn; =20 QLIST_INSERT_HEAD(&txn->jobs, job, txn_list); + job_unlock(); job_txn_ref(txn); } =20 +/* Called with job_mutex held. */ static void job_txn_del_job(Job *job) { if (job->txn) { @@ -186,6 +205,7 @@ static void job_txn_del_job(Job *job) } } =20 +/* Called with job_mutex held. 
*/ static int job_txn_apply(Job *job, int fn(Job *)) { AioContext *inner_ctx; @@ -221,11 +241,13 @@ static int job_txn_apply(Job *job, int fn(Job *)) return rc; } =20 +/* Does not need job_mutex */ bool job_is_internal(Job *job) { return (job->id =3D=3D NULL); } =20 +/* Called with job_mutex held. */ static void job_state_transition(Job *job, JobStatus s1) { JobStatus s0 =3D job->status; @@ -241,6 +263,7 @@ static void job_state_transition(Job *job, JobStatus s1) } } =20 +/* Called with job_mutex held. */ int job_apply_verb(Job *job, JobVerb verb, Error **errp) { JobStatus s0 =3D job->status; @@ -255,11 +278,13 @@ int job_apply_verb(Job *job, JobVerb verb, Error **er= rp) return -EPERM; } =20 +/* Does not need job_mutex. Value is never modified */ JobType job_type(const Job *job) { return job->driver->job_type; } =20 +/* Does not need job_mutex. Value is never modified */ const char *job_type_str(const Job *job) { return JobType_str(job_type(job)); @@ -353,24 +378,34 @@ static bool job_started(Job *job) return job->co; } =20 +/* Called with job_mutex held. */ bool job_should_pause(Job *job) { return job->pause_count > 0; } =20 +/* Called with job_mutex held. */ bool job_is_paused(Job *job) { return job->paused; } =20 +/* Called with job_mutex *not* held. */ Job *job_next(Job *job) { + Job *ret; + job_lock(); if (!job) { - return QLIST_FIRST(&jobs); + ret =3D QLIST_FIRST(&jobs); + job_unlock(); + return ret; } - return QLIST_NEXT(job, job_list); + ret =3D QLIST_NEXT(job, job_list); + job_unlock(); + return ret; } =20 +/* Called with job_mutex held. */ Job *job_get(const char *id) { Job *job; @@ -388,13 +423,14 @@ Job *job_get(const char *id) return NULL; } =20 +/* Called with job_mutex *not* held. */ static void job_sleep_timer_cb(void *opaque) { Job *job =3D opaque; - job_enter(job); } =20 +/* Called with job_mutex *not* held. */ void *job_create(const char *job_id, const JobDriver *driver, JobTxn *txn, AioContext *ctx, int flags, BlockCompletionFunc *cb, void *opaque, Error **errp) @@ -449,6 +485,7 @@ void *job_create(const char *job_id, const JobDriver *d= river, JobTxn *txn, job_sleep_timer_cb, job); =20 QLIST_INSERT_HEAD(&jobs, job, job_list); + job_unlock(); =20 /* Single jobs are modeled as single-job transactions for sake of * consolidating the job management logic */ @@ -463,11 +500,13 @@ void *job_create(const char *job_id, const JobDriver = *driver, JobTxn *txn, return job; } =20 +/* Called with job_mutex held. */ void job_ref(Job *job) { ++job->refcnt; } =20 +/* Called with job_mutex held. Temporarly releases the lock. */ void job_unref(Job *job) { if (--job->refcnt =3D=3D 0) { @@ -476,7 +515,9 @@ void job_unref(Job *job) assert(!job->txn); =20 if (job->driver->free) { + job_unlock(); job->driver->free(job); + job_lock(); } =20 QLIST_REMOVE(job, job_list); @@ -488,46 +529,55 @@ void job_unref(Job *job) } } =20 +/* API is thread safe */ void job_progress_update(Job *job, uint64_t done) { progress_work_done(&job->progress, done); } =20 +/* API is thread safe */ void job_progress_set_remaining(Job *job, uint64_t remaining) { progress_set_remaining(&job->progress, remaining); } =20 +/* API is thread safe */ void job_progress_increase_remaining(Job *job, uint64_t delta) { progress_increase_remaining(&job->progress, delta); } =20 +/* Called with job_mutex held. */ void job_event_cancelled(Job *job) { notifier_list_notify(&job->on_finalize_cancelled, job); } =20 +/* Called with job_mutex held. 
*/ void job_event_completed(Job *job) { notifier_list_notify(&job->on_finalize_completed, job); } =20 +/* Called with job_mutex held. */ static void job_event_pending(Job *job) { notifier_list_notify(&job->on_pending, job); } =20 +/* Called with job_mutex held. */ static void job_event_ready(Job *job) { notifier_list_notify(&job->on_ready, job); } =20 +/* Called with job_mutex held. */ static void job_event_idle(Job *job) { notifier_list_notify(&job->on_idle, job); } =20 +/* Called with job_mutex held, but releases it temporarly. */ void job_enter_cond(Job *job, bool(*fn)(Job *job)) { if (!job_started(job)) { @@ -537,14 +587,11 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job)) return; } =20 - job_lock(); if (job->busy) { - job_unlock(); return; } =20 if (fn && !fn(job)) { - job_unlock(); return; } =20 @@ -552,7 +599,8 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job)) timer_del(&job->sleep_timer); job->busy =3D true; job_unlock(); - aio_co_enter(job->aio_context, job->co); + aio_co_enter(job_get_aiocontext(job), job->co); + job_lock(); } =20 /* Called with job_mutex held. */ @@ -565,7 +613,7 @@ void job_enter_locked(Job *job) void job_enter(Job *job) { job_lock(); - job_enter_locked(job, NULL); + job_enter_locked(job); job_unlock(); } =20 @@ -574,7 +622,11 @@ void job_enter(Job *job) * is allowed and cancels the timer. * * If @ns is (uint64_t) -1, no timer is scheduled and job_enter() must be - * called explicitly. */ + * called explicitly. + * + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ static void coroutine_fn job_do_yield(Job *job, uint64_t ns) { job_lock(); @@ -587,86 +639,122 @@ static void coroutine_fn job_do_yield(Job *job, uint= 64_t ns) qemu_coroutine_yield(); =20 /* Set by job_enter_cond() before re-entering the coroutine. */ + job_lock(); assert(job->busy); + job_unlock(); } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ void coroutine_fn job_pause_point(Job *job) { assert(job && job_started(job)); =20 + job_lock(); if (!job_should_pause(job)) { + job_unlock(); return; } - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { + job_unlock(); return; } =20 if (job->driver->pause) { + job_unlock(); job->driver->pause(job); + job_lock(); } =20 - if (job_should_pause(job) && !job_is_cancelled(job)) { + if (job_should_pause(job) && !job_is_cancelled_locked(job)) { JobStatus status =3D job->status; job_state_transition(job, status =3D=3D JOB_STATUS_READY ? JOB_STATUS_STANDBY : JOB_STATUS_PAUSED); job->paused =3D true; + job_unlock(); job_do_yield(job, -1); + job_lock(); job->paused =3D false; job_state_transition(job, status); } + job_unlock(); =20 if (job->driver->resume) { job->driver->resume(job); } } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). + */ void job_yield(Job *job) { + bool res; + job_lock(); assert(job->busy); =20 /* Check cancellation *before* setting busy =3D false, too! */ - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { + job_unlock(); return; } =20 - if (!job_should_pause(job)) { + res =3D job_should_pause(job); + job_unlock(); + + if (!res) { job_do_yield(job, -1); } =20 job_pause_point(job); } =20 +/* + * Called with job_mutex *not* held (we don't want the coroutine + * to yield with the lock held!). 
+ */ void coroutine_fn job_sleep_ns(Job *job, int64_t ns) { + bool res; + job_lock(); assert(job->busy); =20 /* Check cancellation *before* setting busy =3D false, too! */ - if (job_is_cancelled(job)) { + if (job_is_cancelled_locked(job)) { + job_unlock(); return; } =20 - if (!job_should_pause(job)) { + res =3D job_should_pause(job); + job_unlock(); + + if (!res) { job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns); } =20 job_pause_point(job); } =20 -/* Assumes the block_job_mutex is held */ +/* Called with job_mutex held. */ static bool job_timer_not_pending(Job *job) { return !timer_pending(&job->sleep_timer); } =20 +/* Called with job_mutex held. */ void job_pause(Job *job) { job->pause_count++; if (!job->paused) { - job_enter(job); + job_enter_locked(job); } } =20 +/* Called with job_mutex held. */ void job_resume(Job *job) { assert(job->pause_count > 0); @@ -679,6 +767,7 @@ void job_resume(Job *job) job_enter_cond(job, job_timer_not_pending); } =20 +/* Called with job_mutex held. */ void job_user_pause(Job *job, Error **errp) { if (job_apply_verb(job, JOB_VERB_PAUSE, errp)) { @@ -692,16 +781,19 @@ void job_user_pause(Job *job, Error **errp) job_pause(job); } =20 +/* Called with job_mutex held. */ bool job_user_paused(Job *job) { return job->user_paused; } =20 +/* Called with job_mutex held. */ void job_set_user_paused(Job *job) { job->user_paused =3D true; } =20 +/* Called with job_mutex held. Temporarly releases the lock. */ void job_user_resume(Job *job, Error **errp) { assert(job); @@ -713,12 +805,15 @@ void job_user_resume(Job *job, Error **errp) return; } if (job->driver->user_resume) { + job_unlock(); job->driver->user_resume(job); + job_lock(); } job->user_paused =3D false; job_resume(job); } =20 +/* Called with job_mutex held. */ static void job_do_dismiss(Job *job) { assert(job); @@ -732,6 +827,7 @@ static void job_do_dismiss(Job *job) job_unref(job); } =20 +/* Called with job_mutex held. */ void job_dismiss(Job **jobptr, Error **errp) { Job *job =3D *jobptr; @@ -761,9 +857,10 @@ static void job_conclude(Job *job) } } =20 +/* Called with job_mutex held. */ static void job_update_rc(Job *job) { - if (!job->ret && job_is_cancelled(job)) { + if (!job->ret && job_is_cancelled_locked(job)) { job->ret =3D -ECANCELED; } if (job->ret) { @@ -774,22 +871,25 @@ static void job_update_rc(Job *job) } } =20 +/* Called with job_mutex *not* held. */ static void job_commit(Job *job) { - assert(!job->ret); + assert(!job_get_ret(job)); if (job->driver->commit) { job->driver->commit(job); } } =20 +/* Called with job_mutex *not* held. */ static void job_abort(Job *job) { - assert(job->ret); + assert(job_get_ret(job)); if (job->driver->abort) { job->driver->abort(job); } } =20 +/* Called with job_mutex *not* held. 
+ */
 static void job_clean(Job *job)
 {
     if (job->driver->clean) {
@@ -797,14 +897,18 @@ static void job_clean(Job *job)
     }
 }
 
+/* Called with job_mutex held, but temporarily releases it. */
 static int job_finalize_single(Job *job)
 {
-    assert(job_is_completed(job));
+    int ret;
+    assert(job_is_completed_locked(job));
 
     /* Ensure abort is called for late-transactional failures */
     job_update_rc(job);
 
-    if (!job->ret) {
+    ret = job->ret;
+    job_unlock();
+    if (!ret) {
         job_commit(job);
     } else {
         job_abort(job);
@@ -812,12 +916,13 @@ static int job_finalize_single(Job *job)
     job_clean(job);
 
     if (job->cb) {
-        job->cb(job->opaque, job->ret);
+        job->cb(job->opaque, ret);
     }
+    job_lock();
 
     /* Emit events only if we actually started */
     if (job_started(job)) {
-        if (job_is_cancelled(job)) {
+        if (job_is_cancelled_locked(job)) {
             job_event_cancelled(job);
         } else {
             job_event_completed(job);
@@ -829,15 +934,20 @@ static int job_finalize_single(Job *job)
     return 0;
 }
 
+/* Called with job_mutex held. Temporarily releases the lock. */
 static void job_cancel_async(Job *job, bool force)
 {
     if (job->driver->cancel) {
+        job_unlock();
         job->driver->cancel(job, force);
+        job_lock();
     }
     if (job->user_paused) {
         /* Do not call job_enter here, the caller will handle it. */
         if (job->driver->user_resume) {
+            job_unlock();
            job->driver->user_resume(job);
+            job_lock();
         }
         job->user_paused = false;
         assert(job->pause_count > 0);
@@ -848,27 +958,21 @@ static void job_cancel_async(Job *job, bool force)
     job->force_cancel |= force;
 }
 
+/* Called with job_mutex held. */
 static void job_completed_txn_abort(Job *job)
 {
-    AioContext *outer_ctx = job->aio_context;
     AioContext *ctx;
     JobTxn *txn = job->txn;
     Job *other_job;
 
-    if (txn->aborting) {
+    if (qatomic_cmpxchg(&txn->aborting, false, true)) {
         /*
          * We are cancelled by another job, which will handle everything.
          */
         return;
     }
-    txn->aborting = true;
     job_txn_ref(txn);
 
-    /* We can only hold the single job's AioContext lock while calling
-     * job_finalize_single() because the finalization callbacks can involve
-     * calls of AIO_WAIT_WHILE(), which could deadlock otherwise. */
-    aio_context_release(outer_ctx);
-
     /* Other jobs are effectively cancelled by us, set the status for
      * them; this job, however, may or may not be cancelled, depending
      * on the caller, so leave it. */
@@ -884,33 +988,39 @@ static void job_completed_txn_abort(Job *job)
         other_job = QLIST_FIRST(&txn->jobs);
         ctx = other_job->aio_context;
         aio_context_acquire(ctx);
-        if (!job_is_completed(other_job)) {
-            assert(job_is_cancelled(other_job));
+        if (!job_is_completed_locked(other_job)) {
+            assert(job_is_cancelled_locked(other_job));
             job_finish_sync(other_job, NULL, NULL);
         }
         job_finalize_single(other_job);
         aio_context_release(ctx);
     }
 
-    aio_context_acquire(outer_ctx);
-
     job_txn_unref(txn);
 }
 
+/* Called with job_mutex held. Temporarily releases the lock. */
 static int job_prepare(Job *job)
 {
+    int ret;
+
     if (job->ret == 0 && job->driver->prepare) {
-        job->ret = job->driver->prepare(job);
+        job_unlock();
+        ret = job->driver->prepare(job);
+        job_lock();
+        job->ret = ret;
         job_update_rc(job);
     }
     return job->ret;
 }
 
+/* Does not need job_mutex. */
 static int job_needs_finalize(Job *job)
 {
     return !job->auto_finalize;
 }
 
+/* Called with job_mutex held. */
 static void job_do_finalize(Job *job)
 {
     int rc;
@@ -925,6 +1035,7 @@ static void job_do_finalize(Job *job)
     }
 }
 
+/* Called with job_mutex held.
*/ void job_finalize(Job *job, Error **errp) { assert(job && job->id); @@ -934,6 +1045,7 @@ void job_finalize(Job *job, Error **errp) job_do_finalize(job); } =20 +/* Called with job_mutex held. */ static int job_transition_to_pending(Job *job) { job_state_transition(job, JOB_STATUS_PENDING); @@ -943,17 +1055,22 @@ static int job_transition_to_pending(Job *job) return 0; } =20 +/* Called with job_mutex *not* held. */ void job_transition_to_ready(Job *job) { + job_lock(); job_state_transition(job, JOB_STATUS_READY); job_event_ready(job); + job_unlock(); } =20 +/* Called with job_mutex held. */ static void job_completed_txn_success(Job *job) { - JobTxn *txn =3D job->txn; + JobTxn *txn; Job *other_job; =20 + txn =3D job->txn; job_state_transition(job, JOB_STATUS_WAITING); =20 /* @@ -961,7 +1078,7 @@ static void job_completed_txn_success(Job *job) * txn. */ QLIST_FOREACH(other_job, &txn->jobs, txn_list) { - if (!job_is_completed(other_job)) { + if (!job_is_completed_locked(other_job)) { return; } assert(other_job->ret =3D=3D 0); @@ -975,9 +1092,10 @@ static void job_completed_txn_success(Job *job) } } =20 +/* Called with job_mutex held. */ static void job_completed(Job *job) { - assert(job && job->txn && !job_is_completed(job)); + assert(job && job->txn && !job_is_completed_locked(job)); =20 job_update_rc(job); trace_job_completed(job, job->ret); @@ -988,14 +1106,16 @@ static void job_completed(Job *job) } } =20 -/** Useful only as a type shim for aio_bh_schedule_oneshot. */ +/** + * Useful only as a type shim for aio_bh_schedule_oneshot. + * Called with job_mutex *not* held. + */ static void job_exit(void *opaque) { Job *job =3D (Job *)opaque; - AioContext *ctx; =20 + job_lock(); job_ref(job); - aio_context_acquire(job->aio_context); =20 /* This is a lie, we're not quiescent, but still doing the completion * callbacks. However, completion callbacks tend to involve operations= that @@ -1012,29 +1132,40 @@ static void job_exit(void *opaque) * acquiring the new lock, and we ref/unref to avoid job_completed fre= eing * the job underneath us. */ - ctx =3D job->aio_context; job_unref(job); - aio_context_release(ctx); + job_unlock(); } =20 /** * All jobs must allow a pause point before entering their job proper. This * ensures that jobs can be paused prior to being started, then resumed la= ter. + * + * Called with job_mutex *not* held. */ static void coroutine_fn job_co_entry(void *opaque) { Job *job =3D opaque; + Error *local_error =3D NULL; + int ret; =20 assert(job && job->driver && job->driver->run); job_pause_point(job); - job->ret =3D job->driver->run(job, &job->err); + ret =3D job->driver->run(job, &local_error); + job_lock(); + if (local_error) { + error_propagate(&job->err, local_error); + } + job->ret =3D ret; job->deferred_to_main_loop =3D true; job->busy =3D true; + job_unlock(); aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job); } =20 +/* Called with job_mutex *not* held. */ void job_start(Job *job) { + job_lock(); assert(job && !job_started(job) && job->paused && job->driver && job->driver->run); job->co =3D qemu_coroutine_create(job_co_entry, job); @@ -1042,9 +1173,11 @@ void job_start(Job *job) job->busy =3D true; job->paused =3D false; job_state_transition(job, JOB_STATUS_RUNNING); - aio_co_enter(job->aio_context, job->co); + job_unlock(); + aio_co_enter(job_get_aiocontext(job), job->co); } =20 +/* Called with job_mutex held. 
*/ void job_cancel(Job *job, bool force) { if (job->status =3D=3D JOB_STATUS_CONCLUDED) { @@ -1057,10 +1190,11 @@ void job_cancel(Job *job, bool force) } else if (job->deferred_to_main_loop) { job_completed_txn_abort(job); } else { - job_enter(job); + job_enter_locked(job); } } =20 +/* Called with job_mutex held. */ void job_user_cancel(Job *job, bool force, Error **errp) { if (job_apply_verb(job, JOB_VERB_CANCEL, errp)) { @@ -1069,19 +1203,36 @@ void job_user_cancel(Job *job, bool force, Error **= errp) job_cancel(job, force); } =20 -/* A wrapper around job_cancel() taking an Error ** parameter so it may be +/* + * A wrapper around job_cancel() taking an Error ** parameter so it may be * used with job_finish_sync() without the need for (rather nasty) function - * pointer casts there. */ + * pointer casts there. + * + * Called with job_mutex held. + */ static void job_cancel_err(Job *job, Error **errp) { job_cancel(job, false); } =20 +/* + * Called with job_mutex *not* held, unlike most other APIs consumed + * by the monitor! + */ int job_cancel_sync(Job *job) { - return job_finish_sync(job, &job_cancel_err, NULL); + int ret; + + job_lock(); + ret =3D job_finish_sync(job, &job_cancel_err, NULL); + job_unlock(); + return ret; } =20 +/* + * Called with job_mutex *not* held, unlike most other APIs consumed + * by the monitor! + */ void job_cancel_sync_all(void) { Job *job; @@ -1095,11 +1246,13 @@ void job_cancel_sync_all(void) } } =20 +/* Called with job_mutex held. */ int job_complete_sync(Job *job, Error **errp) { return job_finish_sync(job, job_complete, errp); } =20 +/* Called with job_mutex held. Temporarly releases the lock. */ void job_complete(Job *job, Error **errp) { /* Should not be reachable via external interface for internal jobs */ @@ -1107,15 +1260,18 @@ void job_complete(Job *job, Error **errp) if (job_apply_verb(job, JOB_VERB_COMPLETE, errp)) { return; } - if (job_is_cancelled(job) || !job->driver->complete) { + if (job_is_cancelled_locked(job) || !job->driver->complete) { error_setg(errp, "The active block job '%s' cannot be completed", job->id); return; } =20 + job_unlock(); job->driver->complete(job, errp); + job_lock(); } =20 +/* Called with job_mutex held. */ int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error *= *errp) { Error *local_err =3D NULL; @@ -1132,10 +1288,12 @@ int job_finish_sync(Job *job, void (*finish)(Job *,= Error **errp), Error **errp) return -EBUSY; } =20 - AIO_WAIT_WHILE(job->aio_context, - (job_enter(job), !job_is_completed(job))); + job_unlock(); + AIO_WAIT_WHILE(NULL, (job_enter(job), !job_is_completed(job))); + job_lock(); =20 - ret =3D (job_is_cancelled(job) && job->ret =3D=3D 0) ? -ECANCELED : jo= b->ret; + ret =3D (job_is_cancelled_locked(job) && job->ret =3D=3D 0) ? + -ECANCELED : job->ret; job_unref(job); return ret; } diff --git a/qemu-img.c b/qemu-img.c index d16bd367d9..82debde038 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -898,17 +898,19 @@ static void common_block_job_cb(void *opaque, int ret) } } =20 +/* Called with job_mutex held. 
+ * Releases it temporarily. */
 static void run_block_job(BlockJob *job, Error **errp)
 {
     uint64_t progress_current, progress_total;
     AioContext *aio_context = blk_get_aio_context(job->blk);
     int ret = 0;
 
-    aio_context_acquire(aio_context);
     job_ref(&job->job);
     do {
         float progress = 0.0f;
+        job_unlock();
         aio_poll(aio_context, true);
+        job_lock();
 
         progress_get_snapshot(&job->job.progress, &progress_current,
                               &progress_total);
@@ -916,15 +918,15 @@ static void run_block_job(BlockJob *job, Error **errp)
             progress = (float)progress_current / progress_total * 100.f;
         }
         qemu_progress_print(progress, 0);
-    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));
+    } while (!job_is_ready_locked(&job->job) &&
+             !job_is_completed_locked(&job->job));
 
-    if (!job_is_completed(&job->job)) {
+    if (!job_is_completed_locked(&job->job)) {
         ret = job_complete_sync(&job->job, errp);
     } else {
-        ret = job_get_ret(&job->job);
+        ret = job_get_ret_locked(&job->job);
     }
     job_unref(&job->job);
-    aio_context_release(aio_context);
 
     /* publish completion progress only when success */
     if (!ret) {
@@ -1076,9 +1078,12 @@ static int img_commit(int argc, char **argv)
         bdrv_ref(bs);
     }
 
+    job_lock();
     job = block_job_get("commit");
     assert(job);
     run_block_job(job, &local_err);
+    job_unlock();
+
    if (local_err) {
         goto unref_backing;
     }
-- 
2.31.1
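[Editor's aside, not part of the series: before the last patch, it is worth
distilling the monitor-side convention the preceding patches converge on.
The lookup helper returns with job_mutex held, and every QMP handler then
operates under that same lock and releases it on every path. A compact,
self-contained sketch; the mini-API below (MiniJob, find_job_locked,
qmp_mini_job_pause) is hypothetical, not QEMU's actual code:

    #include <pthread.h>
    #include <stddef.h>
    #include <string.h>

    static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
    static void job_lock(void)   { pthread_mutex_lock(&job_mutex); }
    static void job_unlock(void) { pthread_mutex_unlock(&job_mutex); }

    typedef struct MiniJob { const char *id; int paused; } MiniJob;

    static MiniJob jobs[16];         /* job list, protected by job_mutex */

    /* Returns with job_mutex held iff the job was found. */
    static MiniJob *find_job_locked(const char *id)
    {
        job_lock();
        for (size_t i = 0; i < 16; i++) {
            if (jobs[i].id && strcmp(jobs[i].id, id) == 0) {
                return &jobs[i];     /* lock stays held for the caller */
            }
        }
        job_unlock();                /* not found: drop the lock here */
        return NULL;
    }

    /* Shape of every QMP handler after this conversion. */
    void qmp_mini_job_pause(const char *id)
    {
        MiniJob *job = find_job_locked(id);
        if (!job) {
            return;                  /* helper already unlocked */
        }
        job->paused = 1;             /* operate under the same lock ... */
        job_unlock();                /* ... that made the lookup atomic */
    }

This makes lookup plus operation one critical section, which is exactly the
consistency that aio_context_acquire/release used to provide for the monitor.]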
From nobody Fri May 17 03:12:46 2024
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Emanuele Giuseppe Esposito, Vladimir Sementsov-Ogievskiy,
    qemu-devel@nongnu.org, Wen Congyang, Xie Changlong, Markus Armbruster,
    Max Reitz, Stefan Hajnoczi, Paolo Bonzini, John Snow
Subject: [RFC PATCH 6/6] jobs: remove unnecessary AioContext acquire/release pairs
Date: Wed, 7 Jul 2021 18:58:13 +0200
Message-Id: <20210707165813.55361-7-eesposit@redhat.com>
In-Reply-To: <20210707165813.55361-1-eesposit@redhat.com>
References: <20210707165813.55361-1-eesposit@redhat.com>

Now that we use the job_mutex, remove the unnecessary
aio_context_acquire/release pairs. A few places still need the AioContext
lock, however, so reduce those critical sections to the minimum.

This patch is separate from the previous one because here we remove locks
without replacing them with aio_context_acquire/release pairs. These
remaining sections will also be removed in the future, once the underlying
bdrv_* APIs are free of the context lock as well.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/mirror.c                 |   6 ++
 block/monitor/block-hmp-cmds.c |   6 --
 blockdev.c                     | 173 ++++++++-------------------------
 blockjob.c                     |   3 +
 job.c                          |   9 +-
 qemu-img.c                     |   4 -
 6 files changed, 54 insertions(+), 147 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index deefaa6a39..8d30c53690 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1857,6 +1857,7 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
 {
     bool is_none_mode;
     BlockDriverState *base;
+    AioContext *aio_context;
 
     if ((mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
         (mode == MIRROR_SYNC_MODE_BITMAP)) {
@@ -1866,11 +1867,16 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
     }
     is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ?
bdrv_backing_chain_next(bs= ) : NULL; + + aio_context =3D bdrv_get_aio_context(bs); + aio_context_acquire(aio_context); mirror_start_job(job_id, bs, creation_flags, target, replaces, speed, granularity, buf_size, backing_mode, zero_targ= et, on_source_error, on_target_error, unmap, NULL, NULL, &mirror_job_driver, is_none_mode, base, false, filter_node_name, true, copy_mode, errp); + aio_context_release(aio_context); + } =20 BlockJob *commit_active_start(const char *job_id, BlockDriverState *bs, diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index 3e6670c963..99095afae7 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -206,7 +206,6 @@ void hmp_commit(Monitor *mon, const QDict *qdict) ret =3D blk_commit_all(); } else { BlockDriverState *bs; - AioContext *aio_context; =20 blk =3D blk_by_name(device); if (!blk) { @@ -219,12 +218,7 @@ void hmp_commit(Monitor *mon, const QDict *qdict) } =20 bs =3D bdrv_skip_implicit_filters(blk_bs(blk)); - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - ret =3D bdrv_commit(bs); - - aio_context_release(aio_context); } if (ret < 0) { error_report("'commit' error for '%s': %s", device, strerror(-ret)= ); diff --git a/blockdev.c b/blockdev.c index 9255aea6a2..119cb9a539 100644 --- a/blockdev.c +++ b/blockdev.c @@ -147,13 +147,8 @@ void blockdev_mark_auto_del(BlockBackend *blk) =20 for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { if (block_job_has_bdrv(job, blk_bs(blk))) { - AioContext *aio_context =3D job_get_aiocontext(&job->job); - aio_context_acquire(aio_context); - job_lock(); job_cancel(&job->job, false); - - aio_context_release(aio_context); job_unlock(); } } @@ -1714,7 +1709,6 @@ static void drive_backup_prepare(BlkActionState *comm= on, Error **errp) } =20 aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); =20 /* Paired with .clean() */ bdrv_drained_begin(bs); @@ -1726,7 +1720,7 @@ static void drive_backup_prepare(BlkActionState *comm= on, Error **errp) =20 /* Early check to avoid creating target */ if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { - goto out; + return; } =20 flags =3D bs->open_flags | BDRV_O_RDWR; @@ -1756,7 +1750,7 @@ static void drive_backup_prepare(BlkActionState *comm= on, Error **errp) size =3D bdrv_getlength(bs); if (size < 0) { error_setg_errno(errp, -size, "bdrv_getlength failed"); - goto out; + return; } =20 if (backup->mode !=3D NEW_IMAGE_MODE_EXISTING) { @@ -1779,7 +1773,7 @@ static void drive_backup_prepare(BlkActionState *comm= on, Error **errp) =20 if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 options =3D qdict_new(); @@ -1791,12 +1785,11 @@ static void drive_backup_prepare(BlkActionState *co= mmon, Error **errp) =20 target_bs =3D bdrv_open(backup->target, NULL, options, flags, errp); if (!target_bs) { - goto out; + return; } =20 /* Honor bdrv_try_set_aio_context() context acquisition requirements. 
= */ old_context =3D bdrv_get_aio_context(target_bs); - aio_context_release(aio_context); aio_context_acquire(old_context); =20 ret =3D bdrv_try_set_aio_context(target_bs, aio_context, errp); @@ -1807,7 +1800,6 @@ static void drive_backup_prepare(BlkActionState *comm= on, Error **errp) } =20 aio_context_release(old_context); - aio_context_acquire(aio_context); =20 if (set_backing_hd) { if (bdrv_set_backing_hd(target_bs, source, errp) < 0) { @@ -1816,29 +1808,21 @@ static void drive_backup_prepare(BlkActionState *co= mmon, Error **errp) } =20 state->bs =3D bs; - + aio_context_acquire(aio_context); state->job =3D do_backup_common(qapi_DriveBackup_base(backup), bs, target_bs, aio_context, common->block_job_txn, errp); - + aio_context_release(aio_context); unref: bdrv_unref(target_bs); -out: - aio_context_release(aio_context); } =20 static void drive_backup_commit(BlkActionState *common) { DriveBackupState *state =3D DO_UPCAST(DriveBackupState, common, common= ); - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); =20 assert(state->job); job_start(&state->job->job); - - aio_context_release(aio_context); } =20 static void drive_backup_abort(BlkActionState *common) @@ -1846,32 +1830,18 @@ static void drive_backup_abort(BlkActionState *comm= on) DriveBackupState *state =3D DO_UPCAST(DriveBackupState, common, common= ); =20 if (state->job) { - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - job_cancel_sync(&state->job->job); - - aio_context_release(aio_context); } } =20 static void drive_backup_clean(BlkActionState *common) { DriveBackupState *state =3D DO_UPCAST(DriveBackupState, common, common= ); - AioContext *aio_context; =20 - if (!state->bs) { - return; + if (state->bs) { + bdrv_drained_end(state->bs); } =20 - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - - bdrv_drained_end(state->bs); - - aio_context_release(aio_context); } =20 typedef struct BlockdevBackupState { @@ -1931,15 +1901,9 @@ static void blockdev_backup_prepare(BlkActionState *= common, Error **errp) static void blockdev_backup_commit(BlkActionState *common) { BlockdevBackupState *state =3D DO_UPCAST(BlockdevBackupState, common, = common); - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); =20 assert(state->job); job_start(&state->job->job); - - aio_context_release(aio_context); } =20 static void blockdev_backup_abort(BlkActionState *common) @@ -1947,32 +1911,17 @@ static void blockdev_backup_abort(BlkActionState *c= ommon) BlockdevBackupState *state =3D DO_UPCAST(BlockdevBackupState, common, = common); =20 if (state->job) { - AioContext *aio_context; - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - job_cancel_sync(&state->job->job); - - aio_context_release(aio_context); } } =20 static void blockdev_backup_clean(BlkActionState *common) { BlockdevBackupState *state =3D DO_UPCAST(BlockdevBackupState, common, = common); - AioContext *aio_context; =20 - if (!state->bs) { - return; + if (state->bs) { + bdrv_drained_end(state->bs); } - - aio_context =3D bdrv_get_aio_context(state->bs); - aio_context_acquire(aio_context); - - bdrv_drained_end(state->bs); - - aio_context_release(aio_context); } =20 typedef struct BlockDirtyBitmapState { @@ -2486,7 +2435,6 @@ void qmp_block_stream(bool has_job_id, const char *jo= b_id, const char *device, BlockDriverState *bs, 
*iter, *iter_end; BlockDriverState *base_bs =3D NULL; BlockDriverState *bottom_bs =3D NULL; - AioContext *aio_context; Error *local_err =3D NULL; int job_flags =3D JOB_DEFAULT; =20 @@ -2517,52 +2465,46 @@ void qmp_block_stream(bool has_job_id, const char *= job_id, const char *device, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (has_base) { base_bs =3D bdrv_find_backing_image(bs, base); if (base_bs =3D=3D NULL) { error_setg(errp, "Can't find '%s' in the backing chain", base); - goto out; + return; } - assert(bdrv_get_aio_context(base_bs) =3D=3D aio_context); } =20 if (has_base_node) { base_bs =3D bdrv_lookup_bs(NULL, base_node, errp); if (!base_bs) { - goto out; + return; } if (bs =3D=3D base_bs || !bdrv_chain_contains(bs, base_bs)) { error_setg(errp, "Node '%s' is not a backing image of '%s'", base_node, device); - goto out; + return; } - assert(bdrv_get_aio_context(base_bs) =3D=3D aio_context); bdrv_refresh_filename(base_bs); } =20 if (has_bottom) { bottom_bs =3D bdrv_lookup_bs(NULL, bottom, errp); if (!bottom_bs) { - goto out; + return; } if (!bottom_bs->drv) { error_setg(errp, "Node '%s' is not open", bottom); - goto out; + return; } if (bottom_bs->drv->is_filter) { error_setg(errp, "Node '%s' is a filter, use a non-filter node= " "as 'bottom'", bottom); - goto out; + return; } if (!bdrv_chain_contains(bs, bottom_bs)) { error_setg(errp, "Node '%s' is not in a chain starting from '%= s'", bottom, device); - goto out; + return; } - assert(bdrv_get_aio_context(bottom_bs) =3D=3D aio_context); } =20 /* @@ -2573,7 +2515,7 @@ void qmp_block_stream(bool has_job_id, const char *jo= b_id, const char *device, iter =3D bdrv_filter_or_cow_bs(iter)) { if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) { - goto out; + return; } } =20 @@ -2582,7 +2524,7 @@ void qmp_block_stream(bool has_job_id, const char *jo= b_id, const char *device, if (base_bs =3D=3D NULL && has_backing_file) { error_setg(errp, "backing file specified, but streaming the " "entire chain"); - goto out; + return; } =20 if (has_auto_finalize && !auto_finalize) { @@ -2597,13 +2539,10 @@ void qmp_block_stream(bool has_job_id, const char *= job_id, const char *device, filter_node_name, &local_err); if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 trace_qmp_block_stream(bs); - -out: - aio_context_release(aio_context); } =20 void qmp_block_commit(bool has_job_id, const char *job_id, const char *dev= ice, @@ -2622,7 +2561,6 @@ void qmp_block_commit(bool has_job_id, const char *jo= b_id, const char *device, BlockDriverState *bs; BlockDriverState *iter; BlockDriverState *base_bs, *top_bs; - AioContext *aio_context; Error *local_err =3D NULL; int job_flags =3D JOB_DEFAULT; uint64_t top_perm, top_shared; @@ -2661,11 +2599,8 @@ void qmp_block_commit(bool has_job_id, const char *j= ob_id, const char *device, return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_COMMIT_SOURCE, errp)) { - goto out; + return; } =20 /* default top_bs is the active layer */ @@ -2673,16 +2608,16 @@ void qmp_block_commit(bool has_job_id, const char *= job_id, const char *device, =20 if (has_top_node && has_top) { error_setg(errp, "'top-node' and 'top' are mutually exclusive"); - goto out; + return; } else if (has_top_node) { top_bs =3D bdrv_lookup_bs(NULL, top_node, errp); if (top_bs =3D=3D NULL) { - goto out; + return; } if (!bdrv_chain_contains(bs, top_bs)) { error_setg(errp, "'%s' is not in this 
backing file chain", top_node); - goto out; + return; } } else if (has_top && top) { /* This strcmp() is just a shortcut, there is no need to @@ -2696,52 +2631,48 @@ void qmp_block_commit(bool has_job_id, const char *= job_id, const char *device, =20 if (top_bs =3D=3D NULL) { error_setg(errp, "Top image file %s not found", top ? top : "NULL"= ); - goto out; + return; } =20 - assert(bdrv_get_aio_context(top_bs) =3D=3D aio_context); - if (has_base_node && has_base) { error_setg(errp, "'base-node' and 'base' are mutually exclusive"); - goto out; + return; } else if (has_base_node) { base_bs =3D bdrv_lookup_bs(NULL, base_node, errp); if (base_bs =3D=3D NULL) { - goto out; + return; } if (!bdrv_chain_contains(top_bs, base_bs)) { error_setg(errp, "'%s' is not in this backing file chain", base_node); - goto out; + return; } } else if (has_base && base) { base_bs =3D bdrv_find_backing_image(top_bs, base); if (base_bs =3D=3D NULL) { error_setg(errp, "Can't find '%s' in the backing chain", base); - goto out; + return; } } else { base_bs =3D bdrv_find_base(top_bs); if (base_bs =3D=3D NULL) { error_setg(errp, "There is no backimg image"); - goto out; + return; } } =20 - assert(bdrv_get_aio_context(base_bs) =3D=3D aio_context); - for (iter =3D top_bs; iter !=3D bdrv_filter_or_cow_bs(base_bs); iter =3D bdrv_filter_or_cow_bs(iter)) { if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_COMMIT_TARGET, errp)) { - goto out; + return; } } =20 /* Do not allow attempts to commit an image into itself */ if (top_bs =3D=3D base_bs) { error_setg(errp, "cannot commit an image into itself"); - goto out; + return; } =20 /* @@ -2764,7 +2695,7 @@ void qmp_block_commit(bool has_job_id, const char *jo= b_id, const char *device, error_setg(errp, "'backing-file' specified, but 'top' has = a " "writer on it"); } - goto out; + return; } if (!has_job_id) { /* @@ -2780,7 +2711,7 @@ void qmp_block_commit(bool has_job_id, const char *jo= b_id, const char *device, } else { BlockDriverState *overlay_bs =3D bdrv_find_overlay(bs, top_bs); if (bdrv_op_is_blocked(overlay_bs, BLOCK_OP_TYPE_COMMIT_TARGET, er= rp)) { - goto out; + return; } commit_start(has_job_id ? job_id : NULL, bs, base_bs, top_bs, job_= flags, speed, on_error, has_backing_file ? 
backing_file : NU= LL, @@ -2788,11 +2719,8 @@ void qmp_block_commit(bool has_job_id, const char *j= ob_id, const char *device, } if (local_err !=3D NULL) { error_propagate(errp, local_err); - goto out; + return; } - -out: - aio_context_release(aio_context); } =20 /* Common QMP interface for drive-backup and blockdev-backup */ @@ -3089,7 +3017,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) { BlockDriverState *bs; BlockDriverState *target_backing_bs, *target_bs; - AioContext *aio_context; AioContext *old_context; BlockMirrorBackingMode backing_mode; Error *local_err =3D NULL; @@ -3110,9 +3037,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) return; } =20 - aio_context =3D bdrv_get_aio_context(bs); - aio_context_acquire(aio_context); - if (!arg->has_mode) { arg->mode =3D NEW_IMAGE_MODE_ABSOLUTE_PATHS; } @@ -3134,14 +3058,14 @@ void qmp_drive_mirror(DriveMirror *arg, Error **err= p) size =3D bdrv_getlength(bs); if (size < 0) { error_setg_errno(errp, -size, "bdrv_getlength failed"); - goto out; + return; } =20 if (arg->has_replaces) { if (!arg->has_node_name) { error_setg(errp, "a node-name must be provided when replacing = a" " named node of the graph"); - goto out; + return; } } =20 @@ -3184,7 +3108,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) =20 if (local_err) { error_propagate(errp, local_err); - goto out; + return; } =20 options =3D qdict_new(); @@ -3200,7 +3124,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) */ target_bs =3D bdrv_open(arg->target, NULL, options, flags, errp); if (!target_bs) { - goto out; + return; } =20 zero_target =3D (arg->sync =3D=3D MIRROR_SYNC_MODE_FULL && @@ -3210,10 +3134,9 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) =20 /* Honor bdrv_try_set_aio_context() context acquisition requirements. = */ old_context =3D bdrv_get_aio_context(target_bs); - aio_context_release(aio_context); aio_context_acquire(old_context); =20 - ret =3D bdrv_try_set_aio_context(target_bs, aio_context, errp); + ret =3D bdrv_try_set_aio_context(target_bs, bdrv_get_aio_context(bs), = errp); if (ret < 0) { bdrv_unref(target_bs); aio_context_release(old_context); @@ -3221,7 +3144,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) } =20 aio_context_release(old_context); - aio_context_acquire(aio_context); =20 blockdev_mirror_common(arg->has_job_id ? arg->job_id : NULL, bs, targe= t_bs, arg->has_replaces, arg->replaces, arg->sync, @@ -3238,8 +3160,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp) arg->has_auto_dismiss, arg->auto_dismiss, errp); bdrv_unref(target_bs); -out: - aio_context_release(aio_context); } =20 void qmp_blockdev_mirror(bool has_job_id, const char *job_id, @@ -3262,7 +3182,6 @@ void qmp_blockdev_mirror(bool has_job_id, const char = *job_id, { BlockDriverState *bs; BlockDriverState *target_bs; - AioContext *aio_context; AioContext *old_context; BlockMirrorBackingMode backing_mode =3D MIRROR_LEAVE_BACKING_CHAIN; bool zero_target; @@ -3282,16 +3201,14 @@ void qmp_blockdev_mirror(bool has_job_id, const cha= r *job_id, =20 /* Honor bdrv_try_set_aio_context() context acquisition requirements. 
= */ old_context =3D bdrv_get_aio_context(target_bs); - aio_context =3D bdrv_get_aio_context(bs); aio_context_acquire(old_context); =20 - ret =3D bdrv_try_set_aio_context(target_bs, aio_context, errp); + ret =3D bdrv_try_set_aio_context(target_bs, bdrv_get_aio_context(bs), = errp); =20 aio_context_release(old_context); - aio_context_acquire(aio_context); =20 if (ret < 0) { - goto out; + return; } =20 blockdev_mirror_common(has_job_id ? job_id : NULL, bs, target_bs, @@ -3307,8 +3224,6 @@ void qmp_blockdev_mirror(bool has_job_id, const char = *job_id, has_auto_finalize, auto_finalize, has_auto_dismiss, auto_dismiss, errp); -out: - aio_context_release(aio_context); } =20 /* Get a block job using its ID and acquire its job_lock */ @@ -3696,15 +3611,11 @@ BlockJobInfoList *qmp_query_block_jobs(Error **errp) =20 for (job =3D block_job_next(NULL); job; job =3D block_job_next(job)) { BlockJobInfo *value; - AioContext *aio_context; =20 if (block_job_is_internal(job)) { continue; } - aio_context =3D blk_get_aio_context(job->blk); - aio_context_acquire(aio_context); value =3D block_job_query(job, errp); - aio_context_release(aio_context); if (!value) { qapi_free_BlockJobInfoList(head); return NULL; diff --git a/blockjob.c b/blockjob.c index e7b289089b..633abb3811 100644 --- a/blockjob.c +++ b/blockjob.c @@ -195,6 +195,7 @@ static const BdrvChildClass child_job =3D { .get_parent_aio_context =3D child_job_get_parent_aio_context, }; =20 +/* Called with BQL held. */ void block_job_remove_all_bdrv(BlockJob *job) { /* @@ -216,6 +217,7 @@ void block_job_remove_all_bdrv(BlockJob *job) } } =20 +/* Called with BQL held. */ bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs) { GSList *el; @@ -230,6 +232,7 @@ bool block_job_has_bdrv(BlockJob *job, BlockDriverState= *bs) return false; } =20 +/* Called with BQL held. */ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *= bs, uint64_t perm, uint64_t shared_perm, Error **errp) { diff --git a/job.c b/job.c index e2006532b5..b86fce3679 100644 --- a/job.c +++ b/job.c @@ -220,7 +220,6 @@ static int job_txn_apply(Job *job, int fn(Job *)) * break AIO_WAIT_WHILE from within fn. */ job_ref(job); - aio_context_release(job->aio_context); =20 QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) { inner_ctx =3D other_job->aio_context; @@ -232,11 +231,6 @@ static int job_txn_apply(Job *job, int fn(Job *)) } } =20 - /* - * Note that job->aio_context might have been changed by calling fn, s= o we - * can't use a local variable to cache it. 
-     */
-    aio_context_acquire(job->aio_context);
     job_unref(job);
     return rc;
 }
@@ -515,8 +509,11 @@ void job_unref(Job *job)
         assert(!job->txn);
 
         if (job->driver->free) {
+            AioContext *ctx = job_get_aiocontext(job);
             job_unlock();
+            aio_context_acquire(ctx);
             job->driver->free(job);
+            aio_context_release(ctx);
             job_lock();
         }
 
diff --git a/qemu-img.c b/qemu-img.c
index 82debde038..10bbe88b03 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -946,7 +946,6 @@ static int img_commit(int argc, char **argv)
     Error *local_err = NULL;
     CommonBlockJobCBInfo cbi;
     bool image_opts = false;
-    AioContext *aio_context;
     int64_t rate_limit = 0;
 
     fmt = NULL;
@@ -1060,12 +1059,9 @@ static int img_commit(int argc, char **argv)
         .bs = bs,
     };
 
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
     commit_active_start("commit", bs, base_bs, JOB_DEFAULT, rate_limit,
                         BLOCKDEV_ON_ERROR_REPORT, NULL, common_block_job_cb,
                         &cbi, false, &local_err);
-    aio_context_release(aio_context);
     if (local_err) {
         goto done;
     }
-- 
2.31.1
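[Editor's closing note, not part of the series: the end state the whole
series aims at can be compressed into one illustrative function, modelled
on the job_unref() hunk above. job_mutex now protects Job state, and the
AioContext lock survives only around the residual callback that still
faces the block layer. The MiniJob type and free_cb field are hypothetical
stand-ins; the aio_context_acquire/release declarations mirror QEMU's real
signatures, so this compiles standalone but links only inside QEMU:

    #include <pthread.h>

    typedef struct AioContext AioContext;        /* opaque, as in QEMU */
    extern void aio_context_acquire(AioContext *ctx);
    extern void aio_context_release(AioContext *ctx);

    static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
    static void job_lock(void)   { pthread_mutex_lock(&job_mutex); }
    static void job_unlock(void) { pthread_mutex_unlock(&job_mutex); }

    typedef struct MiniJob {
        int refcnt;                              /* protected by job_mutex */
        AioContext *ctx;
        void (*free_cb)(struct MiniJob *job);    /* block-layer facing */
    } MiniJob;

    /* Called with job_mutex held; temporarily drops it, as job_unref()
     * does above, so the AioContext lock is never taken while holding
     * job_mutex (actual freeing of the struct elided). */
    static void mini_job_unref(MiniJob *job)
    {
        if (--job->refcnt == 0 && job->free_cb) {
            AioContext *ctx = job->ctx;
            job_unlock();
            aio_context_acquire(ctx);
            job->free_cb(job);
            aio_context_release(ctx);
            job_lock();
        }
    }
]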