From: Stefan Reiter
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, slp@redhat.com, mreitz@redhat.com, stefanha@redhat.com, jsnow@redhat.com, dietmar@proxmox.com
Subject: [PATCH v3 1/3] job: take each job's lock individually in job_txn_apply
Date: Tue, 31 Mar 2020 14:20:43 +0200
Message-Id: <20200331122045.164356-2-s.reiter@proxmox.com>
In-Reply-To: <20200331122045.164356-1-s.reiter@proxmox.com>
References: <20200331122045.164356-1-s.reiter@proxmox.com>

All callers of job_txn_apply hold a single job's lock, but different jobs within a transaction can have different contexts, so we need to lock each one individually before applying the callback function. Similar to job_completed_txn_abort, this also requires releasing the caller's context before and reacquiring it after, to avoid recursive locks which might break AIO_WAIT_WHILE in the callback.
This also brings to light a different issue: when a callback function in job_txn_apply moves its job to a different AIO context, job_exit will try to release the wrong lock (now that we re-acquire the lock correctly; previously it would just continue with the old lock, leaving the job unlocked for the rest of the codepath back to job_exit). Fix this by not caching the job's context in job_exit, and add a comment about why this is done.

One test needed adapting, since it calls job_finalize directly and thus needs to acquire the correct context manually.

Signed-off-by: Stefan Reiter
---
 job.c                 | 48 ++++++++++++++++++++++++++++++++++---------
 tests/test-blockjob.c |  2 ++
 2 files changed, 40 insertions(+), 10 deletions(-)

diff --git a/job.c b/job.c
index 134a07b92e..5fbaaabf78 100644
--- a/job.c
+++ b/job.c
@@ -136,17 +136,36 @@ static void job_txn_del_job(Job *job)
     }
 }
 
-static int job_txn_apply(JobTxn *txn, int fn(Job *))
+static int job_txn_apply(Job *job, int fn(Job *))
 {
-    Job *job, *next;
+    AioContext *inner_ctx;
+    Job *other_job, *next;
+    JobTxn *txn = job->txn;
     int rc = 0;
 
-    QLIST_FOREACH_SAFE(job, &txn->jobs, txn_list, next) {
-        rc = fn(job);
+    /*
+     * Similar to job_completed_txn_abort, we take each job's lock before
+     * applying fn, but since we assume that outer_ctx is held by the caller,
+     * we need to release it here to avoid holding the lock twice - which would
+     * break AIO_WAIT_WHILE from within fn.
+     */
+    aio_context_release(job->aio_context);
+
+    QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
+        inner_ctx = other_job->aio_context;
+        aio_context_acquire(inner_ctx);
+        rc = fn(other_job);
+        aio_context_release(inner_ctx);
         if (rc) {
             break;
         }
     }
+
+    /*
+     * Note that job->aio_context might have been changed by calling fn, so we
+     * can't use a local variable to cache it.
+     */
+    aio_context_acquire(job->aio_context);
     return rc;
 }
 
@@ -774,11 +793,11 @@ static void job_do_finalize(Job *job)
     assert(job && job->txn);
 
     /* prepare the transaction to complete */
-    rc = job_txn_apply(job->txn, job_prepare);
+    rc = job_txn_apply(job, job_prepare);
    if (rc) {
         job_completed_txn_abort(job);
     } else {
-        job_txn_apply(job->txn, job_finalize_single);
+        job_txn_apply(job, job_finalize_single);
     }
 }
 
@@ -824,10 +843,10 @@ static void job_completed_txn_success(Job *job)
         assert(other_job->ret == 0);
     }
 
-    job_txn_apply(txn, job_transition_to_pending);
+    job_txn_apply(job, job_transition_to_pending);
 
     /* If no jobs need manual finalization, automatically do so */
-    if (job_txn_apply(txn, job_needs_finalize) == 0) {
+    if (job_txn_apply(job, job_needs_finalize) == 0) {
         job_do_finalize(job);
     }
 }
@@ -849,9 +868,10 @@ static void job_completed(Job *job)
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
-    AioContext *ctx = job->aio_context;
+    AioContext *ctx;
 
-    aio_context_acquire(ctx);
+    job_ref(job);
+    aio_context_acquire(job->aio_context);
 
     /* This is a lie, we're not quiescent, but still doing the completion
      * callbacks. However, completion callbacks tend to involve operations that
@@ -862,6 +882,14 @@ static void job_exit(void *opaque)
 
     job_completed(job);
 
+    /*
+     * Note that calling job_completed can move the job to a different
+     * aio_context, so we cannot cache from above. job_txn_apply takes care of
+     * acquiring the new lock, and we ref/unref to avoid job_completed freeing
+     * the job underneath us.
+     */
+    ctx = job->aio_context;
+    job_unref(job);
     aio_context_release(ctx);
 }
 
diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index 4eeb184caf..7519847912 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -367,7 +367,9 @@ static void test_cancel_concluded(void)
     aio_poll(qemu_get_aio_context(), true);
     assert(job->status == JOB_STATUS_PENDING);
 
+    aio_context_acquire(job->aio_context);
     job_finalize(job, &error_abort);
+    aio_context_release(job->aio_context);
     assert(job->status == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
-- 
2.26.0