From nobody Sun May 19 01:26:47 2024
From: Stefan Reiter
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [PATCH v3 1/3] job: take each job's lock individually in job_txn_apply
Date: Tue, 31 Mar 2020 14:20:43 +0200
Message-Id: <20200331122045.164356-2-s.reiter@proxmox.com>
In-Reply-To: <20200331122045.164356-1-s.reiter@proxmox.com>
References: <20200331122045.164356-1-s.reiter@proxmox.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, slp@redhat.com,
    mreitz@redhat.com, stefanha@redhat.com, jsnow@redhat.com,
    dietmar@proxmox.com

All callers of job_txn_apply hold a single job's lock, but different
jobs within a transaction can have different contexts, so we need to
lock each one individually before applying the callback function.

Similar to job_completed_txn_abort, this also requires releasing the
caller's context before and reacquiring it after, to avoid recursive
locks which might break AIO_WAIT_WHILE in the callback.
This also brings to light a different issue: when a callback function in
job_txn_apply moves its job to a different AIO context, job_exit will
try to release the wrong lock (now that we reacquire the lock correctly;
previously it would just continue with the old lock, leaving the job
unlocked for the rest of the code path back to job_exit). Fix this by
not caching the job's context in job_exit, and add a comment about why
this is done.

One test needed adapting: since it calls job_finalize directly, it needs
to acquire the correct context manually.

Signed-off-by: Stefan Reiter
---
 job.c                 | 48 ++++++++++++++++++++++++++++++++++---------
 tests/test-blockjob.c |  2 ++
 2 files changed, 40 insertions(+), 10 deletions(-)

diff --git a/job.c b/job.c
index 134a07b92e..5fbaaabf78 100644
--- a/job.c
+++ b/job.c
@@ -136,17 +136,36 @@ static void job_txn_del_job(Job *job)
     }
 }
 
-static int job_txn_apply(JobTxn *txn, int fn(Job *))
+static int job_txn_apply(Job *job, int fn(Job *))
 {
-    Job *job, *next;
+    AioContext *inner_ctx;
+    Job *other_job, *next;
+    JobTxn *txn = job->txn;
     int rc = 0;
 
-    QLIST_FOREACH_SAFE(job, &txn->jobs, txn_list, next) {
-        rc = fn(job);
+    /*
+     * Similar to job_completed_txn_abort, we take each job's lock before
+     * applying fn, but since we assume that outer_ctx is held by the caller,
+     * we need to release it here to avoid holding the lock twice - which would
+     * break AIO_WAIT_WHILE from within fn.
+     */
+    aio_context_release(job->aio_context);
+
+    QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
+        inner_ctx = other_job->aio_context;
+        aio_context_acquire(inner_ctx);
+        rc = fn(other_job);
+        aio_context_release(inner_ctx);
         if (rc) {
             break;
         }
     }
+
+    /*
+     * Note that job->aio_context might have been changed by calling fn, so we
+     * can't use a local variable to cache it.
+     */
+    aio_context_acquire(job->aio_context);
     return rc;
 }
 
@@ -774,11 +793,11 @@ static void job_do_finalize(Job *job)
     assert(job && job->txn);
 
     /* prepare the transaction to complete */
-    rc = job_txn_apply(job->txn, job_prepare);
+    rc = job_txn_apply(job, job_prepare);
     if (rc) {
         job_completed_txn_abort(job);
     } else {
-        job_txn_apply(job->txn, job_finalize_single);
+        job_txn_apply(job, job_finalize_single);
     }
 }
 
@@ -824,10 +843,10 @@ static void job_completed_txn_success(Job *job)
         assert(other_job->ret == 0);
     }
 
-    job_txn_apply(txn, job_transition_to_pending);
+    job_txn_apply(job, job_transition_to_pending);
 
     /* If no jobs need manual finalization, automatically do so */
-    if (job_txn_apply(txn, job_needs_finalize) == 0) {
+    if (job_txn_apply(job, job_needs_finalize) == 0) {
         job_do_finalize(job);
     }
 }
@@ -849,9 +868,10 @@ static void job_completed(Job *job)
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
-    AioContext *ctx = job->aio_context;
+    AioContext *ctx;
 
-    aio_context_acquire(ctx);
+    job_ref(job);
+    aio_context_acquire(job->aio_context);
 
     /* This is a lie, we're not quiescent, but still doing the completion
      * callbacks. However, completion callbacks tend to involve operations that
@@ -862,6 +882,14 @@ static void job_exit(void *opaque)
 
     job_completed(job);
 
+    /*
+     * Note that calling job_completed can move the job to a different
+     * aio_context, so we cannot cache from above. job_txn_apply takes care of
+     * acquiring the new lock, and we ref/unref to avoid job_completed freeing
+     * the job underneath us.
+     */
+    ctx = job->aio_context;
+    job_unref(job);
     aio_context_release(ctx);
 }
 
diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index 4eeb184caf..7519847912 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -367,7 +367,9 @@ static void test_cancel_concluded(void)
     aio_poll(qemu_get_aio_context(), true);
     assert(job->status == JOB_STATUS_PENDING);
 
+    aio_context_acquire(job->aio_context);
     job_finalize(job, &error_abort);
+    aio_context_release(job->aio_context);
     assert(job->status == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
-- 
2.26.0

From nobody Sun May 19 01:26:47 2024
From: Stefan Reiter
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [PATCH v3 2/3] replication: acquire aio context before calling job_cancel_sync
Date: Tue, 31 Mar 2020 14:20:44 +0200
Message-Id: <20200331122045.164356-3-s.reiter@proxmox.com>
In-Reply-To: <20200331122045.164356-1-s.reiter@proxmox.com>
References: <20200331122045.164356-1-s.reiter@proxmox.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com,
    slp@redhat.com, mreitz@redhat.com, stefanha@redhat.com,
    jsnow@redhat.com, dietmar@proxmox.com

job_cancel_sync requires the job's lock to be held; all other callers
already do this (replication_stop, drive_backup_abort,
blockdev_backup_abort, job_cancel_sync_all, cancel_common).

Signed-off-by: Stefan Reiter
---
 block/replication.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/block/replication.c b/block/replication.c
index 413d95407d..6c809cda4e 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -144,12 +144,16 @@ fail:
 static void replication_close(BlockDriverState *bs)
 {
     BDRVReplicationState *s = bs->opaque;
+    Job *commit_job = &s->commit_job->job;
+    AioContext *commit_ctx = commit_job->aio_context;
 
     if (s->stage == BLOCK_REPLICATION_RUNNING) {
         replication_stop(s->rs, false, NULL);
     }
     if (s->stage == BLOCK_REPLICATION_FAILOVER) {
-        job_cancel_sync(&s->commit_job->job);
+        aio_context_acquire(commit_ctx);
+        job_cancel_sync(commit_job);
+        aio_context_release(commit_ctx);
     }
 
     if (s->mode == REPLICATION_MODE_SECONDARY) {
-- 
2.26.0

From nobody Sun May 19 01:26:47 2024
From: Stefan Reiter
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [PATCH v3 3/3] backup: don't acquire aio_context in backup_clean
Date: Tue, 31 Mar 2020 14:20:45 +0200
Message-Id:
<20200331122045.164356-4-s.reiter@proxmox.com>
In-Reply-To: <20200331122045.164356-1-s.reiter@proxmox.com>
References: <20200331122045.164356-1-s.reiter@proxmox.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, slp@redhat.com,
    mreitz@redhat.com, stefanha@redhat.com, jsnow@redhat.com,
    dietmar@proxmox.com

All code paths leading to backup_clean (via job_clean) have the job's
context already acquired. The job's context is guaranteed to be the
same as the one used by backup_top via backup_job_create.

Since the previous logic effectively acquired the lock twice, this
broke cleanup of backups for disks using IO threads: the
BDRV_POLL_WHILE in bdrv_backup_top_drop -> bdrv_do_drained_begin would
only release the lock once, thus deadlocking with the IO thread.

This is a partial revert of 0abf2581717a19.

Signed-off-by: Stefan Reiter
---
With the two previous patches applied, the commit message should now
hold true.
 block/backup.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 7430ca5883..a7a7dcaf4c 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -126,11 +126,7 @@ static void backup_abort(Job *job)
 static void backup_clean(Job *job)
 {
     BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
-    AioContext *aio_context = bdrv_get_aio_context(s->backup_top);
-
-    aio_context_acquire(aio_context);
     bdrv_backup_top_drop(s->backup_top);
-    aio_context_release(aio_context);
 }
 
 void backup_do_checkpoint(BlockJob *job, Error **errp)
-- 
2.26.0