From nobody Tue Feb 10 13:16:42 2026
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 29 Nov 2017 11:25:13 +0100
Message-Id:
 <20171129102513.9153-5-pbonzini@redhat.com>
In-Reply-To: <20171129102513.9153-1-pbonzini@redhat.com>
References: <20171129102513.9153-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 4/4] blockjob: reimplement block_job_sleep_ns to allow cancellation
Cc: kwolf@redhat.com, jcody@redhat.com, famz@redhat.com, qemu-block@nongnu.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This reverts the effects of commit 4afeffc857 ("blockjob: do not allow
coroutine double entry or entry-after-completion", 2017-11-21).

That commit fixed the symptom of a bug rather than the root cause.
Canceling the wait on a sleeping blockjob coroutine is generally fine;
we just need to make it work correctly across AioContexts.  To do so,
use a QEMUTimer that calls block_job_enter.  Use a mutex to ensure that
block_job_enter synchronizes correctly with block_job_sleep_ns.
Signed-off-by: Paolo Bonzini
Reviewed-by: Jeff Cody
Reviewed-by: Stefan Hajnoczi
---
 blockjob.c                   | 57 +++++++++++++++++++++++++++++++++++-----
 include/block/blockjob.h     |  5 +++-
 include/block/blockjob_int.h |  4 ++--
 3 files changed, 52 insertions(+), 14 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 4d22b7d2fb..3fdaabbc1f 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -37,6 +37,26 @@
 #include "qemu/timer.h"
 #include "qapi-event.h"
 
+/* Right now, this mutex is only needed to synchronize accesses to job->busy,
+ * especially concurrent calls to block_job_enter.
+ */
+static QemuMutex block_job_mutex;
+
+static void block_job_lock(void)
+{
+    qemu_mutex_lock(&block_job_mutex);
+}
+
+static void block_job_unlock(void)
+{
+    qemu_mutex_unlock(&block_job_mutex);
+}
+
+static void __attribute__((__constructor__)) block_job_init(void)
+{
+    qemu_mutex_init(&block_job_mutex);
+}
+
 static void block_job_event_cancelled(BlockJob *job);
 static void block_job_event_completed(BlockJob *job, const char *msg);
 
@@ -161,6 +181,7 @@ void block_job_unref(BlockJob *job)
         blk_unref(job->blk);
         error_free(job->blocker);
         g_free(job->id);
+        assert(!timer_pending(&job->sleep_timer));
         g_free(job);
     }
 }
@@ -287,6 +308,13 @@ static void coroutine_fn block_job_co_entry(void *opaque)
     job->driver->start(job);
 }
 
+static void block_job_sleep_timer_cb(void *opaque)
+{
+    BlockJob *job = opaque;
+
+    block_job_enter(job);
+}
+
 void block_job_start(BlockJob *job)
 {
     assert(job && !block_job_started(job) && job->paused &&
@@ -556,7 +584,7 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     info->type      = g_strdup(BlockJobType_str(job->driver->job_type));
     info->device    = g_strdup(job->id);
     info->len       = job->len;
-    info->busy      = job->busy;
+    info->busy      = atomic_read(&job->busy);
     info->paused    = job->pause_count > 0;
     info->offset    = job->offset;
     info->speed     = job->speed;
@@ -664,6 +692,9 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job->paused        = true;
     job->pause_count   = 1;
     job->refcnt        = 1;
+    aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
+                   QEMU_CLOCK_REALTIME, SCALE_NS,
+                   block_job_sleep_timer_cb, job);
 
     error_setg(&job->blocker, "block device is in use by block job: %s",
                BlockJobType_str(driver->job_type));
@@ -729,9 +760,14 @@ static bool block_job_should_pause(BlockJob *job)
     return job->pause_count > 0;
 }
 
-static void block_job_do_yield(BlockJob *job)
+static void block_job_do_yield(BlockJob *job, uint64_t ns)
 {
+    block_job_lock();
+    if (ns != -1) {
+        timer_mod(&job->sleep_timer, ns);
+    }
     job->busy = false;
+    block_job_unlock();
     qemu_coroutine_yield();
 
     /* Set by block_job_enter before re-entering the coroutine. */
@@ -755,7 +791,7 @@ void coroutine_fn block_job_pause_point(BlockJob *job)
 
     if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
         job->paused = true;
-        block_job_do_yield(job);
+        block_job_do_yield(job, -1);
         job->paused = false;
     }
 
@@ -785,11 +821,16 @@ void block_job_enter(BlockJob *job)
         return;
     }
 
+    block_job_lock();
     if (job->busy) {
+        block_job_unlock();
         return;
     }
 
+    assert(!job->deferred_to_main_loop);
+    timer_del(&job->sleep_timer);
     job->busy = true;
+    block_job_unlock();
     aio_co_wake(job->co);
 }
 
@@ -807,14 +848,8 @@ void block_job_sleep_ns(BlockJob *job, int64_t ns)
         return;
     }
 
-    /* We need to leave job->busy set here, because when we have
-     * put a coroutine to 'sleep', we have scheduled it to run in
-     * the future. We cannot enter that same coroutine again before
-     * it wakes and runs, otherwise we risk double-entry or entry after
-     * completion.
-     */
     if (!block_job_should_pause(job)) {
-        co_aio_sleep_ns(blk_get_aio_context(job->blk),
-                        QEMU_CLOCK_REALTIME, ns);
+        block_job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);
     }
 
     block_job_pause_point(job);
@@ -830,7 +865,7 @@ void block_job_yield(BlockJob *job)
     }
 
     if (!block_job_should_pause(job)) {
-        block_job_do_yield(job);
+        block_job_do_yield(job, -1);
     }
 
     block_job_pause_point(job);
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 67c0968fa5..956f0d6819 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -77,7 +77,7 @@ typedef struct BlockJob {
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by block_job_enter(). There may still be I/O or event loop
-     * activity pending.
+     * activity pending. Accessed under block_job_mutex (in blockjob.c).
      */
     bool busy;
 
@@ -135,6 +135,9 @@ typedef struct BlockJob {
      */
     int ret;
 
+    /** Timer that is used by @block_job_sleep_ns. */
+    QEMUTimer sleep_timer;
+
     /** Non-NULL if this job is part of a transaction */
     BlockJobTxn *txn;
     QLIST_ENTRY(BlockJob) txn_list;
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index f7ab183a39..c9b23b0cc9 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -142,8 +142,8 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
 * @ns: How many nanoseconds to stop for.
 *
 * Put the job to sleep (assuming that it wasn't canceled) for @ns
- * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will not interrupt
- * the wait, so the cancel will not process until the coroutine wakes up.
+ * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will immediately
+ * interrupt the wait.
 */
void block_job_sleep_ns(BlockJob *job, int64_t ns);
 
-- 
2.14.3