From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, qemu-devel@nongnu.org, mreitz@redhat.com
Date: Wed, 29 Nov 2017 16:26:27 +0100
Message-Id: <20171129152628.14906-10-kwolf@redhat.com>
In-Reply-To: <20171129152628.14906-1-kwolf@redhat.com>
References: <20171129152628.14906-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 09/10] blockjob: reimplement block_job_sleep_ns to allow cancellation

From: Paolo Bonzini

This reverts the effects of commit 4afeffc857 ("blockjob: do not allow
coroutine double entry or entry-after-completion", 2017-11-21).

That commit fixed the symptom of a bug rather than its root cause.
Canceling the wait on a sleeping blockjob coroutine is generally fine;
we just need to make it work correctly across AioContexts. To do so,
use a QEMUTimer that calls block_job_enter. Use a mutex to ensure that
block_job_enter synchronizes correctly with block_job_sleep_ns.
Signed-off-by: Paolo Bonzini
Tested-By: Jeff Cody
Reviewed-by: Fam Zheng
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Jeff Cody
Signed-off-by: Kevin Wolf
---
 include/block/blockjob.h     |  8 +++++-
 include/block/blockjob_int.h |  4 +--
 blockjob.c                   | 63 ++++++++++++++++++++++++++++++++++++--------
 3 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 67c0968fa5..00403d9482 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -77,7 +77,7 @@ typedef struct BlockJob {
     /**
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by block_job_enter(). There may still be I/O or event loop
-     * activity pending.
+     * activity pending. Accessed under block_job_mutex (in blockjob.c).
      */
     bool busy;
 
@@ -135,6 +135,12 @@ typedef struct BlockJob {
      */
     int ret;
 
+    /**
+     * Timer that is used by @block_job_sleep_ns. Accessed under
+     * block_job_mutex (in blockjob.c).
+     */
+    QEMUTimer sleep_timer;
+
     /** Non-NULL if this job is part of a transaction */
     BlockJobTxn *txn;
     QLIST_ENTRY(BlockJob) txn_list;
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index f7ab183a39..c9b23b0cc9 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -142,8 +142,8 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
  * @ns: How many nanoseconds to stop for.
  *
  * Put the job to sleep (assuming that it wasn't canceled) for @ns
- * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will not interrupt
- * the wait, so the cancel will not process until the coroutine wakes up.
+ * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will immediately
+ * interrupt the wait.
  */
 void block_job_sleep_ns(BlockJob *job, int64_t ns);
 
diff --git a/blockjob.c b/blockjob.c
index 4d22b7d2fb..0ed50b953b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -37,6 +37,26 @@
 #include "qemu/timer.h"
 #include "qapi-event.h"
 
+/* Right now, this mutex is only needed to synchronize accesses to job->busy
+ * and job->sleep_timer, such as concurrent calls to block_job_do_yield and
+ * block_job_enter. */
+static QemuMutex block_job_mutex;
+
+static void block_job_lock(void)
+{
+    qemu_mutex_lock(&block_job_mutex);
+}
+
+static void block_job_unlock(void)
+{
+    qemu_mutex_unlock(&block_job_mutex);
+}
+
+static void __attribute__((__constructor__)) block_job_init(void)
+{
+    qemu_mutex_init(&block_job_mutex);
+}
+
 static void block_job_event_cancelled(BlockJob *job);
 static void block_job_event_completed(BlockJob *job, const char *msg);
 
@@ -161,6 +181,7 @@ void block_job_unref(BlockJob *job)
         blk_unref(job->blk);
         error_free(job->blocker);
         g_free(job->id);
+        assert(!timer_pending(&job->sleep_timer));
         g_free(job);
     }
 }
@@ -287,6 +308,13 @@ static void coroutine_fn block_job_co_entry(void *opaque)
     job->driver->start(job);
 }
 
+static void block_job_sleep_timer_cb(void *opaque)
+{
+    BlockJob *job = opaque;
+
+    block_job_enter(job);
+}
+
 void block_job_start(BlockJob *job)
 {
     assert(job && !block_job_started(job) && job->paused &&
@@ -556,7 +584,7 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
     info->type      = g_strdup(BlockJobType_str(job->driver->job_type));
     info->device    = g_strdup(job->id);
     info->len       = job->len;
-    info->busy      = job->busy;
+    info->busy      = atomic_read(&job->busy);
     info->paused    = job->pause_count > 0;
     info->offset    = job->offset;
     info->speed     = job->speed;
@@ -664,6 +692,9 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job->paused        = true;
     job->pause_count   = 1;
     job->refcnt        = 1;
+    aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
+                   QEMU_CLOCK_REALTIME, SCALE_NS,
+                   block_job_sleep_timer_cb, job);
 
     error_setg(&job->blocker, "block device is in use by block job: %s",
                BlockJobType_str(driver->job_type));
@@ -729,9 +760,20 @@ static bool block_job_should_pause(BlockJob *job)
     return job->pause_count > 0;
 }
 
-static void block_job_do_yield(BlockJob *job)
+/* Yield, and schedule a timer to reenter the coroutine after @ns nanoseconds.
+ * Reentering the job coroutine with block_job_enter() before the timer has
+ * expired is allowed and cancels the timer.
+ *
+ * If @ns is (uint64_t) -1, no timer is scheduled and block_job_enter() must be
+ * called explicitly. */
+static void block_job_do_yield(BlockJob *job, uint64_t ns)
 {
+    block_job_lock();
+    if (ns != -1) {
+        timer_mod(&job->sleep_timer, ns);
+    }
     job->busy = false;
+    block_job_unlock();
     qemu_coroutine_yield();
 
     /* Set by block_job_enter before re-entering the coroutine. */
@@ -755,7 +797,7 @@ void coroutine_fn block_job_pause_point(BlockJob *job)
 
     if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
         job->paused = true;
-        block_job_do_yield(job);
+        block_job_do_yield(job, -1);
         job->paused = false;
     }
 
@@ -785,11 +827,16 @@ void block_job_enter(BlockJob *job)
         return;
     }
 
+    block_job_lock();
     if (job->busy) {
+        block_job_unlock();
         return;
     }
 
+    assert(!job->deferred_to_main_loop);
+    timer_del(&job->sleep_timer);
     job->busy = true;
+    block_job_unlock();
     aio_co_wake(job->co);
 }
 
@@ -807,14 +854,8 @@ void block_job_sleep_ns(BlockJob *job, int64_t ns)
         return;
     }
 
-    /* We need to leave job->busy set here, because when we have
-     * put a coroutine to 'sleep', we have scheduled it to run in
-     * the future. We cannot enter that same coroutine again before
-     * it wakes and runs, otherwise we risk double-entry or entry after
-     * completion. */
     if (!block_job_should_pause(job)) {
-        co_aio_sleep_ns(blk_get_aio_context(job->blk),
-                        QEMU_CLOCK_REALTIME, ns);
+        block_job_do_yield(job, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);
     }
 
     block_job_pause_point(job);
@@ -830,7 +871,7 @@ void block_job_yield(BlockJob *job)
     }
 
     if (!block_job_should_pause(job)) {
-        block_job_do_yield(job);
+        block_job_do_yield(job, -1);
     }
 
     block_job_pause_point(job);
-- 
2.13.6
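
For readers following along, here is a minimal standalone sketch of the
sleep/enter handshake that the patch implements: the job marks itself idle
under a mutex and waits with a deadline, while a wake-up from another
context cancels the wait immediately and is a no-op if the job is busy.
It is an illustration only, transplanted onto plain pthreads so it can be
compiled and run (gcc sketch.c -o sketch -lpthread); QEMU itself uses
coroutines, QEMUTimer and aio_co_wake() rather than threads and condition
variables, and the names job_sleep_ns()/job_enter() are hypothetical
stand-ins for block_job_sleep_ns()/block_job_enter().

/*
 * Illustration only: a pthread-based analogue of the handshake above.
 * Not QEMU code; job_sleep_ns()/job_enter() are hypothetical stand-ins.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t job_wakeup = PTHREAD_COND_INITIALIZER;
static bool job_busy = true;     /* same role as BlockJob.busy */

/* Analogue of block_job_sleep_ns(): mark the job idle under the mutex,
 * then wait until either the deadline expires (the QEMUTimer in the
 * patch) or job_enter() wakes us early. */
static void job_sleep_ns(int64_t ns)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += ns / 1000000000LL;
    deadline.tv_nsec += ns % 1000000000LL;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&job_mutex);
    job_busy = false;                     /* cf. block_job_do_yield() */
    while (!job_busy &&
           pthread_cond_timedwait(&job_wakeup, &job_mutex, &deadline) == 0) {
        /* loop on spurious wakeups until entered or timed out */
    }
    job_busy = true;                      /* we are running again */
    pthread_mutex_unlock(&job_mutex);
}

/* Analogue of block_job_enter(): cancel a pending sleep right away.
 * As in the patch, it is a no-op while the job is busy. */
static void job_enter(void)
{
    pthread_mutex_lock(&job_mutex);
    if (!job_busy) {
        job_busy = true;
        pthread_cond_signal(&job_wakeup);
    }
    pthread_mutex_unlock(&job_mutex);
}

static void *job_thread(void *opaque)
{
    (void)opaque;
    printf("job: sleeping for up to 5 seconds\n");
    job_sleep_ns(5 * 1000000000LL);
    printf("job: woken up\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, job_thread, NULL);

    /* Give the job time to reach job_sleep_ns(), then cancel the wait;
     * the program finishes after ~100 ms instead of 5 s. */
    struct timespec pause = { 0, 100 * 1000 * 1000 };
    nanosleep(&pause, NULL);
    job_enter();

    pthread_join(t, NULL);
    return 0;
}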