From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Paolo Bonzini, John Snow,
    Vladimir Sementsov-Ogievskiy, Wen Congyang, Xie Changlong,
    Markus Armbruster, Stefan Hajnoczi, Fam Zheng,
    qemu-devel@nongnu.org, Emanuele Giuseppe Esposito
Subject: [PATCH v9 08/21] jobs: add job lock in find_* functions
Date: Wed, 6 Jul 2022 16:15:20 -0400
Message-Id: <20220706201533.289775-9-eesposit@redhat.com>
In-Reply-To: <20220706201533.289775-1-eesposit@redhat.com>
References: <20220706201533.289775-1-eesposit@redhat.com>

Both blockdev.c and job-qmp.c have TOC/TOU (time-of-check to
time-of-use) race conditions, because they first search for the job
and then perform an action on it. Therefore, we need to do the
search + action under the same job mutex critical section.

Note: at this stage, job_{lock/unlock} and the job lock guard macros
are still *nops*.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 blockdev.c | 67 +++++++++++++++++++++++++++++++++++++-----------------
 job-qmp.c  | 57 ++++++++++++++++++++++++++++++++--------------
 2 files changed, 86 insertions(+), 38 deletions(-)
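(Note for reviewers, not part of the patch: every call site converted
below follows the same shape, so here is a minimal before/after sketch
of the TOC/TOU fix, written against the job API names used in this
series; the AioContext acquire/release dance of the real code is
omitted for brevity.)

    #include "qemu/osdep.h"
    #include "qemu/job.h"     /* Job, JOB_LOCK_GUARD(), job_get_locked() */
    #include "qapi/error.h"

    /*
     * Before: TOC/TOU race. The job returned at "check" time can
     * complete and be freed before the "use" below.
     */
    static void pause_racy(const char *id, Error **errp)
    {
        Job *job = job_get(id);        /* time of check */

        if (!job) {
            error_setg(errp, "Job not found");
            return;
        }
        job_user_pause(job, errp);     /* time of use */
    }

    /*
     * After: lookup and action share one job_mutex critical section.
     * JOB_LOCK_GUARD() holds the lock until the end of the enclosing
     * scope (still a nop at this point in the series, so behavior
     * does not change yet).
     */
    static void pause_locked(const char *id, Error **errp)
    {
        JOB_LOCK_GUARD();
        Job *job = job_get_locked(id);

        if (!job) {
            error_setg(errp, "Job not found");
            return;
        }
        job_user_pause_locked(job, errp);
    }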
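(Also on the "*nop*" note above: the guard macros stay deliberately
inert while call sites are converted one by one, and only become real
once the whole tree uses the _locked API. The sketch below shows the
transitional and final definitions as I understand them from
include/qemu/job.h, so treat the exact spelling as an assumption.)

    /* Transitional definition, while call sites are being converted:
     * expands to nothing, so this patch has no behavioral change. */
    #define JOB_LOCK_GUARD() /* QEMU_LOCK_GUARD(&job_mutex) */

    /* Final definition, once the conversion is complete: actually
     * takes job_mutex for the rest of the enclosing scope. */
    #define JOB_LOCK_GUARD() QEMU_LOCK_GUARD(&job_mutex)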
diff --git a/blockdev.c b/blockdev.c
index 9230888e34..71f793c4ab 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3302,9 +3302,13 @@ out:
     aio_context_release(aio_context);
 }
 
-/* Get a block job using its ID and acquire its AioContext */
-static BlockJob *find_block_job(const char *id, AioContext **aio_context,
-                                Error **errp)
+/*
+ * Get a block job using its ID and acquire its AioContext.
+ * Called with job_mutex held.
+ */
+static BlockJob *find_block_job_locked(const char *id,
+                                       AioContext **aio_context,
+                                       Error **errp)
 {
     BlockJob *job;
 
@@ -3312,7 +3316,7 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
 
     *aio_context = NULL;
 
-    job = block_job_get(id);
+    job = block_job_get_locked(id);
 
     if (!job) {
         error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
@@ -3329,13 +3333,16 @@ static BlockJob *find_block_job(const char *id, AioContext **aio_context,
 void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
-    block_job_set_speed(job, speed, errp);
+    block_job_set_speed_locked(job, speed, errp);
     aio_context_release(aio_context);
 }
 
@@ -3343,7 +3350,10 @@ void qmp_block_job_cancel(const char *device,
                           bool has_force, bool force, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
@@ -3353,14 +3363,14 @@ void qmp_block_job_cancel(const char *device,
         force = false;
     }
 
-    if (job_user_paused(&job->job) && !force) {
+    if (job_user_paused_locked(&job->job) && !force) {
         error_setg(errp, "The block job for device '%s' is currently paused",
                    device);
         goto out;
     }
 
     trace_qmp_block_job_cancel(job);
-    job_user_cancel(&job->job, force, errp);
+    job_user_cancel_locked(&job->job, force, errp);
 out:
     aio_context_release(aio_context);
 }
@@ -3368,57 +3378,69 @@ out:
 void qmp_block_job_pause(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_pause(job);
-    job_user_pause(&job->job, errp);
+    job_user_pause_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_resume(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_resume(job);
-    job_user_resume(&job->job, errp);
+    job_user_resume_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_complete(const char *device, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(device, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(device, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_complete(job);
-    job_complete(&job->job, errp);
+    job_complete_locked(&job->job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_finalize(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *job = find_block_job(id, &aio_context, errp);
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_block_job_finalize(job);
-    job_ref(&job->job);
-    job_finalize(&job->job, errp);
+    job_ref_locked(&job->job);
+    job_finalize_locked(&job->job, errp);
 
     /*
      * Job's context might have changed via job_finalize (and job_txn_apply
@@ -3426,23 +3448,26 @@ void qmp_block_job_finalize(const char *id, Error **errp)
      * one.
      */
     aio_context = block_job_get_aio_context(job);
-    job_unref(&job->job);
+    job_unref_locked(&job->job);
     aio_context_release(aio_context);
 }
 
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    BlockJob *bjob = find_block_job(id, &aio_context, errp);
+    BlockJob *bjob;
     Job *job;
 
+    JOB_LOCK_GUARD();
+    bjob = find_block_job_locked(id, &aio_context, errp);
+
     if (!bjob) {
         return;
     }
 
     trace_qmp_block_job_dismiss(bjob);
     job = &bjob->job;
-    job_dismiss(&job, errp);
+    job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
 }
 
diff --git a/job-qmp.c b/job-qmp.c
index 829a28aa70..ac11a6c23c 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -29,14 +29,19 @@
 #include "qapi/error.h"
 #include "trace/trace-root.h"
 
-/* Get a job using its ID and acquire its AioContext */
-static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
+/*
+ * Get a block job using its ID and acquire its AioContext.
+ * Called with job_mutex held.
+ */
+static Job *find_job_locked(const char *id,
+                            AioContext **aio_context,
+                            Error **errp)
 {
     Job *job;
 
     *aio_context = NULL;
 
-    job = job_get(id);
+    job = job_get_locked(id);
     if (!job) {
         error_setg(errp, "Job not found");
         return NULL;
@@ -51,71 +56,86 @@ static Job *find_job(const char *id, AioContext **aio_context, Error **errp)
 void qmp_job_cancel(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_cancel(job);
-    job_user_cancel(job, true, errp);
+    job_user_cancel_locked(job, true, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_pause(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_pause(job);
-    job_user_pause(job, errp);
+    job_user_pause_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_resume(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_resume(job);
-    job_user_resume(job, errp);
+    job_user_resume_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_complete(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_complete(job);
-    job_complete(job, errp);
+    job_complete_locked(job, errp);
     aio_context_release(aio_context);
 }
 
 void qmp_job_finalize(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_finalize(job);
-    job_ref(job);
-    job_finalize(job, errp);
+    job_ref_locked(job);
+    job_finalize_locked(job, errp);
 
     /*
      * Job's context might have changed via job_finalize (and job_txn_apply
@@ -123,21 +143,24 @@ void qmp_job_finalize(const char *id, Error **errp)
      * one.
      */
     aio_context = job->aio_context;
-    job_unref(job);
+    job_unref_locked(job);
    aio_context_release(aio_context);
 }
 
 void qmp_job_dismiss(const char *id, Error **errp)
 {
     AioContext *aio_context;
-    Job *job = find_job(id, &aio_context, errp);
+    Job *job;
+
+    JOB_LOCK_GUARD();
+    job = find_job_locked(id, &aio_context, errp);
 
     if (!job) {
         return;
     }
 
     trace_qmp_job_dismiss(job);
-    job_dismiss(&job, errp);
+    job_dismiss_locked(&job, errp);
     aio_context_release(aio_context);
 }
 
-- 
2.31.1