From: Michael Tokarev
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Kevin Wolf, Andrey Drobyshev, Hanna Czenczek,
    Fiona Ebner, Michael Tokarev
Subject: [Stable-10.0.7 109/116] block-backend: Fix race when resuming queued requests
Date: Wed, 3 Dec 2025 10:59:29 +0300
Message-ID: <20251203075939.2366131-28-mjt@tls.msk.ru>
X-Mailer: git-send-email 2.47.3
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Kevin Wolf

When new requests arrive at a BlockBackend that is currently drained,
these requests are queued until the drain section ends.

There is a race window between blk_root_drained_end() waking up a
queued request in an iothread from the main thread and
blk_wait_while_drained() actually being woken up in the iothread and
calling blk_inc_in_flight(). If the BlockBackend is drained again
during this window, drain won't wait for this request and it will
sneak in when the BlockBackend is already supposed to be quiesced.
This causes assertion failures in bdrv_drain_all_begin() and can have
other unintended consequences.

Fix this by increasing the in_flight counter immediately when
scheduling the request to be resumed, so that the next drain will wait
for it to complete.
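[Editor's aside, not part of the patch: the sketch below is a minimal pthread model of the idea behind the fix, not QEMU code. The names in_flight, resume_queued_request(), drain() and queued_request() are simplified stand-ins for blk_inc_in_flight(), blk_root_drained_end(), the drain machinery and blk_wait_while_drained(). The point it illustrates is that the waker raises the in-flight count *before* signalling the queued request, so a drain that begins in the window between wake-up and the request actually running still waits.]

/* Minimal sketch, assuming POSIX threads; build with: cc -pthread race.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int in_flight;              /* requests a drain must wait for */
static bool request_queued = true; /* one queued request in this model */

/* Model of blk_root_drained_end(): wake the queued request, but account
 * for it in in_flight immediately, in the waking thread. */
static void resume_queued_request(void)
{
    pthread_mutex_lock(&lock);
    if (request_queued) {
        in_flight++;               /* the fix: counted before it runs */
        request_queued = false;
        pthread_cond_broadcast(&cond);
    }
    pthread_mutex_unlock(&lock);
}

/* Model of a new drain section: wait until nothing is in flight. */
static void drain(void)
{
    pthread_mutex_lock(&lock);
    while (in_flight > 0) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

/* Model of blk_wait_while_drained(): sleep until resumed, then run. */
static void *queued_request(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (request_queued) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);

    printf("request runs; drain is already waiting for it\n");

    pthread_mutex_lock(&lock);
    in_flight--;                   /* request completed */
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, queued_request, NULL);
    resume_queued_request();       /* drain ends, request is resumed */
    drain();                       /* a new drain cannot sneak past it */
    pthread_join(t, NULL);
    return 0;
}

[If the count were instead raised inside queued_request() after it wakes up, as the pre-patch code did in blk_wait_while_drained(), drain() could observe in_flight == 0 in the window between the broadcast and the request actually running, and return while the request is still about to execute; that is the race the patch closes.]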
Cc: qemu-stable@nongnu.org
Reported-by: Andrey Drobyshev
Signed-off-by: Kevin Wolf
Message-ID: <20251119172720.135424-1-kwolf@redhat.com>
Reviewed-by: Hanna Czenczek
Tested-by: Andrey Drobyshev
Reviewed-by: Fiona Ebner
Signed-off-by: Kevin Wolf
(cherry picked from commit 8eeaa706ba73251063cb80d87ae838d2d5b08e9a)
Signed-off-by: Michael Tokarev

diff --git a/block/block-backend.c b/block/block-backend.c
index a402db13f2..50ccf3663f 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1318,9 +1318,9 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
          * section.
          */
         qemu_mutex_lock(&blk->queued_requests_lock);
+        /* blk_root_drained_end() has the corresponding blk_inc_in_flight() */
         blk_dec_in_flight(blk);
         qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
-        blk_inc_in_flight(blk);
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
 }
@@ -2767,9 +2767,11 @@ static void blk_root_drained_end(BdrvChild *child)
             blk->dev_ops->drained_end(blk->dev_opaque);
         }
         qemu_mutex_lock(&blk->queued_requests_lock);
-        while (qemu_co_enter_next(&blk->queued_requests,
-                                  &blk->queued_requests_lock)) {
+        while (!qemu_co_queue_empty(&blk->queued_requests)) {
             /* Resume all queued requests */
+            blk_inc_in_flight(blk);
+            qemu_co_enter_next(&blk->queued_requests,
+                               &blk->queued_requests_lock);
         }
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
-- 
2.47.3