From: Markus Armbruster
To: qemu-devel@nongnu.org
Cc: w.bumiller@proxmox.com
Subject: [PULL 1/1] monitor/qmp: resume monitor when clearing its queue
Date: Tue, 19 Nov 2019 10:00:37 +0100
Message-Id: <20191119090037.21686-2-armbru@redhat.com>
In-Reply-To: <20191119090037.21686-1-armbru@redhat.com>
References: <20191119090037.21686-1-armbru@redhat.com>

From: Wolfgang Bumiller

When a monitor's queue is filled up in handle_qmp_command(), it gets
suspended.  It is currently the dispatcher bh's job to resume the
monitor, which it does after processing an event from the queue.
However, a CHR_EVENT_CLOSED event can be processed before the bh is
scheduled.  That clears the queue without resuming the monitor, and
thereby prevents the dispatcher from ever reaching the resume() call.

Any new connections to the QMP socket will still be accept()ed and
see the greeting, but will get no response to any messages sent
afterwards (as they will not be read from the still-suspended
socket).

Fix this by resuming the monitor when clearing a queue which was
filled up.

Signed-off-by: Wolfgang Bumiller
Message-Id: <20191115085914.21287-1-w.bumiller@proxmox.com>
---
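Not part of the patch: for review convenience, a self-contained sketch
of the race described above.  It models only the suspend/resume
bookkeeping (OOB enabled, plain counters instead of the real GQueue,
mutex and bottom half); the function names mirror monitor/qmp.c, but
the bodies are simplified stand-ins, not the real implementations.

/*
 * Standalone model of the race -- not QEMU code.  The names mirror
 * monitor/qmp.c, but the bodies are reduced to two counters so the
 * suspend/resume bookkeeping is easy to follow.  Only the
 * OOB-enabled case is modelled.
 */
#include <stdbool.h>
#include <stdio.h>

#define QMP_REQ_QUEUE_LEN_MAX 8

static int queue_len;     /* stands in for mon->qmp_requests->length */
static int suspend_cnt;   /* stands in for the monitor's suspend counter */

static void monitor_suspend(void) { suspend_cnt++; }
static void monitor_resume(void)  { suspend_cnt--; }

/* Queue one request; suspend the monitor once the queue is full. */
static void handle_qmp_command(void)
{
    queue_len++;
    if (queue_len == QMP_REQ_QUEUE_LEN_MAX) {
        monitor_suspend();
    }
}

/* Dequeue one request; resume once the queue has room again. */
static void monitor_qmp_bh_dispatcher(void)
{
    if (queue_len == 0) {
        return;                     /* nothing queued, nothing to resume */
    }
    queue_len--;
    if (queue_len == QMP_REQ_QUEUE_LEN_MAX - 1) {
        monitor_resume();
    }
}

/* CHR_EVENT_CLOSED path: drop every pending request. */
static void monitor_qmp_cleanup_queues(bool with_fix)
{
    bool need_resume = queue_len == QMP_REQ_QUEUE_LEN_MAX;

    queue_len = 0;
    if (with_fix && need_resume) {
        monitor_resume();           /* the resume this patch adds */
    }
}

static void run(bool with_fix)
{
    queue_len = 0;
    suspend_cnt = 0;

    for (int i = 0; i < QMP_REQ_QUEUE_LEN_MAX; i++) {
        handle_qmp_command();       /* fill the queue; the last one suspends */
    }
    monitor_qmp_cleanup_queues(with_fix);   /* CHR_EVENT_CLOSED wins the race */
    monitor_qmp_bh_dispatcher();            /* bh runs too late: queue is empty */

    printf("%s fix: suspend count %d (%s)\n",
           with_fix ? "with" : "without", suspend_cnt,
           suspend_cnt ? "monitor stuck suspended" : "monitor resumed");
}

int main(void)
{
    run(false);
    run(true);
    return 0;
}

Built with e.g. "cc -o race race.c" (hypothetical file name), the run
without the fix should report the suspend count stuck at 1, the run
with it a count back at 0.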
 monitor/qmp.c | 36 +++++++++++++++++++++++++++++++-----
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/monitor/qmp.c b/monitor/qmp.c
index 9d9e5d8b27..b67a8e7d1f 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -75,10 +75,35 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
     }
 }
 
-static void monitor_qmp_cleanup_queues(MonitorQMP *mon)
+static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
 {
     qemu_mutex_lock(&mon->qmp_queue_lock);
+
+    /*
+     * Same condition as in monitor_qmp_bh_dispatcher(), but before
+     * removing an element from the queue (hence no `- 1`).
+     * Also, the queue should not be empty either, otherwise the
+     * monitor hasn't been suspended yet (or was already resumed).
+     */
+    bool need_resume = (!qmp_oob_enabled(mon) ||
+        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX)
+        && !g_queue_is_empty(mon->qmp_requests);
+
     monitor_qmp_cleanup_req_queue_locked(mon);
+
+    if (need_resume) {
+        /*
+         * handle_qmp_command() suspended the monitor because the
+         * request queue filled up, to be resumed when the queue has
+         * space again. We just emptied it; resume the monitor.
+         *
+         * Without this, the monitor would remain suspended forever
+         * when we get here while the monitor is suspended. An
+         * unfortunately timed CHR_EVENT_CLOSED can do the trick.
+         */
+        monitor_resume(&mon->common);
+    }
+
     qemu_mutex_unlock(&mon->qmp_queue_lock);
 }
 
@@ -263,9 +288,10 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
 
     /*
      * Suspend the monitor when we can't queue more requests after
-     * this one. Dequeuing in monitor_qmp_bh_dispatcher() will resume
-     * it. Note that when OOB is disabled, we queue at most one
-     * command, for backward compatibility.
+     * this one. Dequeuing in monitor_qmp_bh_dispatcher() or
+     * monitor_qmp_cleanup_queue_and_resume() will resume it.
+     * Note that when OOB is disabled, we queue at most one command,
+     * for backward compatibility.
      */
     if (!qmp_oob_enabled(mon) ||
         mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
@@ -332,7 +358,7 @@ static void monitor_qmp_event(void *opaque, int event)
          * stdio, it's possible that stdout is still open when stdin
          * is closed.
          */
-        monitor_qmp_cleanup_queues(mon);
+        monitor_qmp_cleanup_queue_and_resume(mon);
         json_message_parser_destroy(&mon->parser);
         json_message_parser_init(&mon->parser, handle_qmp_command,
                                  mon, NULL);
-- 
2.21.0