From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Fam Zheng, Juan Quintela, Markus Armbruster,
    peterx@redhat.com, mdroth@linux.vnet.ibm.com, Stefan Hajnoczi,
    Marc-André Lureau, Paolo Bonzini, "Dr. David Alan Gilbert"
Date: Fri, 29 Sep 2017 11:38:35 +0800
Message-Id: <20170929033844.26935-14-peterx@redhat.com>
In-Reply-To: <20170929033844.26935-1-peterx@redhat.com>
References: <20170929033844.26935-1-peterx@redhat.com>
Subject: [Qemu-devel] [RFC v2 13/22] monitor: separate QMP parser and dispatcher
David Alan Gilbert" Errors-To: qemu-devel-bounces+importer=patchew.org@gnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Originally QMP is going throw these steps: JSON Parser --> QMP Dispatcher --> Respond /|\ (2) (3) | (1) | \|/ (4) +--------- main thread --------+ This patch does this: JSON Parser QMP Dispatcher --> Respond /|\ | /|\ (4) | | | (2) | (3) | (5) (1) | +-----> | \|/ +--------- main thread <-------+ So the parsing job and the dispatching job is isolated now. It gives us a chance in following up patches to totally move the parser outside. The isloation is done using one QEMUBH. Only one dispatcher QEMUBH is used for all the monitors. Signed-off-by: Peter Xu --- monitor.c | 156 +++++++++++++++++++++++++++++++++++++++++++++++++++++-----= ---- 1 file changed, 133 insertions(+), 23 deletions(-) diff --git a/monitor.c b/monitor.c index 7b76dff5ad..1e9a6cb6a5 100644 --- a/monitor.c +++ b/monitor.c @@ -208,10 +208,14 @@ struct Monitor { mon_cmd_t *cmd_table; QLIST_HEAD(,mon_fd_t) fds; QTAILQ_ENTRY(Monitor) entry; + /* Input queue that hangs all the parsed QMP requests */ + GQueue *qmp_requests; }; =20 struct MonitorGlobal { IOThread *mon_io_thread; + /* Bottom half to dispatch the requests received from IO thread */ + QEMUBH *qmp_dispatcher_bh; }; =20 static struct MonitorGlobal mon_global; @@ -586,6 +590,7 @@ static void monitor_data_init(Monitor *mon, bool skip_f= lush, mon->cmd_table =3D mon_cmds; mon->skip_flush =3D skip_flush; mon->use_io_thr =3D use_io_thr; + mon->qmp_requests =3D g_queue_new(); } =20 static void monitor_data_destroy(Monitor *mon) @@ -597,6 +602,7 @@ static void monitor_data_destroy(Monitor *mon) g_free(mon->rs); QDECREF(mon->outbuf); qemu_mutex_destroy(&mon->out_lock); + g_queue_free(mon->qmp_requests); } =20 char *qmp_human_monitor_command(const char *command_line, bool has_cpu_ind= ex, @@ -3861,29 +3867,31 @@ static void monitor_qmp_respond(Monitor *mon, QObje= ct *rsp, qobject_decref(rsp); } =20 -static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens, - void *opaque) +struct QMPRequest { + /* Owner of the request */ + Monitor *mon; + /* "id" field of the request */ + QObject *id; + /* Request object to be handled */ + QObject *req; +}; +typedef struct QMPRequest QMPRequest; + +/* + * Dispatch one single QMP request. The function will free the req_obj + * and objects inside it before return. 
+ */
+static void monitor_qmp_dispatch_one(QMPRequest *req_obj)
 {
-    QObject *req, *rsp = NULL, *id = NULL;
+    Monitor *mon, *old_mon;
+    QObject *req, *rsp = NULL, *id;
     QDict *qdict = NULL;
-    Monitor *mon = opaque, *old_mon;
-    Error *err = NULL;
 
-    req = json_parser_parse_err(tokens, NULL, &err);
-    if (!req && !err) {
-        /* json_parser_parse_err() sucks: can fail without setting @err */
-        error_setg(&err, QERR_JSON_PARSING);
-    }
-    if (err) {
-        goto err_out;
-    }
+    req = req_obj->req;
+    mon = req_obj->mon;
+    id = req_obj->id;
 
-    qdict = qobject_to_qdict(req);
-    if (qdict) {
-        id = qdict_get(qdict, "id");
-        qobject_incref(id);
-        qdict_del(qdict, "id");
-    } /* else will fail qmp_dispatch() */
+    g_free(req_obj);
 
     if (trace_event_get_state_backends(TRACE_HANDLE_QMP_COMMAND)) {
         QString *req_json = qobject_to_json(req);
@@ -3894,7 +3902,7 @@ static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
     old_mon = cur_mon;
     cur_mon = mon;
 
-    rsp = qmp_dispatch(cur_mon->qmp.commands, req);
+    rsp = qmp_dispatch(mon->qmp.commands, req);
 
     cur_mon = old_mon;
 
@@ -3910,12 +3918,101 @@ static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
         }
     }
 
-err_out:
-    monitor_qmp_respond(mon, rsp, err, id);
-
+    /* Respond if necessary */
+    monitor_qmp_respond(mon, rsp, NULL, id);
     qobject_decref(req);
 }
 
+/*
+ * Pop one QMP request from the monitor queues; return NULL if none is
+ * found.  We pop the requests in round-robin fashion to avoid
+ * processing commands from only one very busy monitor.  To achieve
+ * that, once we have processed a request from a specific monitor, we
+ * move that monitor to the end of the mon_list queue.
+ */
+static QMPRequest *monitor_qmp_requests_pop_one(void)
+{
+    QMPRequest *req_obj = NULL;
+    Monitor *mon;
+
+    qemu_mutex_lock(&monitor_lock);
+
+    QTAILQ_FOREACH(mon, &mon_list, entry) {
+        req_obj = g_queue_pop_head(mon->qmp_requests);
+        if (req_obj) {
+            break;
+        }
+    }
+
+    if (req_obj) {
+        /*
+         * We found one request on this monitor.  Degrade the monitor's
+         * priority to lowest by re-inserting it at the end of the queue.
+         */
+        QTAILQ_REMOVE(&mon_list, mon, entry);
+        QTAILQ_INSERT_TAIL(&mon_list, mon, entry);
+    }
+
+    qemu_mutex_unlock(&monitor_lock);
+
+    return req_obj;
+}
+
+static void monitor_qmp_bh_dispatcher(void *data)
+{
+    QMPRequest *req_obj;
+
+    while (true) {
+        req_obj = monitor_qmp_requests_pop_one();
+        if (!req_obj) {
+            break;
+        }
+        monitor_qmp_dispatch_one(req_obj);
+    }
+}
+
+static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
+                               void *opaque)
+{
+    QObject *req, *id = NULL;
+    QDict *qdict = NULL;
+    Monitor *mon = opaque;
+    Error *err = NULL;
+    QMPRequest *req_obj;
+
+    req = json_parser_parse_err(tokens, NULL, &err);
+    if (!req && !err) {
+        /* json_parser_parse_err() sucks: can fail without setting @err */
+        error_setg(&err, QERR_JSON_PARSING);
+    }
+    if (err) {
+        monitor_qmp_respond(mon, NULL, err, NULL);
+        qobject_decref(req);
+    }
+
+    qdict = qobject_to_qdict(req);
+    if (qdict) {
+        id = qdict_get(qdict, "id");
+        qobject_incref(id);
+        qdict_del(qdict, "id");
+    } /* else will fail qmp_dispatch() */
+
+    req_obj = g_new0(QMPRequest, 1);
+    req_obj->mon = mon;
+    req_obj->id = id;
+    req_obj->req = req;
+
+    /*
+     * Put the request at the end of the queue so that requests are
+     * handled in time order.  Ownership of req_obj, req, id,
+     * etc. is handed over to the handler side.
+     */
+    g_queue_push_tail(mon->qmp_requests, req_obj);
+
+    /* Kick the dispatcher routine */
+    qemu_bh_schedule(mon_global.qmp_dispatcher_bh);
+}
+
 static void monitor_qmp_read(void *opaque, const uint8_t *buf, int size)
 {
     Monitor *mon = opaque;
@@ -4085,6 +4182,15 @@ static void monitor_io_thread_init(void)
      * fetch the context we'll have that initialized.
      */
     monitor_io_context_get();
+
+    /*
+     * This MUST be on the main loop thread, since we have commands
+     * that assume they run on the main loop thread (yeah, we'd better
+     * remove this assumption in the future).
+     */
+    mon_global.qmp_dispatcher_bh = aio_bh_new(qemu_get_aio_context(),
+                                              monitor_qmp_bh_dispatcher,
+                                              NULL);
 }
 
 void monitor_init_globals(void)
@@ -4188,6 +4294,10 @@ void monitor_init(Chardev *chr, int flags)
 
 static void monitor_io_thread_destroy(void)
 {
+    /* QEMUBHs need to be deleted before destroying the IOThread. */
+    qemu_bh_delete(mon_global.qmp_dispatcher_bh);
+    mon_global.qmp_dispatcher_bh = NULL;
+
     iothread_destroy(mon_global.mon_io_thread);
     mon_global.mon_io_thread = NULL;
 }
-- 
2.13.5
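
[Editor's note: the following is a minimal, self-contained sketch of the
queue-plus-bottom-half handoff the commit message describes (one thread
parsing and queueing requests, the main loop draining the queue).  It uses
plain GLib primitives (GThread, GQueue, g_idle_add) instead of QEMU's
AioContext/QEMUBH API, and every name in it (FakeRequest, dispatch_bh,
parser_thread) is hypothetical; it only stands in for mon->qmp_requests,
monitor_qmp_bh_dispatcher() and handle_qmp_command(), it is not QEMU code.]

/*
 * Illustrative analogue of the qmp_requests + dispatcher-BH handoff,
 * using plain GLib instead of QEMU's AioContext API.  Hypothetical names.
 */
#include <glib.h>

typedef struct {
    int id;            /* stands in for the QMP "id" field */
    char *payload;     /* stands in for the parsed request object */
} FakeRequest;

static GMutex queue_lock;
static GQueue *requests;       /* plays the role of mon->qmp_requests */
static GMainLoop *main_loop;

/* Runs in the main loop, like monitor_qmp_bh_dispatcher(). */
static gboolean dispatch_bh(gpointer data)
{
    for (;;) {
        g_mutex_lock(&queue_lock);
        FakeRequest *req = g_queue_pop_head(requests);
        g_mutex_unlock(&queue_lock);
        if (!req) {
            break;
        }
        g_print("dispatching request %d: %s\n", req->id, req->payload);
        g_free(req->payload);
        g_free(req);
    }
    g_main_loop_quit(main_loop);
    return G_SOURCE_REMOVE;    /* one-shot, like a scheduled bottom half */
}

/* Runs in a separate thread, like handle_qmp_command() in the IO thread. */
static gpointer parser_thread(gpointer data)
{
    for (int i = 0; i < 3; i++) {
        FakeRequest *req = g_new0(FakeRequest, 1);
        req->id = i;
        req->payload = g_strdup_printf("{ \"execute\": \"cmd-%d\" }", i);
        g_mutex_lock(&queue_lock);
        g_queue_push_tail(requests, req);   /* hand ownership to dispatcher */
        g_mutex_unlock(&queue_lock);
    }
    /* "Kick" the main loop, analogous to qemu_bh_schedule(). */
    g_idle_add(dispatch_bh, NULL);
    return NULL;
}

int main(void)
{
    requests = g_queue_new();
    main_loop = g_main_loop_new(NULL, FALSE);

    GThread *parser = g_thread_new("parser", parser_thread, NULL);
    g_main_loop_run(main_loop);
    g_thread_join(parser);

    g_queue_free(requests);
    g_main_loop_free(main_loop);
    return 0;
}

It should build with something like
gcc demo.c $(pkg-config --cflags --libs glib-2.0); the key property it
demonstrates is that the producer only ever touches the queue under the
lock and then kicks the consumer, so the dispatcher can keep its
assumption of running on the main loop thread.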