From: Markus Armbruster
To: qemu-devel@nongnu.org
Date: Mon, 24 Sep 2018 18:20:03 +0200
Message-Id: <20180924162007.3084-3-armbru@redhat.com>
In-Reply-To: <20180924162007.3084-1-armbru@redhat.com>
References: <20180924162007.3084-1-armbru@redhat.com>
Subject: [Qemu-devel] [PULL 2/6] json: Clean up how lexer consumes "end of input"

When the lexer isn't in its start state at the end of input, it's
working on a token.  To flush it out, it needs to transit to its start
state on "end of input" lookahead.

There are two ways to the start state, depending on the current state:

* If the lexer is in a TERMINAL(JSON_FOO) state, it can emit a
  JSON_FOO token.

* Else, it can go to IN_ERROR state, and emit a JSON_ERROR token.

There are complications, however:

* The transition to IN_ERROR state consumes the input character and
  adds it to the JSON_ERROR token.  The latter is inappropriate for
  the "end of input" character, so we suppress that.  See also recent
  commit a2ec6be72b8 "json: Fix lexer to include the bad character in
  JSON_ERROR token".

* The transition to a TERMINAL(JSON_FOO) state doesn't consume the
  input character.  In that case, the lexer normally loops until it is
  consumed.
  We have to suppress that for the "end of input" input
  character.  If we didn't, the lexer would consume it by entering
  IN_ERROR state, emitting a bogus JSON_ERROR token.  We fixed that in
  commit bd3924a33a6.

However, simply breaking the loop this way assumes that the lexer
needs exactly one state transition to reach its start state.  That
assumption is correct now, but it's unclean, and I'll soon break it.

Clean up: instead of breaking the loop after one iteration, break it
after it has reached the start state.

Signed-off-by: Markus Armbruster
Reviewed-by: Eric Blake
Message-Id: <20180831075841.13363-3-armbru@redhat.com>
---
 qobject/json-lexer.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/qobject/json-lexer.c b/qobject/json-lexer.c
index 4867839f66..ec3aec726f 100644
--- a/qobject/json-lexer.c
+++ b/qobject/json-lexer.c
@@ -261,7 +261,8 @@ void json_lexer_init(JSONLexer *lexer, bool enable_interpolation)
 
 static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
 {
-    int char_consumed, new_state;
+    int new_state;
+    bool char_consumed = false;
 
     lexer->x++;
     if (ch == '\n') {
@@ -269,11 +270,12 @@ static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
         lexer->y++;
     }
 
-    do {
+    while (flush ? lexer->state != lexer->start_state : !char_consumed) {
         assert(lexer->state <= ARRAY_SIZE(json_lexer));
         new_state = json_lexer[lexer->state][(uint8_t)ch];
-        char_consumed = !TERMINAL_NEEDED_LOOKAHEAD(lexer->state, new_state);
-        if (char_consumed && !flush) {
+        char_consumed = !flush
+            && !TERMINAL_NEEDED_LOOKAHEAD(lexer->state, new_state);
+        if (char_consumed) {
             g_string_append_c(lexer->token, ch);
         }
 
@@ -318,7 +320,7 @@ static void json_lexer_feed_char(JSONLexer *lexer, char ch, bool flush)
             break;
         }
         lexer->state = new_state;
-    } while (!char_consumed && !flush);
+    }
 
     /* Do not let a single token grow to an arbitrarily large size,
      * this is a security consideration.
@@ -342,9 +344,8 @@ void json_lexer_feed(JSONLexer *lexer, const char *buffer, size_t size)
 
 void json_lexer_flush(JSONLexer *lexer)
 {
-    if (lexer->state != lexer->start_state) {
-        json_lexer_feed_char(lexer, 0, true);
-    }
+    json_lexer_feed_char(lexer, 0, true);
+    assert(lexer->state == lexer->start_state);
     json_message_process_token(lexer, lexer->token, JSON_END_OF_INPUT,
                                lexer->x, lexer->y);
 }
-- 
2.17.1