From: Greg Kurz
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Greg Kurz
Date: Thu, 16 Mar 2017 17:33:05 +0100
Message-ID: <148968198512.5555.1880820193606077571.stgit@bahia>
Subject: [Qemu-devel] [PATCH] 9pfs: don't try to flush self and avoid QEMU hang on reset

According to the 9P spec [*], when a client wants to cancel a pending I/O
request identified by a given tag (uint16), it must send a Tflush message
and wait for the server to respond with an Rflush message before reusing
this tag for another I/O. The server may still send a completion message
for the I/O if it wasn't actually cancelled, but the Rflush message must
arrive after that. QEMU hence waits for the flushed PDU to complete before
sending the Rflush message back to the client.
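For reference, the exchange looks roughly like this on the wire, per the
flush(5) page cited at [*] below. This is only a sketch: the struct names
are illustrative, not QEMU's types, fields are little-endian, and a real
wire struct would need to be packed.

#include <stdint.h>

/* size[4] Tflush tag[2] oldtag[2] */
struct p9_tflush {
    uint32_t size;    /* total message length, including this field */
    uint8_t  type;    /* Tflush = 108 */
    uint16_t tag;     /* tag identifying this Tflush request itself */
    uint16_t oldtag;  /* tag of the pending request to cancel */
};

/* size[4] Rflush tag[2] */
struct p9_rflush {
    uint32_t size;    /* total message length, including this field */
    uint8_t  type;    /* Rflush = 109 */
    uint16_t tag;     /* echoes the Tflush tag */
};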
If a client sends 'Tflush tag oldtag' and tag == oldtag, QEMU will then
allocate a PDU identified by tag, find it in the PDU list and wait for this
same PDU to complete... i.e. wait for a completion that will never happen.
This causes a tag and ring slot leak in the guest, and a PDU leak in QEMU,
all of them limited by the maximum number of PDUs (128). But, worse, this
causes QEMU to hang on device reset since v9fs_reset() wants to drain all
pending I/O.

This insane behavior is likely to denote a bug in the client, and it would
deserve an Rerror message to be sent back. Unfortunately, the protocol
allows it and requires all flush requests to succeed (only an Rflush
response is expected). The only option is to detect when we have to handle
a self-referencing flush request and report success to the client right
away.

[*] http://man.cat-v.org/plan_9/5/flush

Reported-by: Al Viro
Signed-off-by: Greg Kurz
Reviewed-by: Eric Blake
---
 hw/9pfs/9p.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 76c9247c777d..e20417fbb0db 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -2369,7 +2369,7 @@ static void coroutine_fn v9fs_flush(void *opaque)
             break;
         }
     }
-    if (cancel_pdu) {
+    if (cancel_pdu && cancel_pdu != pdu) {
         cancel_pdu->cancelled = 1;
         /*
          * Wait for pdu to complete.
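For readers without the full file at hand, here is a simplified sketch of
what v9fs_flush() does around the hunk above, with the fix applied. Error
handling, tracing and the cleanup of the cancelled PDU are omitted, and the
coroutine-wait call is an assumption that may not match the exact signature
in every QEMU version.

static void coroutine_fn v9fs_flush(void *opaque)
{
    V9fsPDU *pdu = opaque;          /* the Tflush request itself */
    V9fsState *s = pdu->s;
    V9fsPDU *cancel_pdu = NULL;
    int16_t tag;

    /* decode 'oldtag': the tag of the request the client wants flushed */
    pdu_unmarshal(pdu, 7, "w", &tag);

    /* look for a still-active PDU carrying that tag */
    QLIST_FOREACH(cancel_pdu, &s->active_list, next) {
        if (cancel_pdu->tag == tag) {
            break;
        }
    }

    /*
     * When tag == oldtag, the search finds the Tflush PDU itself.  The
     * old code then waited for its own completion and never returned;
     * the added '&& cancel_pdu != pdu' check makes QEMU skip the wait
     * and report success right away.
     */
    if (cancel_pdu && cancel_pdu != pdu) {
        cancel_pdu->cancelled = 1;
        /* wait for the flushed request to complete before answering */
        qemu_co_queue_wait(&cancel_pdu->complete, NULL);
    }

    /* send Rflush back to the client */
    pdu_complete(pdu, 7);
}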