From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Stefano Stabellini, Anthony Perard, Paul Durrant
Cc: xen-devel@lists.xenproject.org, qemu-devel@nongnu.org, Ross Lagerwall
Subject: [PATCH] xen-hvm: Avoid livelock while handling buffered ioreqs
Date: Thu, 4 Apr 2024 15:08:33 +0100
Message-ID: <20240404140833.1557953-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.43.0

A malicious or buggy guest may generate buffered ioreqs faster than QEMU
can process them in handle_buffered_iopage(). The result is a livelock:
QEMU continuously processes ioreqs on the main thread without ever
returning to the main loop, which prevents it from handling other
events, processing timers, etc. With QEMU unable to handle other events,
the guest often becomes unusable and it is difficult to stop the source
of buffered ioreqs.

To avoid this, if we process a full page of buffered ioreqs, stop and
reschedule an immediate timer to continue processing them. This lets
QEMU return to the main loop and catch up.
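For illustration, here is a minimal standalone sketch of the
bounded-batch pattern the patch applies. The names (handle_page(),
SLOTS_PER_PAGE, the simulated backlog) are hypothetical stand-ins rather
than the QEMU or Xen API; the real change is in the diff below.

#include <stdio.h>

#define SLOTS_PER_PAGE 511          /* stand-in for IOREQ_BUFFER_SLOT_NUM */

static unsigned int backlog = 2000; /* simulated queue of buffered ioreqs */

/* Handle at most one page worth of requests; return how many were handled. */
static unsigned int handle_page(void)
{
    unsigned int handled = 0;

    while (backlog > 0 && handled < SLOTS_PER_PAGE) {
        backlog--;                  /* "process" one buffered ioreq */
        handled++;
    }
    return handled;
}

int main(void)
{
    unsigned int handled;

    do {
        handled = handle_page();
        if (handled >= SLOTS_PER_PAGE) {
            /* In QEMU this is where the immediate timer is re-armed
             * (timer_mod() with the current time), so the main loop gets
             * a chance to run before the next batch. Here we just note
             * the yield point. */
            printf("full page handled, yielding to main loop\n");
        }
    } while (handled > 0);

    printf("backlog drained\n");
    return 0;
}

The key point is re-arming the timer for "now" instead of looping in
place: the callback runs again almost immediately, but only after the
main loop has had a chance to dispatch other events and timers.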
Signed-off-by: Ross Lagerwall
Reviewed-by: Paul Durrant
---
 hw/xen/xen-hvm-common.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 1627da739822..1116b3978938 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -463,11 +463,11 @@ static void handle_ioreq(XenIOState *state, ioreq_t *req)
     }
 }
 
-static bool handle_buffered_iopage(XenIOState *state)
+static unsigned int handle_buffered_iopage(XenIOState *state)
 {
     buffered_iopage_t *buf_page = state->buffered_io_page;
     buf_ioreq_t *buf_req = NULL;
-    bool handled_ioreq = false;
+    unsigned int handled = 0;
     ioreq_t req;
     int qw;
 
@@ -480,7 +480,7 @@ static bool handle_buffered_iopage(XenIOState *state)
     req.count = 1;
     req.dir = IOREQ_WRITE;
 
-    for (;;) {
+    do {
         uint32_t rdptr = buf_page->read_pointer, wrptr;
 
         xen_rmb();
@@ -521,22 +521,30 @@ static bool handle_buffered_iopage(XenIOState *state)
         assert(!req.data_is_ptr);
 
         qatomic_add(&buf_page->read_pointer, qw + 1);
-        handled_ioreq = true;
-    }
+        handled += qw + 1;
+    } while (handled < IOREQ_BUFFER_SLOT_NUM);
 
-    return handled_ioreq;
+    return handled;
 }
 
 static void handle_buffered_io(void *opaque)
 {
+    unsigned int handled;
     XenIOState *state = opaque;
 
-    if (handle_buffered_iopage(state)) {
+    handled = handle_buffered_iopage(state);
+    if (handled >= IOREQ_BUFFER_SLOT_NUM) {
+        /* We handled a full page of ioreqs. Schedule a timer to continue
+         * processing while giving other stuff a chance to run.
+         */
         timer_mod(state->buffered_io_timer,
-                  BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-    } else {
+                  qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+    } else if (handled == 0) {
         timer_del(state->buffered_io_timer);
         qemu_xen_evtchn_unmask(state->xce_handle, state->bufioreq_local_port);
+    } else {
+        timer_mod(state->buffered_io_timer,
+                  BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
     }
 }
 
-- 
2.43.0