From: Alberto Garcia
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Alberto Garcia, Stefan Hajnoczi, qemu-block@nongnu.org, Max Reitz
Date: Thu, 2 Aug 2018 17:50:23 +0300
Message-Id: <7b219a0844e8eefdb170eaf3e670cdb65383361b.1533219143.git.berto@igalia.com>
Subject: [Qemu-devel] [PATCH 1/4] qemu-iotests: Test removing a throttle group member with a pending timer

A throttle group can have several members, and each of them can have
several pending requests in the queue.

The requests are processed in a round-robin fashion: the algorithm
selects the drive that is going to run the next request and sets a
timer on it. Once the timer fires and the throttled request runs, the
next drive from the group is selected and a new timer is set.

If the user removed a drive from a group while that drive had a timer
pending, the code did not set up a new timer on any of the remaining
members of the group, freezing their I/O.
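The round-robin timer handover described above can be sketched in plain Python. This is a simplified illustration, not QEMU's actual implementation: `ThrottleGroup`, `Member`, `schedule_next` and `remove_member` are hypothetical names chosen for the sketch.

```python
# Simplified sketch (not QEMU code) of round-robin timer scheduling in
# a throttle group: only one member holds the wakeup timer at a time,
# and removing that member must hand the timer to another member with
# queued requests, or the rest of the group freezes.

from collections import deque


class Member:
    def __init__(self, name):
        self.name = name
        self.queue = []          # pending throttled requests


class ThrottleGroup:
    def __init__(self, members):
        self.members = deque(members)  # round-robin order
        self.timer_owner = None        # member with the pending timer

    def schedule_next(self):
        # Pick the next member that has queued requests and arm its timer.
        for _ in range(len(self.members)):
            m = self.members[0]
            self.members.rotate(-1)
            if m.queue:
                self.timer_owner = m
                return m
        self.timer_owner = None
        return None

    def remove_member(self, member):
        # This is the fixed behavior: if the departing member owned the
        # timer, re-arm it on one of the remaining members.
        had_timer = self.timer_owner is member
        self.members.remove(member)
        if had_timer:
            self.schedule_next()
```

Under this sketch, removing the timer-owning drive moves the timer to the other member instead of leaving the group's queued I/O stuck, which is exactly the scenario the test below exercises.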
This problem was fixed in 6fccbb475bc6effc313ee9481726a1748b6dae57, and
this patch adds a new test case that reproduces this exact scenario.

Signed-off-by: Alberto Garcia
---
 tests/qemu-iotests/093     | 52 ++++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/093.out |  4 ++--
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
index 68e344f8c1..b26cd34e32 100755
--- a/tests/qemu-iotests/093
+++ b/tests/qemu-iotests/093
@@ -208,6 +208,58 @@ class ThrottleTestCase(iotests.QMPTestCase):
             limits[tk] = rate
         self.do_test_throttle(ndrives, 5, limits)
 
+    # Test that removing a drive from a throttle group should not
+    # affect the remaining members of the group.
+    # https://bugzilla.redhat.com/show_bug.cgi?id=1535914
+    def test_remove_group_member(self):
+        # Create a throttle group with two drives
+        # and set a 4 KB/s read limit.
+        params = {"bps": 0,
+                  "bps_rd": 4096,
+                  "bps_wr": 0,
+                  "iops": 0,
+                  "iops_rd": 0,
+                  "iops_wr": 0 }
+        self.configure_throttle(2, params)
+
+        # Read 4KB from drive0. This is performed immediately.
+        self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
+
+        # Read 4KB again. The I/O limit has been exceeded so this
+        # request is throttled and a timer is set to wake it up.
+        self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
+
+        # Read from drive1. We're still over the I/O limit so this
+        # request is also throttled. There's no timer set in drive1
+        # because there's already one in drive0. Once the timer in
+        # drive0 fires and its throttled request is processed then the
+        # next request in the queue will be scheduled: this one.
+        self.vm.hmp_qemu_io("drive1", "aio_read 0 4096")
+
+        # At this point only the first 4KB have been read from drive0.
+        # The other requests are throttled.
+        self.assertEqual(self.blockstats('drive0')[0], 4096)
+        self.assertEqual(self.blockstats('drive1')[0], 0)
+
+        # Remove drive0 from the throttle group and disable its I/O limits.
+        # drive1 remains in the group with a throttled request.
+        params['bps_rd'] = 0
+        params['device'] = 'drive0'
+        result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **params)
+        self.assert_qmp(result, 'return', {})
+
+        # Removing the I/O limits from drive0 drains its pending request.
+        # The read request in drive1 is still throttled.
+        self.assertEqual(self.blockstats('drive0')[0], 8192)
+        self.assertEqual(self.blockstats('drive1')[0], 0)
+
+        # Advance the clock 5 seconds. This completes the request in drive1
+        self.vm.qtest("clock_step %d" % (5 * nsec_per_sec))
+
+        # Now all requests have been processed.
+        self.assertEqual(self.blockstats('drive0')[0], 8192)
+        self.assertEqual(self.blockstats('drive1')[0], 4096)
+
 class ThrottleTestCoroutine(ThrottleTestCase):
     test_img = "null-co://"
 
diff --git a/tests/qemu-iotests/093.out b/tests/qemu-iotests/093.out
index 594c16f49f..36376bed87 100644
--- a/tests/qemu-iotests/093.out
+++ b/tests/qemu-iotests/093.out
@@ -1,5 +1,5 @@
-........
+..........
 ----------------------------------------------------------------------
-Ran 8 tests
+Ran 10 tests
 
 OK
--
2.11.0
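A note on the timing the test relies on: with `bps_rd = 4096`, each extra 4096-byte read owes roughly one second of virtual time, so the 5-second `clock_step` comfortably completes drive1's throttled request. A quick back-of-the-envelope check in plain Python (independent of QEMU):

```python
# Sanity-check the virtual-clock arithmetic used by the test above:
# a second 4096-byte read against a 4096 B/s read limit must wait
# about one second, well within the 5-second clock_step.
bps_rd = 4096
request_bytes = 4096
wait_seconds = request_bytes / bps_rd      # 1.0 second of virtual time
clock_step_seconds = 5
assert clock_step_seconds > wait_seconds

# The value actually passed to clock_step is in nanoseconds.
nsec_per_sec = 1000 * 1000 * 1000
clock_step_ns = clock_step_seconds * nsec_per_sec
```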