From: Marek Marczykowski-Górecki
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki, Juergen Gross, Andrew Cooper,
    George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
    Wei Liu, Dario Faggioli
Subject: [PATCH v2] xen: credit2: respect credit2_runqueue=all when arranging runqueues
Date: Mon, 3 Oct 2022 18:21:58 +0200
Message-Id: <20221003162158.2042-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.37.3

The documentation for credit2_runqueue=all says it should create one
runqueue shared by all pCPUs on the host. But since the introduction of
sched_credit2_max_cpus_runqueue, it has actually created a separate
runqueue per socket, even when the CPU count is below
sched_credit2_max_cpus_runqueue. Adjust the condition to skip the
sibling check in the case of credit2_runqueue=all.

Fixes: 8e2aa76dc167 ("xen: credit2: limit the max number of CPUs in a runqueue")
Signed-off-by: Marek Marczykowski-Górecki
Reviewed-by: Juergen Gross
---
Changes in v2:
 - fix indentation
 - adjust doc

The whole thing is under cpu_runqueue_match() already, so maybe
cpu_runqueue_siblings_match() isn't needed at all?
---
 docs/misc/xen-command-line.pandoc | 5 +++++
 xen/common/sched/credit2.c        | 9 +++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 74b519f0c5bd..057cdb903042 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -724,6 +724,11 @@ Available alternatives, with their meaning, are:
 * `all`: just one runqueue shared by all the logical pCPUs of the host
 
+Regardless of the above choice, Xen attempts to respect the
+`sched_credit2_max_cpus_runqueue` limit, which may mean more than one runqueue
+for the `all` value. If that isn't intended, raise
+the `sched_credit2_max_cpus_runqueue` value.
+
 ### dbgp
 > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
 > `= xhci[ <integer> | @pci<bus>:<slot>.<func> ][,share=<bool>|hwdom]`
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 0e3f89e5378e..afff23b56238 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -996,9 +996,14 @@ cpu_add_to_runqueue(const struct scheduler *ops, unsigned int cpu)
              *
              * Otherwise, let's try to make sure that siblings stay in the
              * same runqueue, pretty much under any cinrcumnstances.
+             *
+             * Furthermore, try to respect credit2_runqueue=all, as long as
+             * max_cpus_runq isn't violated.
              */
-            if ( rqd->refcnt < max_cpus_runq && (ops->cpupool->gran != SCHED_GRAN_cpu ||
-                  cpu_runqueue_siblings_match(rqd, cpu, max_cpus_runq)) )
+            if ( rqd->refcnt < max_cpus_runq &&
+                 (ops->cpupool->gran != SCHED_GRAN_cpu ||
+                  cpu_runqueue_siblings_match(rqd, cpu, max_cpus_runq) ||
+                  opt_runqueue == OPT_RUNQUEUE_ALL) )
             {
                 /*
                  * This runqueue is ok, but as we said, we also want an even
-- 
2.37.3
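
For illustration only, not part of the patch: below is a minimal,
self-contained sketch of the join condition this change adjusts. The
types and helpers are simplified stand-ins for Xen's scheduler state
(the real code uses struct csched2_runqueue_data, SCHED_GRAN_cpu, and
cpu_runqueue_siblings_match(), none of which are reproduced here); the
only part mirroring the patch is opt == OPT_RUNQUEUE_ALL
short-circuiting the sibling check.

/* Sketch of the post-patch runqueue-join condition, with assumed
 * simplified types. siblings_match() stands in for Xen's
 * cpu_runqueue_siblings_match() and is assumed to fail, as it would
 * for a pCPU whose candidate runqueue already spans another socket. */
#include <stdbool.h>
#include <stdio.h>

enum opt_runqueue { OPT_RUNQUEUE_CPU, OPT_RUNQUEUE_CORE,
                    OPT_RUNQUEUE_SOCKET, OPT_RUNQUEUE_ALL };

static bool siblings_match(unsigned int cpu)
{
    (void)cpu;   /* assumed to fail for a CPU on a different socket */
    return false;
}

/* A pCPU joins an existing runqueue if the queue is below the cap AND
 * (granularity is coarser than per-cpu, OR its siblings fit, OR
 * credit2_runqueue=all was requested -- the disjunct this patch adds). */
static bool cpu_joins_runqueue(unsigned int refcnt,
                               unsigned int max_cpus_runq,
                               bool gran_is_cpu, unsigned int cpu,
                               enum opt_runqueue opt)
{
    return refcnt < max_cpus_runq &&
           (!gran_is_cpu || siblings_match(cpu) ||
            opt == OPT_RUNQUEUE_ALL);
}

int main(void)
{
    /* CPU 8 is the first pCPU of a second socket; the shared runqueue
     * currently holds 8 CPUs with a cap of 16. */
    printf("all:    join=%d\n",
           cpu_joins_runqueue(8, 16, true, 8, OPT_RUNQUEUE_ALL));    /* 1 */
    printf("socket: join=%d\n",
           cpu_joins_runqueue(8, 16, true, 8, OPT_RUNQUEUE_SOCKET)); /* 0 */
    return 0;
}

Before the patch the OPT_RUNQUEUE_ALL disjunct was absent, so the
failing sibling check split the runqueue per socket even under
credit2_runqueue=all; with it, only exceeding max_cpus_runq can force
a split.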