From nobody Thu Mar 13 20:04:43 2025
Date: Mon, 10 Feb 2025 17:56:30 -0800 (PST)
From: Stefano
Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall, Bertrand Marquis, Michal Orzel,
    Volodymyr Babchuk, Xenia.Ragiadakou@amd.com, dpsmith@apertussolutions.com
Subject: [RFC] dom0less vcpu affinity bindings

Hi all,

We have received requests to introduce dom0less vCPU affinity bindings
to allow configuring which pCPUs a given vCPU is allowed to run on.
After considering different approaches, I am thinking of using the
following binding format:

    vcpu0 {
        compatible = "xen,vcpu-affinity"; // compatible string
        id = <0>;                         // vcpu id
        hard-affinity = "1,4-7";          // pcpu ranges
    };

Notably, the hard-affinity property is represented as a string. We also
considered using a bitmask, such as:

    hard-affinity = <0x0f>;

However, I decided against this approach because, on large server
systems, the number of physical CPUs can be very high, making the
bitmask potentially very large. The string representation is more
practical for large systems and is also easier to understand and write.
It is also fully aligned with the way we have already implemented the
llc-colors option (see docs/misc/arm/device-tree/booting.txt and
docs/misc/cache-coloring.rst).

What do you think?
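To illustrate the intended semantics of the range string, here is a
minimal stand-alone sketch of the parsing logic. Note this is not the
Xen code: it uses a plain 64-bit mask in place of cpumask_t and strtoul
in place of simple_strtoul, so it can be tried outside the hypervisor.

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Parse a pCPU range string such as "1,4-7" into a bitmask.
 * Elements are comma-separated; each element is either a single
 * pCPU number or an inclusive "start-end" range.
 * This sketch caps pCPU numbers at 63 to fit a uint64_t.
 */
static uint64_t parse_affinity(const char *s)
{
    uint64_t mask = 0;

    while ( *s != '\0' )
    {
        char *endp;
        unsigned long start = strtoul(s, &endp, 0);
        unsigned long end = start;

        s = endp;
        if ( *s == '-' )        /* Range, e.g. "4-7" */
        {
            s++;
            end = strtoul(s, &endp, 0);
            s = endp;
        }

        for ( ; start <= end && start < 64; start++ )
            mask |= 1ULL << start;

        if ( *s == ',' )        /* Next element */
            s++;
        else if ( *s != '\0' )  /* Unexpected character: stop */
            break;
    }

    return mask;
}
```

For example, "1,4-7" sets bits 1 and 4 through 7, i.e. 0xf2.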
diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index 49d1f14d65..12379ecb20 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -818,6 +818,8 @@ void __init create_domUs(void)
     const struct dt_device_node *cpupool_node,
         *chosen = dt_find_node_by_path("/chosen");
     const char *llc_colors_str = NULL;
+    const char *hard_affinity_str = NULL;
+    struct dt_device_node *np;
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -992,6 +994,55 @@ void __init create_domUs(void)
         if ( rc )
             panic("Could not set up domain %s (rc = %d)\n",
                   dt_node_name(node), rc);
+
+        dt_for_each_child_node(node, np)
+        {
+            const char *s;
+            struct vcpu *v;
+            cpumask_t affinity;
+
+            if ( !dt_device_is_compatible(np, "xen,vcpu-affinity") )
+                continue;
+
+            if ( !dt_property_read_u32(np, "id", &val) )
+                continue;
+            if ( val >= d->max_vcpus )
+                panic("Invalid vcpu_id %u for domain %s\n", val,
+                      dt_node_name(node));
+
+            v = d->vcpu[val];
+            rc = dt_property_read_string(np, "hard-affinity",
+                                         &hard_affinity_str);
+            if ( rc < 0 )
+                continue;
+
+            s = hard_affinity_str;
+            cpumask_clear(&affinity);
+            while ( *s != '\0' )
+            {
+                unsigned int start, end;
+
+                start = simple_strtoul(s, &s, 0);
+
+                if ( *s == '-' ) /* Range */
+                {
+                    s++;
+                    end = simple_strtoul(s, &s, 0);
+                }
+                else /* Single value */
+                    end = start;
+
+                for ( ; start <= end; start++ )
+                    cpumask_set_cpu(start, &affinity);
+
+                if ( *s == ',' )
+                    s++;
+                else if ( *s != '\0' )
+                    break;
+            }
+
+            rc = vcpu_set_hard_affinity(v, &affinity);
+            if ( rc )
+                panic("vcpu%d: failed to set hard affinity\n", v->vcpu_id);
+        }
     }
 }
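For completeness, here is a hypothetical /chosen fragment showing how
the proposed binding would sit inside a dom0less domU node. The domU
properties (compatible = "xen,domain", cpus, memory) follow the existing
conventions in docs/misc/arm/device-tree/booting.txt; the memory size
and pCPU numbers are made up for illustration.

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <1>;
        #size-cells = <1>;
        cpus = <2>;
        memory = <0x0 0x20000>;

        vcpu0 {
            compatible = "xen,vcpu-affinity";
            id = <0>;
            hard-affinity = "1,4-7";
        };

        vcpu1 {
            compatible = "xen,vcpu-affinity";
            id = <1>;
            hard-affinity = "2";
        };
    };
};
```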