From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Stefano Stabellini,
    Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
    Jan Beulich, Wei Liu
Subject: [PATCH v3 5/6] arm/dom0less: assign dom0less guests to cpupools
Date: Fri, 18 Mar 2022 15:25:40 +0000
Message-Id: <20220318152541.7460-6-luca.fancellu@arm.com>
In-Reply-To: <20220318152541.7460-1-luca.fancellu@arm.com>
References: <20220318152541.7460-1-luca.fancellu@arm.com>

Introduce the domain-cpupool property of a xen,domain device tree node.
It specifies the device tree handle of a xen,cpupool node that identifies
a cpupool created at boot time; the guest will be assigned to that cpupool
on creation.

Add a cpupool_id member to the xen_domctl_createdomain public interface,
so the XEN_DOMCTL_INTERFACE_VERSION is bumped.

Add a public function to retrieve a pool id from a device tree cpupool
node.

Update the documentation about the property.
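For illustration, a dom0less configuration using the new property could
look like the sketch below. The fragment is hypothetical: the cpu labels
(a53_0, a53_1), the memory/cpus values and the node names are made up, and
the xen,cpupool node with its cpupool-cpus property follows the boot-time
cpupool binding introduced earlier in this series; only domain-cpupool is
new in this patch.

    cpupool_a: cpupool-a {
        compatible = "xen,cpupool";
        /* a53_0/a53_1 are labels of cpu nodes defined elsewhere in the tree. */
        cpupool-cpus = <&a53_0 &a53_1>;
    };

    chosen {
        domU1 {
            compatible = "xen,domain";
            #address-cells = <1>;
            #size-cells = <1>;
            memory = <0 0x20000>;
            cpus = <1>;
            /* New optional property: start this domU in cpupool_a. */
            domain-cpupool = <&cpupool_a>;

            /* Kernel/ramdisk module sub-nodes as in the existing binding. */
        };
    };

With such a device tree, create_domUs() resolves the domain-cpupool
phandle, btcpupools_get_domain_pool_id() maps the first cpu listed in
cpupool-cpus to its boot-time pool id, and that id is passed to
domain_create() through the new cpupool_id field.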
Signed-off-by: Luca Fancellu
Reviewed-by: Stefano Stabellini
---
Changes in v3:
- Use explicitly sized integer for struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Changed code due to previous commit code changes

Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the cpupool
  dtb node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
 xen/common/domain.c                   |  2 +-
 xen/include/public/domctl.h           |  4 +++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9c67a483d4a4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3241,6 +3242,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index f6f2fa8f2701..feba93a243fc 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -23,6 +23,8 @@ static unsigned int __initdata next_pool_id;
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -55,6 +57,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get first cpu listed in the cpupool, from its reg it's possible to
+     * retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num];
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 351029f8b239..0827400f4f49 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
     if ( !d->pbuf )
         goto fail;
 
-    if ( (err = sched_init_domain(d, 0)) != 0 )
+    if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
         goto fail;
 
     if ( (err = late_hwdom_init(d)) != 0 )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..2f4cf56f438d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes. 0 to disable. */
     uint32_t vmtrace_size;
 
+    uint32_t cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5d83465d3915..4e749a604f25 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1182,6 +1182,7 @@ unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
 
 #ifdef CONFIG_HAS_DEVICE_TREE
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 #else
 static inline void btcpupools_dtb_parse(void) {}
 #endif
@@ -1193,6 +1194,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif
 
 #endif /* __SCHED_H__ */
-- 
2.17.1