From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Wei Liu, Anthony PERARD, Juergen Gross
Subject: [PATCH v8 1/7] tools/cpupools: Give a name to unnamed cpupools
Date: Fri, 6 May 2022 13:00:06 +0100
Message-Id: <20220506120012.32326-2-luca.fancellu@arm.com>
In-Reply-To: <20220506120012.32326-1-luca.fancellu@arm.com>
References: <20220506120012.32326-1-luca.fancellu@arm.com>

With the introduction of boot time cpupools, Xen can create many
different cpupools at boot time other than the cpupool with id 0.

Since these newly created cpupools don't have an entry in Xenstore,
create the entry using the xen-init-dom0 helper with the usual
convention: Pool-<poolid>.

Given the change, remove the check for poolid == 0 from
libxl_cpupoolid_to_name(...).
Signed-off-by: Luca Fancellu
Reviewed-by: Anthony PERARD
---
Changes in v8:
- no changes
Changes in v7:
- Add R-by from Anthony
Changes in v6:
- Reworked loop to have only one error path (Anthony)
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- no changes, add R-by
Changes in v2:
- Remove unused variable, moved xc_cpupool_infofree ahead to simplify
  the code, use asprintf (Juergen)
---
 tools/helpers/xen-init-dom0.c  | 37 +++++++++++++++++++++++++++++++++-
 tools/libs/light/libxl_utils.c |  3 +--
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index c99224a4b607..37eff8868f25 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -43,7 +43,10 @@ int main(int argc, char **argv)
     int rc;
     struct xs_handle *xsh = NULL;
     xc_interface *xch = NULL;
-    char *domname_string = NULL, *domid_string = NULL;
+    char *domname_string = NULL, *domid_string = NULL,
+         *pool_path = NULL, *pool_name = NULL;
+    xc_cpupoolinfo_t *xcinfo;
+    unsigned int pool_id = 0;
     libxl_uuid uuid;
 
     /* Accept 0 or 1 argument */
@@ -114,9 +117,41 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    /* Create an entry in xenstore for each cpupool on the system */
+    do {
+        xcinfo = xc_cpupool_getinfo(xch, pool_id);
+        if (xcinfo != NULL) {
+            if (xcinfo->cpupool_id != pool_id)
+                pool_id = xcinfo->cpupool_id;
+            xc_cpupool_infofree(xch, xcinfo);
+            if (asprintf(&pool_path, "/local/pool/%d/name", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool path\n");
+                rc = 1;
+                goto out;
+            }
+            if (asprintf(&pool_name, "Pool-%d", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool name\n");
+                rc = 1;
+                goto out;
+            }
+            pool_id++;
+            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
+                          strlen(pool_name))) {
+                fprintf(stderr, "cannot set pool name\n");
+                rc = 1;
+                goto out;
+            }
+            free(pool_name);
+            free(pool_path);
+            pool_path = pool_name = NULL;
+        }
+    } while(xcinfo != NULL);
+
     printf("Done setting up Dom0\n");
 
 out:
+    free(pool_path);
+    free(pool_name);
     free(domid_string);
     free(domname_string);
     xs_close(xsh);
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 1d8a7f64ef4a..e5e6b2da9660 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -146,8 +146,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
 
     snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
     s = xs_read(ctx->xsh, XBT_NULL, path, &len);
-    if (!s && (poolid == 0))
-        return strdup("Pool-0");
+    return s;
 }
 
-- 
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Juergen Gross, Dario Faggioli, George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v8 2/7] xen/sched: create public function for cpupools creation
Date: Fri, 6 May 2022 13:00:07 +0100
Message-Id: <20220506120012.32326-3-luca.fancellu@arm.com>
In-Reply-To: <20220506120012.32326-1-luca.fancellu@arm.com>
References: <20220506120012.32326-1-luca.fancellu@arm.com>

Create a new public function to create cpupools. It can take as a
parameter the scheduler id, or a negative value meaning that the
default Xen scheduler will be used.
Signed-off-by: Luca Fancellu
Reviewed-by: Juergen Gross
---
Changes in v8:
- no changes
Changes in v7:
- no changes
Changes in v6:
- add R-by
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- Fixed comment (Andrew)
Changes in v2:
- cpupool_create_pool doesn't check anymore for pool id uniqueness
  before calling cpupool_create. Modified commit message accordingly
---
 xen/common/sched/cpupool.c | 15 +++++++++++++++
 xen/include/xen/sched.h    | 16 ++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a6da4970506a..89a891af7076 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1219,6 +1219,21 @@ static void cpupool_hypfs_init(void)
 
 #endif /* CONFIG_HYPFS */
 
+struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
+{
+    struct cpupool *pool;
+
+    if ( sched_id < 0 )
+        sched_id = scheduler_get_default()->sched_id;
+
+    pool = cpupool_create(pool_id, sched_id);
+
+    BUG_ON(IS_ERR(pool));
+    cpupool_put(pool);
+
+    return pool;
+}
+
 static int __init cf_check cpupool_init(void)
 {
     unsigned int cpu;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ed8539f6d297..0164db996b8b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1153,6 +1153,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
+
+/*
+ * cpupool_create_pool - Creates a cpupool
+ * @pool_id: id of the pool to be created
+ * @sched_id: id of the scheduler to be used for the pool
+ *
+ * Creates a cpupool with pool_id id.
+ * The sched_id parameter identifies the scheduler to be used, if it is
+ * negative, the default scheduler of Xen will be used.
+ *
+ * returns:
+ *     pointer to the struct cpupool just created, or Xen will panic in case of
+ *     error
+ */
+struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id);
+
 extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, George Dunlap, Dario Faggioli, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v8 3/7] xen/sched: retrieve scheduler id by name
Date: Fri, 6 May 2022 13:00:08 +0100
Message-Id: <20220506120012.32326-4-luca.fancellu@arm.com>
In-Reply-To: <20220506120012.32326-1-luca.fancellu@arm.com>
References: <20220506120012.32326-1-luca.fancellu@arm.com>

Add a static function to retrieve the scheduler pointer using the
scheduler name. Add a public function to retrieve the scheduler id by
the scheduler name; it makes use of the new static function.

Take the occasion to replace the open coded scheduler search in
scheduler_init with the new static function.
Signed-off-by: Luca Fancellu
Reviewed-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
Changes in v8:
- no changes
Changes in v7:
- Add R-by (Dario)
Changes in v6:
- no changes
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- add R-by
Changes in v2:
- replace open coded scheduler search in scheduler_init (Juergen)
---
 xen/common/sched/core.c | 40 ++++++++++++++++++++++++++--------------
 xen/include/xen/sched.h | 11 +++++++++++
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 8a8c25bbda47..8c73489654a1 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2947,10 +2947,30 @@ void scheduler_enable(void)
     scheduler_active = true;
 }
 
+static inline
+const struct scheduler *__init sched_get_by_name(const char *sched_name)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NUM_SCHEDULERS; i++ )
+        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
+            return schedulers[i];
+
+    return NULL;
+}
+
+int __init sched_get_id_by_name(const char *sched_name)
+{
+    const struct scheduler *scheduler = sched_get_by_name(sched_name);
+
+    return scheduler ? scheduler->sched_id : -1;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
+    const struct scheduler *scheduler;
     int i;
 
     scheduler_enable();
@@ -2981,25 +3001,17 @@ void __init scheduler_init(void)
                    schedulers[i]->opt_name);
             schedulers[i] = NULL;
         }
-
-        if ( schedulers[i] && !ops.name &&
-             !strcmp(schedulers[i]->opt_name, opt_sched) )
-            ops = *schedulers[i];
     }
 
-    if ( !ops.name )
+    scheduler = sched_get_by_name(opt_sched);
+    if ( !scheduler )
     {
         printk("Could not find scheduler: %s\n", opt_sched);
-        for ( i = 0; i < NUM_SCHEDULERS; i++ )
-            if ( schedulers[i] &&
-                 !strcmp(schedulers[i]->opt_name, CONFIG_SCHED_DEFAULT) )
-            {
-                ops = *schedulers[i];
-                break;
-            }
-        BUG_ON(!ops.name);
-        printk("Using '%s' (%s)\n", ops.name, ops.opt_name);
+        scheduler = sched_get_by_name(CONFIG_SCHED_DEFAULT);
+        BUG_ON(!scheduler);
+        printk("Using '%s' (%s)\n", scheduler->name, scheduler->opt_name);
     }
+    ops = *scheduler;
 
     if ( cpu_schedule_up(0) )
         BUG();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0164db996b8b..4442a1940c25 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -764,6 +764,17 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int sched_id(void);
+
+/*
+ * sched_get_id_by_name - retrieves a scheduler id given a scheduler name
+ * @sched_name: scheduler name as a string
+ *
+ * returns:
+ *     positive value being the scheduler id, on success
+ *     negative value if the scheduler name is not found.
+ */
+int sched_get_id_by_name(const char *sched_name);
+
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Volodymyr Babchuk, Dario Faggioli, Juergen Gross
Subject: [PATCH v8 4/7] xen/cpupool: Create different cpupools at boot time
Date: Fri, 6 May 2022 13:00:09 +0100
Message-Id: <20220506120012.32326-5-luca.fancellu@arm.com>
In-Reply-To: <20220506120012.32326-1-luca.fancellu@arm.com>
References: <20220506120012.32326-1-luca.fancellu@arm.com>

Introduce a way to create different cpupools at boot time. This is
particularly useful on Arm big.LITTLE systems, where there might be a
need to have different cpupools for each type of core; systems using
NUMA can likewise have a different cpupool for each node.

The feature on Arm relies on a device tree specification of the
cpupools to build the pools and assign cpus to them. ACPI is not
supported for this feature.

With this patch, cpupool0 can now have fewer cpus than the number of
online ones, so update the default case for opt_dom0_max_vcpus.

Documentation is created to explain the feature.
Signed-off-by: Luca Fancellu
Reviewed-by: Stefano Stabellini
Acked-by: George Dunlap
Reviewed-by: Juergen Gross # non-arm parts
---
Changes in v8:
- moved Kconfig parameter from xen/common/Kconfig to
  xen/common/sched/Kconfig (Jan)
- Add R-by (Stefano)
Changes in v7:
- rename xen/common/boot_cpupools.c to xen/common/sched/boot-cpupool.c
  (Jan)
- reverted xen/common/Makefile, add entry in xen/common/sched/Makefile
- changed line in MAINTAINERS under CPU POOLS section (Dario)
- Fix documentation, update opt_dom0_max_vcpus to the number of cpus in
  cpupool0 (Julien)
Changes in v6:
- Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
  panic if /chosen does not exist. Changed commit message (Julien)
- Add Juergen R-by for the xen/common/sched part that didn't change
Changes in v5:
- Fixed wrong variable name, swapped schedulers, add scheduler info in
  the printk (Stefano)
- introduce assert in cpupool_init and btcpupools_get_cpupool_id to
  harden the code
Changes in v4:
- modify Makefile to put in *.init.o, fixed stubs and macro (Jan)
- fixed docs, fix brackets (Stefano)
- keep cpu0 in Pool-0 (Julien)
- moved printk from btcpupools_allocate_pools to
  btcpupools_get_cpupool_id
- Add to docs constraint about cpu0 and Pool-0
Changes in v3:
- Add newline to cpupools.txt and removed "default n" from Kconfig (Jan)
- Fixed comment, moved defines, used global cpu_online_map, use
  HAS_DEVICE_TREE instead of ARM and place arch specific code in header
  (Juergen)
- Fix brackets, x86 code only panic, get rid of scheduler dt node, don't
  save pool pointer and look for it from the pool list (Stefano)
- Changed data structures to allow modification to the code.
Changes in v2: - Move feature to common code (Juergen) - Try to decouple dtb parse and cpupool creation to allow more way to specify cpupools (for example command line) - Created standalone dt node for the scheduler so it can be used in future work to set scheduler specific parameters - Use only auto generated ids for cpupools --- MAINTAINERS | 2 +- docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++ xen/arch/arm/domain_build.c | 5 +- xen/arch/arm/include/asm/smp.h | 3 + xen/common/sched/Kconfig | 7 + xen/common/sched/Makefile | 1 + xen/common/sched/boot-cpupool.c | 207 +++++++++++++++++++++++++ xen/common/sched/cpupool.c | 12 +- xen/include/xen/sched.h | 14 ++ 9 files changed, 388 insertions(+), 3 deletions(-) create mode 100644 docs/misc/arm/device-tree/cpupools.txt create mode 100644 xen/common/sched/boot-cpupool.c diff --git a/MAINTAINERS b/MAINTAINERS index ba0d1c0c1bfa..a417c3586051 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -279,7 +279,7 @@ CPU POOLS M: Juergen Gross M: Dario Faggioli S: Supported -F: xen/common/sched/cpupool.c +F: xen/common/sched/*cpupool.c =20 DEVICE TREE M: Stefano Stabellini diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-= tree/cpupools.txt new file mode 100644 index 000000000000..1f640d680317 --- /dev/null +++ b/docs/misc/arm/device-tree/cpupools.txt @@ -0,0 +1,140 @@ +Boot time cpupools +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possibl= e to +create cpupools during boot phase by specifying them in the device tree. +ACPI is not supported for this feature. + +Cpupools specification nodes shall be direct childs of /chosen node. +Each cpupool node contains the following properties: + +- compatible (mandatory) + + Must always include the compatiblity string: "xen,cpupool". + +- cpupool-cpus (mandatory) + + Must be a list of device tree phandle to nodes describing cpus (e.g. 
h= aving + device_type =3D "cpu"), it can't be empty. + +- cpupool-sched (optional) + + Must be a string having the name of a Xen scheduler. Check the sched= =3D<...> + boot argument for allowed values [1]. When this property is omitted, t= he Xen + default scheduler will be used. + + +Constraints +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + +If no cpupools are specified, all cpus will be assigned to one cpupool +implicitly created (Pool-0). + +If cpupools node are specified, but not every cpu brought up by Xen is ass= igned, +all the not assigned cpu will be assigned to an additional cpupool. + +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen = will +stop. + +The boot cpu must be assigned to Pool-0, so the cpupool containing that co= re +will become Pool-0 automatically. + + +Examples +=3D=3D=3D=3D=3D=3D=3D=3D + +A system having two types of core, the following device tree specification= will +instruct Xen to have two cpupools: + +- The cpupool described by node cpupool_a will have 4 cpus assigned. +- The cpupool described by node cpupool_b will have 2 cpus assigned. + +The following example can work only if hmp-unsafe=3D1 is passed to Xen boot +arguments, otherwise not all cores will be brought up by Xen and the cpupo= ol +creation process will stop Xen. + + +a72_1: cpu@0 { + compatible =3D "arm,cortex-a72"; + reg =3D <0x0 0x0>; + device_type =3D "cpu"; + [...] +}; + +a72_2: cpu@1 { + compatible =3D "arm,cortex-a72"; + reg =3D <0x0 0x1>; + device_type =3D "cpu"; + [...] +}; + +a53_1: cpu@100 { + compatible =3D "arm,cortex-a53"; + reg =3D <0x0 0x100>; + device_type =3D "cpu"; + [...] +}; + +a53_2: cpu@101 { + compatible =3D "arm,cortex-a53"; + reg =3D <0x0 0x101>; + device_type =3D "cpu"; + [...] +}; + +a53_3: cpu@102 { + compatible =3D "arm,cortex-a53"; + reg =3D <0x0 0x102>; + device_type =3D "cpu"; + [...] +}; + +a53_4: cpu@103 { + compatible =3D "arm,cortex-a53"; + reg =3D <0x0 0x103>; + device_type =3D "cpu"; + [...] 
+}; + +chosen { + + cpupool_a { + compatible =3D "xen,cpupool"; + cpupool-cpus =3D <&a53_1 &a53_2 &a53_3 &a53_4>; + }; + cpupool_b { + compatible =3D "xen,cpupool"; + cpupool-cpus =3D <&a72_1 &a72_2>; + cpupool-sched =3D "credit2"; + }; + + [...] + +}; + + +A system having the cpupools specification below will instruct Xen to have= three +cpupools: + +- The cpupool described by node cpupool_a will have 2 cpus assigned. +- The cpupool described by node cpupool_b will have 2 cpus assigned. +- An additional cpupool will be created, having 2 cpus assigned (created b= y Xen + with all the unassigned cpus a53_3 and a53_4). + +chosen { + + cpupool_a { + compatible =3D "xen,cpupool"; + cpupool-cpus =3D <&a53_1 &a53_2>; + }; + cpupool_b { + compatible =3D "xen,cpupool"; + cpupool-cpus =3D <&a72_1 &a72_2>; + cpupool-sched =3D "null"; + }; + + [...] + +}; + +[1] docs/misc/xen-command-line.pandoc diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 1472ca4972b0..5df5c8ffb8ba 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -73,7 +73,10 @@ custom_param("dom0_mem", parse_dom0_mem); unsigned int __init dom0_max_vcpus(void) { if ( opt_dom0_max_vcpus =3D=3D 0 ) - opt_dom0_max_vcpus =3D num_online_cpus(); + { + ASSERT(cpupool0); + opt_dom0_max_vcpus =3D cpumask_weight(cpupool_valid_cpus(cpupool0)= ); + } if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS ) opt_dom0_max_vcpus =3D MAX_VIRT_CPUS; =20 diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h index af5a2fe65266..83c0cd69767b 100644 --- a/xen/arch/arm/include/asm/smp.h +++ b/xen/arch/arm/include/asm/smp.h @@ -34,6 +34,9 @@ extern void init_secondary(void); extern void smp_init_cpus(void); extern void smp_clear_cpu_maps (void); extern int smp_get_max_cpus (void); + +#define cpu_physical_id(cpu) cpu_logical_map(cpu) + #endif =20 /* diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig index 3d9f9214b8cc..b2ef0c99a3f8 100644 --- 
a/xen/common/sched/Kconfig +++ b/xen/common/sched/Kconfig @@ -64,3 +64,10 @@ config SCHED_DEFAULT default "credit2" =20 endmenu + +config BOOT_TIME_CPUPOOLS + bool "Create cpupools at boot time" + depends on HAS_DEVICE_TREE + help + Creates cpupools during boot time and assigns cpus to them. Cpupools + options can be specified in the device tree. diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile index 3537f2a68d69..697bd54bfe93 100644 --- a/xen/common/sched/Makefile +++ b/xen/common/sched/Makefile @@ -1,3 +1,4 @@ +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) +=3D boot-cpupool.init.o obj-y +=3D cpupool.o obj-$(CONFIG_SCHED_ARINC653) +=3D arinc653.o obj-$(CONFIG_SCHED_CREDIT) +=3D credit.o diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupoo= l.c new file mode 100644 index 000000000000..9429a5025fc4 --- /dev/null +++ b/xen/common/sched/boot-cpupool.c @@ -0,0 +1,207 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * xen/common/boot_cpupools.c + * + * Code to create cpupools at boot time. + * + * Copyright (C) 2022 Arm Ltd. + */ + +#include +#include + +/* + * pool_cpu_map: Index is logical cpu number, content is cpupool id, (-1= ) for + * unassigned. + * pool_sched_map: Index is cpupool id, content is scheduler id, (-1) for + * unassigned. + */ +static int __initdata pool_cpu_map[NR_CPUS] =3D { [0 ... NR_CPUS-1] =3D = -1 }; +static int __initdata pool_sched_map[NR_CPUS] =3D { [0 ... 
NR_CPUS-1] =3D = -1 }; +static unsigned int __initdata next_pool_id; + +#define BTCPUPOOLS_DT_NODE_NO_REG (-1) +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2) + +static int __init get_logical_cpu_from_hw_id(unsigned int hwid) +{ + unsigned int i; + + for ( i =3D 0; i < nr_cpu_ids; i++ ) + { + if ( cpu_physical_id(i) =3D=3D hwid ) + return i; + } + + return -1; +} + +static int __init +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node) +{ + int cpu_num; + const __be32 *prop; + unsigned int cpu_reg; + + prop =3D dt_get_property(cpu_node, "reg", NULL); + if ( !prop ) + return BTCPUPOOLS_DT_NODE_NO_REG; + + cpu_reg =3D dt_read_number(prop, dt_n_addr_cells(cpu_node)); + + cpu_num =3D get_logical_cpu_from_hw_id(cpu_reg); + if ( cpu_num < 0 ) + return BTCPUPOOLS_DT_NODE_NO_LOG_CPU; + + return cpu_num; +} + +static int __init check_and_get_sched_id(const char* scheduler_name) +{ + int sched_id =3D sched_get_id_by_name(scheduler_name); + + if ( sched_id < 0 ) + panic("Scheduler %s does not exists!\n", scheduler_name); + + return sched_id; +} + +void __init btcpupools_dtb_parse(void) +{ + const struct dt_device_node *chosen, *node; + + if ( !acpi_disabled ) + return; + + chosen =3D dt_find_node_by_path("/chosen"); + if ( !chosen ) + panic("/chosen missing. 
Boot time cpupools can't be parsed from DT= .\n"); + + dt_for_each_child_node(chosen, node) + { + const struct dt_device_node *phandle_node; + int sched_id =3D -1; + const char* scheduler_name; + unsigned int i =3D 0; + + if ( !dt_device_is_compatible(node, "xen,cpupool") ) + continue; + + if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_na= me) ) + sched_id =3D check_and_get_sched_id(scheduler_name); + + phandle_node =3D dt_parse_phandle(node, "cpupool-cpus", i++); + if ( !phandle_node ) + panic("Missing or empty cpupool-cpus property!\n"); + + while ( phandle_node ) + { + int cpu_num; + + cpu_num =3D get_logical_cpu_from_cpu_node(phandle_node); + + if ( cpu_num < 0 ) + panic("Error retrieving logical cpu from node %s (%d)\n", + dt_node_name(node), cpu_num); + + if ( pool_cpu_map[cpu_num] !=3D -1 ) + panic("Logical cpu %d already added to a cpupool!\n", cpu_= num); + + pool_cpu_map[cpu_num] =3D next_pool_id; + + phandle_node =3D dt_parse_phandle(node, "cpupool-cpus", i++); + } + + /* Save scheduler choice for this cpupool id */ + pool_sched_map[next_pool_id] =3D sched_id; + + /* Let Xen generate pool ids */ + next_pool_id++; + } +} + +void __init btcpupools_allocate_pools(void) +{ + unsigned int i; + bool add_extra_cpupool =3D false; + int swap_id =3D -1; + + /* + * If there are no cpupools, the value of next_pool_id is zero, so the= code + * below will assign every cpu to cpupool0 as the default behavior. + * When there are cpupools, the code below is assigning all the not + * assigned cpu to a new pool (next_pool_id value is the last id + 1). + * In the same loop we check if there is any assigned cpu that is not + * online. 
+     */
+    for ( i = 0; i < nr_cpu_ids; i++ )
+    {
+        if ( cpumask_test_cpu(i, &cpu_online_map) )
+        {
+            /* Unassigned cpu gets next_pool_id pool id value */
+            if ( pool_cpu_map[i] < 0 )
+            {
+                pool_cpu_map[i] = next_pool_id;
+                add_extra_cpupool = true;
+            }
+
+            /*
+             * Cpu0 must be in cpupool0, otherwise some operations like moving
+             * cpus between cpupools, cpu hotplug, destroying cpupools, shutdown
+             * of the host, might not work in a sane way.
+             */
+            if ( !i && (pool_cpu_map[0] != 0) )
+                swap_id = pool_cpu_map[0];
+
+            if ( swap_id != -1 )
+            {
+                if ( pool_cpu_map[i] == swap_id )
+                    pool_cpu_map[i] = 0;
+                else if ( pool_cpu_map[i] == 0 )
+                    pool_cpu_map[i] = swap_id;
+            }
+        }
+        else
+        {
+            if ( pool_cpu_map[i] >= 0 )
+                panic("Pool-%d contains cpu%u that is not online!\n",
+                      pool_cpu_map[i], i);
+        }
+    }
+
+    /* A swap happened, swap schedulers between cpupool id 0 and the other */
+    if ( swap_id != -1 )
+    {
+        int swap_sched = pool_sched_map[swap_id];
+
+        pool_sched_map[swap_id] = pool_sched_map[0];
+        pool_sched_map[0] = swap_sched;
+    }
+
+    if ( add_extra_cpupool )
+        next_pool_id++;
+
+    /* Create cpupools with selected schedulers */
+    for ( i = 0; i < next_pool_id; i++ )
+        cpupool_create_pool(i, pool_sched_map[i]);
+}
+
+unsigned int __init btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    ASSERT((cpu < NR_CPUS) && (pool_cpu_map[cpu] >= 0));
+
+    printk(XENLOG_INFO "Logical CPU %u in Pool-%d (Scheduler id: %d).\n",
+           cpu, pool_cpu_map[cpu], pool_sched_map[pool_cpu_map[cpu]]);
+
+    return pool_cpu_map[cpu];
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 89a891af7076..86a175f99cd5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1247,12 +1247,22 @@ static int __init cf_check cpupool_init(void)
     cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);

+    btcpupools_dtb_parse();
+
+    btcpupools_allocate_pools();
+
     spin_lock(&cpupool_lock);

     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);

     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+    {
+        unsigned int pool_id = btcpupools_get_cpupool_id(cpu);
+        struct cpupool *pool = cpupool_find_by_id(pool_id);
+
+        ASSERT(pool);
+        cpupool_assign_cpu_locked(pool, cpu);
+    }

     spin_unlock(&cpupool_lock);

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4442a1940c25..74b3aae10b94 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1184,6 +1184,20 @@ extern void cf_check dump_runq(unsigned char key);

 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);

+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void btcpupools_allocate_pools(void);
+unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
+void btcpupools_dtb_parse(void);
+
+#else /* !CONFIG_BOOT_TIME_CPUPOOLS */
+static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_dtb_parse(void) {}
+static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    return 0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
--
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Juergen Gross, Dario Faggioli, George Dunlap
Subject: [PATCH v8 5/7] xen/cpupool: Don't allow removing cpu0 from cpupool0
Date: Fri, 6 May 2022 13:00:10 +0100
Message-Id: <20220506120012.32326-6-luca.fancellu@arm.com>
In-Reply-To: <20220506120012.32326-1-luca.fancellu@arm.com>
References: <20220506120012.32326-1-luca.fancellu@arm.com>

Cpu0 must remain in cpupool0; otherwise some operations, like moving cpus
between cpupools, cpu hotplug, destroying cpupools, or shutting down the
host, might not work in a sane way.

Signed-off-by: Luca Fancellu
Reviewed-by: Juergen Gross
---
Changes in v8:
- Add R-by (Juergen)

Changes in v7:
- new patch
---
 xen/common/sched/cpupool.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 86a175f99cd5..0a93bcc631bf 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -572,6 +572,7 @@ static long cf_check cpupool_unassign_cpu_helper(void *info)
  * possible failures:
  * - last cpu and still active domains in cpupool
  * - cpu just being unplugged
+ * - Attempt to remove boot cpu from cpupool0
  */
 static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
 {
@@ -582,7 +583,12 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       c->cpupool_id, cpu);

-    if ( !cpu_online(cpu) )
+    /*
+     * Cpu0 must remain in cpupool0, otherwise some operations like moving cpus
+     * between cpupools, cpu hotplug, destroying cpupools, shutdown of the host,
+     * might not work in a sane way.
+     */
+    if ( (!c->cpupool_id && !cpu) || !cpu_online(cpu) )
         return -EINVAL;

     master_cpu = sched_get_resource_cpu(cpu);
--
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 6/7] arm/dom0less: assign dom0less guests to cpupools
Date: Fri, 6 May 2022 13:00:11 +0100
Message-Id: <20220506120012.32326-7-luca.fancellu@arm.com>

Introduce the domain-cpupool property of a xen,domain device tree node. It
specifies the device tree handle of a xen,cpupool node identifying a cpupool
created at boot time, to which the guest will be assigned on creation.

Add a member to the xen_domctl_createdomain public interface, so the
XEN_DOMCTL_INTERFACE_VERSION is bumped.

Add a public function to retrieve a pool id from the device tree cpupool
node.

Update the documentation about the property.

Signed-off-by: Luca Fancellu
Reviewed-by: Stefano Stabellini
Reviewed-by: Juergen Gross # non-arm parts
---
Changes in v8:
- no changes

Changes in v7:
- Add comment for cpupool_id struct member.
  (Jan)

Changes in v6:
- no changes

Changes in v5:
- no changes

Changes in v4:
- no changes
- add R-by

Changes in v3:
- Use an explicitly sized integer for the struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Changed code due to previous commit code changes

Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the cpupool dtb
  node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/domain.c                   |  2 +-
 xen/common/sched/boot-cpupool.c       | 24 ++++++++++++++++++++++++
 xen/include/public/domctl.h           |  5 ++++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).

+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 5df5c8ffb8ba..aa777741bdd0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3174,7 +3174,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");

     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3243,6 +3244,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }

+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8d2c2a989708..7570eae91a24 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -697,7 +697,7 @@ struct domain *domain_create(domid_t domid,
     if ( !d->pbuf )
         goto fail;

-    if ( (err = sched_init_domain(d, 0)) != 0 )
+    if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
         goto fail;

     if ( (err = late_hwdom_init(d)) != 0 )
diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupool.c
index 9429a5025fc4..240bae4cebb8 100644
--- a/xen/common/sched/boot-cpupool.c
+++ b/xen/common/sched/boot-cpupool.c
@@ -22,6 +22,8 @@ static unsigned int __initdata next_pool_id;

 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)

 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -56,6 +58,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }

+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get the first cpu listed in the cpupool; from its reg it's possible to
+     * retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num];
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..84e75829b980 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"

-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015

 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,9 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes. 0 to disable.
 */
     uint32_t vmtrace_size;

+    /* CPU pool to use; specify 0 or a specific existing pool */
+    uint32_t cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 74b3aae10b94..32d2a6294b6d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1188,6 +1188,7 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 void btcpupools_allocate_pools(void);
 unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);

 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
 static inline void btcpupools_allocate_pools(void) {}
@@ -1196,6 +1197,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif

 #endif /* __SCHED_H__ */
--
2.17.1

From nobody Wed Apr 24 22:59:47 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 7/7] xen/cpupool: Allow cpupool0 to use different scheduler
Date: Fri, 6 May 2022 13:00:12 +0100
Message-Id: <20220506120012.32326-8-luca.fancellu@arm.com>
Currently cpupool0 can use only the default scheduler, and cpupool_create
has a hardcoded behavior when creating pool 0 that doesn't allocate new
memory for the scheduler, but uses the default scheduler structure in
memory.

With this commit it is possible to allocate a different scheduler for
cpupool0 when using boot-time cpupools.

To achieve this, the hardcoded behavior in cpupool_create is removed and
the cpupool0 creation is moved.

When compiling without boot-time cpupools enabled, the current behavior
is maintained (except that cpupool0 scheduler memory will be allocated).

Signed-off-by: Luca Fancellu
Reviewed-by: Juergen Gross
---
Changes in v8:
- no changes

Changes in v7:
- no changes

Changes in v6:
- Add R-by

Changes in v5:
- no changes

Changes in v4:
- no changes

Changes in v3:
- fix typo in commit message (Juergen)
- rebase changes

Changes in v2:
- new patch
---
 xen/common/sched/boot-cpupool.c | 5 ++++-
 xen/common/sched/cpupool.c      | 8 +-------
 xen/include/xen/sched.h         | 5 ++++-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupool.c
index 240bae4cebb8..5955e6f9a98b 100644
--- a/xen/common/sched/boot-cpupool.c
+++ b/xen/common/sched/boot-cpupool.c
@@ -205,8 +205,11 @@ void __init btcpupools_allocate_pools(void)
     if ( add_extra_cpupool )
         next_pool_id++;

+    /* Keep track of cpupool id 0 with the global cpupool0 */
+    cpupool0 = cpupool_create_pool(0, pool_sched_map[0]);
+
     /* Create cpupools with selected schedulers */
-    for ( i = 0; i < next_pool_id; i++ )
+    for ( i = 1; i < next_pool_id; i++ )
         cpupool_create_pool(i, pool_sched_map[i]);
 }

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0a93bcc631bf..f6e3d97e5288 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -312,10 +312,7 @@ static struct cpupool *cpupool_create(unsigned int poolid,
         c->cpupool_id = q->cpupool_id + 1;
     }

-    if ( poolid == 0 )
-        c->sched = scheduler_get_default();
-    else
-        c->sched = scheduler_alloc(sched_id);
+    c->sched = scheduler_alloc(sched_id);
     if ( IS_ERR(c->sched) )
     {
         ret = PTR_ERR(c->sched);
@@ -1248,9 +1245,6 @@ static int __init cf_check cpupool_init(void)

     cpupool_hypfs_init();

-    cpupool0 = cpupool_create(0, 0);
-    BUG_ON(IS_ERR(cpupool0));
-    cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);

     btcpupools_dtb_parse();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 32d2a6294b6d..6040fa3b3830 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1191,7 +1191,10 @@ void btcpupools_dtb_parse(void);
 int btcpupools_get_domain_pool_id(const struct dt_device_node *node);

 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
-static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_allocate_pools(void)
+{
+    cpupool0 = cpupool_create_pool(0, -1);
+}
 static inline void btcpupools_dtb_parse(void) {}
 static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
--
2.17.1
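Putting the series together, the /chosen fragment below sketches how the new bindings are meant to interact: two boot-time cpupools are declared via xen,cpupool nodes, and a dom0less guest opts into the second one through the domain-cpupool property added in patch 6/7. This is an illustrative sketch only — the labels, node names, cpu phandles, and the guest's cpus/memory values are hypothetical, not taken from the patches; cpupool-sched is optional and, when absent, the pool keeps the default scheduler (sched_id stays -1 in btcpupools_dtb_parse).

```
chosen {
    cpupool_a: cpupool-a {
        compatible = "xen,cpupool";
        /* Phandles to /cpus nodes; cpu0 here, so this pool becomes Pool-0 */
        cpupool-cpus = <&cpu0 &cpu1>;
        /* No cpupool-sched: the default scheduler is used */
    };

    cpupool_b: cpupool-b {
        compatible = "xen,cpupool";
        cpupool-cpus = <&cpu2 &cpu3>;
        cpupool-sched = "credit2";
    };

    domU1 {
        compatible = "xen,domain";
        cpus = <2>;
        memory = <0x0 0x20000>;
        /* Optional: start this dom0less guest in the second boot-time pool */
        domain-cpupool = <&cpupool_b>;
    };
};
```

Note that the pool containing the boot cpu must end up as Pool-0; if the nodes were declared in the other order, btcpupools_allocate_pools would swap the pool ids (and their schedulers) so that cpu0 still lands in cpupool0.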