From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:46 +0200
Message-Id: <20190927070050.12405-43-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH v4 42/46] xen/sched: support multiple cpus per scheduling resource

Prepare for supporting multiple cpus per scheduling resource by
allocating the cpumask per resource dynamically.
Modify sched_res_mask to have only one bit set per scheduling resource.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch (carved out from other patch)
V4:
- use cpumask_t for sched_res_mask (Jan Beulich)
- clear cpu in sched_res_mask when taking cpu away (Jan Beulich)
---
 xen/common/cpupool.c       |  4 ++--
 xen/common/schedule.c      | 15 +++++++++++++--
 xen/include/xen/sched-if.h |  4 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 7228ca84b4..13dffaadcf 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -283,7 +283,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
         cpupool_cpu_moving = NULL;
     }
     cpumask_set_cpu(cpu, c->cpu_valid);
-    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
+    cpumask_and(c->res_valid, c->cpu_valid, &sched_res_mask);
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
@@ -376,7 +376,7 @@ static int cpupool_unassign_cpu_start(struct cpupool *c, unsigned int cpu)
     atomic_inc(&c->refcnt);
     cpupool_cpu_moving = c;
     cpumask_clear_cpu(cpu, c->cpu_valid);
-    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
+    cpumask_and(c->res_valid, c->cpu_valid, &sched_res_mask);
 
 out:
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c14a66d5f0..bab24104cd 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -61,7 +61,7 @@ integer_param("sched_ratelimit_us", sched_ratelimit_us);
 
 /* Number of vcpus per struct sched_unit. */
 bool __read_mostly sched_disable_smt_switching;
-const cpumask_t *sched_res_mask = &cpumask_all;
+cpumask_t sched_res_mask;
 
 /* Common lock for free cpus. */
 static DEFINE_SPINLOCK(sched_free_cpu_lock);
@@ -2411,8 +2411,14 @@ static int cpu_schedule_up(unsigned int cpu)
     sr = xzalloc(struct sched_resource);
     if ( sr == NULL )
         return -ENOMEM;
+    if ( !zalloc_cpumask_var(&sr->cpus) )
+    {
+        xfree(sr);
+        return -ENOMEM;
+    }
+
     sr->master_cpu = cpu;
-    sr->cpus = cpumask_of(cpu);
+    cpumask_copy(sr->cpus, cpumask_of(cpu));
     set_sched_res(cpu, sr);
 
     sr->scheduler = &sched_idle_ops;
@@ -2424,6 +2430,8 @@ static int cpu_schedule_up(unsigned int cpu)
     /* We start with cpu granularity. */
     sr->granularity = 1;
 
+    cpumask_set_cpu(cpu, &sched_res_mask);
+
     /* Boot CPU is dealt with later in scheduler_init(). */
     if ( cpu == 0 )
         return 0;
@@ -2456,6 +2464,7 @@ static void sched_res_free(struct rcu_head *head)
 {
     struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
 
+    free_cpumask_var(sr->cpus);
     xfree(sr);
 }
 
@@ -2469,7 +2478,9 @@ static void cpu_schedule_down(unsigned int cpu)
 
     kill_timer(&sr->s_timer);
 
+    cpumask_clear_cpu(cpu, &sched_res_mask);
     set_sched_res(cpu, NULL);
+
     call_rcu(&sr->rcu, sched_res_free);
 
     rcu_read_unlock(&sched_res_rculock);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 3988985ee6..780735dda3 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -24,7 +24,7 @@ extern cpumask_t cpupool_free_cpus;
 extern int sched_ratelimit_us;
 
 /* Scheduling resource mask. */
-extern const cpumask_t *sched_res_mask;
+extern cpumask_t sched_res_mask;
 
 /* Number of vcpus per struct sched_unit. */
 enum sched_gran {
@@ -57,7 +57,7 @@ struct sched_resource {
     /* Cpu with lowest id in scheduling resource. */
     unsigned int        master_cpu;
     unsigned int        granularity;
-    const cpumask_t    *cpus;           /* cpus covered by this struct     */
+    cpumask_var_t       cpus;           /* cpus covered by this struct     */
     struct rcu_head     rcu;
 };
 
-- 
2.16.4
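For illustration, a minimal stand-alone C sketch of the allocate-then-back-out pattern cpu_schedule_up() and sched_res_free() use above: the per-resource mask is allocated dynamically after the resource struct, the struct is freed again if the mask allocation fails, and the mask is released before the struct on teardown. The toy_resource type, NR_CPUS value and plain calloc()/free() below are simplified stand-ins, not Xen's sched_resource or cpumask_var_t API.

```c
/* Simplified model of the per-resource cpumask handling this patch adds;
 * calloc()/free() stand in for zalloc_cpumask_var()/free_cpumask_var(). */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 256   /* illustrative value, not Xen's configuration */

struct toy_resource {
    unsigned int master_cpu;
    unsigned char *cpus;   /* dynamically allocated bitmap, one bit per cpu */
};

static struct toy_resource *toy_resource_alloc(unsigned int cpu)
{
    struct toy_resource *sr = calloc(1, sizeof(*sr));

    if ( !sr )
        return NULL;

    /* Allocate the per-resource mask; on failure undo the first allocation,
     * mirroring the xfree(sr) fallback in cpu_schedule_up(). */
    sr->cpus = calloc(NR_CPUS / 8, 1);
    if ( !sr->cpus )
    {
        free(sr);
        return NULL;
    }

    /* Start with cpu granularity: only the master cpu's bit is set. */
    sr->master_cpu = cpu;
    sr->cpus[cpu / 8] |= 1u << (cpu % 8);

    return sr;
}

static void toy_resource_free(struct toy_resource *sr)
{
    /* Release the mask before the struct, as sched_res_free() now does. */
    free(sr->cpus);
    free(sr);
}

int main(void)
{
    struct toy_resource *sr = toy_resource_alloc(3);

    if ( sr )
    {
        printf("resource with master cpu %u allocated\n", sr->master_cpu);
        toy_resource_free(sr);
    }

    return 0;
}
```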