From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Julien Grall, Jan Beulich
Date: Mon, 30 Sep 2019 07:21:31 +0200
Message-Id: <20190930052135.11257-16-jgross@suse.com>
In-Reply-To: <20190930052135.11257-1-jgross@suse.com>
References: <20190930052135.11257-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v5 15/19] xen/sched: support multiple cpus per scheduling resource

Prepare supporting multiple cpus per scheduling resource by allocating
the cpumask per resource dynamically.

Modify sched_res_mask to have only one bit per scheduling resource set.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch (carved out from other patch)
V4:
- use cpumask_t for sched_res_mask (Jan Beulich)
- clear cpu in sched_res_mask when taking cpu away (Jan Beulich)
---
 xen/common/cpupool.c       |  4 ++--
 xen/common/schedule.c      | 15 +++++++++++++--
 xen/include/xen/sched-if.h |  4 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 7228ca84b4..13dffaadcf 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -283,7 +283,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
         cpupool_cpu_moving = NULL;
     }
     cpumask_set_cpu(cpu, c->cpu_valid);
-    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
+    cpumask_and(c->res_valid, c->cpu_valid, &sched_res_mask);
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
@@ -376,7 +376,7 @@ static int cpupool_unassign_cpu_start(struct cpupool *c, unsigned int cpu)
     atomic_inc(&c->refcnt);
     cpupool_cpu_moving = c;
     cpumask_clear_cpu(cpu, c->cpu_valid);
-    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
+    cpumask_and(c->res_valid, c->cpu_valid, &sched_res_mask);
 
 out:
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 1f23bf0e83..efe077b01f 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -63,7 +63,7 @@ integer_param("sched_ratelimit_us", sched_ratelimit_us);
 
 /* Number of vcpus per struct sched_unit. */
 bool __read_mostly sched_disable_smt_switching;
-const cpumask_t *sched_res_mask = &cpumask_all;
+cpumask_t sched_res_mask;
 
 /* Common lock for free cpus. */
 static DEFINE_SPINLOCK(sched_free_cpu_lock);
@@ -2426,8 +2426,14 @@ static int cpu_schedule_up(unsigned int cpu)
     sr = xzalloc(struct sched_resource);
     if ( sr == NULL )
         return -ENOMEM;
+    if ( !zalloc_cpumask_var(&sr->cpus) )
+    {
+        xfree(sr);
+        return -ENOMEM;
+    }
+
     sr->master_cpu = cpu;
-    sr->cpus = cpumask_of(cpu);
+    cpumask_copy(sr->cpus, cpumask_of(cpu));
     set_sched_res(cpu, sr);
 
     sr->scheduler = &sched_idle_ops;
@@ -2439,6 +2445,8 @@ static int cpu_schedule_up(unsigned int cpu)
     /* We start with cpu granularity. */
     sr->granularity = 1;
 
+    cpumask_set_cpu(cpu, &sched_res_mask);
+
     /* Boot CPU is dealt with later in scheduler_init(). */
     if ( cpu == 0 )
         return 0;
@@ -2471,6 +2479,7 @@ static void sched_res_free(struct rcu_head *head)
 {
     struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
 
+    free_cpumask_var(sr->cpus);
     xfree(sr);
 }
 
@@ -2484,7 +2493,9 @@ static void cpu_schedule_down(unsigned int cpu)
 
     kill_timer(&sr->s_timer);
 
+    cpumask_clear_cpu(cpu, &sched_res_mask);
     set_sched_res(cpu, NULL);
+
     call_rcu(&sr->rcu, sched_res_free);
 
     rcu_read_unlock(&sched_res_rculock);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 3988985ee6..780735dda3 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -24,7 +24,7 @@ extern cpumask_t cpupool_free_cpus;
 extern int sched_ratelimit_us;
 
 /* Scheduling resource mask. */
-extern const cpumask_t *sched_res_mask;
+extern cpumask_t sched_res_mask;
 
 /* Number of vcpus per struct sched_unit. */
 enum sched_gran {
@@ -57,7 +57,7 @@ struct sched_resource {
     /* Cpu with lowest id in scheduling resource. */
     unsigned int master_cpu;
     unsigned int granularity;
-    const cpumask_t *cpus;          /* cpus covered by this struct    */
+    cpumask_var_t cpus;             /* cpus covered by this struct    */
     struct rcu_head rcu;
 };
 
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel