From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Subject: [Xen-devel] [PATCH 50/60] xen/sched: prepare per-cpupool scheduling granularity
Date: Tue, 28 May 2019 12:33:03 +0200
Message-Id: <20190528103313.1343-51-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>

On- and offlining cpus with core scheduling is rather complicated as the
cpus are taken on- or offline one by one, but scheduling wants them to be
handled per core.

As the future plan is to be able to select the scheduling granularity per
cpupool, prepare for that by storing the granularity in struct cpupool and
struct sched_resource (we need it there for free cpus which are not
associated with any cpupool). Free cpus will always use granularity 1.

Store the selected granularity option (cpu, core or socket) in the cpupool
as well, as we will need it to select the appropriate cpu mask when
populating the cpupool with cpus.

This will make on- and offlining of cpus much easier and avoids writing
code which would need to be thrown away later.

Signed-off-by: Juergen Gross
---
V1: new patch
---
 xen/common/cpupool.c       |  2 ++
 xen/common/schedule.c      | 23 +++++++++++++++--------
 xen/include/xen/sched-if.h | 12 ++++++++++++
 3 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index aa0428cdc0..403036c092 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -177,6 +177,8 @@ static struct cpupool *cpupool_create(
             return NULL;
         }
     }
+    c->granularity = sched_granularity;
+    c->opt_granularity = opt_sched_granularity;
 
     *q = c;
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 8607262a71..7fd83ffd4e 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -56,7 +56,8 @@ int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
 integer_param("sched_ratelimit_us", sched_ratelimit_us);
 
 /* Number of vcpus per struct sched_unit. */
-static unsigned int sched_granularity = 1;
+enum sched_gran opt_sched_granularity = SCHED_GRAN_cpu;
+unsigned int sched_granularity = 1;
 bool sched_disable_smt_switching;
 const cpumask_t *sched_res_mask = &cpumask_all;
 
@@ -350,10 +351,10 @@ static struct sched_unit *sched_alloc_unit(struct vcpu *v)
 {
     struct sched_unit *unit, **prev_unit;
     struct domain *d = v->domain;
+    unsigned int gran = d->cpupool ? d->cpupool->granularity : 1;
 
     for_each_sched_unit ( d, unit )
-        if ( unit->vcpu->vcpu_id / sched_granularity ==
-             v->vcpu_id / sched_granularity )
+        if ( unit->vcpu->vcpu_id / gran == v->vcpu_id / gran )
             break;
 
     if ( unit )
@@ -1696,11 +1697,11 @@ static void sched_switch_units(struct sched_resource *sd,
     if ( is_idle_unit(prev) )
     {
         prev->runstate_cnt[RUNSTATE_running] = 0;
-        prev->runstate_cnt[RUNSTATE_runnable] = sched_granularity;
+        prev->runstate_cnt[RUNSTATE_runnable] = 1;
     }
     if ( is_idle_unit(next) )
     {
-        next->runstate_cnt[RUNSTATE_running] = sched_granularity;
+        next->runstate_cnt[RUNSTATE_running] = 1;
         next->runstate_cnt[RUNSTATE_runnable] = 0;
     }
 }
@@ -1946,11 +1947,12 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
 {
     struct sched_unit *next;
     struct vcpu *v;
+    unsigned int gran = get_sched_res(cpu)->granularity;
 
     if ( !--prev->rendezvous_in_cnt )
     {
         next = do_schedule(prev, now, cpu);
-        atomic_set(&next->rendezvous_out_cnt, sched_granularity + 1);
+        atomic_set(&next->rendezvous_out_cnt, gran + 1);
         return next;
     }
 
@@ -2054,6 +2056,7 @@ static void schedule(void)
     struct sched_resource *sd;
     spinlock_t *lock;
     int cpu = smp_processor_id();
+    unsigned int gran = get_sched_res(cpu)->granularity;
 
     ASSERT_NOT_IN_ATOMIC();
 
@@ -2079,11 +2082,11 @@ static void schedule(void)
 
     stop_timer(&sd->s_timer);
 
-    if ( sched_granularity > 1 )
+    if ( gran > 1 )
     {
         cpumask_t mask;
 
-        prev->rendezvous_in_cnt = sched_granularity;
+        prev->rendezvous_in_cnt = gran;
         cpumask_andnot(&mask, sd->cpus, cpumask_of(cpu));
         cpumask_raise_softirq(&mask, SCHED_SLAVE_SOFTIRQ);
         next = sched_wait_rendezvous_in(prev, lock, cpu, now);
@@ -2150,6 +2153,9 @@ static int cpu_schedule_up(unsigned int cpu)
     init_timer(&sd->s_timer, s_timer_fn, NULL, cpu);
     atomic_set(&sd->urgent_count, 0);
 
+    /* We start with cpu granularity. */
+    sd->granularity = 1;
+
     /* Boot CPU is dealt with later in schedule_init(). */
     if ( cpu == 0 )
         return 0;
@@ -2495,6 +2501,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     sched_free_pdata(old_ops, ppriv_old, cpu);
 
  out:
+    get_sched_res(cpu)->granularity = c ? c->granularity : 1;
     get_sched_res(cpu)->cpupool = c;
     /* When a cpu is added to a pool, trigger it to go pick up some work */
     if ( c != NULL )
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index f75f9673e9..a0f11d0c15 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -25,6 +25,15 @@ extern int sched_ratelimit_us;
 /* Scheduling resource mask. */
 extern const cpumask_t *sched_res_mask;
 
+/* Number of vcpus per struct sched_unit. */
+enum sched_gran {
+    SCHED_GRAN_cpu,
+    SCHED_GRAN_core,
+    SCHED_GRAN_socket
+};
+extern enum sched_gran opt_sched_granularity;
+extern unsigned int sched_granularity;
+
 /*
  * In order to allow a scheduler to remap the lock->cpu mapping,
  * we have a per-cpu pointer, along with a pre-allocated set of
@@ -47,6 +56,7 @@ struct sched_resource {
     struct timer        s_timer;        /* scheduling timer                */
     atomic_t            urgent_count;   /* how many urgent vcpus           */
     unsigned int        processor;
+    unsigned int        granularity;
     const cpumask_t    *cpus;           /* cpus covered by this struct     */
 };
 
@@ -520,6 +530,8 @@ struct cpupool
     unsigned int     n_dom;
     struct scheduler *sched;
     atomic_t         refcnt;
+    unsigned int     granularity;
+    enum sched_gran  opt_granularity;
 };
 
 #define cpupool_online_cpumask(_pool) \
-- 
2.16.4
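
The stored opt_granularity is what would later pick the cpu mask used when
populating a cpupool with cpus. Purely as an illustration of that idea (not
part of the patch above), a minimal sketch of such a mapping; the helper name
cpupool_gran_cpumask() and the reliance on the per-cpu topology masks
cpu_sibling_mask / cpu_core_mask are assumptions here, not taken from the
series:

/*
 * Illustrative sketch only -- helper name and use of the per-cpu topology
 * masks are assumptions, not part of this patch.
 */
static const cpumask_t *cpupool_gran_cpumask(enum sched_gran opt,
                                             unsigned int cpu)
{
    switch ( opt )
    {
    case SCHED_GRAN_cpu:
        /* Each cpu forms its own scheduling resource (granularity 1). */
        return cpumask_of(cpu);

    case SCHED_GRAN_core:
        /* All threads of the core containing @cpu are handled together. */
        return per_cpu(cpu_sibling_mask, cpu);

    case SCHED_GRAN_socket:
        /* All cpus of the socket containing @cpu are handled together. */
        return per_cpu(cpu_core_mask, cpu);
    }

    ASSERT_UNREACHABLE();
    return NULL;
}

Free cpus would stay at granularity 1 (SCHED_GRAN_cpu), matching the
sd->granularity = 1 default set in cpu_schedule_up() and the
c ? c->granularity : 1 fallback in schedule_cpu_switch() above.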