From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Julien Grall, Jan Beulich
Date: Mon, 30 Sep 2019 07:21:25 +0200
Message-Id: <20190930052135.11257-10-jgross@suse.com>
In-Reply-To: <20190930052135.11257-1-jgross@suse.com>
References: <20190930052135.11257-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v5 09/19] xen/sched: move per-cpu variable scheduler to struct sched_resource

Having a pointer to struct scheduler in struct sched_resource is enough; the separate per-cpu variable is no longer needed.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
V4:
- several renames sd -> sr (Jan Beulich)
- use ops instead of sr->scheduler (Jan Beulich)
---
 xen/common/sched_credit.c  | 18 +++++++++++-------
 xen/common/sched_credit2.c |  3 ++-
 xen/common/schedule.c      | 15 +++++++--------
 xen/include/xen/sched-if.h |  2 +-
 4 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index a6dff8ec62..86603adcb6 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -352,9 +352,10 @@ DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 static inline void __runq_tickle(struct csched_unit *new)
 {
     unsigned int cpu = sched_unit_master(new->unit);
+    struct sched_resource *sr = get_sched_res(cpu);
     struct sched_unit *unit = new->unit;
     struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu));
-    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
     cpumask_t mask, idle_mask, *online;
     int balance_step, idlers_empty;
 
@@ -931,7 +932,8 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
 {
     struct sched_unit *currunit = current->sched_unit;
     struct csched_unit * const svc = CSCHED_UNIT(currunit);
-    const struct scheduler *ops = per_cpu(scheduler, cpu);
+    struct sched_resource *sr = get_sched_res(cpu);
+    const struct scheduler *ops = sr->scheduler;
 
     ASSERT( sched_unit_master(currunit) == cpu );
     ASSERT( svc->sdom != NULL );
@@ -987,8 +989,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
              * idlers. But, if we are here, it means there is someone running
              * on it, and hence the bit must be zero already.
              */
-            ASSERT(!cpumask_test_cpu(cpu,
-                                     CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
+            ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(ops)->idlers));
             cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         }
     }
@@ -1083,6 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = sched_unit_master(unit);
+    struct sched_resource *sr = get_sched_res(cpu);
 
     SCHED_STAT_CRANK(unit_sleep);
 
@@ -1095,7 +1097,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
         * But, we are here because unit is going to sleep while running on cpu,
         * so the bit must be zero already.
         */
-        ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
+        ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(sr->scheduler)->idlers));
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
     }
     else if ( __unit_on_runq(svc) )
@@ -1575,8 +1577,9 @@ static void
 csched_tick(void *_cpu)
 {
     unsigned int cpu = (unsigned long)_cpu;
+    struct sched_resource *sr = get_sched_res(cpu);
     struct csched_pcpu *spc = CSCHED_PCPU(cpu);
-    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
 
     spc->tick++;
 
@@ -1601,7 +1604,8 @@ csched_tick(void *_cpu)
 static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
-    const struct csched_private * const prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct sched_resource *sr = get_sched_res(cpu);
+    const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler);
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
     struct csched_unit *speer;
     struct list_head *iter;
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index d51df05887..af58ee161d 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3268,8 +3268,9 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
+    struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
-    struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
+    struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
 
     *skipped = 0;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9442be1c83..5e9cee1f82 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -75,7 +75,6 @@ static void vcpu_singleshot_timer_fn(void *data);
 static void poll_timer_fn(void *data);
 
 /* This is global for now so that private implementations can reach it */
-DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, sched_res_idx);
 
@@ -200,7 +199,7 @@ static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
      */
 
     ASSERT(is_idle_domain(d));
-    return per_cpu(scheduler, unit->res->master_cpu);
+    return unit->res->scheduler;
 }
 
 static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
@@ -1921,8 +1920,8 @@ static bool sched_tasklet_check(unsigned int cpu)
 static struct sched_unit *do_schedule(struct sched_unit *prev, s_time_t now,
                                       unsigned int cpu)
 {
-    struct scheduler *sched = per_cpu(scheduler, cpu);
     struct sched_resource *sr = get_sched_res(cpu);
+    struct scheduler *sched = sr->scheduler;
     struct sched_unit *next;
 
     /* get policy-specific decision on scheduling... */
@@ -2342,7 +2341,7 @@ static int cpu_schedule_up(unsigned int cpu)
     sr->cpus = cpumask_of(cpu);
     set_sched_res(cpu, sr);
 
-    per_cpu(scheduler, cpu) = &sched_idle_ops;
+    sr->scheduler = &sched_idle_ops;
     spin_lock_init(&sr->_lock);
     sr->schedule_lock = &sched_free_cpu_lock;
     init_timer(&sr->s_timer, s_timer_fn, NULL, cpu);
@@ -2553,7 +2552,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
 {
     struct vcpu *idle;
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
-    struct scheduler *old_ops = per_cpu(scheduler, cpu);
+    struct scheduler *old_ops = get_sched_res(cpu)->scheduler;
     struct scheduler *new_ops = (c == NULL) ? &sched_idle_ops : c->sched;
     struct cpupool *old_pool = per_cpu(cpupool, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
@@ -2617,7 +2616,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     ppriv_old = sd->sched_priv;
     new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
 
-    per_cpu(scheduler, cpu) = new_ops;
+    sd->scheduler = new_ops;
     sd->sched_priv = ppriv;
 
     /*
@@ -2717,7 +2716,7 @@ void sched_tick_suspend(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
 
-    sched = per_cpu(scheduler, cpu);
+    sched = get_sched_res(cpu)->scheduler;
    sched_do_tick_suspend(sched, cpu);
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
@@ -2730,7 +2729,7 @@ void sched_tick_resume(void)
 
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
-    sched = per_cpu(scheduler, cpu);
+    sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
 }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 021c1d7c2c..01821b3e5b 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -36,6 +36,7 @@ extern const cpumask_t *sched_res_mask;
  * as the rest of the struct. Just have the scheduler point to the
  * one it wants (This may be the one right in front of it).*/
 struct sched_resource {
+    struct scheduler   *scheduler;
     spinlock_t         *schedule_lock,
                        _lock;
     struct sched_unit  *curr;
@@ -49,7 +50,6 @@ struct sched_resource {
     const cpumask_t    *cpus;           /* cpus covered by this struct    */
 };
 
-DECLARE_PER_CPU(struct scheduler *, scheduler);
 DECLARE_PER_CPU(struct cpupool *, cpupool);
 DECLARE_PER_CPU(struct sched_resource *, sched_res);
 
-- 
2.16.4