From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Julien Grall, Jan Beulich
Date: Tue, 28 May 2019 12:33:00 +0200
Message-Id: <20190528103313.1343-48-jgross@suse.com>
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 47/60] xen/sched: move per-cpu variable scheduler to struct sched_resource

Having a pointer to struct scheduler in struct sched_resource instead of
a per-cpu variable is sufficient, so drop the per-cpu variable.
Signed-off-by: Juergen Gross
---
V1: new patch
---
 xen/common/sched_credit.c  | 18 +++++++++++-------
 xen/common/sched_credit2.c |  3 ++-
 xen/common/schedule.c      | 21 ++++++++++-----------
 xen/include/xen/sched-if.h |  2 +-
 4 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 15339e6fae..5e60788112 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -350,9 +350,10 @@ DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 static inline void __runq_tickle(struct csched_unit *new)
 {
     unsigned int cpu = sched_unit_cpu(new->unit);
+    struct sched_resource *sd = get_sched_res(cpu);
     struct sched_unit *unit = new->unit;
     struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu));
-    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct csched_private *prv = CSCHED_PRIV(sd->scheduler);
     cpumask_t mask, idle_mask, *online;
     int balance_step, idlers_empty;
 
@@ -937,7 +938,8 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
 {
     struct sched_unit *currunit = current->sched_unit;
     struct csched_unit * const svc = CSCHED_UNIT(currunit);
-    const struct scheduler *ops = per_cpu(scheduler, cpu);
+    struct sched_resource *sd = get_sched_res(cpu);
+    const struct scheduler *ops = sd->scheduler;
 
     ASSERT( sched_unit_cpu(currunit) == cpu );
     ASSERT( svc->sdom != NULL );
@@ -993,8 +995,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
              * idlers. But, if we are here, it means there is someone running
              * on it, and hence the bit must be zero already.
              */
-            ASSERT(!cpumask_test_cpu(cpu,
-                                     CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
+            ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(sd->scheduler)->idlers));
             cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         }
     }
@@ -1089,6 +1090,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = sched_unit_cpu(unit);
+    struct sched_resource *sd = get_sched_res(cpu);
 
     SCHED_STAT_CRANK(unit_sleep);
 
@@ -1101,7 +1103,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
          * But, we are here because unit is going to sleep while running on cpu,
          * so the bit must be zero already.
          */
-        ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
+        ASSERT(!cpumask_test_cpu(cpu, CSCHED_PRIV(sd->scheduler)->idlers));
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
     }
     else if ( __unit_on_runq(svc) )
@@ -1581,8 +1583,9 @@ static void
 csched_tick(void *_cpu)
 {
     unsigned int cpu = (unsigned long)_cpu;
+    struct sched_resource *sd = get_sched_res(cpu);
     struct csched_pcpu *spc = CSCHED_PCPU(cpu);
-    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct csched_private *prv = CSCHED_PRIV(sd->scheduler);
 
     spc->tick++;
 
@@ -1607,7 +1610,8 @@ csched_tick(void *_cpu)
 static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
-    const struct csched_private * const prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
+    struct sched_resource *sd = get_sched_res(cpu);
+    const struct csched_private * const prv = CSCHED_PRIV(sd->scheduler);
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
     struct csched_unit *speer;
     struct list_head *iter;
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 3bfceefa46..1764aa704e 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3265,8 +3265,9 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
+    struct sched_resource *sd = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
-    struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
+    struct csched2_private *prv = csched2_priv(sd->scheduler);
     bool yield = false, soft_aff_preempt = false;
 
     *skipped = 0;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9bff4dc183..34c95d1dc6 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -66,7 +66,6 @@ static void vcpu_singleshot_timer_fn(void *data);
 static void poll_timer_fn(void *data);
 
 /* This is global for now so that private implementations can reach it */
-DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU(struct sched_resource *, sched_res);
 static DEFINE_PER_CPU(unsigned int, sched_res_idx);
 
@@ -133,7 +132,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
      * for idle vCPUs, it is safe to use it, with no locks, to figure that out.
      */
     ASSERT(is_idle_domain(d));
-    return per_cpu(scheduler, v->processor);
+    return get_sched_res(v->processor)->scheduler;
 }
 #define VCPU2ONLINE(_v) cpupool_domain_cpumask((_v)->domain)
 
@@ -1767,8 +1766,8 @@ static bool sched_tasklet_check(unsigned int cpu)
 static struct sched_unit *do_schedule(struct sched_unit *prev, s_time_t now,
                                       unsigned int cpu)
 {
-    struct scheduler *sched = per_cpu(scheduler, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
+    struct scheduler *sched = sd->scheduler;
     struct sched_unit *next;
 
     /* get policy-specific decision on scheduling... */
@@ -2144,7 +2143,7 @@ static int cpu_schedule_up(unsigned int cpu)
     sd->cpus = cpumask_of(cpu);
     set_sched_res(cpu, sd);
 
-    per_cpu(scheduler, cpu) = &ops;
+    sd->scheduler = &ops;
     spin_lock_init(&sd->_lock);
     sd->schedule_lock = &sd->_lock;
     init_timer(&sd->s_timer, s_timer_fn, NULL, cpu);
@@ -2203,7 +2202,7 @@ static int cpu_schedule_up(unsigned int cpu)
 static void cpu_schedule_down(unsigned int cpu)
 {
     struct sched_resource *sd = get_sched_res(cpu);
-    struct scheduler *sched = per_cpu(scheduler, cpu);
+    struct scheduler *sched = sd->scheduler;
 
     sched_free_pdata(sched, sd->sched_priv, cpu);
     sched_free_vdata(sched, idle_vcpu[cpu]->sched_unit->priv);
@@ -2219,8 +2218,8 @@ static void cpu_schedule_down(unsigned int cpu)
 
 void scheduler_percpu_init(unsigned int cpu)
 {
-    struct scheduler *sched = per_cpu(scheduler, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
+    struct scheduler *sched = sd->scheduler;
 
     if ( system_state != SYS_STATE_resume )
         sched_init_pdata(sched, sd->sched_priv, cpu);
@@ -2230,8 +2229,8 @@ static int cpu_schedule_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
     unsigned int cpu = (unsigned long)hcpu;
-    struct scheduler *sched = per_cpu(scheduler, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
+    struct scheduler *sched = sd->scheduler;
     int rc = 0;
 
     /*
@@ -2407,7 +2406,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
 {
     struct vcpu *idle;
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
-    struct scheduler *old_ops = per_cpu(scheduler, cpu);
+    struct scheduler *old_ops = get_sched_res(cpu)->scheduler;
     struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
     struct cpupool *old_pool = per_cpu(cpupool, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
@@ -2473,7 +2472,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     ppriv_old = sd->sched_priv;
     new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
 
-    per_cpu(scheduler, cpu) = new_ops;
+    sd->scheduler = new_ops;
     sd->sched_priv = ppriv;
 
     /*
@@ -2574,7 +2573,7 @@ void sched_tick_suspend(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
 
-    sched = per_cpu(scheduler, cpu);
+    sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_suspend(sched, cpu);
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
@@ -2587,7 +2586,7 @@ void sched_tick_resume(void)
 
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
-    sched = per_cpu(scheduler, cpu);
+    sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
 }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index f5962cbcfb..ad6cf43425 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -36,6 +36,7 @@ extern const cpumask_t *sched_res_mask;
  * as the rest of the struct. Just have the scheduler point to the
  * one it wants (This may be the one right in front of it).*/
 struct sched_resource {
+    struct scheduler   *scheduler;
     spinlock_t         *schedule_lock,
                        _lock;
     struct sched_unit  *curr;           /* current task                    */
@@ -50,7 +51,6 @@ struct sched_resource {
 
 #define curr_on_cpu(c)    (get_sched_res(c)->curr)
 
-DECLARE_PER_CPU(struct scheduler *, scheduler);
 DECLARE_PER_CPU(struct cpupool *, cpupool);
 DECLARE_PER_CPU(struct sched_resource *, sched_res);
 
-- 
2.16.4