From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich, Roger Pau Monné
Date: Fri, 27 Sep 2019 09:00:11 +0200
Message-Id: <20190927070050.12405-8-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 07/46] xen/sched: move per cpu scheduler private data into struct sched_resource

This prepares support of larger scheduling granularities, e.g.
core scheduling. While at it move sched_has_urgent_vcpu() from
include/asm-x86/cpuidle.h into sched.h removing the need for including
sched-if.h in cpuidle.h. For that purpose remove urgent_count from the
scheduler private data and make it a plain percpu variable.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1:
- move sched_has_urgent_vcpu()
V2:
- make sched_has_urgent_vcpu() return bool (Jan Beulich)
V3:
- split out removing sched-if.h include in some C files (Jan Beulich)
- make urgent_count a plain percpu variable (Jan Beulich)
V4:
- avoid introducing local variables used only once (Jan Beulich)
- name sched_resource pointers "sr" (Jan Beulich)
- make curr_on_cpu() a static inline function (Jan Beulich)
---
 xen/common/sched_arinc653.c   |  6 ++--
 xen/common/sched_credit.c     | 12 ++++----
 xen/common/sched_credit2.c    | 20 ++++++-------
 xen/common/sched_null.c       |  6 ++--
 xen/common/sched_rt.c         |  8 ++---
 xen/common/schedule.c         | 69 ++++++++++++++++++++++---------------------
 xen/include/asm-x86/cpuidle.h | 11 -------
 xen/include/xen/sched-if.h    | 26 ++++++++--------
 xen/include/xen/sched.h       | 11 +++++++
 9 files changed, 85 insertions(+), 84 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 7bdaf257ce..5cf47f5622 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -481,7 +481,7 @@ a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
      * If the VCPU being put to sleep is the same one that is currently
      * running, raise a softirq to invoke the scheduler to switch domains.
      */
-    if ( per_cpu(schedule_data, vc->processor).curr == unit )
+    if ( get_sched_res(vc->processor)->curr == unit )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
 }
 
@@ -649,14 +649,14 @@ static spinlock_t *
 a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                   void *pdata, void *vdata)
 {
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    struct sched_resource *sr = get_sched_res(cpu);
     arinc653_vcpu_t *svc = vdata;
 
     ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
 
     idle_vcpu[cpu]->sched_unit->priv = vdata;
 
-    return &sd->_lock;
+    return &sr->_lock;
 }
 
 /**
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index cfe3edc14c..59a77e874b 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -82,7 +82,7 @@
 #define CSCHED_PRIV(_ops)   \
     ((struct csched_private *)((_ops)->sched_data))
 #define CSCHED_PCPU(_c)     \
-    ((struct csched_pcpu *)per_cpu(schedule_data, _c).sched_priv)
+    ((struct csched_pcpu *)get_sched_res(_c)->sched_priv)
 #define CSCHED_UNIT(unit)   ((struct csched_unit *) (unit)->priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
@@ -250,7 +250,7 @@ static inline bool_t is_runq_idle(unsigned int cpu)
     /*
      * We're peeking at cpu's runq, we must hold the proper lock.
      */
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     return list_empty(RUNQ(cpu)) ||
            is_idle_vcpu(__runq_elem(RUNQ(cpu)->next)->vcpu);
@@ -259,7 +259,7 @@ static inline bool_t is_runq_idle(unsigned int cpu)
 static inline void
 inc_nr_runnable(unsigned int cpu)
 {
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
     CSCHED_PCPU(cpu)->nr_runnable++;
 
 }
@@ -267,7 +267,7 @@ inc_nr_runnable(unsigned int cpu)
 static inline void
 dec_nr_runnable(unsigned int cpu)
 {
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
     ASSERT(CSCHED_PCPU(cpu)->nr_runnable >= 1);
     CSCHED_PCPU(cpu)->nr_runnable--;
 }
@@ -628,7 +628,7 @@ static spinlock_t *
 csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                     void *pdata, void *vdata)
 {
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    struct sched_resource *sr = get_sched_res(cpu);
     struct csched_private *prv = CSCHED_PRIV(new_ops);
     struct csched_unit *svc = vdata;
 
@@ -646,7 +646,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     init_pdata(prv, pdata, cpu);
     spin_unlock(&prv->lock);
 
-    return &sd->_lock;
+    return &sr->_lock;
 }
 
 #ifndef NDEBUG
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index afeb70b845..ef0dd1d228 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -568,7 +568,7 @@ static inline struct csched2_private *csched2_priv(const struct scheduler *ops)
 
 static inline struct csched2_pcpu *csched2_pcpu(unsigned int cpu)
 {
-    return per_cpu(schedule_data, cpu).sched_priv;
+    return get_sched_res(cpu)->sched_priv;
 }
 
 static inline struct csched2_unit *csched2_unit(const struct sched_unit *unit)
@@ -1277,7 +1277,7 @@ runq_insert(const struct scheduler *ops, struct csched2_unit *svc)
     struct list_head * runq = &c2rqd(ops, cpu)->runq;
     int pos = 0;
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     ASSERT(!vcpu_on_runq(svc));
     ASSERT(c2r(cpu) == c2r(svc->vcpu->processor));
@@ -1798,7 +1798,7 @@ static bool vcpu_grab_budget(struct csched2_unit *svc)
     struct csched2_dom *sdom = svc->sdom;
     unsigned int cpu = svc->vcpu->processor;
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     if ( svc->budget > 0 )
         return true;
@@ -1845,7 +1845,7 @@ vcpu_return_budget(struct csched2_unit *svc, struct list_head *parked)
     struct csched2_dom *sdom = svc->sdom;
     unsigned int cpu = svc->vcpu->processor;
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
     ASSERT(list_empty(parked));
 
     /* budget_lock nests inside runqueue lock. */
@@ -2102,7 +2102,7 @@ csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
     unsigned int cpu = vc->processor;
     s_time_t now;
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     ASSERT(!is_idle_vcpu(vc));
 
@@ -2230,7 +2230,7 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
      * just grab the prv lock. Instead, we'll have to trylock, and
      * do something else reasonable if we fail.
      */
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     if ( !read_trylock(&prv->lock) )
     {
@@ -2570,7 +2570,7 @@ static void balance_load(const struct scheduler *ops, int cpu, s_time_t now)
      * on either side may be empty).
      */
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
     st.lrqd = c2rqd(ops, cpu);
 
     update_runq_load(ops, st.lrqd, 0, now);
@@ -3476,7 +3476,7 @@ csched2_schedule(
     rqd = c2rqd(ops, cpu);
     BUG_ON(!cpumask_test_cpu(cpu, &rqd->active));
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
 
@@ -3867,7 +3867,7 @@ csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 
     rqi = init_pdata(prv, pdata, cpu);
     /* Move the scheduler lock to the new runq lock. */
-    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;
+    get_sched_res(cpu)->schedule_lock = &prv->rqd[rqi].lock;
 
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock(old_lock);
@@ -3906,7 +3906,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      * this scheduler, and so it's safe to have taken it /before/ our
      * private global lock.
      */
-    ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->rqd[rqi].lock);
+    ASSERT(get_sched_res(cpu)->schedule_lock != &prv->rqd[rqi].lock);
 
     write_unlock(&prv->lock);
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 3619774318..b95214601f 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -269,7 +269,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
     unsigned int cpu = v->processor, new_cpu;
     cpumask_t *cpus = cpupool_domain_cpumask(v->domain);
 
-    ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
     for_each_affinity_balance_step( bs )
     {
@@ -419,7 +419,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
                                      unsigned int cpu,
                                      void *pdata, void *vdata)
 {
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
     struct null_unit *nvc = vdata;
 
@@ -436,7 +436,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 
     init_pdata(prv, cpu);
 
-    return &sd->_lock;
+    return &sr->_lock;
 }
 
 static void null_unit_insert(const struct scheduler *ops,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 57da55d90f..a168668a70 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -75,7 +75,7 @@
 /*
  * Locking:
  * A global system lock is used to protect the RunQ and DepletedQ.
- * The global lock is referenced by schedule_data.schedule_lock
+ * The global lock is referenced by sched_res->schedule_lock
  * from all physical cpus.
  *
  * The lock is already grabbed when calling wake/sleep/schedule/ functions
@@ -176,7 +176,7 @@ static void repl_timer_handler(void *data);
 
 /*
  * System-wide private data, include global RunQueue/DepletedQ
- * Global lock is referenced by schedule_data.schedule_lock from all
+ * Global lock is referenced by sched_res->schedule_lock from all
  * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
  */
 struct rt_private {
@@ -722,7 +722,7 @@ rt_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
     }
 
     /* Move the scheduler lock to our global runqueue lock. */
-    per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
+    get_sched_res(cpu)->schedule_lock = &prv->lock;
 
     /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
     spin_unlock_irqrestore(old_lock, flags);
@@ -744,7 +744,7 @@ rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      * another scheduler, but that is how things need to be, for
      * preventing races.
      */
-    ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->lock);
+    ASSERT(get_sched_res(cpu)->schedule_lock != &prv->lock);
 
     /*
      * If we are the absolute first cpu being switched toward this
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 6d6d8a234f..67ccb78739 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -65,13 +65,15 @@ static void vcpu_singleshot_timer_fn(void *data);
 static void poll_timer_fn(void *data);
 
 /* This is global for now so that private implementations can reach it */
-DEFINE_PER_CPU(struct schedule_data, schedule_data);
 DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
 
 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
 
+/* How many urgent vcpus. */
+DEFINE_PER_CPU(atomic_t, sched_urgent_count);
+
 extern const struct scheduler *__start_schedulers_array[], *__end_schedulers_array[];
 #define NUM_SCHEDULERS (__end_schedulers_array - __start_schedulers_array)
 #define schedulers __start_schedulers_array
@@ -213,7 +215,7 @@ static inline void vcpu_urgent_count_update(struct vcpu *v)
              !test_bit(v->vcpu_id, v->domain->poll_mask) )
         {
             v->is_urgent = 0;
-            atomic_dec(&per_cpu(schedule_data,v->processor).urgent_count);
+            atomic_dec(&per_cpu(sched_urgent_count, v->processor));
         }
     }
     else
@@ -222,7 +224,7 @@ static inline void vcpu_urgent_count_update(struct vcpu *v)
              unlikely(test_bit(v->vcpu_id, v->domain->poll_mask)) )
         {
             v->is_urgent = 1;
-            atomic_inc(&per_cpu(schedule_data,v->processor).urgent_count);
+            atomic_inc(&per_cpu(sched_urgent_count, v->processor));
         }
     }
 }
@@ -233,7 +235,7 @@ static inline void vcpu_runstate_change(
     s_time_t delta;
 
     ASSERT(v->runstate.state != new_state);
-    ASSERT(spin_is_locked(per_cpu(schedule_data,v->processor).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
 
     vcpu_urgent_count_update(v);
 
@@ -394,7 +396,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
     if ( is_idle_domain(d) )
     {
-        per_cpu(schedule_data, v->processor).curr = unit;
+        get_sched_res(v->processor)->curr = unit;
         v->is_running = 1;
     }
     else
@@ -519,7 +521,7 @@ void sched_destroy_vcpu(struct vcpu *v)
     kill_timer(&v->singleshot_timer);
     kill_timer(&v->poll_timer);
     if ( test_and_clear_bool(v->is_urgent) )
-        atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
+        atomic_dec(&per_cpu(sched_urgent_count, v->processor));
     sched_remove_unit(vcpu_scheduler(v), unit);
     sched_free_udata(vcpu_scheduler(v), unit->priv);
     sched_free_unit(unit);
@@ -566,7 +568,7 @@ void sched_destroy_domain(struct domain *d)
 
 void vcpu_sleep_nosync_locked(struct vcpu *v)
 {
-    ASSERT(spin_is_locked(per_cpu(schedule_data,v->processor).schedule_lock));
+    ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
 
     if ( likely(!vcpu_runnable(v)) )
     {
@@ -661,8 +663,8 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
      */
     if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
     {
-        atomic_inc(&per_cpu(schedule_data, new_cpu).urgent_count);
-        atomic_dec(&per_cpu(schedule_data, old_cpu).urgent_count);
+        atomic_inc(&per_cpu(sched_urgent_count, new_cpu));
+        atomic_dec(&per_cpu(sched_urgent_count, old_cpu));
     }
 
     /*
@@ -728,20 +730,20 @@ static void vcpu_migrate_finish(struct vcpu *v)
          * are not correct any longer after evaluating old and new cpu holding
          * the locks.
          */
-        old_lock = per_cpu(schedule_data, old_cpu).schedule_lock;
-        new_lock = per_cpu(schedule_data, new_cpu).schedule_lock;
+        old_lock = get_sched_res(old_cpu)->schedule_lock;
+        new_lock = get_sched_res(new_cpu)->schedule_lock;
 
         sched_spin_lock_double(old_lock, new_lock, &flags);
 
         old_cpu = v->processor;
-        if ( old_lock == per_cpu(schedule_data, old_cpu).schedule_lock )
+        if ( old_lock == get_sched_res(old_cpu)->schedule_lock )
        {
             /*
              * If we selected a CPU on the previosu iteration, check if it
              * remains suitable for running this vCPU.
              */
             if ( pick_called &&
-                 (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
+                 (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
                  cpumask_test_cpu(new_cpu, v->cpu_hard_affinity) &&
                  cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
                 break;
@@ -749,7 +751,7 @@ static void vcpu_migrate_finish(struct vcpu *v)
             /* Select a new CPU. */
             new_cpu = sched_pick_resource(vcpu_scheduler(v),
                                           v->sched_unit)->master_cpu;
-            if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
+            if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
                  cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
                 break;
             pick_called = 1;
@@ -1566,7 +1568,7 @@ static void schedule(void)
     struct scheduler *sched;
     unsigned long *tasklet_work = &this_cpu(tasklet_work_to_do);
     bool_t tasklet_work_scheduled = 0;
-    struct schedule_data *sd;
+    struct sched_resource *sd;
     spinlock_t *lock;
     struct task_slice next_slice;
     int cpu = smp_processor_id();
@@ -1575,7 +1577,7 @@ static void schedule(void)
 
     SCHED_STAT_CRANK(sched_run);
 
-    sd = &this_cpu(schedule_data);
+    sd = get_sched_res(cpu);
 
     /* Update tasklet scheduling status. */
     switch ( *tasklet_work )
@@ -1716,20 +1718,19 @@ static void poll_timer_fn(void *data)
 
 static int cpu_schedule_up(unsigned int cpu)
 {
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
-    struct sched_resource *res;
+    struct sched_resource *sr;
 
-    res = xzalloc(struct sched_resource);
-    if ( res == NULL )
+    sr = xzalloc(struct sched_resource);
+    if ( sr == NULL )
         return -ENOMEM;
-    res->master_cpu = cpu;
-    set_sched_res(cpu, res);
+    sr->master_cpu = cpu;
+    set_sched_res(cpu, sr);
 
     per_cpu(scheduler, cpu) = &sched_idle_ops;
-    spin_lock_init(&sd->_lock);
-    sd->schedule_lock = &sched_free_cpu_lock;
-    init_timer(&sd->s_timer, s_timer_fn, NULL, cpu);
-    atomic_set(&sd->urgent_count, 0);
+    spin_lock_init(&sr->_lock);
+    sr->schedule_lock = &sched_free_cpu_lock;
+    init_timer(&sr->s_timer, s_timer_fn, NULL, cpu);
+    atomic_set(&per_cpu(sched_urgent_count, cpu), 0);
 
     /* Boot CPU is dealt with later in scheduler_init(). */
     if ( cpu == 0 )
@@ -1738,7 +1739,7 @@ static int cpu_schedule_up(unsigned int cpu)
     if ( idle_vcpu[cpu] == NULL )
         vcpu_create(idle_vcpu[0]->domain, cpu, cpu);
     else
-        idle_vcpu[cpu]->sched_unit->res = res;
+        idle_vcpu[cpu]->sched_unit->res = sr;
 
     if ( idle_vcpu[cpu] == NULL )
         return -ENOMEM;
@@ -1749,21 +1750,21 @@ static int cpu_schedule_up(unsigned int cpu)
      * allocated.
      */
 
-    sd->curr = idle_vcpu[cpu]->sched_unit;
+    sr->curr = idle_vcpu[cpu]->sched_unit;
 
-    sd->sched_priv = NULL;
+    sr->sched_priv = NULL;
 
     return 0;
 }
 
 static void cpu_schedule_down(unsigned int cpu)
 {
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    struct sched_resource *sr = get_sched_res(cpu);
 
-    kill_timer(&sd->s_timer);
+    kill_timer(&sr->s_timer);
 
     set_sched_res(cpu, NULL);
-    xfree(sd);
+    xfree(sr);
 }
 
 void sched_rm_cpu(unsigned int cpu)
@@ -1917,7 +1918,7 @@ void __init scheduler_init(void)
     idle_domain->max_vcpus = nr_cpu_ids;
     if ( vcpu_create(idle_domain, 0, 0) == NULL )
         BUG();
-    this_cpu(schedule_data).curr = idle_vcpu[0]->sched_unit;
+    get_sched_res(0)->curr = idle_vcpu[0]->sched_unit;
 }
 
 /*
@@ -1934,7 +1935,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     struct scheduler *old_ops = per_cpu(scheduler, cpu);
     struct scheduler *new_ops = (c == NULL) ? &sched_idle_ops : c->sched;
     struct cpupool *old_pool = per_cpu(cpupool, cpu);
-    struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    struct sched_resource *sd = get_sched_res(cpu);
     spinlock_t *old_lock, *new_lock;
     unsigned long flags;
 
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 488f708305..5d7dffd228 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -4,7 +4,6 @@
 #include
 #include
 #include
-#include <xen/sched-if.h>
 
 extern struct acpi_processor_power *processor_powers[];
 
@@ -27,14 +26,4 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
-/*
- * vcpu is urgent if vcpu is polling event channel
- *
- * if urgent vcpu exists, CPU should not enter deep C state
- */
-static inline int sched_has_urgent_vcpu(void)
-{
-    return atomic_read(&this_cpu(schedule_data).urgent_count);
-}
-
 #endif /* __X86_ASM_CPUIDLE_H__ */
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 4b817347d5..4dbf8f974c 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -33,22 +33,17 @@ extern int sched_ratelimit_us;
  * For cache betterness, keep the actual lock in the same cache area
  * as the rest of the struct. Just have the scheduler point to the
  * one it wants (This may be the one right in front of it).*/
-struct schedule_data {
+struct sched_resource {
     spinlock_t         *schedule_lock,
                         _lock;
     struct sched_unit  *curr;
     void               *sched_priv;
     struct timer        s_timer;        /* scheduling timer                */
-    atomic_t            urgent_count;   /* how many urgent vcpus           */
-};
 
-#define curr_on_cpu(c)          (per_cpu(schedule_data, c).curr)
-
-struct sched_resource {
-    unsigned int master_cpu;  /* Cpu with lowest id in scheduling resource. */
+    /* Cpu with lowest id in scheduling resource. */
+    unsigned int        master_cpu;
 };
 
-DECLARE_PER_CPU(struct schedule_data, schedule_data);
 DECLARE_PER_CPU(struct scheduler *, scheduler);
 DECLARE_PER_CPU(struct cpupool *, cpupool);
 DECLARE_PER_CPU(struct sched_resource *, sched_res);
@@ -63,6 +58,11 @@ static inline void set_sched_res(unsigned int cpu, struct sched_resource *res)
     per_cpu(sched_res, cpu) = res;
 }
 
+static inline struct sched_unit *curr_on_cpu(unsigned int cpu)
+{
+    return get_sched_res(cpu)->curr;
+}
+
 /*
  * Scratch space, for avoiding having too many cpumask_t on the stack.
  * Within each scheduler, when using the scratch mask of one pCPU:
@@ -79,7 +79,7 @@ static inline spinlock_t *kind##_schedule_lock##irq(param EXTRA_TYPE(arg)) \
 { \
     for ( ; ; ) \
     { \
-        spinlock_t *lock = per_cpu(schedule_data, cpu).schedule_lock; \
+        spinlock_t *lock = get_sched_res(cpu)->schedule_lock; \
         /* \
          * v->processor may change when grabbing the lock; but \
          * per_cpu(v->processor) may also change, if changing cpu pool \
@@ -89,7 +89,7 @@ static inline spinlock_t *kind##_schedule_lock##irq(param EXTRA_TYPE(arg)) \
          * lock may be the same; this will succeed in that case. \
          */ \
         spin_lock##irq(lock, ## arg); \
-        if ( likely(lock == per_cpu(schedule_data, cpu).schedule_lock) ) \
+        if ( likely(lock == get_sched_res(cpu)->schedule_lock) ) \
            return lock; \
         spin_unlock##irq(lock, ## arg); \
     } \
@@ -99,7 +99,7 @@ static inline spinlock_t *kind##_schedule_lock##irq(param EXTRA_TYPE(arg)) \
 static inline void kind##_schedule_unlock##irq(spinlock_t *lock \
                                                EXTRA_TYPE(arg), param) \
 { \
-    ASSERT(lock == per_cpu(schedule_data, cpu).schedule_lock); \
+    ASSERT(lock == get_sched_res(cpu)->schedule_lock); \
     spin_unlock##irq(lock, ## arg); \
 }
 
@@ -128,11 +128,11 @@ sched_unlock(vcpu, const struct vcpu *v, v->processor, _irqrestore, flags)
 
 static inline spinlock_t *pcpu_schedule_trylock(unsigned int cpu)
 {
-    spinlock_t *lock = per_cpu(schedule_data, cpu).schedule_lock;
+    spinlock_t *lock = get_sched_res(cpu)->schedule_lock;
 
     if ( !spin_trylock(lock) )
         return NULL;
-    if ( lock == per_cpu(schedule_data, cpu).schedule_lock )
+    if ( lock == get_sched_res(cpu)->schedule_lock )
        return lock;
     spin_unlock(lock);
    return NULL;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5b034d5b59..fc29d72b57 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -883,6 +883,17 @@ static inline struct vcpu *domain_vcpu(const struct domain *d,
 
 void cpu_init(void);
 
+/*
+ * vcpu is urgent if vcpu is polling event channel
+ *
+ * if urgent vcpu exists, CPU should not enter deep C state
+ */
+DECLARE_PER_CPU(atomic_t, sched_urgent_count);
+static inline bool sched_has_urgent_vcpu(void)
+{
+    return atomic_read(&this_cpu(sched_urgent_count));
+}
+
 struct scheduler;
 
 struct scheduler *scheduler_get_default(void);
-- 
2.16.4
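
The net effect of the urgent_count change is easiest to see in isolation: one
atomic counter per CPU, bumped while a vCPU polls an event channel, and a cheap
predicate the cpuidle code can call without pulling scheduler internals into
cpuidle.h. The user-space sketch below illustrates only that pattern; the
array-based per-CPU storage, the vcpu_poll_start()/vcpu_poll_stop() helpers and
the explicit cpu argument to sched_has_urgent_vcpu() are illustrative
assumptions, not Xen code.

/* Illustrative analogue of a plain per-cpu urgent-vcpu counter (not Xen code). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* Stand-in for DEFINE_PER_CPU(atomic_t, sched_urgent_count). */
static atomic_int sched_urgent_count[NR_CPUS];

/* Called when a vCPU on 'cpu' starts polling an event channel. */
static void vcpu_poll_start(unsigned int cpu)
{
    atomic_fetch_add(&sched_urgent_count[cpu], 1);
}

/* Called when the poll ends. */
static void vcpu_poll_stop(unsigned int cpu)
{
    atomic_fetch_sub(&sched_urgent_count[cpu], 1);
}

/* Analogue of sched_has_urgent_vcpu(); here the CPU is passed explicitly. */
static bool sched_has_urgent_vcpu(unsigned int cpu)
{
    return atomic_load(&sched_urgent_count[cpu]) != 0;
}

int main(void)
{
    unsigned int cpu = 1;

    vcpu_poll_start(cpu);
    /* Idle governor: an urgent vCPU means "avoid deep C-states". */
    printf("deep C-state allowed on cpu%u: %s\n",
           cpu, sched_has_urgent_vcpu(cpu) ? "no" : "yes");

    vcpu_poll_stop(cpu);
    printf("deep C-state allowed on cpu%u: %s\n",
           cpu, sched_has_urgent_vcpu(cpu) ? "no" : "yes");

    return 0;
}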