From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:10 +0200
Message-Id: <20190927070050.12405-7-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 06/46] xen/sched: switch schedule_data.curr to point at sched_unit

In preparation for core scheduling let the percpu pointer
schedule_data.curr point to a
struct sched_unit instead of the related vcpu. At the same time rename
the scheduler-specific per-vcpu structs to per-unit ones.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- remove no longer matching comment (Jan Beulich)
---
 xen/common/sched_arinc653.c |   2 +-
 xen/common/sched_credit.c   | 105 ++++++++++++++-------------
 xen/common/sched_credit2.c  | 168 ++++++++++++++++++++++-------------------
 xen/common/sched_null.c     |  46 ++++++------
 xen/common/sched_rt.c       | 118 +++++++++++++++----------------
 xen/common/schedule.c       |   8 +--
 xen/include/xen/sched-if.h  |   2 +-
 7 files changed, 222 insertions(+), 227 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 9faa1c48c4..7bdaf257ce 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -481,7 +481,7 @@ a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
      * If the VCPU being put to sleep is the same one that is currently
      * running, raise a softirq to invoke the scheduler to switch domains.
      */
-    if ( per_cpu(schedule_data, vc->processor).curr == vc )
+    if ( per_cpu(schedule_data, vc->processor).curr == unit )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
 }

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index fa73081b3c..cfe3edc14c 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -83,7 +83,7 @@
     ((struct csched_private *)((_ops)->sched_data))
 #define CSCHED_PCPU(_c)     \
     ((struct csched_pcpu *)per_cpu(schedule_data, _c).sched_priv)
-#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_unit->priv)
+#define CSCHED_UNIT(unit)   ((struct csched_unit *) (unit)->priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))

@@ -160,7 +160,7 @@ struct csched_pcpu {
 /*
  * Virtual CPU
  */
-struct csched_vcpu {
+struct csched_unit {
     struct list_head runq_elem;
     struct list_head active_vcpu_elem;

@@ -233,15 +233,15 @@ static void csched_tick(void *_cpu);
 static void csched_acct(void *dummy);

 static inline int
-__vcpu_on_runq(struct csched_vcpu *svc)
+__vcpu_on_runq(struct csched_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }

-static inline struct csched_vcpu *
+static inline struct csched_unit *
 __runq_elem(struct list_head *elem)
 {
-    return list_entry(elem, struct csched_vcpu, runq_elem);
+    return list_entry(elem, struct csched_unit, runq_elem);
 }

 /* Is the first element of cpu's runq (if any) cpu's idle vcpu? */
@@ -273,7 +273,7 @@ dec_nr_runnable(unsigned int cpu)
 }

 static inline void
-__runq_insert(struct csched_vcpu *svc)
+__runq_insert(struct csched_unit *svc)
 {
     unsigned int cpu = svc->vcpu->processor;
     const struct list_head * const runq = RUNQ(cpu);
@@ -283,7 +283,7 @@ __runq_insert(struct csched_vcpu *svc)

     list_for_each( iter, runq )
     {
-        const struct csched_vcpu * const iter_svc = __runq_elem(iter);
+        const struct csched_unit * const iter_svc = __runq_elem(iter);
         if ( svc->pri > iter_svc->pri )
             break;
     }
@@ -304,34 +304,34 @@ __runq_insert(struct csched_vcpu *svc)
 }

 static inline void
-runq_insert(struct csched_vcpu *svc)
+runq_insert(struct csched_unit *svc)
 {
     __runq_insert(svc);
     inc_nr_runnable(svc->vcpu->processor);
 }

 static inline void
-__runq_remove(struct csched_vcpu *svc)
+__runq_remove(struct csched_unit *svc)
 {
     BUG_ON( !__vcpu_on_runq(svc) );
     list_del_init(&svc->runq_elem);
 }

 static inline void
-runq_remove(struct csched_vcpu *svc)
+runq_remove(struct csched_unit *svc)
 {
     dec_nr_runnable(svc->vcpu->processor);
     __runq_remove(svc);
 }

-static void burn_credits(struct csched_vcpu *svc, s_time_t now)
+static void burn_credits(struct csched_unit *svc, s_time_t now)
 {
     s_time_t delta;
     uint64_t val;
     unsigned int credits;

     /* Assert svc is current */
-    ASSERT( svc == CSCHED_VCPU(curr_on_cpu(svc->vcpu->processor)) );
+    ASSERT( svc == CSCHED_UNIT(curr_on_cpu(svc->vcpu->processor)) );

     if ( (delta = now - svc->start_time) <= 0 )
         return;
@@ -349,10 +349,10 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);

 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);

-static inline void __runq_tickle(struct csched_vcpu *new)
+static inline void __runq_tickle(struct csched_unit *new)
 {
     unsigned int cpu = new->vcpu->processor;
-    struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
+    struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask, *online;
     int balance_step, idlers_empty;
@@ -607,7 +607,7 @@ init_pdata(struct csched_private *prv, struct csched_pcpu *spc, int cpu)
     spc->idle_bias = nr_cpu_ids - 1;

     /* Start off idling... */
-    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));
+    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)->vcpu_list));
     cpumask_set_cpu(cpu, prv->idlers);
     spc->nr_runnable = 0;
 }
@@ -630,7 +630,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 {
     struct schedule_data *sd = &per_cpu(schedule_data, cpu);
     struct csched_private *prv = CSCHED_PRIV(new_ops);
-    struct csched_vcpu *svc = vdata;
+    struct csched_unit *svc = vdata;

     ASSERT(svc && is_idle_vcpu(svc->vcpu));

@@ -653,7 +653,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 static inline void
 __csched_vcpu_check(struct vcpu *vc)
 {
-    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
+    struct csched_unit * const svc = CSCHED_UNIT(vc->sched_unit);
     struct csched_dom * const sdom = svc->sdom;

     BUG_ON( svc->vcpu != vc );
@@ -686,7 +686,7 @@ integer_param("vcpu_migration_delay", vcpu_migration_delay_us);

 static inline bool
 __csched_vcpu_is_cache_hot(const struct csched_private *prv,
-                           const struct csched_vcpu *svc)
+                           const struct csched_unit *svc)
 {
     bool hot = prv->vcpu_migr_delay &&
                (NOW() - svc->last_sched_time) < prv->vcpu_migr_delay;
@@ -701,7 +701,7 @@ static inline int
 __csched_vcpu_is_migrateable(const struct csched_private *prv, struct vcpu *vc,
                              int dest_cpu, cpumask_t *mask)
 {
-    const struct csched_vcpu *svc = CSCHED_VCPU(vc);
+    const struct csched_unit *svc = CSCHED_UNIT(vc->sched_unit);
     /*
      * Don't pick up work that's hot on peer PCPU, or that can't (or
      * would prefer not to) run on cpu.
@@ -857,7 +857,7 @@ static struct sched_resource *
 csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu *svc = CSCHED_VCPU(vc);
+    struct csched_unit *svc = CSCHED_UNIT(unit);

     /*
      * We have been called by vcpu_migrate() (in schedule.c), as part
@@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 }

 static inline void
-__csched_vcpu_acct_start(struct csched_private *prv, struct csched_vcpu *svc)
+__csched_vcpu_acct_start(struct csched_private *prv, struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;
     unsigned long flags;
@@ -901,7 +901,7 @@ __csched_vcpu_acct_start(struct csched_private *prv, struct csched_vcpu *svc)

 static inline void
 __csched_vcpu_acct_stop_locked(struct csched_private *prv,
-                               struct csched_vcpu *svc)
+                               struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;

@@ -926,7 +926,7 @@ __csched_vcpu_acct_stop_locked(struct csched_private *prv,
 static void
 csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 {
-    struct csched_vcpu * const svc = CSCHED_VCPU(current);
+    struct csched_unit * const svc = CSCHED_UNIT(current->sched_unit);
     const struct scheduler *ops = per_cpu(scheduler, cpu);

     ASSERT( current->processor == cpu );
@@ -995,10 +995,10 @@ csched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
                    void *dd)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu *svc;
+    struct csched_unit *svc;

     /* Allocate per-VCPU info */
-    svc = xzalloc(struct csched_vcpu);
+    svc = xzalloc(struct csched_unit);
     if ( svc == NULL )
         return NULL;

@@ -1017,7 +1017,7 @@ static void
 csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu *svc = unit->priv;
+    struct csched_unit *svc = unit->priv;
     spinlock_t *lock;

     BUG_ON( is_idle_vcpu(vc) );
@@ -1043,7 +1043,7 @@ csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 static void
 csched_free_udata(const struct scheduler *ops, void *priv)
 {
-    struct csched_vcpu *svc = priv;
+    struct csched_unit *svc = priv;

     BUG_ON( !list_empty(&svc->runq_elem) );

@@ -1054,8 +1054,7 @@ static void
 csched_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
+    struct csched_unit * const svc = CSCHED_UNIT(unit);
     struct csched_dom * const sdom = svc->sdom;

     SCHED_STAT_CRANK(vcpu_remove);
@@ -1082,14 +1081,14 @@ static void
 csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
+    struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = vc->processor;

     SCHED_STAT_CRANK(vcpu_sleep);

     BUG_ON( is_idle_vcpu(vc) );

-    if ( curr_on_cpu(cpu) == vc )
+    if ( curr_on_cpu(cpu) == unit )
     {
         /*
          * We are about to tickle cpu, so we should clear its bit in idlers.
@@ -1107,12 +1106,12 @@ static void
 csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
+    struct csched_unit * const svc = CSCHED_UNIT(unit);
     bool_t migrating;

     BUG_ON( is_idle_vcpu(vc) );

-    if ( unlikely(curr_on_cpu(vc->processor) == vc) )
+    if ( unlikely(curr_on_cpu(vc->processor) == unit) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1168,8 +1167,7 @@ csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 static void
 csched_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
+    struct csched_unit * const svc = CSCHED_UNIT(unit);

     /* Let the scheduler know that this vcpu is trying to yield */
     set_bit(CSCHED_FLAG_VCPU_YIELD, &svc->flags);
@@ -1224,8 +1222,7 @@ static void
 csched_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
                 const cpumask_t *hard, const cpumask_t *soft)
 {
-    struct vcpu *v = unit->vcpu_list;
-    struct csched_vcpu *svc = CSCHED_VCPU(v);
+    struct csched_unit *svc = CSCHED_UNIT(unit);

     if ( !hard )
         return;
@@ -1328,7 +1325,7 @@ csched_runq_sort(struct csched_private *prv, unsigned int cpu)
 {
     struct csched_pcpu * const spc = CSCHED_PCPU(cpu);
     struct list_head *runq, *elem, *next, *last_under;
-    struct csched_vcpu *svc_elem;
+    struct csched_unit *svc_elem;
     spinlock_t *lock;
     unsigned long flags;
     int sort_epoch;
@@ -1374,7 +1371,7 @@ csched_acct(void* dummy)
     unsigned long flags;
     struct list_head *iter_vcpu, *next_vcpu;
     struct list_head *iter_sdom, *next_sdom;
-    struct csched_vcpu *svc;
+    struct csched_unit *svc;
     struct csched_dom *sdom;
     uint32_t credit_total;
     uint32_t weight_total;
@@ -1497,7 +1494,7 @@ csched_acct(void* dummy)

         list_for_each_safe( iter_vcpu, next_vcpu, &sdom->active_vcpu )
         {
-            svc = list_entry(iter_vcpu, struct csched_vcpu, active_vcpu_elem);
+            svc = list_entry(iter_vcpu, struct csched_unit, active_vcpu_elem);
             BUG_ON( sdom != svc->sdom );

             /* Increment credit */
@@ -1601,12 +1598,12 @@ csched_tick(void *_cpu)
     set_timer(&spc->ticker, NOW() + MICROSECS(prv->tick_period_us) );
 }

-static struct csched_vcpu *
+static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
     const struct csched_private * const prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
-    struct csched_vcpu *speer;
+    struct csched_unit *speer;
     struct list_head *iter;
     struct vcpu *vc;

@@ -1616,7 +1613,7 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
      * Don't steal from an idle CPU's runq because it's about to
      * pick up work from it itself.
      */
-    if ( unlikely(is_idle_vcpu(curr_on_cpu(peer_cpu))) )
+    if ( unlikely(is_idle_vcpu(curr_on_cpu(peer_cpu)->vcpu_list)) )
         goto out;

     list_for_each( iter, &peer_pcpu->runq )
@@ -1678,12 +1675,12 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
     return NULL;
 }

-static struct csched_vcpu *
+static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
-                    struct csched_vcpu *snext, bool_t *stolen)
+                    struct csched_unit *snext, bool_t *stolen)
 {
     struct cpupool *c = per_cpu(cpupool, cpu);
-    struct csched_vcpu *speer;
+    struct csched_unit *speer;
     cpumask_t workers;
     cpumask_t *online;
     int peer_cpu, first_cpu, peer_node, bstep;
@@ -1832,9 +1829,9 @@ csched_schedule(
 {
     const int cpu = smp_processor_id();
     struct list_head * const runq = RUNQ(cpu);
-    struct csched_vcpu * const scurr = CSCHED_VCPU(current);
+    struct csched_unit * const scurr = CSCHED_UNIT(current->sched_unit);
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct csched_vcpu *snext;
+    struct csched_unit *snext;
     struct task_slice ret;
     s_time_t runtime, tslice;

@@ -1951,7 +1948,7 @@ csched_schedule(
     if ( tasklet_work_scheduled )
     {
         TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
-        snext = CSCHED_VCPU(idle_vcpu[cpu]);
+        snext = CSCHED_UNIT(idle_vcpu[cpu]->sched_unit);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }

@@ -2003,7 +2000,7 @@ out:
 }

 static void
-csched_dump_vcpu(struct csched_vcpu *svc)
+csched_dump_vcpu(struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;

@@ -2039,7 +2036,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     struct list_head *runq, *iter;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_pcpu *spc;
-    struct csched_vcpu *svc;
+    struct csched_unit *svc;
     spinlock_t *lock;
     unsigned long flags;
     int loop;
@@ -2063,7 +2060,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
                CPUMASK_PR(per_cpu(cpu_core_mask, cpu)));

     /* current VCPU (nothing to say if that's the idle vcpu). */
-    svc = CSCHED_VCPU(curr_on_cpu(cpu));
+    svc = CSCHED_UNIT(curr_on_cpu(cpu));
     if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
@@ -2132,10 +2129,10 @@ csched_dump(const struct scheduler *ops)

         list_for_each( iter_svc, &sdom->active_vcpu )
         {
-            struct csched_vcpu *svc;
+            struct csched_unit *svc;
             spinlock_t *lock;

-            svc = list_entry(iter_svc, struct csched_vcpu, active_vcpu_elem);
+            svc = list_entry(iter_svc, struct csched_unit, active_vcpu_elem);
             lock = vcpu_schedule_lock(svc->vcpu);

             printk("\t%3d: ", ++loop);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 37192e6713..afeb70b845 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -176,7 +176,7 @@
  *   load balancing;
  *  + serializes runqueue operations (removing and inserting vcpus);
  *  + protects runqueue-wide data in csched2_runqueue_data;
- *  + protects vcpu parameters in csched2_vcpu for the vcpu in the
+ *  + protects vcpu parameters in csched2_unit for the vcpu in the
  *    runqueue.
 *
 * - Private scheduler lock
@@ -512,7 +512,7 @@ struct csched2_pcpu {
 /*
  * Virtual CPU
  */
-struct csched2_vcpu {
+struct csched2_unit {
     struct csched2_dom *sdom;          /* Up-pointer to domain        */
     struct vcpu *vcpu;                 /* Up-pointer, to vcpu         */
     struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue  */
@@ -571,9 +571,9 @@ static inline struct csched2_pcpu *csched2_pcpu(unsigned int cpu)
     return per_cpu(schedule_data, cpu).sched_priv;
 }

-static inline struct csched2_vcpu *csched2_vcpu(const struct vcpu *v)
+static inline struct csched2_unit *csched2_unit(const struct sched_unit *unit)
 {
-    return v->sched_unit->priv;
+    return unit->priv;
 }

 static inline struct csched2_dom *csched2_dom(const struct domain *d)
@@ -595,7 +595,7 @@ static inline struct csched2_runqueue_data *c2rqd(const struct scheduler *ops,
 }

 /* Does the domain of this vCPU have a cap? */
-static inline bool has_cap(const struct csched2_vcpu *svc)
+static inline bool has_cap(const struct csched2_unit *svc)
 {
     return svc->budget != STIME_MAX;
 }
@@ -689,7 +689,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 * Of course, 1, 2 and 3 makes sense only if svc has a soft affinity. Also
 * note that at least 5 is guaranteed to _always_ return at least one pcpu.
 */
-static int get_fallback_cpu(struct csched2_vcpu *svc)
+static int get_fallback_cpu(struct csched2_unit *svc)
 {
     struct vcpu *v = svc->vcpu;
     unsigned int bs;
@@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_vcpu *svc)
  * FIXME: Do pre-calculated division?
  */
 static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
-                       struct csched2_vcpu *svc)
+                       struct csched2_unit *svc)
 {
     uint64_t val = time * rqd->max_weight + svc->residual;

@@ -782,7 +782,7 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
     svc->credit -= val;
 }

-static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_vcpu *svc)
+static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc)
 {
     return credit * svc->weight / rqd->max_weight;
 }
@@ -791,14 +791,14 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
 * Runqueue related code.
 */

-static inline int vcpu_on_runq(struct csched2_vcpu *svc)
+static inline int vcpu_on_runq(struct csched2_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }

-static inline struct csched2_vcpu * runq_elem(struct list_head *elem)
+static inline struct csched2_unit * runq_elem(struct list_head *elem)
 {
-    return list_entry(elem, struct csched2_vcpu, runq_elem);
+    return list_entry(elem, struct csched2_unit, runq_elem);
 }

 static void activate_runqueue(struct csched2_private *prv, int rqi)
@@ -916,7 +916,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,

     list_for_each( iter, &rqd->svc )
     {
-        struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
+        struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);

         if ( svc->weight > max_weight )
             max_weight = svc->weight;
@@ -941,7 +941,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,

 /* Add and remove from runqueue assignment (not active run queue) */
 static void
-_runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
+_runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd)
 {

     svc->rqd = rqd;
@@ -971,7 +971,7 @@ _runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
 static void
 runq_assign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_unit->priv;
+    struct csched2_unit *svc = vc->sched_unit->priv;

     ASSERT(svc->rqd == NULL);

@@ -979,7 +979,7 @@ runq_assign(const struct scheduler *ops, struct vcpu *vc)
 static void
-_runq_deassign(struct csched2_vcpu *svc)
+_runq_deassign(struct csched2_unit *svc)
 {
     struct csched2_runqueue_data *rqd = svc->rqd;

@@ -998,7 +998,7 @@ _runq_deassign(struct csched2_vcpu *svc)
 static void
 runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_unit->priv;
+    struct csched2_unit *svc = vc->sched_unit->priv;

     ASSERT(svc->rqd == c2rqd(ops, vc->processor));

@@ -1200,7 +1200,7 @@ update_runq_load(const struct scheduler *ops,

 static void
 update_svc_load(const struct scheduler *ops,
-                struct csched2_vcpu *svc, int change, s_time_t now)
+                struct csched2_unit *svc, int change, s_time_t now)
 {
     struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, vcpu_load;
@@ -1260,7 +1260,7 @@ update_svc_load(const struct scheduler *ops,
 static void
 update_load(const struct scheduler *ops,
             struct csched2_runqueue_data *rqd,
-            struct csched2_vcpu *svc, int change, s_time_t now)
+            struct csched2_unit *svc, int change, s_time_t now)
 {
     trace_var(TRC_CSCHED2_UPDATE_LOAD, 1, 0, NULL);

@@ -1270,7 +1270,7 @@ update_load(const struct scheduler *ops,
 }

 static void
-runq_insert(const struct scheduler *ops, struct csched2_vcpu *svc)
+runq_insert(const struct scheduler *ops, struct csched2_unit *svc)
 {
     struct list_head *iter;
     unsigned int cpu = svc->vcpu->processor;
@@ -1289,7 +1289,7 @@ runq_insert(const struct scheduler *ops, struct csched2_vcpu *svc)

     list_for_each( iter, runq )
     {
-        struct csched2_vcpu * iter_svc = runq_elem(iter);
+        struct csched2_unit * iter_svc = runq_elem(iter);

         if ( svc->credit > iter_svc->credit )
             break;
@@ -1313,13 +1313,13 @@ runq_insert(const struct scheduler *ops, struct csched2_vcpu *svc)
     }
 }

-static inline void runq_remove(struct csched2_vcpu *svc)
+static inline void runq_remove(struct csched2_unit *svc)
 {
     ASSERT(vcpu_on_runq(svc));
     list_del_init(&svc->runq_elem);
 }

-void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_vcpu *, s_time_t);
+void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *, s_time_t);

 static inline void
 tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
@@ -1335,7 +1335,7 @@ tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
 * whether or not it already run for more than the ratelimit, to which we
 * apply some tolerance).
 */
-static inline bool is_preemptable(const struct csched2_vcpu *svc,
+static inline bool is_preemptable(const struct csched2_unit *svc,
                                   s_time_t now, s_time_t ratelimit)
 {
     if ( ratelimit <= CSCHED2_RATELIMIT_TICKLE_TOLERANCE )
@@ -1361,10 +1361,10 @@ static inline bool is_preemptable(const struct csched2_vcpu,
 * Within the same class, the highest difference of credit.
 */
 static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
-                             struct csched2_vcpu *new, unsigned int cpu)
+                             struct csched2_unit *new, unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    struct csched2_vcpu * cur = csched2_vcpu(curr_on_cpu(cpu));
+    struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu));
     struct csched2_private *prv = csched2_priv(ops);
     s_time_t score;

@@ -1433,7 +1433,7 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
 * pick up some work, so it would be wrong to consider it idle.
 */
 static void
-runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
+runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
 {
     int i, ipid = -1;
     s_time_t max = 0;
@@ -1588,7 +1588,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
         return;
     }

-    ASSERT(!is_idle_vcpu(curr_on_cpu(ipid)));
+    ASSERT(!is_idle_vcpu(curr_on_cpu(ipid)->vcpu_list));
     SCHED_STAT_CRANK(tickled_busy_cpu);
 tickle:
     BUG_ON(ipid == -1);
@@ -1615,7 +1615,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
 * Credit-related code
 */
 static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
-                         struct csched2_vcpu *snext)
+                         struct csched2_unit *snext)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *iter;
@@ -1645,10 +1645,10 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
     list_for_each( iter, &rqd->svc )
     {
         unsigned int svc_cpu;
-        struct csched2_vcpu * svc;
+        struct csched2_unit * svc;
         int start_credit;

-        svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
+        svc = list_entry(iter, struct csched2_unit, rqd_elem);
         svc_cpu = svc->vcpu->processor;

         ASSERT(!is_idle_vcpu(svc->vcpu));
@@ -1658,7 +1658,7 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
          * If svc is running, it is our responsibility to make sure, here,
          * that the credit it has spent so far get accounted.
          */
-        if ( svc->vcpu == curr_on_cpu(svc_cpu) )
+        if ( svc->vcpu == curr_on_cpu(svc_cpu)->vcpu_list )
         {
             burn_credits(rqd, svc, now);
             /*
@@ -1710,11 +1710,11 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
 }

 void burn_credits(struct csched2_runqueue_data *rqd,
-                  struct csched2_vcpu *svc, s_time_t now)
+                  struct csched2_unit *svc, s_time_t now)
 {
     s_time_t delta;

-    ASSERT(svc == csched2_vcpu(curr_on_cpu(svc->vcpu->processor)));
+    ASSERT(svc == csched2_unit(curr_on_cpu(svc->vcpu->processor)));

     if ( unlikely(is_idle_vcpu(svc->vcpu)) )
     {
@@ -1764,7 +1764,7 @@ void burn_credits(struct csched2_runqueue_data *rqd,
 * Budget-related code.
 */

-static void park_vcpu(struct csched2_vcpu *svc)
+static void park_vcpu(struct csched2_unit *svc)
 {
     struct vcpu *v = svc->vcpu;

@@ -1793,7 +1793,7 @@ static void park_vcpu(struct csched2_vcpu *svc)
     list_add(&svc->parked_elem, &svc->sdom->parked_vcpus);
 }

-static bool vcpu_grab_budget(struct csched2_vcpu *svc)
+static bool vcpu_grab_budget(struct csched2_unit *svc)
 {
     struct csched2_dom *sdom = svc->sdom;
     unsigned int cpu = svc->vcpu->processor;
@@ -1840,7 +1840,7 @@ static bool vcpu_grab_budget(struct csched2_vcpu *svc)
 }

 static void
-vcpu_return_budget(struct csched2_vcpu *svc, struct list_head *parked)
+vcpu_return_budget(struct csched2_unit *svc, struct list_head *parked)
 {
     struct csched2_dom *sdom = svc->sdom;
     unsigned int cpu = svc->vcpu->processor;
@@ -1883,7 +1883,7 @@ vcpu_return_budget(struct csched2_vcpu *svc, struct list_head *parked)
 static void
 unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
 {
-    struct csched2_vcpu *svc, *tmp;
+    struct csched2_unit *svc, *tmp;
     spinlock_t *lock;

     list_for_each_entry_safe(svc, tmp, vcpus, parked_elem)
@@ -2005,7 +2005,7 @@ static void replenish_domain_budget(void* data)
 static inline void
 csched2_vcpu_check(struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(vc->sched_unit);
     struct csched2_dom * const sdom = svc->sdom;

     BUG_ON( svc->vcpu != vc );
@@ -2031,10 +2031,10 @@ csched2_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
                     void *dd)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu *svc;
+    struct csched2_unit *svc;

     /* Allocate per-VCPU info */
-    svc = xzalloc(struct csched2_vcpu);
+    svc = xzalloc(struct csched2_unit);
     if ( svc == NULL )
         return NULL;

@@ -2075,12 +2075,12 @@ static void
 csched2_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(unit);

     ASSERT(!is_idle_vcpu(vc));
     SCHED_STAT_CRANK(vcpu_sleep);

-    if ( curr_on_cpu(vc->processor) == vc )
+    if ( curr_on_cpu(vc->processor) == unit )
     {
         tickle_cpu(vc->processor, svc->rqd);
     }
@@ -2098,7 +2098,7 @@ static void
 csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(unit);
     unsigned int cpu = vc->processor;
     s_time_t now;

@@ -2106,7 +2106,7 @@ csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)

     ASSERT(!is_idle_vcpu(vc));

-    if ( unlikely(curr_on_cpu(cpu) == vc) )
+    if ( unlikely(curr_on_cpu(cpu) == unit) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         goto out;
@@ -2153,8 +2153,7 @@ out:
 static void
 csched2_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu_list;
-    struct csched2_vcpu * const svc = csched2_vcpu(v);
+    struct csched2_unit * const svc = csched2_unit(unit);

     __set_bit(__CSFLAG_vcpu_yield, &svc->flags);
 }
@@ -2163,7 +2162,7 @@ static void
 csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(unit);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
@@ -2209,7 +2208,7 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
     struct vcpu *vc = unit->vcpu_list;
     int i, min_rqi = -1, min_s_rqi = -1;
     unsigned int new_cpu, cpu = vc->processor;
-    struct csched2_vcpu *svc = csched2_vcpu(vc);
+    struct csched2_unit *svc = csched2_unit(unit);
     s_time_t min_avgload = MAX_LOAD, min_s_avgload = MAX_LOAD;
     bool has_soft;

@@ -2431,15 +2430,15 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 typedef struct {
     /* NB: Modified by consider() */
     s_time_t load_delta;
-    struct csched2_vcpu * best_push_svc, *best_pull_svc;
+    struct csched2_unit * best_push_svc, *best_pull_svc;
     /* NB: Read by consider() */
     struct csched2_runqueue_data *lrqd;
     struct csched2_runqueue_data *orqd;

 } balance_state_t;

 static void consider(balance_state_t *st,
-                     struct csched2_vcpu *push_svc,
-                     struct csched2_vcpu *pull_svc)
+                     struct csched2_unit *push_svc,
+                     struct csched2_unit *pull_svc)
 {
     s_time_t l_load, o_load, delta;

@@ -2472,8 +2471,8 @@ static void consider(balance_state_t *st,


 static void migrate(const struct scheduler *ops,
-                    struct csched2_vcpu *svc,
-                    struct csched2_runqueue_data *trqd,
+                    struct csched2_unit *svc,
+                    struct csched2_runqueue_data *trqd,
                     s_time_t now)
 {
     int cpu = svc->vcpu->processor;
@@ -2542,7 +2541,7 @@ static void migrate(const struct scheduler *ops,
 *  - svc is not already flagged to migrate,
 *  - if svc is allowed to run on at least one of the pcpus of rqd.
 */
-static bool vcpu_is_migrateable(struct csched2_vcpu *svc,
+static bool vcpu_is_migrateable(struct csched2_unit *svc,
                                 struct csched2_runqueue_data *rqd)
 {
     struct vcpu *v = svc->vcpu;
@@ -2692,7 +2691,7 @@ retry:
     /* Reuse load delta (as we're trying to minimize it) */
     list_for_each( push_iter, &st.lrqd->svc )
     {
-        struct csched2_vcpu * push_svc = list_entry(push_iter, struct csched2_vcpu, rqd_elem);
+        struct csched2_unit * push_svc = list_entry(push_iter, struct csched2_unit, rqd_elem);

         update_svc_load(ops, push_svc, 0, now);

@@ -2701,7 +2700,7 @@ retry:

         list_for_each( pull_iter, &st.orqd->svc )
         {
-            struct csched2_vcpu * pull_svc = list_entry(pull_iter, struct csched2_vcpu, rqd_elem);
+            struct csched2_unit * pull_svc = list_entry(pull_iter, struct csched2_unit, rqd_elem);

             if ( !inner_load_updated )
                 update_svc_load(ops, pull_svc, 0, now);
@@ -2720,7 +2719,7 @@ retry:

     list_for_each( pull_iter, &st.orqd->svc )
     {
-        struct csched2_vcpu * pull_svc = list_entry(pull_iter, struct csched2_vcpu, rqd_elem);
+        struct csched2_unit * pull_svc = list_entry(pull_iter, struct csched2_unit, rqd_elem);

         if ( !vcpu_is_migrateable(pull_svc, st.lrqd) )
             continue;
@@ -2747,7 +2746,7 @@ csched2_unit_migrate(
 {
     struct vcpu *vc = unit->vcpu_list;
     struct domain *d = vc->domain;
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();

@@ -2848,7 +2847,7 @@ csched2_dom_cntl(
             /* Update weights for vcpus, and max_weight for runqueues on which they reside */
             for_each_vcpu ( d, v )
             {
-                struct csched2_vcpu *svc = csched2_vcpu(v);
+                struct csched2_unit *svc = csched2_unit(v->sched_unit);
                 spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);

                 ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
@@ -2862,7 +2861,7 @@ csched2_dom_cntl(
         /* Cap */
         if ( op->u.credit2.cap != 0 )
         {
-            struct csched2_vcpu *svc;
+            struct csched2_unit *svc;
             spinlock_t *lock;

             /* Cap is only valid if it's below 100 * nr_of_vCPUS */
@@ -2886,7 +2885,7 @@ csched2_dom_cntl(
              */
             for_each_vcpu ( d, v )
             {
-                svc = csched2_vcpu(v);
+                svc = csched2_unit(v->sched_unit);
                 lock = vcpu_schedule_lock(svc->vcpu);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
@@ -2929,14 +2928,14 @@ csched2_dom_cntl(
              */
             for_each_vcpu ( d, v )
             {
-                svc = csched2_vcpu(v);
+                svc = csched2_unit(v->sched_unit);
                 lock = vcpu_schedule_lock(svc->vcpu);
                 if ( v->is_running )
                 {
                     unsigned int cpu = v->processor;
                     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);

-                    ASSERT(curr_on_cpu(cpu) == v);
+                    ASSERT(curr_on_cpu(cpu)->vcpu_list == v);

                     /*
                      * We are triggering a reschedule on the vCPU's
@@ -2976,7 +2975,7 @@ csched2_dom_cntl(
             /* Disable budget accounting for all the vCPUs.
              */
             for_each_vcpu ( d, v )
             {
-                struct csched2_vcpu *svc = csched2_vcpu(v);
+                struct csched2_unit *svc = csched2_unit(v->sched_unit);
                 spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);

                 svc->budget = STIME_MAX;
@@ -3013,8 +3012,7 @@ static void
 csched2_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
                  const cpumask_t *hard, const cpumask_t *soft)
 {
-    struct vcpu *v = unit->vcpu_list;
-    struct csched2_vcpu *svc = csched2_vcpu(v);
+    struct csched2_unit *svc = csched2_unit(unit);

     if ( !hard )
         return;
@@ -3114,7 +3112,7 @@ static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu *svc = unit->priv;
+    struct csched2_unit *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;

@@ -3146,7 +3144,7 @@ csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 static void
 csched2_free_udata(const struct scheduler *ops, void *priv)
 {
-    struct csched2_vcpu *svc = priv;
+    struct csched2_unit *svc = priv;

     xfree(svc);
 }
@@ -3155,7 +3153,7 @@ static void
 csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu * const svc = csched2_vcpu(vc);
+    struct csched2_unit * const svc = csched2_unit(unit);
     spinlock_t *lock;

     ASSERT(!is_idle_vcpu(vc));
@@ -3176,7 +3174,7 @@ csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 /* How long should we let this vcpu run for? */
 static s_time_t
 csched2_runtime(const struct scheduler *ops, int cpu,
-                struct csched2_vcpu *snext, s_time_t now)
+                struct csched2_unit *snext, s_time_t now)
 {
     s_time_t time, min_time;
     int rt_credit; /* Proposed runtime measured in credits */
@@ -3221,7 +3219,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
      */
     if ( ! list_empty(runq) )
     {
-        struct csched2_vcpu *swait = runq_elem(runq->next);
+        struct csched2_unit *swait = runq_elem(runq->next);

         if ( ! is_idle_vcpu(swait->vcpu)
              && swait->credit > 0 )
@@ -3272,14 +3270,14 @@ csched2_runtime(const struct scheduler *ops, int cpu,
 /*
  * Find a candidate.
  */
-static struct csched2_vcpu *
+static struct csched2_unit *
 runq_candidate(struct csched2_runqueue_data *rqd,
-               struct csched2_vcpu *scurr,
+               struct csched2_unit *scurr,
                int cpu, s_time_t now,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct csched2_vcpu *snext = NULL;
+    struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
     bool yield = false, soft_aff_preempt = false;

@@ -3360,12 +3358,12 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     if ( vcpu_runnable(scurr->vcpu) && !soft_aff_preempt )
         snext = scurr;
     else
-        snext = csched2_vcpu(idle_vcpu[cpu]);
+        snext = csched2_unit(idle_vcpu[cpu]->sched_unit);

 check_runq:
     list_for_each_safe( iter, temp, &rqd->runq )
     {
-        struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, runq_elem);
+        struct csched2_unit * svc = list_entry(iter, struct csched2_unit, runq_elem);

         if ( unlikely(tb_init_done) )
         {
@@ -3464,8 +3462,8 @@ csched2_schedule(
 {
     const int cpu = smp_processor_id();
     struct csched2_runqueue_data *rqd;
-    struct csched2_vcpu * const scurr = csched2_vcpu(current);
-    struct csched2_vcpu *snext = NULL;
+    struct csched2_unit * const scurr = csched2_unit(current->sched_unit);
+    struct csched2_unit *snext = NULL;
     unsigned int skipped_vcpus = 0;
     struct task_slice ret;
     bool tickled;
@@ -3541,7 +3539,7 @@ csched2_schedule(
     {
         __clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
         trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL);
-        snext = csched2_vcpu(idle_vcpu[cpu]);
+        snext = csched2_unit(idle_vcpu[cpu]->sched_unit);
     }
     else
         snext = runq_candidate(rqd, scurr, cpu, now, &skipped_vcpus);
@@ -3644,7 +3642,7 @@ csched2_schedule(
 }

 static void
-csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
+csched2_dump_vcpu(struct csched2_private *prv, struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
            svc->vcpu->domain->domain_id,
@@ -3668,7 +3666,7 @@ static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_vcpu *svc;
+    struct csched2_unit *svc;

     printk("CPU[%02d] runq=%d, sibling=%*pb, core=%*pb\n",
            cpu, c2r(cpu),
@@ -3676,7 +3674,7 @@ dump_pcpu(const struct scheduler *ops, int cpu)
            CPUMASK_PR(per_cpu(cpu_core_mask, cpu)));

     /* current VCPU (nothing to say if that's the idle vcpu) */
-    svc = csched2_vcpu(curr_on_cpu(cpu));
+    svc = csched2_unit(curr_on_cpu(cpu));
     if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
@@ -3749,7 +3747,7 @@ csched2_dump(const struct scheduler *ops)

         for_each_vcpu( sdom->dom, v )
         {
-            struct csched2_vcpu * const svc = csched2_vcpu(v);
+            struct csched2_unit * const svc = csched2_unit(v->sched_unit);
             spinlock_t *lock;

             lock = vcpu_schedule_lock(svc->vcpu);
@@ -3778,7 +3776,7 @@ csched2_dump(const struct scheduler *ops)
     printk("RUNQ:\n");
     list_for_each( iter, runq )
     {
-        struct csched2_vcpu *svc = runq_elem(iter);
+        struct csched2_unit *svc = runq_elem(iter);

         if ( svc )
         {
@@ -3882,7 +3880,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                      void *pdata, void *vdata)
 {
     struct csched2_private *prv = csched2_priv(new_ops);
-    struct csched2_vcpu *svc = vdata;
+    struct csched2_unit *svc = vdata;
     unsigned rqi;

     ASSERT(pdata && svc && is_idle_vcpu(svc->vcpu));
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index cb400f55d0..3619774318 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -93,7 +93,7 @@ DEFINE_PER_CPU(struct null_pcpu, npc);
 /*
  * Virtual CPU
  */
-struct null_vcpu {
+struct null_unit {
     struct list_head waitq_elem;
     struct vcpu *vcpu;
 };
@@ -114,9 +114,9 @@ static inline struct null_private *null_priv(const struct scheduler *ops)
     return ops->sched_data;
 }

-static inline struct null_vcpu *null_vcpu(const struct vcpu *v)
+static inline struct null_unit *null_unit(const struct sched_unit *unit)
 {
-    return v->sched_unit->priv;
+    return unit->priv;
 }

 static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
@@ -189,9 +189,9 @@ static void *null_alloc_udata(const struct scheduler *ops,
                               struct sched_unit *unit, void *dd)
 {
     struct vcpu *v = unit->vcpu_list;
-    struct null_vcpu *nvc;
+    struct null_unit *nvc;

-    nvc = xzalloc(struct null_vcpu);
+    nvc = xzalloc(struct null_unit);
     if ( nvc == NULL )
         return NULL;

@@ -205,7 +205,7 @@ static void *null_alloc_udata(const struct scheduler *ops,

 static void null_free_udata(const struct scheduler *ops, void *priv)
 {
-    struct null_vcpu *nvc = priv;
+    struct null_unit *nvc = priv;

     xfree(nvc);
 }
@@ -362,9 +362,9 @@ static bool vcpu_deassign(struct null_private *prv, struct vcpu *v)
 {
     unsigned int bs;
     unsigned int cpu = v->processor;
-    struct null_vcpu *wvc;
+    struct null_unit *wvc;

-    ASSERT(list_empty(&null_vcpu(v)->waitq_elem));
+    ASSERT(list_empty(&null_unit(v->sched_unit)->waitq_elem));
     ASSERT(per_cpu(npc, v->processor).vcpu == v);
     ASSERT(!cpumask_test_cpu(v->processor, &prv->cpus_free));

@@ -421,7 +421,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct schedule_data *sd = &per_cpu(schedule_data, cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_vcpu *nvc = vdata;
+    struct null_unit *nvc = vdata;

     ASSERT(nvc && is_idle_vcpu(nvc->vcpu));

@@ -444,7 +444,7 @@ static void null_unit_insert(const struct scheduler *ops,
 {
     struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *nvc = null_vcpu(v);
+    struct null_unit *nvc = null_unit(unit);
     unsigned int cpu;
     spinlock_t *lock;

@@ -508,7 +508,7 @@ static void null_unit_remove(const struct scheduler *ops,
 {
     struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *nvc = null_vcpu(v);
+    struct null_unit *nvc = null_unit(unit);
     spinlock_t *lock;

     ASSERT(!is_idle_vcpu(v));
@@ -546,12 +546,12 @@ static void null_unit_wake(const struct scheduler *ops,
 {
     struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *nvc = null_vcpu(v);
+    struct null_unit *nvc = null_unit(unit);
     unsigned int cpu = v->processor;

     ASSERT(!is_idle_vcpu(v));

-    if ( unlikely(curr_on_cpu(cpu) == v) )
+    if ( unlikely(curr_on_cpu(cpu) == unit) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -631,7 +631,7 @@ static void null_unit_sleep(const struct scheduler *ops,
      */
     if ( unlikely(!is_vcpu_online(v)) )
     {
-        struct null_vcpu *nvc = null_vcpu(v);
+        struct null_unit *nvc = null_unit(unit);

         if ( unlikely(!list_empty(&nvc->waitq_elem)) )
         {
@@ -644,7 +644,7 @@ static void null_unit_sleep(const struct scheduler *ops,
     }

     /* If v is not assigned to a pCPU, or is not running, no need to bother */
-    if ( likely(!tickled && curr_on_cpu(cpu) == v) )
+    if ( likely(!tickled && curr_on_cpu(cpu) == unit) )
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);

     SCHED_STAT_CRANK(vcpu_sleep);
@@ -662,7 +662,7 @@ static void null_unit_migrate(const struct scheduler *ops,
 {
     struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *nvc = null_vcpu(v);
+    struct null_unit *nvc = null_unit(unit);

     ASSERT(!is_idle_vcpu(v));

@@ -758,7 +758,7 @@ static void null_unit_migrate(const struct scheduler *ops,
 #ifndef NDEBUG
 static inline void null_vcpu_check(struct vcpu *v)
 {
-    struct null_vcpu * const nvc = null_vcpu(v);
+    struct null_unit * const nvc = null_unit(v->sched_unit);
     struct null_dom * const ndom = v->domain->sched_priv;

     BUG_ON(nvc->vcpu != v);
@@ -788,7 +788,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     unsigned int bs;
     const unsigned int cpu = smp_processor_id();
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *wvc;
+    struct null_unit *wvc;
     struct task_slice ret;

     SCHED_STAT_CRANK(schedule);
@@ -874,7 +874,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     return ret;
 }

-static inline void dump_vcpu(struct null_private *prv, struct null_vcpu *nvc)
+static inline void dump_vcpu(struct null_private *prv, struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->vcpu->domain->domain_id,
            nvc->vcpu->vcpu_id, list_empty(&nvc->waitq_elem) ?
@@ -884,7 +884,7 @@ static inline void dump_vcpu(struct null_private *prv, struct null_vcpu *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_vcpu *nvc;
+    struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;

@@ -898,7 +898,7 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     printk("\n");

     /* current VCPU (nothing to say if that's the idle vcpu) */
-    nvc = null_vcpu(curr_on_cpu(cpu));
+    nvc = null_unit(curr_on_cpu(cpu));
     if ( nvc && !is_idle_vcpu(nvc->vcpu) )
     {
         printk("\trun: ");
@@ -932,7 +932,7 @@ static void null_dump(const struct scheduler *ops)
         printk("\tDomain: %d\n", ndom->dom->domain_id);
         for_each_vcpu( ndom->dom, v )
         {
-            struct null_vcpu * const nvc = null_vcpu(v);
+            struct null_unit * const nvc = null_unit(v->sched_unit);
             spinlock_t *lock;

             lock = vcpu_schedule_lock(nvc->vcpu);
@@ -950,7 +950,7 @@ static void null_dump(const struct scheduler *ops)
     spin_lock(&prv->waitq_lock);
     list_for_each( iter, &prv->waitq )
     {
-        struct null_vcpu *nvc = list_entry(iter, struct null_vcpu, waitq_elem);
+        struct null_unit *nvc = list_entry(iter, struct null_unit, waitq_elem);

         if ( loop++ != 0 )
             printk(", ");
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 6ca792e643..57da55d90f 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -195,7 +195,7 @@ struct rt_private {
 /*
  * Virtual CPU
  */
-struct rt_vcpu {
+struct rt_unit {
     struct list_head q_elem;     /* on the runq/depletedq list */
     struct list_head replq_elem; /* on the replenishment events list */

@@ -233,9 +233,9 @@ static inline struct rt_private *rt_priv(const struct scheduler *ops)
     return ops->sched_data;
 }

-static inline struct rt_vcpu *rt_vcpu(const struct vcpu *vcpu)
+static inline struct rt_unit *rt_unit(const struct sched_unit *unit)
 {
-    return vcpu->sched_unit->priv;
+    return unit->priv;
 }

 static inline struct list_head *rt_runq(const struct scheduler *ops)
@@ -253,7 +253,7 @@ static inline struct list_head *rt_replq(const struct scheduler *ops)
     return &rt_priv(ops)->replq;
 }

-static inline bool has_extratime(const struct rt_vcpu *svc)
+static inline bool has_extratime(const struct rt_unit *svc)
 {
     return svc->flags & RTDS_extratime;
 }
@@ -263,25 +263,25 @@ static inline bool has_extratime(const struct rt_vcpu *svc)
  * and the replenishment events queue.
  */
 static int
-vcpu_on_q(const struct rt_vcpu *svc)
+vcpu_on_q(const struct rt_unit *svc)
 {
     return !list_empty(&svc->q_elem);
 }

-static struct rt_vcpu *
+static struct rt_unit *
 q_elem(struct list_head *elem)
 {
-    return list_entry(elem, struct rt_vcpu, q_elem);
+    return list_entry(elem, struct rt_unit, q_elem);
 }

-static struct rt_vcpu *
+static struct rt_unit *
 replq_elem(struct list_head *elem)
 {
-    return list_entry(elem, struct rt_vcpu, replq_elem);
+    return list_entry(elem, struct rt_unit, replq_elem);
 }

 static int
-vcpu_on_replq(const struct rt_vcpu *svc)
+vcpu_on_replq(const struct rt_unit *svc)
 {
     return !list_empty(&svc->replq_elem);
 }
@@ -291,7 +291,7 @@ vcpu_on_replq(const struct rt_vcpu *svc)
 * Otherwise, return value < 0
 */
 static s_time_t
-compare_vcpu_priority(const struct rt_vcpu *v1, const struct rt_vcpu *v2)
+compare_vcpu_priority(const struct rt_unit *v1, const struct rt_unit *v2)
 {
     int prio = v2->priority_level - v1->priority_level;

@@ -305,7 +305,7 @@ compare_vcpu_priority(const struct rt_vcpu *v1, const struct rt_vcpu *v2)
 * Debug related code, dump vcpu/cpu information
 */
 static void
-rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
+rt_dump_vcpu(const struct scheduler *ops, const struct rt_unit *svc)
 {
     cpumask_t *cpupool_mask, *mask;

@@ -351,13 +351,13 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_vcpu *svc;
+    struct rt_unit *svc;
     unsigned long flags;

     spin_lock_irqsave(&prv->lock, flags);
     printk("CPU[%02d]\n", cpu);
     /* current VCPU (nothing to say if that's the idle vcpu).
      */
-    svc = rt_vcpu(curr_on_cpu(cpu));
+    svc = rt_unit(curr_on_cpu(cpu));
     if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         rt_dump_vcpu(ops, svc);
@@ -370,7 +370,7 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_vcpu *svc;
+    struct rt_unit *svc;
     struct rt_dom *sdom;
     unsigned long flags;

@@ -414,7 +414,7 @@ rt_dump(const struct scheduler *ops)

         for_each_vcpu ( sdom->dom, v )
         {
-            svc = rt_vcpu(v);
+            svc = rt_unit(v->sched_unit);
             rt_dump_vcpu(ops, svc);
         }
     }
@@ -428,7 +428,7 @@ rt_dump(const struct scheduler *ops)
 * it needs to be updated to the deadline of the current period
 */
 static void
-rt_update_deadline(s_time_t now, struct rt_vcpu *svc)
+rt_update_deadline(s_time_t now, struct rt_unit *svc)
 {
     ASSERT(now >= svc->cur_deadline);
     ASSERT(svc->period != 0);
@@ -499,8 +499,8 @@ deadline_queue_remove(struct list_head *queue, struct list_head *elem)
 }

 static inline bool
-deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),
-                      struct rt_vcpu *svc, struct list_head *elem,
+deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
+                      struct rt_unit *svc, struct list_head *elem,
                       struct list_head *queue)
 {
     struct list_head *iter;
@@ -508,7 +508,7 @@ deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),

     list_for_each ( iter, queue )
     {
-        struct rt_vcpu * iter_svc = (*qelem)(iter);
+        struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_vcpu_priority(svc, iter_svc) > 0 )
             break;
         pos++;
@@ -522,14 +522,14 @@ deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),
     deadline_queue_insert(&replq_elem, ##__VA_ARGS__)

 static inline void
-q_remove(struct rt_vcpu *svc)
+q_remove(struct rt_unit *svc)
 {
     ASSERT( vcpu_on_q(svc) );
     list_del_init(&svc->q_elem);
 }

 static inline void
-replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
+replq_remove(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
@@ -546,7 +546,7 @@ replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
      */
     if ( !list_empty(replq) )
     {
-        struct rt_vcpu *svc_next = replq_elem(replq->next);
+        struct rt_unit *svc_next = replq_elem(replq->next);
         set_timer(&prv->repl_timer, svc_next->cur_deadline);
     }
     else
@@ -560,7 +560,7 @@ replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
 * Insert svc without budget in DepletedQ unsorted;
 */
 static void
-runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
+runq_insert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct rt_private *prv = rt_priv(ops);
     struct list_head *runq = rt_runq(ops);
@@ -578,7 +578,7 @@ runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
 }

 static void
-replq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
+replq_insert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
     struct rt_private *prv = rt_priv(ops);
@@ -600,10 +600,10 @@ replq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
 * changed.
 */
 static void
-replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
+replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_vcpu *rearm_svc = svc;
+    struct rt_unit *rearm_svc = svc;
     bool_t rearm = 0;

     ASSERT( vcpu_on_replq(svc) );
@@ -734,7 +734,7 @@ rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                 void *pdata, void *vdata)
 {
     struct rt_private *prv = rt_priv(new_ops);
-    struct rt_vcpu *svc = vdata;
+    struct rt_unit *svc = vdata;

     ASSERT(!pdata && svc && is_idle_vcpu(svc->vcpu));

@@ -841,10 +841,10 @@ static void *
 rt_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, void *dd)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct rt_vcpu *svc;
+    struct rt_unit *svc;

     /* Allocate per-VCPU info */
-    svc = xzalloc(struct rt_vcpu);
+    svc = xzalloc(struct rt_unit);
     if ( svc == NULL )
         return NULL;

@@ -869,7 +869,7 @@ rt_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, void *dd)
 static void
 rt_free_udata(const struct scheduler *ops, void *priv)
 {
-    struct rt_vcpu *svc = priv;
+    struct rt_unit *svc = priv;

     xfree(svc);
 }
@@ -885,7 +885,7 @@ static void
 rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct rt_vcpu *svc = rt_vcpu(vc);
+    struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;

@@ -914,13 +914,13 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 }

 /*
- * Remove rt_vcpu svc from the old scheduler in source cpupool.
+ * Remove rt_unit svc from the old scheduler in source cpupool.
 */
 static void
 rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct rt_vcpu * const svc = rt_vcpu(vc);
+    struct rt_unit * const svc = rt_unit(unit);
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;

@@ -942,7 +942,7 @@ rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 * Burn budget in nanosecond granularity
 */
 static void
-burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
+burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
 {
     s_time_t delta;

@@ -1006,13 +1006,13 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
 * RunQ is sorted. Pick first one within cpumask.
If no one, return NULL
 * lock is grabbed before calling this function
 */
-static struct rt_vcpu *
+static struct rt_unit *
 runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
-    struct rt_vcpu *svc = NULL;
-    struct rt_vcpu *iter_svc = NULL;
+    struct rt_unit *svc = NULL;
+    struct rt_unit *iter_svc = NULL;
     cpumask_t cpu_common;
     cpumask_t *online;

@@ -1063,8 +1063,8 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
 {
     const int cpu = smp_processor_id();
     struct rt_private *prv = rt_priv(ops);
-    struct rt_vcpu *const scurr = rt_vcpu(current);
-    struct rt_vcpu *snext = NULL;
+    struct rt_unit *const scurr = rt_unit(current->sched_unit);
+    struct rt_unit *snext = NULL;
     struct task_slice ret = { .migrated = 0 };

     /* TRACE */
@@ -1090,13 +1090,13 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_RTDS_SCHED_TASKLET, 1, 0, NULL);
-        snext = rt_vcpu(idle_vcpu[cpu]);
+        snext = rt_unit(idle_vcpu[cpu]->sched_unit);
     }
     else
     {
         snext = runq_pick(ops, cpumask_of(cpu));
         if ( snext == NULL )
-            snext = rt_vcpu(idle_vcpu[cpu]);
+            snext = rt_unit(idle_vcpu[cpu]->sched_unit);

         /* if scurr has higher priority and budget, still pick scurr */
         if ( !is_idle_vcpu(current) &&
@@ -1142,12 +1142,12 @@ static void
 rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct rt_vcpu * const svc = rt_vcpu(vc);
+    struct rt_unit * const svc = rt_unit(unit);

     BUG_ON( is_idle_vcpu(vc) );
     SCHED_STAT_CRANK(vcpu_sleep);

-    if ( curr_on_cpu(vc->processor) == vc )
+    if ( curr_on_cpu(vc->processor) == unit )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( vcpu_on_q(svc) )
     {
@@ -1177,11 +1177,11 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 * lock is grabbed before calling this function
 */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
+runq_tickle(const struct scheduler *ops, struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_vcpu *latest_deadline_vcpu = NULL; /* lowest priority */
-    struct rt_vcpu *iter_svc;
+    struct rt_unit *latest_deadline_vcpu = NULL; /* lowest priority */
+    struct rt_unit *iter_svc;
     struct vcpu *iter_vc;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t not_tickled;
@@ -1202,14 +1202,14 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
     while ( cpu != nr_cpu_ids )
     {
-        iter_vc = curr_on_cpu(cpu);
+        iter_vc = curr_on_cpu(cpu)->vcpu_list;
         if ( is_idle_vcpu(iter_vc) )
         {
             SCHED_STAT_CRANK(tickled_idle_cpu);
             cpu_to_tickle = cpu;
             goto out;
         }
-        iter_svc = rt_vcpu(iter_vc);
+        iter_svc = rt_unit(iter_vc->sched_unit);
         if ( latest_deadline_vcpu == NULL ||
             compare_vcpu_priority(iter_svc, latest_deadline_vcpu) < 0 )
             latest_deadline_vcpu = iter_svc;
@@ -1258,13 +1258,13 @@ static void
 rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct rt_vcpu * const svc = rt_vcpu(vc);
+    struct rt_unit * const svc = rt_unit(unit);
     s_time_t now;
     bool_t missed;

     BUG_ON( is_idle_vcpu(vc) );

-    if ( unlikely(curr_on_cpu(vc->processor) == vc) )
+    if ( unlikely(curr_on_cpu(vc->processor) == unit) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1329,7 +1329,7 @@ static void
rt_context_saved(const struct scheduler *ops, struct sched_unit *unit) { struct vcpu *vc =3D unit->vcpu_list; - struct rt_vcpu *svc =3D rt_vcpu(vc); + struct rt_unit *svc =3D rt_unit(unit); spinlock_t *lock =3D vcpu_schedule_lock_irq(vc); =20 __clear_bit(__RTDS_scheduled, &svc->flags); @@ -1360,7 +1360,7 @@ rt_dom_cntl( struct xen_domctl_scheduler_op *op) { struct rt_private *prv =3D rt_priv(ops); - struct rt_vcpu *svc; + struct rt_unit *svc; struct vcpu *v; unsigned long flags; int rc =3D 0; @@ -1384,7 +1384,7 @@ rt_dom_cntl( spin_lock_irqsave(&prv->lock, flags); for_each_vcpu ( d, v ) { - svc =3D rt_vcpu(v); + svc =3D rt_unit(v->sched_unit); svc->period =3D MICROSECS(op->u.rtds.period); /* transfer to n= anosec */ svc->budget =3D MICROSECS(op->u.rtds.budget); } @@ -1410,7 +1410,7 @@ rt_dom_cntl( if ( op->cmd =3D=3D XEN_DOMCTL_SCHEDOP_getvcpuinfo ) { spin_lock_irqsave(&prv->lock, flags); - svc =3D rt_vcpu(d->vcpu[local_sched.vcpuid]); + svc =3D rt_unit(d->vcpu[local_sched.vcpuid]->sched_unit); local_sched.u.rtds.budget =3D svc->budget / MICROSECS(1); local_sched.u.rtds.period =3D svc->period / MICROSECS(1); if ( has_extratime(svc) ) @@ -1438,7 +1438,7 @@ rt_dom_cntl( } =20 spin_lock_irqsave(&prv->lock, flags); - svc =3D rt_vcpu(d->vcpu[local_sched.vcpuid]); + svc =3D rt_unit(d->vcpu[local_sched.vcpuid]->sched_unit); svc->period =3D period; svc->budget =3D budget; if ( local_sched.u.rtds.flags & XEN_DOMCTL_SCHEDRT_extra ) @@ -1471,7 +1471,7 @@ static void repl_timer_handler(void *data){ struct list_head *replq =3D rt_replq(ops); struct list_head *runq =3D rt_runq(ops); struct list_head *iter, *tmp; - struct rt_vcpu *svc; + struct rt_unit *svc; LIST_HEAD(tmp_replq); =20 spin_lock_irq(&prv->lock); @@ -1513,10 +1513,10 @@ static void repl_timer_handler(void *data){ { svc =3D replq_elem(iter); =20 - if ( curr_on_cpu(svc->vcpu->processor) =3D=3D svc->vcpu && + if ( curr_on_cpu(svc->vcpu->processor) =3D=3D svc->vcpu->sched_uni= t && !list_empty(runq) ) { - struct rt_vcpu *next_on_runq =3D q_elem(runq->next); + struct rt_unit *next_on_runq =3D q_elem(runq->next); =20 if ( compare_vcpu_priority(svc, next_on_runq) < 0 ) runq_tickle(ops, next_on_runq); diff --git a/xen/common/schedule.c b/xen/common/schedule.c index 8bca32f5c4..6d6d8a234f 100644 --- a/xen/common/schedule.c +++ b/xen/common/schedule.c @@ -394,7 +394,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int proces= sor) /* Idle VCPUs are scheduled immediately, so don't put them in runqueue= . */ if ( is_idle_domain(d) ) { - per_cpu(schedule_data, v->processor).curr =3D v; + per_cpu(schedule_data, v->processor).curr =3D unit; v->is_running =3D 1; } else @@ -1607,7 +1607,7 @@ static void schedule(void) =20 next =3D next_slice.task; =20 - sd->curr =3D next; + sd->curr =3D next->sched_unit; =20 if ( next_slice.time >=3D 0 ) /* -ve means no limit */ set_timer(&sd->s_timer, now + next_slice.time); @@ -1749,7 +1749,7 @@ static int cpu_schedule_up(unsigned int cpu) * allocated. 
*/ =20 - sd->curr =3D idle_vcpu[cpu]; + sd->curr =3D idle_vcpu[cpu]->sched_unit; =20 sd->sched_priv =3D NULL; =20 @@ -1917,7 +1917,7 @@ void __init scheduler_init(void) idle_domain->max_vcpus =3D nr_cpu_ids; if ( vcpu_create(idle_domain, 0, 0) =3D=3D NULL ) BUG(); - this_cpu(schedule_data).curr =3D idle_vcpu[0]; + this_cpu(schedule_data).curr =3D idle_vcpu[0]->sched_unit; } =20 /* diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h index 4f61f65288..4b817347d5 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/include/xen/sched-if.h @@ -36,7 +36,7 @@ extern int sched_ratelimit_us; struct schedule_data { spinlock_t *schedule_lock, _lock; - struct vcpu *curr; /* current task = */ + struct sched_unit *curr; void *sched_priv; struct timer s_timer; /* scheduling timer = */ atomic_t urgent_count; /* how many urgent vcpus = */ --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel