From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead, Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:05 +0200
Message-Id: <20190927070050.12405-2-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 01/46] xen/sched: use new sched_unit instead of vcpu in scheduler interfaces

In order to prepare for core- and socket-scheduling, use a new struct sched_unit instead of struct vcpu in the interfaces of the different schedulers.

Rename the per-scheduler functions insert_vcpu and remove_vcpu to insert_unit and remove_unit to reflect the changed parameter. In the schedulers, rename the local functions that were switched to sched_unit, too. Rename the alloc_vdata and free_vdata functions to alloc_udata and free_udata.

For now the new struct contains only a domain pointer, a vcpu pointer and a unit_id, and is allocated at vcpu creation time.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
RFC V2:
- move definition of struct sched_unit to sched.h (Andrew Cooper)
V1:
- rename "item" to "unit" (George Dunlap)
V2:
- rename unit->vcpu to unit->vcpu_list (Jan Beulich)
- merge patch with next one in series (Dario Faggioli)
- merge patch introducing domain pointer in sched_unit into this one (Jan Beulich)
- merge patch introducing unit_id into this one
V3:
- make unit parameter of pick_cpu const (Jan Beulich)
- set vcpu->unit only after initializing unit, freeing unit only after clearing vcpu->unit (Jan Beulich)
- remove pre-definition of struct sched_unit in sched.h (Jan Beulich)
- make unit_id unsigned int (Jan Beulich)
V4:
- rename alloc_vdata and free_vdata (Jan Beulich)
---
 xen/common/sched_arinc653.c | 36 ++++++++++-------
 xen/common/sched_credit.c   | 47 +++++++++++++---------
 xen/common/sched_credit2.c  | 63 +++++++++++++++++------------
 xen/common/sched_null.c     | 46 +++++++++++++--------
 xen/common/sched_rt.c       | 39 ++++++++++--------
 xen/common/schedule.c       | 72 ++++++++++++++++++++-------------
 xen/include/xen/sched-if.h  | 98 ++++++++++++++++++++++++++-------------------
 xen/include/xen/sched.h     |  7 ++++
 8 files changed, 247 insertions(+), 161 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index d47b747ef4..7f9ef36b42 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -376,13 +376,16 @@ a653sched_deinit(struct scheduler *ops)
  * This function allocates scheduler-specific data for a VCPU
  *
  * @param ops       Pointer to this instance of the scheduler structure
+ * @param unit      Pointer to struct sched_unit
  *
  * @return          Pointer to the allocated data
  */
 static void *
-a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
+                      void *dd)
 {
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
+    struct vcpu *vc = unit->vcpu_list;
     arinc653_vcpu_t *svc;
     unsigned int entry;
     unsigned long flags;
@@ -440,7 +443,7 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
  *
  * @param ops       Pointer to this instance of the scheduler structure
  */
 static void
-a653sched_free_vdata(const struct scheduler *ops, void *priv)
+a653sched_free_udata(const struct scheduler *ops, void *priv)
 {
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
     arinc653_vcpu_t *av = priv;
@@ -464,11 +467,13 @@ a653sched_free_vdata(const struct scheduler *ops, void *priv)
  * Xen scheduler callback function to sleep a VCPU
  *
  * @param ops       Pointer to this instance of the scheduler structure
- * @param vc        Pointer to the VCPU structure for the current domain
+ * @param unit      Pointer to struct sched_unit
  */
 static void
-a653sched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
+
     if ( AVCPU(vc) != NULL )
         AVCPU(vc)->awake = 0;
 
@@ -484,11 +489,13 @@ a653sched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Xen scheduler callback function to wake up a VCPU
  *
  * @param ops       Pointer to this instance of the scheduler structure
- * @param vc        Pointer to the VCPU structure for the current domain
+ * @param unit      Pointer to struct sched_unit
  */
 static void
-a653sched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
+
     if ( AVCPU(vc) != NULL )
         AVCPU(vc)->awake = 1;
 
@@ -603,13 +610,14 @@ a653sched_do_schedule(
  * Xen scheduler callback function to select a CPU for the VCPU to run on
  *
  * @param ops       Pointer to this instance of the scheduler structure
- * @param v         Pointer to the VCPU structure for the current domain
+ * @param unit      Pointer to struct sched_unit
  *
  * @return          Number of selected physical CPU
  */
 static int
-a653sched_pick_cpu(const struct scheduler *ops, struct vcpu *vc)
+a653sched_pick_cpu(const struct scheduler *ops, const struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     cpumask_t *online;
     unsigned int cpu;
 
@@ -705,14 +713,14 @@ static const struct scheduler sched_arinc653_def = {
     .init           = a653sched_init,
     .deinit         = a653sched_deinit,
 
-    .free_vdata     = a653sched_free_vdata,
-    .alloc_vdata    = a653sched_alloc_vdata,
+    .free_udata     = a653sched_free_udata,
+    .alloc_udata    = a653sched_alloc_udata,
 
-    .insert_vcpu    = NULL,
-    .remove_vcpu    = NULL,
+    .insert_unit    = NULL,
+    .remove_unit    = NULL,
 
-    .sleep          = a653sched_vcpu_sleep,
-    .wake           = a653sched_vcpu_wake,
+    .sleep          = a653sched_unit_sleep,
+    .wake           = a653sched_unit_wake,
     .yield          = NULL,
     .context_saved  = NULL,
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 70fe718127..f7c751c2e9 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -854,15 +854,16 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
 }
 
 static int
-csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu *svc = CSCHED_VCPU(vc);
 
     /*
      * We have been called by vcpu_migrate() (in schedule.c), as part
      * of the process of seeing if vc can be migrated to another pcpu.
      * We make a note about this in svc->flags so that later, in
-     * csched_vcpu_wake() (still called from vcpu_migrate()) we won't
+     * csched_unit_wake() (still called from vcpu_migrate()) we won't
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_VCPU_MIGRATING, &svc->flags);
@@ -990,8 +991,10 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 }
 
 static void *
-csched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+csched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
+                   void *dd)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu *svc;
 
     /* Allocate per-VCPU info */
@@ -1011,8 +1014,9 @@ csched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
 }
 
 static void
-csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu *svc = vc->sched_priv;
     spinlock_t *lock;
 
@@ -1021,7 +1025,7 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     /* csched_cpu_pick() looks in vc->processor's runq, so we need the lock. */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched_cpu_pick(ops, vc);
+    vc->processor = csched_cpu_pick(ops, unit);
 
     spin_unlock_irq(lock);
 
@@ -1036,7 +1040,7 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched_free_vdata(const struct scheduler *ops, void *priv)
+csched_free_udata(const struct scheduler *ops, void *priv)
 {
     struct csched_vcpu *svc = priv;
 
@@ -1046,9 +1050,10 @@ csched_free_vdata(const struct scheduler *ops, void *priv)
 }
 
 static void
-csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_private *prv = CSCHED_PRIV(ops);
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
     struct csched_dom * const sdom = svc->sdom;
 
@@ -1073,8 +1078,9 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
     unsigned int cpu = vc->processor;
 
@@ -1097,8 +1103,9 @@ csched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
     bool_t migrating;
 
@@ -1158,8 +1165,9 @@ csched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched_vcpu_yield(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
 
     /* Let the scheduler know that this vcpu is trying to yield */
@@ -1212,9 +1220,10 @@ csched_dom_cntl(
 }
 
 static void
-csched_aff_cntl(const struct scheduler *ops, struct vcpu *v,
+csched_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
                 const cpumask_t *hard, const cpumask_t *soft)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct csched_vcpu *svc = CSCHED_VCPU(v);
 
     if ( !hard )
@@ -1743,7 +1752,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
          *  - if we race with inc_nr_runnable(), we skip a pCPU that may
          *    have runnable vCPUs in its runqueue, but that's not a
          *    problem because:
-         *    + if racing with csched_vcpu_insert() or csched_vcpu_wake(),
+         *    + if racing with csched_unit_insert() or csched_unit_wake(),
          *      __runq_tickle() will be called afterwords, so the vCPU
          *      won't get stuck in the runqueue for too long;
          *    + if racing with csched_runq_steal(), it may be that a
@@ -2256,12 +2265,12 @@ static const struct scheduler sched_credit_def = {
 
     .global_init    = csched_global_init,
 
-    .insert_vcpu    = csched_vcpu_insert,
-    .remove_vcpu    = csched_vcpu_remove,
+    .insert_unit    = csched_unit_insert,
+    .remove_unit    = csched_unit_remove,
 
-    .sleep          = csched_vcpu_sleep,
-    .wake           = csched_vcpu_wake,
-    .yield          = csched_vcpu_yield,
+    .sleep          = csched_unit_sleep,
+    .wake           = csched_unit_wake,
+    .yield          = csched_unit_yield,
 
     .adjust         = csched_dom_cntl,
     .adjust_affinity= csched_aff_cntl,
@@ -2274,8 +2283,8 @@ static const struct scheduler sched_credit_def = {
     .dump_settings  = csched_dump,
     .init           = csched_init,
     .deinit         = csched_deinit,
-    .alloc_vdata    = csched_alloc_vdata,
-    .free_vdata     = csched_free_vdata,
+    .alloc_udata    = csched_alloc_udata,
+    .free_udata     = csched_free_udata,
     .alloc_pdata    = csched_alloc_pdata,
     .init_pdata     = csched_init_pdata,
     .deinit_pdata   = csched_deinit_pdata,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 6b77da7476..929f2a2450 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -273,7 +273,7 @@
  * CSFLAG_delayed_runq_add: Do we need to add this to the runqueue once it'd done
  *  being context switched out?
  * + Set when scheduling out in csched2_schedule() if prev is runnable
- * + Set in csched2_vcpu_wake if it finds CSFLAG_scheduled set
+ * + Set in csched2_unit_wake if it finds CSFLAG_scheduled set
  * + Read in csched2_context_saved(). If set, it adds prev to the runqueue and
  *   clears the bit.
  */
@@ -624,14 +624,14 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
  * This logic is entirely implemented in runq_tickle(), and that is enough.
  * In fact, in this scheduler, placement of a vcpu on one of the pcpus of a
  * runq, _always_ happens by means of tickling:
- *  - when a vcpu wakes up, it calls csched2_vcpu_wake(), which calls
+ *  - when a vcpu wakes up, it calls csched2_unit_wake(), which calls
  *    runq_tickle();
  *  - when a migration is initiated in schedule.c, we call csched2_cpu_pick(),
- *    csched2_vcpu_migrate() (which calls migrate()) and csched2_vcpu_wake().
+ *    csched2_unit_migrate() (which calls migrate()) and csched2_unit_wake().
  *    csched2_cpu_pick() looks for the least loaded runq and return just any
- *    of its processors. Then, csched2_vcpu_migrate() just moves the vcpu to
+ *    of its processors. Then, csched2_unit_migrate() just moves the vcpu to
  *    the chosen runq, and it is again runq_tickle(), called by
- *    csched2_vcpu_wake() that actually decides what pcpu to use within the
+ *    csched2_unit_wake() that actually decides what pcpu to use within the
  *    chosen runq;
  *  - when a migration is initiated in sched_credit2.c, by calling migrate()
  *    directly, that again temporarily use a random pcpu from the new runq,
@@ -2027,8 +2027,10 @@ csched2_vcpu_check(struct vcpu *vc)
 #endif
 
 static void *
-csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+csched2_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
+                    void *dd)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu *svc;
 
     /* Allocate per-VCPU info */
@@ -2070,8 +2072,9 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
 }
 
 static void
-csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu * const svc = csched2_vcpu(vc);
 
     ASSERT(!is_idle_vcpu(vc));
@@ -2092,8 +2095,9 @@ csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu * const svc = csched2_vcpu(vc);
     unsigned int cpu = vc->processor;
     s_time_t now;
@@ -2147,16 +2151,18 @@ out:
 }
 
 static void
-csched2_vcpu_yield(const struct scheduler *ops, struct vcpu *v)
+csched2_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct csched2_vcpu * const svc = csched2_vcpu(v);
 
     __set_bit(__CSFLAG_vcpu_yield, &svc->flags);
 }
 
 static void
-csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
+csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu * const svc = csched2_vcpu(vc);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
     s_time_t now = NOW();
@@ -2197,9 +2203,10 @@ csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
 
 #define MAX_LOAD (STIME_MAX)
 static int
-csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+csched2_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_private *prv = csched2_priv(ops);
+    struct vcpu *vc = unit->vcpu_list;
     int i, min_rqi = -1, min_s_rqi = -1;
     unsigned int new_cpu, cpu = vc->processor;
     struct csched2_vcpu *svc = csched2_vcpu(vc);
@@ -2734,9 +2741,10 @@ retry:
 }
 
 static void
-csched2_vcpu_migrate(
-    const struct scheduler *ops, struct vcpu *vc, unsigned int new_cpu)
+csched2_unit_migrate(
+    const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct domain *d = vc->domain;
     struct csched2_vcpu * const svc = csched2_vcpu(vc);
     struct csched2_runqueue_data *trqd;
@@ -2997,9 +3005,10 @@ csched2_dom_cntl(
 }
 
 static void
-csched2_aff_cntl(const struct scheduler *ops, struct vcpu *v,
+csched2_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
                  const cpumask_t *hard, const cpumask_t *soft)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct csched2_vcpu *svc = csched2_vcpu(v);
 
     if ( !hard )
@@ -3097,8 +3106,9 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 }
 
 static void
-csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu *svc = vc->sched_priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
@@ -3109,7 +3119,7 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     /* csched2_cpu_pick() expects the pcpu lock to be held */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched2_cpu_pick(ops, vc);
+    vc->processor = csched2_cpu_pick(ops, unit);
 
     spin_unlock_irq(lock);
 
@@ -3128,7 +3138,7 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched2_free_vdata(const struct scheduler *ops, void *priv)
+csched2_free_udata(const struct scheduler *ops, void *priv)
 {
     struct csched2_vcpu *svc = priv;
 
@@ -3136,8 +3146,9 @@ csched2_free_vdata(const struct scheduler *ops, void *priv)
 }
 
 static void
-csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct csched2_vcpu * const svc = csched2_vcpu(vc);
     spinlock_t *lock;
 
@@ -4083,27 +4094,27 @@ static const struct scheduler sched_credit2_def = {
 
     .global_init    = csched2_global_init,
 
-    .insert_vcpu    = csched2_vcpu_insert,
-    .remove_vcpu    = csched2_vcpu_remove,
+    .insert_unit    = csched2_unit_insert,
+    .remove_unit    = csched2_unit_remove,
 
-    .sleep          = csched2_vcpu_sleep,
-    .wake           = csched2_vcpu_wake,
-    .yield          = csched2_vcpu_yield,
+    .sleep          = csched2_unit_sleep,
+    .wake           = csched2_unit_wake,
+    .yield          = csched2_unit_yield,
 
     .adjust         = csched2_dom_cntl,
     .adjust_affinity= csched2_aff_cntl,
     .adjust_global  = csched2_sys_cntl,
 
     .pick_cpu       = csched2_cpu_pick,
-    .migrate        = csched2_vcpu_migrate,
+    .migrate        = csched2_unit_migrate,
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
 
     .dump_settings  = csched2_dump,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
-    .alloc_vdata    = csched2_alloc_vdata,
-    .free_vdata     = csched2_free_vdata,
+    .alloc_udata    = csched2_alloc_udata,
+    .free_udata     = csched2_free_udata,
     .alloc_pdata    = csched2_alloc_pdata,
     .init_pdata     = csched2_init_pdata,
     .deinit_pdata   = csched2_deinit_pdata,
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 6782ecda5c..870bb67a18 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -185,9 +185,10 @@ static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
     per_cpu(npc, cpu).vcpu = NULL;
 }
 
-static void *null_alloc_vdata(const struct scheduler *ops,
-                              struct vcpu *v, void *dd)
+static void *null_alloc_udata(const struct scheduler *ops,
+                              struct sched_unit *unit, void *dd)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_vcpu *nvc;
 
     nvc = xzalloc(struct null_vcpu);
@@ -202,7 +203,7 @@ static void *null_alloc_vdata(const struct scheduler *ops,
     return nvc;
 }
 
-static void null_free_vdata(const struct scheduler *ops, void *priv)
+static void null_free_udata(const struct scheduler *ops, void *priv)
 {
     struct null_vcpu *nvc = priv;
 
@@ -435,8 +436,10 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
     return &sd->_lock;
 }
 
-static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_insert(const struct scheduler *ops,
+                             struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
     struct null_vcpu *nvc = null_vcpu(v);
     unsigned int cpu;
@@ -496,8 +499,10 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
     SCHED_STAT_CRANK(vcpu_insert);
 }
 
-static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_remove(const struct scheduler *ops,
+                             struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
     struct null_vcpu *nvc = null_vcpu(v);
     spinlock_t *lock;
@@ -532,8 +537,10 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
     SCHED_STAT_CRANK(vcpu_remove);
 }
 
-static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_wake(const struct scheduler *ops,
+                           struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
     struct null_vcpu *nvc = null_vcpu(v);
     unsigned int cpu = v->processor;
@@ -604,8 +611,10 @@ static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
     cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
 }
 
-static void null_vcpu_sleep(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_sleep(const struct scheduler *ops,
+                            struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
     unsigned int cpu = v->processor;
     bool tickled = false;
@@ -637,15 +646,18 @@ static void null_vcpu_sleep(const struct scheduler *ops, struct vcpu *v)
     SCHED_STAT_CRANK(vcpu_sleep);
 }
 
-static int null_cpu_pick(const struct scheduler *ops, struct vcpu *v)
+static int null_cpu_pick(const struct scheduler *ops,
+                         const struct sched_unit *unit)
 {
+    struct vcpu *v = unit->vcpu_list;
     ASSERT(!is_idle_vcpu(v));
     return pick_cpu(null_priv(ops), v);
 }
 
-static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
-                              unsigned int new_cpu)
+static void null_unit_migrate(const struct scheduler *ops,
+                              struct sched_unit *unit, unsigned int new_cpu)
 {
+    struct vcpu *v = unit->vcpu_list;
     struct null_private *prv = null_priv(ops);
     struct null_vcpu *nvc = null_vcpu(v);
 
@@ -960,18 +972,18 @@ static const struct scheduler sched_null_def = {
     .switch_sched   = null_switch_sched,
     .deinit_pdata   = null_deinit_pdata,
 
-    .alloc_vdata    = null_alloc_vdata,
-    .free_vdata     = null_free_vdata,
+    .alloc_udata    = null_alloc_udata,
+    .free_udata     = null_free_udata,
     .alloc_domdata  = null_alloc_domdata,
     .free_domdata   = null_free_domdata,
 
-    .insert_vcpu    = null_vcpu_insert,
-    .remove_vcpu    = null_vcpu_remove,
+    .insert_unit    = null_unit_insert,
+    .remove_unit    = null_unit_remove,
 
-    .wake           = null_vcpu_wake,
-    .sleep          = null_vcpu_sleep,
+    .wake           = null_unit_wake,
+    .sleep          = null_unit_sleep,
     .pick_cpu       = null_cpu_pick,
-    .migrate        = null_vcpu_migrate,
+    .migrate        = null_unit_migrate,
     .do_schedule    = null_schedule,
 
     .dump_cpu_state = null_dump_pcpu,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index e0e350bdf3..492d8f6d2b 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -136,7 +136,7 @@
  * RTDS_delayed_runq_add: Do we need to add this to the RunQ/DepletedQ
  * once it's done being context switching out?
  * + Set when scheduling out in rt_schedule() if prev is runable
- * + Set in rt_vcpu_wake if it finds RTDS_scheduled set
+ * + Set in rt_unit_wake if it finds RTDS_scheduled set
  * + Read in rt_context_saved(). If set, it adds prev to the Runqueue/DepletedQ
  *   and clears the bit.
  */
@@ -636,8 +636,9 @@ replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
  * and available cpus
 */
 static int
-rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+rt_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     cpumask_t cpus;
     cpumask_t *online;
     int cpu;
@@ -837,8 +838,9 @@ rt_free_domdata(const struct scheduler *ops, void *data)
 }
 
 static void *
-rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+rt_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, void *dd)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu *svc;
 
    /* Allocate per-VCPU info */
@@ -865,7 +867,7 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
 }
 
 static void
-rt_free_vdata(const struct scheduler *ops, void *priv)
+rt_free_udata(const struct scheduler *ops, void *priv)
 {
     struct rt_vcpu *svc = priv;
 
@@ -880,8 +882,9 @@ rt_free_vdata(const struct scheduler *ops, void *priv)
 * dest. cpupool.
 */
 static void
-rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu *svc = rt_vcpu(vc);
     s_time_t now;
     spinlock_t *lock;
@@ -889,7 +892,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     BUG_ON( is_idle_vcpu(vc) );
 
     /* This is safe because vc isn't yet being scheduled */
-    vc->processor = rt_cpu_pick(ops, vc);
+    vc->processor = rt_cpu_pick(ops, unit);
 
     lock = vcpu_schedule_lock_irq(vc);
 
@@ -913,8 +916,9 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 * Remove rt_vcpu svc from the old scheduler in source cpupool.
 */
 static void
-rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu * const svc = rt_vcpu(vc);
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;
@@ -1133,8 +1137,9 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
 * The lock is already grabbed in schedule.c, no need to lock here
 */
 static void
-rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu * const svc = rt_vcpu(vc);
 
     BUG_ON( is_idle_vcpu(vc) );
@@ -1248,8 +1253,9 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
 * TODO: what if these two vcpus belongs to the same domain?
 */
 static void
-rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu * const svc = rt_vcpu(vc);
     s_time_t now;
     bool_t missed;
@@ -1318,8 +1324,9 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
 * and then pick the highest priority vcpu from runq to run
 */
 static void
-rt_context_saved(const struct scheduler *ops, struct vcpu *vc)
+rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
+    struct vcpu *vc = unit->vcpu_list;
     struct rt_vcpu *svc = rt_vcpu(vc);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
 
@@ -1546,17 +1553,17 @@ static const struct scheduler sched_rtds_def = {
     .deinit_pdata   = rt_deinit_pdata,
     .alloc_domdata  = rt_alloc_domdata,
     .free_domdata   = rt_free_domdata,
-    .alloc_vdata    = rt_alloc_vdata,
-    .free_vdata     = rt_free_vdata,
-    .insert_vcpu    = rt_vcpu_insert,
-    .remove_vcpu    = rt_vcpu_remove,
+    .alloc_udata    = rt_alloc_udata,
+    .free_udata     = rt_free_udata,
+    .insert_unit    = rt_unit_insert,
+    .remove_unit    = rt_unit_remove,
 
     .adjust         = rt_dom_cntl,
 
     .pick_cpu       = rt_cpu_pick,
     .do_schedule    = rt_schedule,
-    .sleep          = rt_vcpu_sleep,
-    .wake           = rt_vcpu_wake,
+    .sleep          = rt_unit_sleep,
+    .wake           = rt_unit_wake,
     .context_saved  = rt_context_saved,
 };
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 13c17fe944..1e9f5d5d5b 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -87,13 +87,13 @@ sched_idle_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 }
 
 static int
-sched_idle_cpu_pick(const struct scheduler *ops, struct vcpu *v)
+sched_idle_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
-    return v->processor;
+    return unit->vcpu_list->processor;
 }
 
 static void *
-sched_idle_alloc_vdata(const struct scheduler *ops, struct vcpu *v,
+sched_idle_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
                        void *dd)
 {
     /* Any non-NULL pointer is fine here. */
@@ -101,7 +101,7 @@ sched_idle_alloc_vdata(const struct scheduler *ops, struct vcpu *v,
 }
 
 static void
-sched_idle_free_vdata(const struct scheduler *ops, void *priv)
+sched_idle_free_udata(const struct scheduler *ops, void *priv)
 {
 }
 
@@ -124,8 +124,8 @@ static struct scheduler sched_idle_ops = {
     .pick_cpu       = sched_idle_cpu_pick,
     .do_schedule    = sched_idle_schedule,
 
-    .alloc_vdata    = sched_idle_alloc_vdata,
-    .free_vdata     = sched_idle_free_vdata,
+    .alloc_udata    = sched_idle_alloc_udata,
+    .free_udata     = sched_idle_free_udata,
     .switch_sched   = sched_idle_switch_sched,
 };
 
@@ -308,9 +308,16 @@ static void sched_spin_unlock_double(spinlock_t *lock1, spinlock_t *lock2,
 int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 {
     struct domain *d = v->domain;
+    struct sched_unit *unit;
 
     v->processor = processor;
 
+    if ( (unit = xzalloc(struct sched_unit)) == NULL )
+        return 1;
+    unit->vcpu_list = v;
+    unit->unit_id = v->vcpu_id;
+    unit->domain = d;
+
     /* Initialise the per-vcpu timers. */
     spin_lock_init(&v->periodic_timer_lock);
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
@@ -320,9 +327,14 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     init_timer(&v->poll_timer, poll_timer_fn,
                v, v->processor);
 
-    v->sched_priv = sched_alloc_vdata(dom_scheduler(d), v, d->sched_priv);
+    v->sched_priv = sched_alloc_udata(dom_scheduler(d), unit, d->sched_priv);
     if ( v->sched_priv == NULL )
+    {
+        xfree(unit);
         return 1;
+    }
+
+    v->sched_unit = unit;
 
     /*
      * Initialize affinity settings. The idler, and potentially
@@ -341,7 +353,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     }
     else
     {
-        sched_insert_vcpu(dom_scheduler(d), v);
+        sched_insert_unit(dom_scheduler(d), unit);
     }
 
     return 0;
@@ -382,11 +394,12 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     for_each_vcpu ( d, v )
     {
-        vcpu_priv[v->vcpu_id] = sched_alloc_vdata(c->sched, v, domdata);
+        vcpu_priv[v->vcpu_id] = sched_alloc_udata(c->sched, v->sched_unit,
+                                                  domdata);
         if ( vcpu_priv[v->vcpu_id] == NULL )
         {
             for_each_vcpu ( d, v )
-                sched_free_vdata(c->sched, vcpu_priv[v->vcpu_id]);
+                sched_free_udata(c->sched, vcpu_priv[v->vcpu_id]);
             xfree(vcpu_priv);
             sched_free_domdata(c->sched, domdata);
             return -ENOMEM;
@@ -400,7 +413,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     for_each_vcpu ( d, v )
     {
-        sched_remove_vcpu(old_ops, v);
+        sched_remove_unit(old_ops, v->sched_unit);
     }
 
     d->cpupool = c;
@@ -435,9 +448,9 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         new_p = cpumask_cycle(new_p, c->cpu_valid);
 
-        sched_insert_vcpu(c->sched, v);
+        sched_insert_unit(c->sched, v->sched_unit);
 
-        sched_free_vdata(old_ops, vcpudata);
+        sched_free_udata(old_ops, vcpudata);
     }
 
     domain_update_node_affinity(d);
@@ -453,13 +466,17 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
 void sched_destroy_vcpu(struct vcpu *v)
 {
+    struct sched_unit *unit = v->sched_unit;
+
     kill_timer(&v->periodic_timer);
     kill_timer(&v->singleshot_timer);
     kill_timer(&v->poll_timer);
     if ( test_and_clear_bool(v->is_urgent) )
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
-    sched_remove_vcpu(vcpu_scheduler(v), v);
-    sched_free_vdata(vcpu_scheduler(v), v->sched_priv);
+    sched_remove_unit(vcpu_scheduler(v), unit);
+    sched_free_udata(vcpu_scheduler(v), v->sched_priv);
+    v->sched_unit = NULL;
+    xfree(unit);
 }
 
 int sched_init_domain(struct domain *d, int poolid)
@@ -510,7 +527,7 @@ void vcpu_sleep_nosync_locked(struct vcpu *v)
         if ( v->runstate.state == RUNSTATE_runnable )
             vcpu_runstate_change(v, RUNSTATE_offline, NOW());
 
-        sched_sleep(vcpu_scheduler(v), v);
+        sched_sleep(vcpu_scheduler(v), v->sched_unit);
     }
 }
 
@@ -551,7 +568,7 @@ void vcpu_wake(struct vcpu *v)
     {
         if ( v->runstate.state >= RUNSTATE_blocked )
             vcpu_runstate_change(v, RUNSTATE_runnable, NOW());
-        sched_wake(vcpu_scheduler(v), v);
+        sched_wake(vcpu_scheduler(v), v->sched_unit);
     }
     else if ( !(v->pause_flags & VPF_blocked) )
     {
@@ -606,7 +623,7 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
      * Actual CPU switch to new CPU. This is safe because the lock
      * pointer can't change while the current lock is held.
      */
-    sched_migrate(vcpu_scheduler(v), v, new_cpu);
+    sched_migrate(vcpu_scheduler(v), v->sched_unit, new_cpu);
 }
 
 /*
@@ -684,7 +701,7 @@ static void vcpu_migrate_finish(struct vcpu *v)
             break;
 
         /* Select a new CPU. */
-        new_cpu = sched_pick_cpu(vcpu_scheduler(v), v);
+        new_cpu = sched_pick_cpu(vcpu_scheduler(v), v->sched_unit);
         if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
              cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
             break;
@@ -776,7 +793,7 @@ void restore_vcpu_affinity(struct domain *d)
 
         /* v->processor might have changed, so reacquire the lock.
*/ lock =3D vcpu_schedule_lock_irq(v); - v->processor =3D sched_pick_cpu(vcpu_scheduler(v), v); + v->processor =3D sched_pick_cpu(vcpu_scheduler(v), v->sched_unit); spin_unlock_irq(lock); =20 if ( old_cpu !=3D v->processor ) @@ -888,7 +905,7 @@ static int cpu_disable_scheduler_check(unsigned int cpu) void sched_set_affinity( struct vcpu *v, const cpumask_t *hard, const cpumask_t *soft) { - sched_adjust_affinity(dom_scheduler(v->domain), v, hard, soft); + sched_adjust_affinity(dom_scheduler(v->domain), v->sched_unit, hard, s= oft); =20 if ( hard ) cpumask_copy(v->cpu_hard_affinity, hard); @@ -1063,7 +1080,7 @@ long vcpu_yield(void) struct vcpu * v=3Dcurrent; spinlock_t *lock =3D vcpu_schedule_lock_irq(v); =20 - sched_yield(vcpu_scheduler(v), v); + sched_yield(vcpu_scheduler(v), v->sched_unit); vcpu_schedule_unlock_irq(lock, v); =20 SCHED_STAT_CRANK(vcpu_yield); @@ -1612,7 +1629,7 @@ void context_saved(struct vcpu *prev) /* Check for migration request /after/ clearing running flag. */ smp_mb(); =20 - sched_context_saved(vcpu_scheduler(prev), prev); + sched_context_saved(vcpu_scheduler(prev), prev->sched_unit); =20 vcpu_migrate_finish(prev); } @@ -1778,8 +1795,8 @@ void __init scheduler_init(void) sched_test_func(init); sched_test_func(deinit); sched_test_func(pick_cpu); - sched_test_func(alloc_vdata); - sched_test_func(free_vdata); + sched_test_func(alloc_udata); + sched_test_func(free_udata); sched_test_func(switch_sched); sched_test_func(do_schedule); =20 @@ -1888,7 +1905,8 @@ int schedule_cpu_switch(unsigned int cpu, struct cpup= ool *c) ppriv =3D sched_alloc_pdata(new_ops, cpu); if ( IS_ERR(ppriv) ) return PTR_ERR(ppriv); - vpriv =3D sched_alloc_vdata(new_ops, idle, idle->domain->sched_priv); + vpriv =3D sched_alloc_udata(new_ops, idle->sched_unit, + idle->domain->sched_priv); if ( vpriv =3D=3D NULL ) { sched_free_pdata(new_ops, ppriv, cpu); @@ -1933,7 +1951,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpup= ool *c) =20 sched_deinit_pdata(old_ops, 
ppriv_old, cpu); =20 - sched_free_vdata(old_ops, vpriv_old); + sched_free_udata(old_ops, vpriv_old); sched_free_pdata(old_ops, ppriv_old, cpu); =20 per_cpu(cpupool, cpu) =3D c; diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h index dc255b064b..a10f278ba3 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/include/xen/sched-if.h @@ -140,9 +140,9 @@ struct scheduler { int (*init) (struct scheduler *); void (*deinit) (struct scheduler *); =20 - void (*free_vdata) (const struct scheduler *, void *); - void * (*alloc_vdata) (const struct scheduler *, struct vcpu = *, - void *); + void (*free_udata) (const struct scheduler *, void *); + void * (*alloc_udata) (const struct scheduler *, + struct sched_unit *, void *); void (*free_pdata) (const struct scheduler *, void *, int); void * (*alloc_pdata) (const struct scheduler *, int); void (*init_pdata) (const struct scheduler *, void *, int); @@ -156,24 +156,32 @@ struct scheduler { spinlock_t * (*switch_sched) (struct scheduler *, unsigned int, void *, void *); =20 - /* Activate / deactivate vcpus in a cpu pool */ - void (*insert_vcpu) (const struct scheduler *, struct vcpu = *); - void (*remove_vcpu) (const struct scheduler *, struct vcpu = *); - - void (*sleep) (const struct scheduler *, struct vcpu = *); - void (*wake) (const struct scheduler *, struct vcpu = *); - void (*yield) (const struct scheduler *, struct vcpu = *); - void (*context_saved) (const struct scheduler *, struct vcpu = *); + /* Activate / deactivate units in a cpu pool */ + void (*insert_unit) (const struct scheduler *, + struct sched_unit *); + void (*remove_unit) (const struct scheduler *, + struct sched_unit *); + + void (*sleep) (const struct scheduler *, + struct sched_unit *); + void (*wake) (const struct scheduler *, + struct sched_unit *); + void (*yield) (const struct scheduler *, + struct sched_unit *); + void (*context_saved) (const struct scheduler *, + struct sched_unit *); =20 struct task_slice (*do_schedule) (const struct 
scheduler *, s_time_t, bool_t tasklet_work_scheduled); =20 - int (*pick_cpu) (const struct scheduler *, struct vcpu = *); - void (*migrate) (const struct scheduler *, struct vcpu = *, - unsigned int); + int (*pick_cpu) (const struct scheduler *, + const struct sched_unit *); + void (*migrate) (const struct scheduler *, + struct sched_unit *, unsigned int); int (*adjust) (const struct scheduler *, struct domai= n *, struct xen_domctl_scheduler_op *); - void (*adjust_affinity)(const struct scheduler *, struct vcpu = *, + void (*adjust_affinity)(const struct scheduler *, + struct sched_unit *, const struct cpumask *, const struct cpumask *); int (*adjust_global) (const struct scheduler *, @@ -267,75 +275,81 @@ static inline void sched_deinit_pdata(const struct sc= heduler *s, void *data, s->deinit_pdata(s, data, cpu); } =20 -static inline void *sched_alloc_vdata(const struct scheduler *s, struct vc= pu *v, - void *dom_data) +static inline void *sched_alloc_udata(const struct scheduler *s, + struct sched_unit *unit, void *dom_d= ata) { - return s->alloc_vdata(s, v, dom_data); + return s->alloc_udata(s, unit, dom_data); } =20 -static inline void sched_free_vdata(const struct scheduler *s, void *data) +static inline void sched_free_udata(const struct scheduler *s, void *data) { - s->free_vdata(s, data); + s->free_udata(s, data); } =20 -static inline void sched_insert_vcpu(const struct scheduler *s, struct vcp= u *v) +static inline void sched_insert_unit(const struct scheduler *s, + struct sched_unit *unit) { - if ( s->insert_vcpu ) - s->insert_vcpu(s, v); + if ( s->insert_unit ) + s->insert_unit(s, unit); } =20 -static inline void sched_remove_vcpu(const struct scheduler *s, struct vcp= u *v) +static inline void sched_remove_unit(const struct scheduler *s, + struct sched_unit *unit) { - if ( s->remove_vcpu ) - s->remove_vcpu(s, v); + if ( s->remove_unit ) + s->remove_unit(s, unit); } =20 -static inline void sched_sleep(const struct scheduler *s, struct vcpu *v) +static 
inline void sched_sleep(const struct scheduler *s, + struct sched_unit *unit) { if ( s->sleep ) - s->sleep(s, v); + s->sleep(s, unit); } =20 -static inline void sched_wake(const struct scheduler *s, struct vcpu *v) +static inline void sched_wake(const struct scheduler *s, + struct sched_unit *unit) { if ( s->wake ) - s->wake(s, v); + s->wake(s, unit); } =20 -static inline void sched_yield(const struct scheduler *s, struct vcpu *v) +static inline void sched_yield(const struct scheduler *s, + struct sched_unit *unit) { if ( s->yield ) - s->yield(s, v); + s->yield(s, unit); } =20 static inline void sched_context_saved(const struct scheduler *s, - struct vcpu *v) + struct sched_unit *unit) { if ( s->context_saved ) - s->context_saved(s, v); + s->context_saved(s, unit); } =20 -static inline void sched_migrate(const struct scheduler *s, struct vcpu *v, - unsigned int cpu) +static inline void sched_migrate(const struct scheduler *s, + struct sched_unit *unit, unsigned int cpu) { if ( s->migrate ) - s->migrate(s, v, cpu); + s->migrate(s, unit, cpu); else - v->processor =3D cpu; + unit->vcpu_list->processor =3D cpu; } =20 -static inline int sched_pick_cpu(const struct scheduler *s, struct vcpu *v) +static inline int sched_pick_cpu(const struct scheduler *s, + const struct sched_unit *unit) { - return s->pick_cpu(s, v); + return s->pick_cpu(s, unit); } =20 static inline void sched_adjust_affinity(const struct scheduler *s, - struct vcpu *v, + struct sched_unit *unit, const cpumask_t *hard, const cpumask_t *soft) { if ( s->adjust_affinity ) - s->adjust_affinity(s, v, hard, soft); + s->adjust_affinity(s, unit, hard, soft); } =20 static inline int sched_adjust_dom(const struct scheduler *s, struct domai= n *d, diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 6f2ee4c2ea..ebe95b59a4 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -161,6 +161,7 @@ struct vcpu =20 struct timer poll_timer; /* timeout for SCHEDOP_poll */ =20 + struct 
sched_unit *sched_unit; void *sched_priv; /* scheduler-specific data */ =20 struct vcpu_runstate_info runstate; @@ -273,6 +274,12 @@ struct vcpu struct arch_vcpu arch; }; =20 +struct sched_unit { + struct domain *domain; + struct vcpu *vcpu_list; + unsigned int unit_id; +}; + /* Per-domain lock can be recursively acquired in fault handlers. */ #define domain_lock(d) spin_lock_recursive(&(d)->domain_lock) #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock) --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
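For readers following the series outside the Xen tree, the shape of the refactoring can be sketched in plain standalone C: a `sched_unit` (for now) wraps exactly one vcpu, generic code allocates the unit at vcpu creation time, and the fallback path of `sched_migrate()` reaches through `unit->vcpu_list` to move the single vcpu. This is a hedged, simplified sketch using stand-in types, not the Xen code itself; `unit_create` and `unit_migrate_fallback` are hypothetical names, and `calloc` stands in for Xen's `xzalloc`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for struct vcpu, reduced to the fields this sketch needs. */
struct vcpu {
    unsigned int vcpu_id;
    unsigned int processor;
};

/* Mirrors the struct added to xen/include/xen/sched.h: one vcpu per unit
 * for now, so vcpu_list is a plain pointer. */
struct sched_unit {
    void         *domain;     /* owning domain, opaque in this sketch */
    struct vcpu  *vcpu_list;
    unsigned int  unit_id;
};

/* Hypothetical analogue of the sched_init_vcpu() hunk: allocate a unit
 * wrapping v, with the unit id taken from the vcpu id. */
static struct sched_unit *unit_create(struct vcpu *v)
{
    struct sched_unit *unit = calloc(1, sizeof(*unit));

    if ( unit == NULL )
        return NULL;
    unit->vcpu_list = v;
    unit->unit_id = v->vcpu_id;
    return unit;
}

/* Analogue of the sched_migrate() fallback: when a scheduler supplies no
 * migrate hook, the unit's single vcpu is simply moved to the new cpu. */
static void unit_migrate_fallback(struct sched_unit *unit, unsigned int cpu)
{
    unit->vcpu_list->processor = cpu;
}
```

The indirection is what later patches in the series build on: once a unit may carry several vcpus (core scheduling), only the scheduler-facing side of these helpers has to change, not the callers.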