From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 6 May 2019 08:56:31 +0200
Message-Id: <20190506065644.7415-33-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 32/45] xen/sched: move struct task_slice into struct sched_item
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich

In order to prepare for multiple vcpus per schedule item, move struct
task_slice in schedule() from the local stack into struct sched_item
of the currently running item. To make access easier for the single
schedulers, add the pointer of the currently running item as a
parameter of do_schedule().

While at it, switch the tasklet_work_scheduled parameter of
do_schedule() from bool_t to bool.
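The shape of the change can be sketched in isolation: instead of each scheduler returning a struct task_slice by value, do_schedule() now writes its decision into fields of the running item's struct sched_item. A minimal, self-contained mock of the new contract (field names mirror the patch; everything else is simplified and hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef long long s_time_t;

/*
 * Simplified stand-in for Xen's struct sched_item: only the fields
 * this patch adds are modelled here.
 */
struct sched_item {
    struct sched_item *next_task;  /* item chosen to run next */
    s_time_t           next_time;  /* time slice; negative = no limit */
    bool               migrated;   /* item moved to another cpu */
};

/*
 * Mock of the new do_schedule() calling convention: the scheduling
 * decision is stored in the currently running item (prev) instead of
 * being returned by value in an on-stack struct task_slice.
 */
static void mock_do_schedule(struct sched_item *prev,
                             struct sched_item *chosen, s_time_t slice)
{
    prev->next_task = chosen;
    prev->next_time = slice;
    chosen->migrated = false;
}
```

This is safe without a return-by-value copy because, as the message below notes, these fields are only ever written with the local schedule lock held.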
As struct task_slice is only ever modified with the local schedule
lock held it is safe to directly set the different items in struct
sched_item instead of using an on-stack copy for returning the data.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_arinc653.c | 20 +++++++-------------
 xen/common/sched_credit.c   | 25 +++++++++++--------------
 xen/common/sched_credit2.c  | 21 +++++++++------------
 xen/common/sched_null.c     | 26 ++++++++++++--------------
 xen/common/sched_rt.c       | 22 +++++++++++-----------
 xen/common/schedule.c       | 21 ++++++++++-----------
 xen/include/xen/sched-if.h  | 11 +++--------
 xen/include/xen/sched.h     |  6 ++++++
 8 files changed, 69 insertions(+), 83 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 3919c0a3e9..e98e98116b 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -497,18 +497,14 @@ a653sched_item_wake(const struct scheduler *ops, struct sched_item *item)
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param now       Current time
- *
- * @return          Address of the ITEM structure scheduled to be run next
- *                  Amount of time to execute the returned ITEM
- *                  Flag for whether the ITEM was migrated
  */
-static struct task_slice
+static void
 a653sched_do_schedule(
     const struct scheduler *ops,
+    struct sched_item *prev,
     s_time_t now,
-    bool_t tasklet_work_scheduled)
+    bool tasklet_work_scheduled)
 {
-    struct task_slice ret;                      /* hold the chosen domain */
     struct sched_item *new_task = NULL;
     static unsigned int sched_index = 0;
     static s_time_t next_switch_time;
@@ -586,13 +582,11 @@ a653sched_do_schedule(
      * Return the amount of time the next domain has to run and the address
      * of the selected task's ITEM structure.
      */
-    ret.time = next_switch_time - now;
-    ret.task = new_task;
-    ret.migrated = 0;
-
-    BUG_ON(ret.time <= 0);
+    prev->next_time = next_switch_time - now;
+    prev->next_task = new_task;
+    new_task->migrated = false;
 
-    return ret;
+    BUG_ON(prev->next_time <= 0);
 }
 
 /**
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 4734f52fc7..064f88ab23 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1689,7 +1689,7 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 
 static struct csched_item *
 csched_load_balance(struct csched_private *prv, int cpu,
-                    struct csched_item *snext, bool_t *stolen)
+                    struct csched_item *snext, bool *stolen)
 {
     struct cpupool *c = per_cpu(cpupool, cpu);
     struct csched_item *speer;
@@ -1805,7 +1805,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
             /* As soon as one item is found, balancing ends */
             if ( speer != NULL )
             {
-                *stolen = 1;
+                *stolen = true;
                 /*
                  * Next time we'll look for work to steal on this node, we
                  * will start from the next pCPU, with respect to this one,
@@ -1835,19 +1835,18 @@ csched_load_balance(struct csched_private *prv, int cpu,
  * This function is in the critical path. It is designed to be simple and
  * fast for the common case.
  */
-static struct task_slice
-csched_schedule(
-    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
+static void csched_schedule(
+    const struct scheduler *ops, struct sched_item *item, s_time_t now,
+    bool tasklet_work_scheduled)
 {
     const unsigned int cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct list_head * const runq = RUNQ(sched_cpu);
-    struct sched_item *item = current->sched_item;
     struct csched_item * const scurr = CSCHED_ITEM(item);
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_item *snext;
-    struct task_slice ret;
     s_time_t runtime, tslice;
+    bool migrated = false;
 
     SCHED_STAT_CRANK(schedule);
     CSCHED_ITEM_CHECK(item);
@@ -1937,7 +1936,6 @@ csched_schedule(
                       (unsigned char *)&d);
         }
 
-        ret.migrated = 0;
         goto out;
     }
     tslice = prv->tslice;
@@ -1955,7 +1953,6 @@ csched_schedule(
     }
 
     snext = __runq_elem(runq->next);
-    ret.migrated = 0;
 
     /* Tasklet work (which runs in idle ITEM context) overrides all else. */
     if ( tasklet_work_scheduled )
@@ -1981,7 +1978,7 @@ csched_schedule(
     if ( snext->pri > CSCHED_PRI_TS_OVER )
         __runq_remove(snext);
     else
-        snext = csched_load_balance(prv, sched_cpu, snext, &ret.migrated);
+        snext = csched_load_balance(prv, sched_cpu, snext, &migrated);
 
     /*
      * Update idlers mask if necessary. When we're idling, other CPUs
@@ -2004,12 +2001,12 @@ out:
     /*
     * Return task to run next...
      */
-    ret.time = (is_idle_item(snext->item) ?
+    item->next_time = (is_idle_item(snext->item) ?
                 -1 : tslice);
-    ret.task = snext->item;
+    item->next_task = snext->item;
+    snext->item->migrated = migrated;
 
-    CSCHED_ITEM_CHECK(ret.task);
-    return ret;
+    CSCHED_ITEM_CHECK(item->next_task);
 }
 
 static void
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index d5cb8c0200..f1074be25d 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3443,19 +3443,18 @@ runq_candidate(struct csched2_runqueue_data *rqd,
  * This function is in the critical path. It is designed to be simple and
  * fast for the common case.
  */
-static struct task_slice
-csched2_schedule(
-    const struct scheduler *ops, s_time_t now, bool tasklet_work_scheduled)
+static void csched2_schedule(
+    const struct scheduler *ops, struct sched_item *curritem, s_time_t now,
+    bool tasklet_work_scheduled)
 {
     const unsigned int cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct csched2_runqueue_data *rqd;
-    struct sched_item *curritem = current->sched_item;
     struct csched2_item * const scurr = csched2_item(curritem);
     struct csched2_item *snext = NULL;
     unsigned int skipped_items = 0;
-    struct task_slice ret;
     bool tickled;
+    bool migrated = false;
 
     SCHED_STAT_CRANK(schedule);
     CSCHED2_ITEM_CHECK(curritem);
@@ -3540,8 +3539,6 @@ csched2_schedule(
          && item_runnable(curritem) )
         __set_bit(__CSFLAG_delayed_runq_add, &scurr->flags);
 
-    ret.migrated = 0;
-
     /* Accounting for non-idle tasks */
     if ( !is_idle_item(snext->item) )
     {
@@ -3591,7 +3588,7 @@ csched2_schedule(
                 snext->credit += CSCHED2_MIGRATE_COMPENSATION;
             sched_set_res(snext->item, per_cpu(sched_res, sched_cpu));
             SCHED_STAT_CRANK(migrated);
-            ret.migrated = 1;
+            migrated = true;
         }
     }
     else
@@ -3622,11 +3619,11 @@ csched2_schedule(
     /*
      * Return task to run next...
      */
-    ret.time = csched2_runtime(ops, sched_cpu, snext, now);
-    ret.task = snext->item;
+    curritem->next_time = csched2_runtime(ops, sched_cpu, snext, now);
+    curritem->next_task = snext->item;
+    snext->item->migrated = migrated;
 
-    CSCHED2_ITEM_CHECK(ret.task);
-    return ret;
+    CSCHED2_ITEM_CHECK(curritem->next_task);
 }
 
 static void
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 34ce7a05d3..1af396dcdb 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -703,16 +703,14 @@ static inline void null_item_check(struct sched_item *item)
  *  - the item assigned to the pCPU, if there's one and it can run;
  *  - the idle item, otherwise.
  */
-static struct task_slice null_schedule(const struct scheduler *ops,
-                                       s_time_t now,
-                                       bool_t tasklet_work_scheduled)
+static void null_schedule(const struct scheduler *ops, struct sched_item *prev,
+                          s_time_t now, bool tasklet_work_scheduled)
 {
     unsigned int bs;
     const unsigned int cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct null_private *prv = null_priv(ops);
     struct null_item *wvc;
-    struct task_slice ret;
 
     SCHED_STAT_CRANK(schedule);
     NULL_ITEM_CHECK(current->sched_item);
@@ -740,19 +738,18 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_SNULL_TASKLET, 1, 0, NULL);
-        ret.task = sched_idle_item(sched_cpu);
+        prev->next_task = sched_idle_item(sched_cpu);
     }
     else
-        ret.task = per_cpu(npc, sched_cpu).item;
-    ret.migrated = 0;
-    ret.time = -1;
+        prev->next_task = per_cpu(npc, sched_cpu).item;
+    prev->next_time = -1;
 
     /*
      * We may be new in the cpupool, or just coming back online. In which
      * case, there may be items in the waitqueue that we can assign to us
      * and run.
      */
-    if ( unlikely(ret.task == NULL) )
+    if ( unlikely(prev->next_task == NULL) )
     {
         spin_lock(&prv->waitq_lock);
 
@@ -778,7 +775,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
             {
                 item_assign(prv, wvc->item, sched_cpu);
                 list_del_init(&wvc->waitq_elem);
-                ret.task = wvc->item;
+                prev->next_task = wvc->item;
                 goto unlock;
             }
         }
@@ -787,11 +784,12 @@ static struct task_slice null_schedule(const struct scheduler *ops,
         spin_unlock(&prv->waitq_lock);
     }
 
-    if ( unlikely(ret.task == NULL || !item_runnable(ret.task)) )
-        ret.task = sched_idle_item(sched_cpu);
+    if ( unlikely(prev->next_task == NULL || !item_runnable(prev->next_task)) )
+        prev->next_task = sched_idle_item(sched_cpu);
 
-    NULL_ITEM_CHECK(ret.task);
-    return ret;
+    NULL_ITEM_CHECK(prev->next_task);
+
+    prev->next_task->migrated = false;
 }
 
 static inline void dump_item(struct null_private *prv, struct null_item *nvc)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 2366e33beb..c5e8b559f3 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1062,16 +1062,16 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
  * schedule function for rt scheduler.
  * The lock is already grabbed in schedule.c, no need to lock here
  */
-static struct task_slice
-rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
+static void
+rt_schedule(const struct scheduler *ops, struct sched_item *curritem,
+            s_time_t now, bool tasklet_work_scheduled)
 {
     const unsigned int cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct rt_private *prv = rt_priv(ops);
-    struct rt_item *const scurr = rt_item(current->sched_item);
+    struct rt_item *const scurr = rt_item(curritem);
     struct rt_item *snext = NULL;
-    struct task_slice ret = { .migrated = 0 };
-    struct sched_item *curritem = current->sched_item;
+    bool migrated = false;
 
     /* TRACE */
     {
@@ -1119,7 +1119,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         __set_bit(__RTDS_delayed_runq_add, &scurr->flags);
 
     snext->last_start = now;
-    ret.time = -1; /* if an idle item is picked */
+    curritem->next_time = -1; /* if an idle item is picked */
     if ( !is_idle_item(snext->item) )
     {
         if ( snext != scurr )
@@ -1130,13 +1130,13 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         if ( sched_item_cpu(snext->item) != sched_cpu )
         {
             sched_set_res(snext->item, per_cpu(sched_res, sched_cpu));
-            ret.migrated = 1;
+            migrated = true;
         }
-        ret.time = snext->cur_budget; /* invoke the scheduler next time */
+        /* Invoke the scheduler next time. */
+        curritem->next_time = snext->cur_budget;
     }
-    ret.task = snext->item;
-
-    return ret;
+    curritem->next_task = snext->item;
+    snext->item->migrated = migrated;
 }
 
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9f9d6eb95b..b5fb48c553 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1575,10 +1575,9 @@ static void schedule(void)
     s_time_t              now;
     struct scheduler     *sched;
     unsigned long        *tasklet_work = &this_cpu(tasklet_work_to_do);
-    bool_t                tasklet_work_scheduled = 0;
+    bool                  tasklet_work_scheduled = false;
     struct sched_resource *sd;
     spinlock_t           *lock;
-    struct task_slice     next_slice;
     int cpu = smp_processor_id();
 
     ASSERT_NOT_IN_ATOMIC();
@@ -1594,12 +1593,12 @@ static void schedule(void)
         set_bit(_TASKLET_scheduled, tasklet_work);
         /* fallthrough */
     case TASKLET_enqueued|TASKLET_scheduled:
-        tasklet_work_scheduled = 1;
+        tasklet_work_scheduled = true;
         break;
     case TASKLET_scheduled:
         clear_bit(_TASKLET_scheduled, tasklet_work);
     case 0:
-        /*tasklet_work_scheduled = 0;*/
+        /*tasklet_work_scheduled = false;*/
         break;
     default:
         BUG();
@@ -1613,14 +1612,14 @@ static void schedule(void)
 
     /* get policy-specific decision on scheduling... */
     sched = this_cpu(scheduler);
-    next_slice = sched->do_schedule(sched, now, tasklet_work_scheduled);
+    sched->do_schedule(sched, prev, now, tasklet_work_scheduled);
 
-    next = next_slice.task;
+    next = prev->next_task;
 
     sd->curr = next;
 
-    if ( next_slice.time >= 0 ) /* -ve means no limit */
-        set_timer(&sd->s_timer, now + next_slice.time);
+    if ( prev->next_time >= 0 ) /* -ve means no limit */
+        set_timer(&sd->s_timer, now + prev->next_time);
 
     if ( unlikely(prev == next) )
     {
@@ -1628,7 +1627,7 @@ static void schedule(void)
         TRACE_4D(TRC_SCHED_SWITCH_INFCONT,
                  next->domain->domain_id, next->item_id,
                  now - prev->state_entry_time,
-                 next_slice.time);
+                 prev->next_time);
         trace_continue_running(next->vcpu);
         return continue_running(prev->vcpu);
     }
@@ -1640,7 +1639,7 @@ static void schedule(void)
              next->domain->domain_id, next->item_id,
              (next->vcpu->runstate.state == RUNSTATE_runnable) ?
              (now - next->state_entry_time) : 0,
-             next_slice.time);
+             prev->next_time);
 
     ASSERT(prev->vcpu->runstate.state == RUNSTATE_running);
 
@@ -1670,7 +1669,7 @@ static void schedule(void)
 
     stop_timer(&prev->vcpu->periodic_timer);
 
-    if ( next_slice.migrated )
+    if ( next->migrated )
         vcpu_move_irqs(next->vcpu);
 
     vcpu_periodic_timer_work(next->vcpu);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 2506538649..09544e05c0 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -180,12 +180,6 @@ static inline spinlock_t *pcpu_schedule_trylock(unsigned int cpu)
     return NULL;
 }
 
-struct task_slice {
-    struct sched_item *task;
-    s_time_t           time;
-    bool_t             migrated;
-};
-
 struct scheduler {
     char *name;             /* full name for this scheduler         */
     char *opt_name;         /* option name for this scheduler       */
@@ -228,8 +222,9 @@ struct scheduler {
     void         (*context_saved)  (const struct scheduler *,
                                     struct sched_item *);
 
-    struct task_slice (*do_schedule) (const struct scheduler *, s_time_t,
-                                      bool_t tasklet_work_scheduled);
+    void         (*do_schedule)    (const struct scheduler *,
+                                    struct sched_item *, s_time_t,
+                                    bool tasklet_work_scheduled);
 
     struct sched_resource * (*pick_resource) (const struct scheduler *,
                                               struct sched_item *);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cbd97f34c7..8bde790d27 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -281,6 +281,8 @@ struct sched_item {
     bool                   affinity_broken;
     /* Does soft affinity actually play a role (given hard affinity)? */
     bool                   soft_aff_effective;
+    /* Item has been migrated to other cpu(s). */
+    bool                   migrated;
     /* Bitmask of CPUs on which this VCPU may run. */
     cpumask_var_t          cpu_hard_affinity;
     /* Used to change affinity temporarily. */
@@ -289,6 +291,10 @@ struct sched_item {
     cpumask_var_t          cpu_hard_affinity_saved;
     /* Bitmask of CPUs on which this VCPU prefers to run. */
     cpumask_var_t          cpu_soft_affinity;
+
+    /* Next item to run. */
+    struct sched_item      *next_task;
+    s_time_t               next_time;
 };
 
 #define for_each_sched_item(d, e) \
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
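For readers following the schedule() hunk above: after do_schedule() returns, the common code now reads the decision back out of prev. A small mock of that consumer side (simplified, hypothetical names except the next_task/next_time/migrated fields from the patch) shows the "negative next_time means no slice timer" convention:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef long long s_time_t;

/* Simplified stand-in for the patched struct sched_item. */
struct sched_item {
    struct sched_item *next_task;  /* item chosen to run next */
    s_time_t           next_time;  /* time slice; negative = no limit */
    bool               migrated;   /* item moved to another cpu */
};

/*
 * Mock of the tail of schedule(): pick up the item chosen by
 * do_schedule() from prev and report whether a slice timer would be
 * armed (only for non-negative next_time, as in the patch).
 */
static struct sched_item *
mock_schedule_tail(struct sched_item *prev, bool *timer_armed)
{
    *timer_armed = (prev->next_time >= 0);
    return prev->next_task;
}
```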