From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Fri, 27 Sep 2019 09:00:31 +0200
Message-Id: <20190927070050.12405-28-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 27/46] xen/sched: move struct task_slice into struct sched_unit
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich

In order to prepare for multiple vcpus per schedule unit, move struct
task_slice in schedule() from the local stack into struct
sched_unit of the currently running unit. To make access easier for the
individual schedulers, add a pointer to the currently running unit as a
parameter of do_schedule().

While at it, switch the tasklet_work_scheduled parameter of
do_schedule() from bool_t to bool.

As struct task_slice is only ever modified with the local schedule lock
held, it is safe to set the fields directly in struct sched_unit
instead of using an on-stack copy for returning the data.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- re-add accidentally dropped call of continue_running() (Dario Faggioli)
---
 xen/common/sched_arinc653.c | 20 +++++++-------------
 xen/common/sched_credit.c   | 25 +++++++++++--------------
 xen/common/sched_credit2.c  | 21 +++++++++------------
 xen/common/sched_null.c     | 29 ++++++++++++++---------------
 xen/common/sched_rt.c       | 22 +++++++++++-----------
 xen/common/schedule.c       | 30 ++++++++++++++----------------
 xen/include/xen/sched-if.h  | 11 +++--------
 xen/include/xen/sched.h     |  6 ++++++
 8 files changed, 75 insertions(+), 89 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 2bc187c92b..fcf81db19a 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -503,18 +503,14 @@ a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param now       Current time
- *
- * @return          Address of the UNIT structure scheduled to be run next
- *                  Amount of time to execute the returned UNIT
- *                  Flag for whether the UNIT was migrated
  */
-static struct task_slice
+static void
 a653sched_do_schedule(
     const struct scheduler *ops,
+    struct sched_unit *prev,
     s_time_t now,
-    bool_t tasklet_work_scheduled)
+    bool tasklet_work_scheduled)
 {
-    struct task_slice ret;                      /* hold the chosen domain */
     struct sched_unit *new_task = NULL;
     static unsigned int sched_index = 0;
     static s_time_t next_switch_time;
@@ -592,13 +588,11 @@ a653sched_do_schedule(
      * Return the amount of time the next domain has to run and the address
      * of the selected task's UNIT structure.
      */
-    ret.time = next_switch_time - now;
-    ret.task = new_task;
-    ret.migrated = 0;
-
-    BUG_ON(ret.time <= 0);
+    prev->next_time = next_switch_time - now;
+    prev->next_task = new_task;
+    new_task->migrated = false;
 
-    return ret;
+    BUG_ON(prev->next_time <= 0);
 }
 
 /**
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 7f6ba35766..299eff21ac 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1675,7 +1675,7 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 
 static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
-                    struct csched_unit *snext, bool_t *stolen)
+                    struct csched_unit *snext, bool *stolen)
 {
     struct cpupool *c = per_cpu(cpupool, cpu);
     struct csched_unit *speer;
@@ -1791,7 +1791,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
             /* As soon as one unit is found, balancing ends */
             if ( speer != NULL )
             {
-                *stolen = 1;
+                *stolen = true;
                 /*
                  * Next time we'll look for work to steal on this node, we
                  * will start from the next pCPU, with respect to this one,
@@ -1821,19 +1821,18 @@ csched_load_balance(struct csched_private *prv, int cpu,
  * This function is in the critical path. It is designed to be simple and
  * fast for the common case.
  */
-static struct task_slice
-csched_schedule(
-    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
+static void csched_schedule(
+    const struct scheduler *ops, struct sched_unit *unit, s_time_t now,
+    bool tasklet_work_scheduled)
 {
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
     struct list_head * const runq = RUNQ(sched_cpu);
-    struct sched_unit *unit = current->sched_unit;
     struct csched_unit * const scurr = CSCHED_UNIT(unit);
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_unit *snext;
-    struct task_slice ret;
     s_time_t runtime, tslice;
+    bool migrated = false;
 
     SCHED_STAT_CRANK(schedule);
     CSCHED_UNIT_CHECK(unit);
@@ -1924,7 +1923,6 @@ csched_schedule(
                     (unsigned char *)&d);
         }
 
-        ret.migrated = 0;
         goto out;
     }
     tslice = prv->tslice;
@@ -1942,7 +1940,6 @@ csched_schedule(
     }
 
     snext = __runq_elem(runq->next);
-    ret.migrated = 0;
 
     /* Tasklet work (which runs in idle UNIT context) overrides all else. */
     if ( tasklet_work_scheduled )
@@ -1968,7 +1965,7 @@ csched_schedule(
     if ( snext->pri > CSCHED_PRI_TS_OVER )
         __runq_remove(snext);
     else
-        snext = csched_load_balance(prv, sched_cpu, snext, &ret.migrated);
+        snext = csched_load_balance(prv, sched_cpu, snext, &migrated);
 
     /*
      * Update idlers mask if necessary. When we're idling, other CPUs
@@ -1991,12 +1988,12 @@ out:
     /*
     * Return task to run next...
     */
-    ret.time = (is_idle_unit(snext->unit) ?
+    unit->next_time = (is_idle_unit(snext->unit) ?
                 -1 : tslice);
-    ret.task = snext->unit;
+    unit->next_task = snext->unit;
+    snext->unit->migrated = migrated;
 
-    CSCHED_UNIT_CHECK(ret.task);
-    return ret;
+    CSCHED_UNIT_CHECK(unit->next_task);
 }
 
 static void
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index c4c6c69a0e..87d142bbe4 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3446,19 +3446,18 @@ runq_candidate(struct csched2_runqueue_data *rqd,
  * This function is in the critical path. It is designed to be simple and
  * fast for the common case.
  */
-static struct task_slice
-csched2_schedule(
-    const struct scheduler *ops, s_time_t now, bool tasklet_work_scheduled)
+static void csched2_schedule(
+    const struct scheduler *ops, struct sched_unit *currunit, s_time_t now,
+    bool tasklet_work_scheduled)
 {
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
     struct csched2_runqueue_data *rqd;
-    struct sched_unit *currunit = current->sched_unit;
     struct csched2_unit * const scurr = csched2_unit(currunit);
     struct csched2_unit *snext = NULL;
     unsigned int skipped_units = 0;
-    struct task_slice ret;
     bool tickled;
+    bool migrated = false;
 
     SCHED_STAT_CRANK(schedule);
     CSCHED2_UNIT_CHECK(currunit);
@@ -3543,8 +3542,6 @@ csched2_schedule(
          && unit_runnable(currunit) )
         __set_bit(__CSFLAG_delayed_runq_add, &scurr->flags);
 
-    ret.migrated = 0;
-
     /* Accounting for non-idle tasks */
     if ( !is_idle_unit(snext->unit) )
     {
@@ -3594,7 +3591,7 @@ csched2_schedule(
             snext->credit += CSCHED2_MIGRATE_COMPENSATION;
             sched_set_res(snext->unit, get_sched_res(sched_cpu));
             SCHED_STAT_CRANK(migrated);
-            ret.migrated = 1;
+            migrated = true;
         }
     }
     else
@@ -3625,11 +3622,11 @@ csched2_schedule(
     /*
     * Return task to run next...
     */
-    ret.time = csched2_runtime(ops, sched_cpu, snext, now);
-    ret.task = snext->unit;
+    currunit->next_time = csched2_runtime(ops, sched_cpu, snext, now);
+    currunit->next_task = snext->unit;
+    snext->unit->migrated = migrated;
 
-    CSCHED2_UNIT_CHECK(ret.task);
-    return ret;
+    CSCHED2_UNIT_CHECK(currunit->next_task);
 }
 
 static void
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 51edc3dbb9..80a7d45935 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -779,16 +779,14 @@ static inline void null_unit_check(struct sched_unit *unit)
  *  - the unit assigned to the pCPU, if there's one and it can run;
  *  - the idle unit, otherwise.
  */
-static struct task_slice null_schedule(const struct scheduler *ops,
-                                       s_time_t now,
-                                       bool_t tasklet_work_scheduled)
+static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
+                          s_time_t now, bool tasklet_work_scheduled)
 {
     unsigned int bs;
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
     struct null_private *prv = null_priv(ops);
     struct null_unit *wvc;
-    struct task_slice ret;
 
     SCHED_STAT_CRANK(schedule);
     NULL_UNIT_CHECK(current->sched_unit);
@@ -816,19 +814,18 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_SNULL_TASKLET, 1, 0, NULL);
-        ret.task = sched_idle_unit(sched_cpu);
+        prev->next_task = sched_idle_unit(sched_cpu);
     }
     else
-        ret.task = per_cpu(npc, sched_cpu).unit;
-    ret.migrated = 0;
-    ret.time = -1;
+        prev->next_task = per_cpu(npc, sched_cpu).unit;
+    prev->next_time = -1;
 
     /*
      * We may be new in the cpupool, or just coming back online. In which
      * case, there may be units in the waitqueue that we can assign to us
      * and run.
      */
-    if ( unlikely(ret.task == NULL) )
+    if ( unlikely(prev->next_task == NULL) )
     {
         spin_lock(&prv->waitq_lock);
 
@@ -854,7 +851,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
             {
                 unit_assign(prv, wvc->unit, sched_cpu);
                 list_del_init(&wvc->waitq_elem);
-                ret.task = wvc->unit;
+                prev->next_task = wvc->unit;
                 goto unlock;
             }
         }
@@ -862,15 +859,17 @@ static struct task_slice null_schedule(const struct scheduler *ops,
  unlock:
         spin_unlock(&prv->waitq_lock);
 
-        if ( ret.task == NULL && !cpumask_test_cpu(sched_cpu, &prv->cpus_free) )
+        if ( prev->next_task == NULL &&
+             !cpumask_test_cpu(sched_cpu, &prv->cpus_free) )
            cpumask_set_cpu(sched_cpu, &prv->cpus_free);
     }
 
-    if ( unlikely(ret.task == NULL || !unit_runnable(ret.task)) )
-        ret.task = sched_idle_unit(sched_cpu);
+    if ( unlikely(prev->next_task == NULL || !unit_runnable(prev->next_task)) )
+        prev->next_task = sched_idle_unit(sched_cpu);
 
-    NULL_UNIT_CHECK(ret.task);
-    return ret;
+    NULL_UNIT_CHECK(prev->next_task);
+
+    prev->next_task->migrated = false;
 }
 
 static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 151353b9a0..cfd7d334fa 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1053,16 +1053,16 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
  * schedule function for rt scheduler.
  * The lock is already grabbed in schedule.c, no need to lock here
  */
-static struct task_slice
-rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
+static void
+rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
+            s_time_t now, bool tasklet_work_scheduled)
 {
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *const scurr = rt_unit(current->sched_unit);
+    struct rt_unit *const scurr = rt_unit(currunit);
     struct rt_unit *snext = NULL;
-    struct task_slice ret = { .migrated = 0 };
-    struct sched_unit *currunit = current->sched_unit;
+    bool migrated = false;
 
     /* TRACE */
     {
@@ -1110,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         __set_bit(__RTDS_delayed_runq_add, &scurr->flags);
 
     snext->last_start = now;
-    ret.time = -1; /* if an idle unit is picked */
+    currunit->next_time = -1; /* if an idle unit is picked */
     if ( !is_idle_unit(snext->unit) )
     {
         if ( snext != scurr )
@@ -1121,13 +1121,13 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         if ( sched_unit_master(snext->unit) != sched_cpu )
         {
             sched_set_res(snext->unit, get_sched_res(sched_cpu));
-            ret.migrated = 1;
+            migrated = true;
         }
-        ret.time = snext->cur_budget; /* invoke the scheduler next time */
+        /* Invoke the scheduler next time. */
+        currunit->next_time = snext->cur_budget;
     }
-    ret.task = snext->unit;
-
-    return ret;
+    currunit->next_task = snext->unit;
+    snext->unit->migrated = migrated;
 }
 
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 83f5b837a9..6f1a6fbd6e 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -115,15 +115,14 @@ sched_idle_free_udata(const struct scheduler *ops, void *priv)
 {
 }
 
-static struct task_slice sched_idle_schedule(
-    const struct scheduler *ops, s_time_t now,
+static void sched_idle_schedule(
+    const struct scheduler *ops, struct sched_unit *unit, s_time_t now,
     bool tasklet_work_scheduled)
 {
     const unsigned int cpu = smp_processor_id();
-    struct task_slice ret = { .time = -1 };
 
-    ret.task = sched_idle_unit(cpu);
-    return ret;
+    unit->next_time = -1;
+    unit->next_task = sched_idle_unit(cpu);
 }
 
 static struct scheduler sched_idle_ops = {
@@ -1724,10 +1723,9 @@ static void schedule(void)
     s_time_t now;
     struct scheduler *sched;
     unsigned long *tasklet_work = &this_cpu(tasklet_work_to_do);
-    bool_t tasklet_work_scheduled = 0;
+    bool tasklet_work_scheduled = false;
     struct sched_resource *sd;
     spinlock_t *lock;
-    struct task_slice next_slice;
     int cpu = smp_processor_id();
 
     ASSERT_NOT_IN_ATOMIC();
@@ -1743,12 +1741,12 @@ static void schedule(void)
         set_bit(_TASKLET_scheduled, tasklet_work);
         /* fallthrough */
     case TASKLET_enqueued|TASKLET_scheduled:
-        tasklet_work_scheduled = 1;
+        tasklet_work_scheduled = true;
         break;
     case TASKLET_scheduled:
         clear_bit(_TASKLET_scheduled, tasklet_work);
     case 0:
-        /*tasklet_work_scheduled = 0;*/
+        /*tasklet_work_scheduled = false;*/
         break;
     default:
         BUG();
@@ -1762,14 +1760,14 @@ static void schedule(void)
 
     /* get policy-specific decision on scheduling... */
     sched = this_cpu(scheduler);
-    next_slice = sched->do_schedule(sched, now, tasklet_work_scheduled);
+    sched->do_schedule(sched, prev, now, tasklet_work_scheduled);
 
-    next = next_slice.task;
+    next = prev->next_task;
 
     sd->curr = next;
 
-    if ( next_slice.time >= 0 ) /* -ve means no limit */
-        set_timer(&sd->s_timer, now + next_slice.time);
+    if ( prev->next_time >= 0 ) /* -ve means no limit */
+        set_timer(&sd->s_timer, now + prev->next_time);
 
     if ( unlikely(prev == next) )
     {
@@ -1777,7 +1775,7 @@ static void schedule(void)
         TRACE_4D(TRC_SCHED_SWITCH_INFCONT,
                  next->domain->domain_id, next->unit_id,
                  now - prev->state_entry_time,
-                 next_slice.time);
+                 prev->next_time);
         trace_continue_running(next->vcpu_list);
         return continue_running(prev->vcpu_list);
     }
@@ -1789,7 +1787,7 @@ static void schedule(void)
              next->domain->domain_id, next->unit_id,
              (next->vcpu_list->runstate.state == RUNSTATE_runnable) ?
              (now - next->state_entry_time) : 0,
-             next_slice.time);
+             prev->next_time);
 
     ASSERT(prev->vcpu_list->runstate.state == RUNSTATE_running);
 
@@ -1818,7 +1816,7 @@ static void schedule(void)
 
     stop_timer(&prev->vcpu_list->periodic_timer);
 
-    if ( next_slice.migrated )
+    if ( next->migrated )
         vcpu_move_irqs(next->vcpu_list);
 
     vcpu_periodic_timer_work(next->vcpu_list);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index d7fad0cbcc..0423be987d 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -230,12 +230,6 @@ static inline spinlock_t *pcpu_schedule_trylock(unsigned int cpu)
     return NULL;
 }
 
-struct task_slice {
-    struct sched_unit *task;
-    s_time_t time;
-    bool_t migrated;
-};
-
 struct scheduler {
     char *name;             /* full name for this scheduler     */
     char *opt_name;         /* option name for this scheduler   */
@@ -278,8 +272,9 @@ struct scheduler {
     void         (*context_saved)  (const struct scheduler *,
                                     struct sched_unit *);
 
-    struct task_slice (*do_schedule) (const struct scheduler *, s_time_t,
-                                      bool_t tasklet_work_scheduled);
+    void         (*do_schedule)    (const struct scheduler *,
+                                    struct sched_unit *, s_time_t,
+                                    bool tasklet_work_scheduled);
 
     struct sched_resource *(*pick_resource)(const struct scheduler *,
                                             const struct sched_unit *);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 92272256ea..ebf723a866 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -274,6 +274,8 @@ struct sched_unit {
     bool                   is_running;
     /* Does soft affinity actually play a role (given hard affinity)? */
     bool                   soft_aff_effective;
+    /* Item has been migrated to other cpu(s). */
+    bool                   migrated;
 
     /* Last time unit got (de-)scheduled. */
     uint64_t               state_entry_time;
@@ -286,6 +288,10 @@ struct sched_unit {
     cpumask_var_t          cpu_hard_affinity_saved;
     /* Bitmask of CPUs on which this VCPU prefers to run. */
     cpumask_var_t          cpu_soft_affinity;
+
+    /* Next unit to run. */
+    struct sched_unit     *next_task;
+    s_time_t               next_time;
 };
 
 #define for_each_sched_unit(d, u) \
-- 
2.16.4
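
For reference, the resulting callback contract can be seen in isolation
in the short sketch below. It is not part of the patch: it uses
simplified stand-in types for Xen's real ones (the real definitions
live in xen/include/xen/sched.h and xen/include/xen/sched-if.h) and a
hypothetical toy_do_schedule(), and only illustrates how a do_schedule
callback now reports its decision through the sched_unit fields
next_task, next_time and migrated instead of returning a struct
task_slice by value.

/* Stand-alone sketch; build with: cc -o toy toy.c */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef long long s_time_t;

/* Simplified stand-in for Xen's struct sched_unit. */
struct sched_unit {
    struct sched_unit *next_task; /* unit chosen to run next */
    s_time_t next_time;           /* time slice; negative means no limit */
    bool migrated;                /* unit was moved to another cpu */
};

struct scheduler;                 /* opaque here */

/* New-style callback: the scheduling decision is written into the
 * currently running unit (prev) instead of being returned by value. */
static void toy_do_schedule(const struct scheduler *ops,
                            struct sched_unit *prev, s_time_t now,
                            bool tasklet_work_scheduled)
{
    (void)ops; (void)now; (void)tasklet_work_scheduled;

    /* Trivial policy: keep running the current unit, with no limit. */
    prev->next_task = prev;
    prev->next_time = -1;
    prev->next_task->migrated = false;
}

int main(void)
{
    struct sched_unit u = { NULL, 0, false };

    toy_do_schedule(NULL, &u, 0, false);
    assert(u.next_task == &u && u.next_time == -1 && !u.migrated);
    return 0;
}

Writing the result into the currently running unit is race-free for the
same reason given in the commit message: the caller invokes the callback
with the local schedule lock held, so no other CPU can observe a
half-written decision.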