From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead, Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:09 +0200
Message-Id: <20190927070050.12405-6-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 05/46] xen/sched: let pick_cpu return a scheduler resource
List-Id: Xen developer discussion
X-Mailer: git-send-email 2.16.4

Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to reflect
that change.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- style fix (Jan Beulich)
---
 xen/common/sched_arinc653.c  | 13 +++++++------
 xen/common/sched_credit.c    | 16 ++++++++--------
 xen/common/sched_credit2.c   | 22 +++++++++++-----------
 xen/common/sched_null.c      | 23 ++++++++++++-----------
 xen/common/sched_rt.c        | 18 +++++++++---------
 xen/common/schedule.c        | 18 ++++++++++--------
 xen/include/xen/perfc_defn.h |  2 +-
 xen/include/xen/sched-if.h   | 10 +++++-----
 8 files changed, 63 insertions(+), 59 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 67009f235d..9faa1c48c4 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -607,15 +607,16 @@ a653sched_do_schedule(
 }
 
 /**
- * Xen scheduler callback function to select a CPU for the VCPU to run on
+ * Xen scheduler callback function to select a resource for the VCPU to run on
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param unit      Pointer to struct sched_unit
  *
- * @return          Number of selected physical CPU
+ * @return          Scheduler resource to run on
  */
-static int
-a653sched_pick_cpu(const struct scheduler *ops, const struct sched_unit *unit)
+static struct sched_resource *
+a653sched_pick_resource(const struct scheduler *ops,
+                        const struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     cpumask_t *online;
@@ -633,7 +634,7 @@ a653sched_pick_cpu(const struct scheduler *ops, const struct sched_unit *unit)
          || (cpu >= nr_cpu_ids) )
         cpu = vc->processor;
 
-    return cpu;
+    return get_sched_res(cpu);
 }
 
 /**
@@ -726,7 +727,7 @@ static const struct scheduler sched_arinc653_def = {
 
     .do_schedule    = a653sched_do_schedule,
 
-    .pick_cpu       = a653sched_pick_cpu,
+    .pick_resource  = a653sched_pick_resource,
 
     .switch_sched   = a653_switch_sched,
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 4b4d7021de..fa73081b3c 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -853,8 +853,8 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     return cpu;
 }
 
-static int
-csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
+static struct sched_resource *
+csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     struct csched_vcpu *svc = CSCHED_VCPU(vc);
@@ -867,7 +867,7 @@ csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_VCPU_MIGRATING, &svc->flags);
-    return _csched_cpu_pick(ops, vc, 1);
+    return get_sched_res(_csched_cpu_pick(ops, vc, 1));
 }
 
 static inline void
@@ -967,7 +967,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
         /*
          * If it's been active a while, check if we'd be better off
          * migrating it to run elsewhere (see multi-core and multi-thread
-         * support in csched_cpu_pick()).
+         * support in csched_res_pick()).
          */
         new_cpu = _csched_cpu_pick(ops, current, 0);
 
@@ -1022,11 +1022,11 @@ csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    /* csched_cpu_pick() looks in vc->processor's runq, so we need the lock. */
+    /* csched_res_pick() looks in vc->processor's runq, so we need the lock. */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched_cpu_pick(ops, unit);
-    unit->res = get_sched_res(vc->processor);
+    unit->res = csched_res_pick(ops, unit);
+    vc->processor = unit->res->master_cpu;
 
     spin_unlock_irq(lock);
 
@@ -2278,7 +2278,7 @@ static const struct scheduler sched_credit_def = {
     .adjust_affinity= csched_aff_cntl,
     .adjust_global  = csched_sys_cntl,
 
-    .pick_cpu       = csched_cpu_pick,
+    .pick_resource  = csched_res_pick,
     .do_schedule    = csched_schedule,
 
     .dump_cpu_state = csched_dump_pcpu,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 2981d642b0..37192e6713 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -626,9 +626,9 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
  * runq, _always_ happens by means of tickling:
  *  - when a vcpu wakes up, it calls csched2_unit_wake(), which calls
  *    runq_tickle();
- *  - when a migration is initiated in schedule.c, we call csched2_cpu_pick(),
+ *  - when a migration is initiated in schedule.c, we call csched2_res_pick(),
  *    csched2_unit_migrate() (which calls migrate()) and csched2_unit_wake().
- *    csched2_cpu_pick() looks for the least loaded runq and return just any
+ *    csched2_res_pick() looks for the least loaded runq and return just any
  *    of its processors. Then, csched2_unit_migrate() just moves the vcpu to
  *    the chosen runq, and it is again runq_tickle(), called by
  *    csched2_unit_wake() that actually decides what pcpu to use within the
@@ -677,7 +677,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 }
 
 /*
- * In csched2_cpu_pick(), it may not be possible to actually look at remote
+ * In csched2_res_pick(), it may not be possible to actually look at remote
 * runqueues (the trylock-s on their spinlocks can fail!). If that happens,
 * we pick, in order of decreasing preference:
 *  1) svc's current pcpu, if it is part of svc's soft affinity;
@@ -2202,8 +2202,8 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 }
 
 #define MAX_LOAD (STIME_MAX)
-static int
-csched2_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
+static struct sched_resource *
+csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_private *prv = csched2_priv(ops);
     struct vcpu *vc = unit->vcpu_list;
@@ -2215,7 +2215,7 @@ csched2_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
 
     ASSERT(!cpumask_empty(&prv->active_queues));
 
-    SCHED_STAT_CRANK(pick_cpu);
+    SCHED_STAT_CRANK(pick_resource);
 
     /* Locking:
      * - Runqueue lock of vc->processor is already locked
@@ -2424,7 +2424,7 @@ csched2_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
                     (unsigned char *)&d);
     }
 
-    return new_cpu;
+    return get_sched_res(new_cpu);
 }
 
 /* Working state of the load-balancing algorithm */
@@ -3121,11 +3121,11 @@ csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     ASSERT(!is_idle_vcpu(vc));
     ASSERT(list_empty(&svc->runq_elem));
 
-    /* csched2_cpu_pick() expects the pcpu lock to be held */
+    /* csched2_res_pick() expects the pcpu lock to be held */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched2_cpu_pick(ops, unit);
-    unit->res = get_sched_res(vc->processor);
+    unit->res = csched2_res_pick(ops, unit);
+    vc->processor = unit->res->master_cpu;
 
     spin_unlock_irq(lock);
 
@@ -4112,7 +4112,7 @@ static const struct scheduler sched_credit2_def = {
     .adjust_affinity= csched2_aff_cntl,
     .adjust_global  = csched2_sys_cntl,
 
-    .pick_cpu       = csched2_cpu_pick,
+    .pick_resource  = csched2_res_pick,
     .migrate        = csched2_unit_migrate,
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index cb5e1b52db..cb400f55d0 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -261,9 +261,11 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  *
  * So this is not part of any hot path.
  */
-static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
+static struct sched_resource *
+pick_res(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
+    struct vcpu *v = unit->vcpu_list;
     unsigned int cpu = v->processor, new_cpu;
     cpumask_t *cpus = cpupool_domain_cpumask(v->domain);
 
@@ -327,7 +329,7 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
         __trace_var(TRC_SNULL_PICKED_CPU, 1, sizeof(d), &d);
     }
 
-    return new_cpu;
+    return get_sched_res(new_cpu);
 }
 
 static void vcpu_assign(struct null_private *prv, struct vcpu *v,
@@ -457,8 +459,8 @@ static void null_unit_insert(const struct scheduler *ops,
     }
 
  retry:
-    cpu = v->processor = pick_cpu(prv, v);
-    unit->res = get_sched_res(cpu);
+    unit->res = pick_res(prv, unit);
+    cpu = v->processor = unit->res->master_cpu;
 
     spin_unlock(lock);
 
@@ -599,7 +601,7 @@ static void null_unit_wake(const struct scheduler *ops,
          */
         while ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
         {
-            unsigned int new_cpu = pick_cpu(prv, v);
+            unsigned int new_cpu = pick_res(prv, unit)->master_cpu;
 
             if ( test_and_clear_bit(new_cpu, &prv->cpus_free) )
             {
@@ -648,12 +650,11 @@ static void null_unit_sleep(const struct scheduler *ops,
     SCHED_STAT_CRANK(vcpu_sleep);
 }
 
-static int null_cpu_pick(const struct scheduler *ops,
-                         const struct sched_unit *unit)
+static struct sched_resource *
+null_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu_list;
-    ASSERT(!is_idle_vcpu(v));
-    return pick_cpu(null_priv(ops), v);
+    ASSERT(!is_idle_vcpu(unit->vcpu_list));
+    return pick_res(null_priv(ops), unit);
 }
 
 static void null_unit_migrate(const struct scheduler *ops,
@@ -985,7 +986,7 @@ static const struct scheduler sched_null_def = {
 
     .wake           = null_unit_wake,
     .sleep          = null_unit_sleep,
-    .pick_cpu       = null_cpu_pick,
+    .pick_resource  = null_res_pick,
     .migrate        = null_unit_migrate,
     .do_schedule    = null_schedule,
 
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 01e95f3276..6ca792e643 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -631,12 +631,12 @@ replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
 }
 
 /*
- * Pick a valid CPU for the vcpu vc
- * Valid CPU of a vcpu is intesection of vcpu's affinity
- * and available cpus
+ * Pick a valid resource for the vcpu vc
+ * Valid resource of a vcpu is intersection of vcpu's affinity
+ * and available resources
 */
-static int
-rt_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     cpumask_t cpus;
@@ -651,7 +651,7 @@ rt_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
             : cpumask_cycle(vc->processor, &cpus);
     ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
 
-    return cpu;
+    return get_sched_res(cpu);
 }
 
 /*
@@ -892,8 +892,8 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     BUG_ON( is_idle_vcpu(vc) );
 
     /* This is safe because vc isn't yet being scheduled */
-    vc->processor = rt_cpu_pick(ops, unit);
-    unit->res = get_sched_res(vc->processor);
+    unit->res = rt_res_pick(ops, unit);
+    vc->processor = unit->res->master_cpu;
 
     lock = vcpu_schedule_lock_irq(vc);
 
@@ -1562,7 +1562,7 @@ static const struct scheduler sched_rtds_def = {
 
     .adjust         = rt_dom_cntl,
 
-    .pick_cpu       = rt_cpu_pick,
+    .pick_resource  = rt_res_pick,
     .do_schedule    = rt_schedule,
     .sleep          = rt_unit_sleep,
     .wake           = rt_unit_wake,
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 774f127d88..8bca32f5c4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -87,10 +87,10 @@ sched_idle_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     return &sched_free_cpu_lock;
 }
 
-static int
-sched_idle_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit)
+static struct sched_resource *
+sched_idle_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
-    return unit->res->master_cpu;
+    return unit->res;
 }
 
 static void *
@@ -122,7 +122,7 @@ static struct scheduler sched_idle_ops = {
     .opt_name       = "idle",
     .sched_data     = NULL,
 
-    .pick_cpu       = sched_idle_cpu_pick,
+    .pick_resource  = sched_idle_res_pick,
     .do_schedule    = sched_idle_schedule,
 
     .alloc_udata    = sched_idle_alloc_udata,
@@ -747,7 +747,8 @@ static void vcpu_migrate_finish(struct vcpu *v)
             break;
 
         /* Select a new CPU. */
-        new_cpu = sched_pick_cpu(vcpu_scheduler(v), v->sched_unit);
+        new_cpu = sched_pick_resource(vcpu_scheduler(v),
+                                      v->sched_unit)->master_cpu;
         if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
              cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
             break;
@@ -840,8 +841,9 @@ void restore_vcpu_affinity(struct domain *d)
 
         /* v->processor might have changed, so reacquire the lock. */
         lock = vcpu_schedule_lock_irq(v);
-        v->processor = sched_pick_cpu(vcpu_scheduler(v), v->sched_unit);
-        v->sched_unit->res = get_sched_res(v->processor);
+        v->sched_unit->res = sched_pick_resource(vcpu_scheduler(v),
+                                                 v->sched_unit);
+        v->processor = v->sched_unit->res->master_cpu;
         spin_unlock_irq(lock);
 
         if ( old_cpu != v->processor )
@@ -1854,7 +1856,7 @@ void __init scheduler_init(void)
 
     sched_test_func(init);
     sched_test_func(deinit);
-    sched_test_func(pick_cpu);
+    sched_test_func(pick_resource);
     sched_test_func(alloc_udata);
     sched_test_func(free_udata);
     sched_test_func(switch_sched);
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index ef6f86b91e..1ad4384080 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -69,7 +69,7 @@ PERFCOUNTER(migrate_on_runq,        "csched2: migrate_on_runq")
 PERFCOUNTER(migrate_no_runq,        "csched2: migrate_no_runq")
 PERFCOUNTER(runtime_min_timer,      "csched2: runtime_min_timer")
 PERFCOUNTER(runtime_max_timer,      "csched2: runtime_max_timer")
-PERFCOUNTER(pick_cpu,               "csched2: pick_cpu")
+PERFCOUNTER(pick_resource,          "csched2: pick_resource")
 PERFCOUNTER(need_fallback_cpu,      "csched2: need_fallback_cpu")
 PERFCOUNTER(migrated,               "csched2: migrated")
 PERFCOUNTER(migrate_resisted,       "csched2: migrate_resisted")
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 5c9ac07587..4f61f65288 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -189,8 +189,8 @@ struct scheduler {
     struct task_slice (*do_schedule) (const struct scheduler *, s_time_t,
                                       bool_t tasklet_work_scheduled);
 
-    int          (*pick_cpu)       (const struct scheduler *,
-                                    const struct sched_unit *);
+    struct sched_resource *(*pick_resource)(const struct scheduler *,
+                                            const struct sched_unit *);
     void         (*migrate)        (const struct scheduler *,
                                     struct sched_unit *, unsigned int);
     int          (*adjust)         (const struct scheduler *, struct domain *,
@@ -355,10 +355,10 @@ static inline void sched_migrate(const struct scheduler *s,
     }
 }
 
-static inline int sched_pick_cpu(const struct scheduler *s,
-                                 const struct sched_unit *unit)
+static inline struct sched_resource *sched_pick_resource(
+    const struct scheduler *s, const struct sched_unit *unit)
 {
-    return s->pick_cpu(s, unit);
+    return s->pick_resource(s, unit);
 }
 
 static inline void sched_adjust_affinity(const struct scheduler *s,
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel