From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:35 +0200
Message-Id: <20190927070050.12405-32-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 31/46] xen/sched: modify cpupool_domain_cpumask() to be a unit mask

cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus.
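
Not part of the patch, purely for illustration: a minimal standalone
sketch of the new mask relationship, modelling cpumasks as plain 64-bit
words (mask_t and all values here are hypothetical, assuming a
granularity of two cpus per scheduling unit). The point is that
res_valid is just cpu_valid filtered through sched_res_mask, leaving one
bit per scheduling resource, which is what
cpupool_domain_master_cpumask() now hands to the schedulers.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t mask_t;   /* illustrative stand-in for cpumask_t */

    int main(void)
    {
        /* Granularity 2: cpus {0,1}, {2,3}, ... form one unit each, so
         * only the even "master" cpus carry a scheduling resource. */
        mask_t sched_res_mask = 0x5555555555555555ULL;
        mask_t cpu_valid = 0x3f;                  /* pool owns cpus 0-5 */

        /* What the cpupool code now recomputes on every cpu change. */
        mask_t res_valid = cpu_valid & sched_res_mask;

        printf("cpu_valid %#llx\n", (unsigned long long)cpu_valid);
        printf("res_valid %#llx\n", (unsigned long long)res_valid); /* 0x15 */
        return 0;
    }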
In order to support scheduling units spanning
multiple cpus, rename cpupool_domain_cpumask() to
cpupool_domain_master_cpumask() and let it return a cpumask with only
one bit set per scheduling resource.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V4:
- rename to cpupool_domain_master_cpumask() (Jan Beulich)
- check return value of zalloc_cpumask_var() (Jan Beulich)
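
Also illustrative only, not part of the patch: the allocation pattern
the reworked alloc_cpupool_struct()/free_cpupool_struct() pair follows,
modelled as a standalone sketch with plain calloc/free standing in for
xzalloc/zalloc_cpumask_var (pool_alloc/pool_free are hypothetical names
mirroring the real helpers). The free helper tolerates a partially
constructed pool, so the allocator can take a single error path when
either mask allocation fails.

    #include <stdlib.h>

    struct pool {
        unsigned long *cpu_valid;
        unsigned long *res_valid;
    };

    static void pool_free(struct pool *p)
    {
        if (p) {
            free(p->res_valid);
            free(p->cpu_valid);
        }
        free(p);
    }

    static struct pool *pool_alloc(void)
    {
        struct pool *p = calloc(1, sizeof(*p));

        if (!p)
            return NULL;

        /* Mirrors the two zalloc_cpumask_var() calls: if either fails,
         * undo everything through the one free helper. */
        p->cpu_valid = calloc(1, sizeof(*p->cpu_valid));
        p->res_valid = calloc(1, sizeof(*p->res_valid));
        if (!p->cpu_valid || !p->res_valid) {
            pool_free(p);
            p = NULL;
        }
        return p;
    }

    int main(void)
    {
        struct pool *p = pool_alloc();

        pool_free(p);   /* safe even if allocation failed */
        return 0;
    }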
diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index fd30040922..441a26f16c 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -36,26 +36,33 @@ static DEFINE_SPINLOCK(cpupool_lock);
 
 DEFINE_PER_CPU(struct cpupool *, cpupool);
 
+static void free_cpupool_struct(struct cpupool *c)
+{
+    if ( c )
+    {
+        free_cpumask_var(c->res_valid);
+        free_cpumask_var(c->cpu_valid);
+    }
+    xfree(c);
+}
+
 static struct cpupool *alloc_cpupool_struct(void)
 {
     struct cpupool *c = xzalloc(struct cpupool);
 
-    if ( !c || !zalloc_cpumask_var(&c->cpu_valid) )
+    if ( !c )
+        return NULL;
+
+    if ( !zalloc_cpumask_var(&c->cpu_valid) ||
+         !zalloc_cpumask_var(&c->res_valid) )
     {
-        xfree(c);
+        free_cpupool_struct(c);
         c = NULL;
     }
 
     return c;
 }
 
-static void free_cpupool_struct(struct cpupool *c)
-{
-    if ( c )
-        free_cpumask_var(c->cpu_valid);
-    xfree(c);
-}
-
 /*
  * find a cpupool by it's id. to be called with cpupool lock held
  * if exact is not specified, the first cpupool with an id larger or equal to
@@ -269,6 +276,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
         cpupool_cpu_moving = NULL;
     }
     cpumask_set_cpu(cpu, c->cpu_valid);
+    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
@@ -361,6 +369,7 @@ static int cpupool_unassign_cpu_start(struct cpupool *c, unsigned int cpu)
     atomic_inc(&c->refcnt);
     cpupool_cpu_moving = c;
     cpumask_clear_cpu(cpu, c->cpu_valid);
+    cpumask_and(c->res_valid, c->cpu_valid, sched_res_mask);
 
  out:
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index ea1225367d..09792f0db8 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -584,7 +584,7 @@ void domain_update_node_affinity(struct domain *d)
         return;
     }
 
-    online = cpupool_domain_cpumask(d);
+    online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 8a694e0d37..d597a09f98 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -619,7 +619,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
         {
             cpumask_var_t new_affinity, old_affinity;
-            cpumask_t *online = cpupool_domain_cpumask(v->domain);
+            cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
 
             /*
              * We want to be able to restore hard affinity if we are trying
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index dd5876eacd..45c05c6cd9 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -614,7 +614,7 @@ a653sched_pick_resource(const struct scheduler *ops,
      * If present, prefer unit's current processor, else
      * just find the first valid unit.
      */
-    online = cpupool_domain_cpumask(unit->domain);
+    online = cpupool_domain_master_cpumask(unit->domain);
 
     cpu = cpumask_first(online);
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 00beac3ea4..a6dff8ec62 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -361,7 +361,7 @@ static inline void __runq_tickle(struct csched_unit *new)
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    online = cpupool_domain_cpumask(new->sdom->dom);
+    online = cpupool_domain_master_cpumask(new->sdom->dom);
     cpumask_and(&idle_mask, prv->idlers, online);
     idlers_empty = cpumask_empty(&idle_mask);
 
@@ -724,7 +724,7 @@ _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
     /* We must always use cpu's scratch space */
     cpumask_t *cpus = cpumask_scratch_cpu(cpu);
     cpumask_t idlers;
-    cpumask_t *online = cpupool_domain_cpumask(unit->domain);
+    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     struct csched_pcpu *spc = NULL;
     int balance_step;
 
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 0e29e56d5a..d51df05887 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -705,7 +705,7 @@ static int get_fallback_cpu(struct csched2_unit *svc)
 
         affinity_balance_cpumask(unit, bs, cpumask_scratch_cpu(cpu));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                    cpupool_domain_cpumask(unit->domain));
+                    cpupool_domain_master_cpumask(unit->domain));
 
         /*
          * This is cases 1 or 3 (depending on bs): if processor is (still)
@@ -1440,7 +1440,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
     struct sched_unit *unit = new->unit;
     unsigned int bs, cpu = sched_unit_master(unit);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_cpumask(unit->domain);
+    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -2243,7 +2243,7 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
     }
 
     cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                cpupool_domain_cpumask(unit->domain));
+                cpupool_domain_master_cpumask(unit->domain));
 
     /*
     * First check to see if we're here because someone else suggested a place
@@ -2358,8 +2358,8 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
          * ok because:
          * - we know that unit->cpu_hard_affinity and ->cpu_soft_affinity have
          *   a non-empty intersection (because has_soft is true);
-         * - we have unit->cpu_hard_affinity & cpupool_domain_cpumask() already
-         *   in cpumask_scratch, we do save a lot doing like this.
+         * - we have unit->cpu_hard_affinity & cpupool_domain_master_cpumask()
+         *   already in cpumask_scratch, we do save a lot doing like this.
          *
         * It's kind of like open coding affinity_balance_cpumask() but, in
         * this specific case, calling that would mean a lot of (unnecessary)
@@ -2378,7 +2378,7 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
          * affinity, so go for it.
          *
          * cpumask_scratch already has unit->cpu_hard_affinity &
-         * cpupool_domain_cpumask() in it, so it's enough that we filter
+         * cpupool_domain_master_cpumask() in it, so it's enough that we filter
          * with the cpus of the runq.
          */
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
@@ -2513,7 +2513,7 @@ static void migrate(const struct scheduler *ops,
         _runq_deassign(svc);
 
         cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_cpumask(unit->domain));
+                    cpupool_domain_master_cpumask(unit->domain));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                     &trqd->active);
         sched_set_res(unit,
@@ -2547,7 +2547,7 @@ static bool unit_is_migrateable(struct csched2_unit *svc,
     int cpu = sched_unit_master(unit);
 
     cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                cpupool_domain_cpumask(unit->domain));
+                cpupool_domain_master_cpumask(unit->domain));
 
     return !(svc->flags & CSFLAG_runq_migrate_request) &&
            cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active);
@@ -2763,7 +2763,7 @@ csched2_unit_migrate(
      * v->processor will be chosen, and during actual domain unpause that
      * the unit will be assigned to and added to the proper runqueue.
      */
-    if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_cpumask(d))) )
+    if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask(d))) )
     {
         ASSERT(system_state == SYS_STATE_suspend);
         if ( unit_on_runq(svc) )
@@ -3069,7 +3069,7 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
     sdom->nr_units = 0;
 
     init_timer(&sdom->repl_timer, replenish_domain_budget, sdom,
-               cpumask_any(cpupool_domain_cpumask(dom)));
+               cpumask_any(cpupool_domain_master_cpumask(dom)));
     spin_lock_init(&sdom->budget_lock);
     INIT_LIST_HEAD(&sdom->parked_units);
 
@@ -3317,7 +3317,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                     cpumask_scratch);
         if ( unlikely(!cpumask_test_cpu(cpu, cpumask_scratch)) )
         {
-            cpumask_t *online = cpupool_domain_cpumask(scurr->unit->domain);
+            cpumask_t *online = cpupool_domain_master_cpumask(scurr->unit->domain);
 
             /* Ok, is any of the pcpus in scurr soft-affinity idle? */
             cpumask_and(cpumask_scratch, cpumask_scratch, &rqd->idle);
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 3dde1dcd00..2525464a7c 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -125,7 +125,7 @@ static inline bool unit_check_affinity(struct sched_unit *unit,
 {
     affinity_balance_cpumask(unit, balance_step, cpumask_scratch_cpu(cpu));
     cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                cpupool_domain_cpumask(unit->domain));
+                cpupool_domain_master_cpumask(unit->domain));
 
     return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
 }
@@ -266,7 +266,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_cpumask(unit->domain);
+    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -467,7 +467,7 @@ static void null_unit_insert(const struct scheduler *ops,
     lock = unit_schedule_lock(unit);
 
     cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                cpupool_domain_cpumask(unit->domain));
+                cpupool_domain_master_cpumask(unit->domain));
 
     /* If the pCPU is free, we assign unit to it */
     if ( likely(per_cpu(npc, cpu).unit == NULL) )
@@ -579,7 +579,7 @@ static void null_unit_wake(const struct scheduler *ops,
         spin_unlock(&prv->waitq_lock);
 
         cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_cpumask(unit->domain));
+                    cpupool_domain_master_cpumask(unit->domain));
 
         if ( !cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
         {
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index fd882f2ca4..d21c416cae 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -326,7 +326,7 @@ rt_dump_unit(const struct scheduler *ops, const struct rt_unit *svc)
      */
     mask = cpumask_scratch_cpu(sched_unit_master(svc->unit));
 
-    cpupool_mask = cpupool_domain_cpumask(svc->unit->domain);
+    cpupool_mask = cpupool_domain_master_cpumask(svc->unit->domain);
     cpumask_and(mask, cpupool_mask, svc->unit->cpu_hard_affinity);
     printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
            " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime"\n"
@@ -642,7 +642,7 @@ rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
     cpumask_t *online;
     int cpu;
 
-    online = cpupool_domain_cpumask(unit->domain);
+    online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_and(&cpus, online, unit->cpu_hard_affinity);
 
     cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
@@ -1016,7 +1016,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
         iter_svc = q_elem(iter);
 
         /* mask cpu_hard_affinity & cpupool & mask */
-        online = cpupool_domain_cpumask(iter_svc->unit->domain);
+        online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
         cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
         cpumask_and(&cpu_common, mask, &cpu_common);
         if ( cpumask_empty(&cpu_common) )
@@ -1191,7 +1191,7 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
-    online = cpupool_domain_cpumask(new->unit->domain);
+    online = cpupool_domain_master_cpumask(new->unit->domain);
     cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index fa3d88938a..ae5c807c6a 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -61,6 +61,7 @@ integer_param("sched_ratelimit_us", sched_ratelimit_us);
 
 /* Number of vcpus per struct sched_unit. */
 static unsigned int __read_mostly sched_granularity = 1;
+const cpumask_t *sched_res_mask = &cpumask_all;
 
 /* Common lock for free cpus. */
 static DEFINE_SPINLOCK(sched_free_cpu_lock);
@@ -186,7 +187,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
 {
     return unit_scheduler(v->sched_unit);
 }
-#define VCPU2ONLINE(_v) cpupool_domain_cpumask((_v)->domain)
+#define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
 static inline void trace_runstate_change(struct vcpu *v, int new_state)
 {
@@ -423,9 +424,9 @@ static unsigned int sched_select_initial_cpu(const struct vcpu *v)
     cpumask_clear(cpus);
     for_each_node_mask ( node, d->node_affinity )
         cpumask_or(cpus, cpus, &node_to_cpumask(node));
-    cpumask_and(cpus, cpus, cpupool_domain_cpumask(d));
+    cpumask_and(cpus, cpus, d->cpupool->cpu_valid);
     if ( cpumask_empty(cpus) )
-        cpumask_copy(cpus, cpupool_domain_cpumask(d));
+        cpumask_copy(cpus, d->cpupool->cpu_valid);
 
     if ( v->vcpu_id == 0 )
         cpu_ret = cpumask_first(cpus);
@@ -971,7 +972,7 @@ void restore_vcpu_affinity(struct domain *d)
         lock = unit_schedule_lock_irq(unit);
 
         cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_cpumask(d));
+                    cpupool_domain_master_cpumask(d));
         if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
         {
             if ( sched_check_affinity_broken(unit) )
@@ -979,7 +980,7 @@ void restore_vcpu_affinity(struct domain *d)
                 sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
                 sched_reset_affinity_broken(unit);
                 cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_cpumask(d));
+                            cpupool_domain_master_cpumask(d));
             }
 
             if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
@@ -989,7 +990,7 @@ void restore_vcpu_affinity(struct domain *d)
                        unit->vcpu_list);
                 sched_set_affinity(unit, &cpumask_all, NULL);
                 cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_cpumask(d));
+                            cpupool_domain_master_cpumask(d));
             }
         }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 983f2ece83..1b296b150f 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -22,6 +22,8 @@ extern cpumask_t cpupool_free_cpus;
 #define SCHED_DEFAULT_RATELIMIT_US 1000
 extern int sched_ratelimit_us;
 
+/* Scheduling resource mask. */
+extern const cpumask_t *sched_res_mask;
 
 /*
  * In order to allow a scheduler to remap the lock->cpu mapping,
@@ -535,6 +537,7 @@ struct cpupool
     int              cpupool_id;
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
+    cpumask_var_t    res_valid;      /* all scheduling resources of pool */
     struct cpupool   *next;
     struct scheduler *sched;
     atomic_t         refcnt;
@@ -543,14 +546,14 @@ struct cpupool
 #define cpupool_online_cpumask(_pool) \
     (((_pool) == NULL) ? &cpu_online_map : (_pool)->cpu_valid)
 
-static inline cpumask_t *cpupool_domain_cpumask(const struct domain *d)
+static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
 {
     /*
      * d->cpupool is NULL only for the idle domain, and no one should
      * be interested in calling this for the idle domain.
      */
     ASSERT(d->cpupool != NULL);
-    return d->cpupool->cpu_valid;
+    return d->cpupool->res_valid;
 }
 
 /*
@@ -590,7 +593,7 @@ static inline cpumask_t *cpupool_domain_cpumask(const struct domain *d)
 static inline int has_soft_affinity(const struct sched_unit *unit)
 {
     return unit->soft_aff_effective &&
-           !cpumask_subset(cpupool_domain_cpumask(unit->domain),
+           !cpumask_subset(cpupool_domain_master_cpumask(unit->domain),
                            unit->cpu_soft_affinity);
 }
 
-- 
2.16.4