From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:57 +0100
Message-Id: <20191218074859.21665-8-jgross@suse.com>
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 7/9] xen/sched: switch scheduling to bool where appropriate
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich, Stewart Hildebrand

Scheduling code has several places using int or bool_t instead of bool.
Switch those.
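The conversion pattern can be sketched in isolation. The snippet below is a simplified, hypothetical stand-in (the `cpupool` struct and lookup logic here are illustrative, not the actual Xen code): the flag parameter that was `int exact` taking 0/1 becomes a C99 `bool` taking `false`/`true`, with no change in behaviour.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a cpupool list sorted by ascending id
 * (illustrative only, not the real Xen data structure). */
struct cpupool {
    int cpupool_id;
    struct cpupool *next;
};

/*
 * Before the patch a flag like this was "int exact" with callers
 * passing 1/0; afterwards it is bool with callers passing true/false.
 * exact == false also accepts the next higher id, mimicking the
 * "get next pool" style of lookup.
 */
struct cpupool *find_by_id(struct cpupool *head, int id, bool exact)
{
    struct cpupool *q;

    for ( q = head; q != NULL; q = q->next )
        if ( q->cpupool_id == id || (!exact && q->cpupool_id > id) )
            return q;

    return NULL;
}
```

Since C99 `bool` is guaranteed to hold only 0 or 1, this also removes the risk of callers passing other non-zero values and makes the intent of each parameter self-documenting.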
Signed-off-by: Juergen Gross
---
 xen/common/sched/cpupool.c        | 10 +++++-----
 xen/common/sched/sched-if.h       |  2 +-
 xen/common/sched/sched_arinc653.c |  8 ++++----
 xen/common/sched/sched_credit.c   | 12 ++++++------
 xen/common/sched/sched_rt.c       | 14 +++++++-------
 xen/common/sched/schedule.c       | 14 +++++++-------
 xen/include/xen/sched.h           |  6 +++---
 7 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d5b64d0a6a..14212bb4ae 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, int exact)
+static struct cpupool *__cpupool_find_by_id(int id, bool exact)
 {
     struct cpupool **q;
 
@@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int exact)
 
 static struct cpupool *cpupool_find_by_id(int poolid)
 {
-    return __cpupool_find_by_id(poolid, 1);
+    return __cpupool_find_by_id(poolid, true);
 }
 
-static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
+static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
 {
     struct cpupool *c;
     spin_lock(&cpupool_lock);
@@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
 
 struct cpupool *cpupool_get_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 1);
+    return __cpupool_get_by_id(poolid, true);
 }
 
 static struct cpupool *cpupool_get_next_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 0);
+    return __cpupool_get_by_id(poolid, false);
 }
 
 void cpupool_put(struct cpupool *pool)
diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h
index edce354dc7..9d0db75cbb 100644
--- a/xen/common/sched/sched-if.h
+++ b/xen/common/sched/sched-if.h
@@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupool *c);
  *  * The hard affinity is not a subset of soft affinity
  *  * There is an overlap between the soft and hard affinity masks
  */
-static inline int has_soft_affinity(const struct sched_unit *unit)
+static inline bool has_soft_affinity(const struct sched_unit *unit)
 {
     return unit->soft_aff_effective &&
         !cpumask_subset(cpupool_domain_master_cpumask(unit->domain),
diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
index fe15754900..dc45378952 100644
--- a/xen/common/sched/sched_arinc653.c
+++ b/xen/common/sched/sched_arinc653.c
@@ -75,7 +75,7 @@ typedef struct arinc653_unit_s
      * arinc653_unit_t pointer. */
     struct sched_unit * unit;
     /* awake holds whether the UNIT has been woken with vcpu_wake() */
-    bool_t awake;
+    bool awake;
     /* list holds the linked list information for the list this UNIT
      * is stored in */
     struct list_head list;
@@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
      * will mark the UNIT awake.
      */
     svc->unit = unit;
-    svc->awake = 0;
+    svc->awake = false;
     if ( !is_idle_unit(unit) )
         list_add(&svc->list, &SCHED_PRIV(ops)->unit_list);
     update_schedule_units(ops);
@@ -473,7 +473,7 @@ static void
 a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 0;
+        AUNIT(unit)->awake = false;
 
     /*
      * If the UNIT being put to sleep is the same one that is currently
@@ -493,7 +493,7 @@ static void
 a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 1;
+        AUNIT(unit)->awake = true;
 
     cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index 8b1de9b033..05930261d9 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem)
 }
 
 /* Is the first element of cpu's runq (if any) cpu's idle unit? */
-static inline bool_t is_runq_idle(unsigned int cpu)
+static inline bool is_runq_idle(unsigned int cpu)
 {
     /*
      * We're peeking at cpu's runq, we must hold the proper lock.
@@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
     svc->start_time += (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSEC;
 }
 
-static bool_t __read_mostly opt_tickle_one_idle = 1;
+static bool __read_mostly opt_tickle_one_idle = true;
 boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
@@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_private *prv,
 
 static int
 _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
-                 bool_t commit)
+                 bool commit)
 {
     int cpu = sched_unit_master(unit);
     /* We must always use cpu's scratch space */
@@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags);
-    return get_sched_res(_csched_cpu_pick(ops, unit, 1));
+    return get_sched_res(_csched_cpu_pick(ops, unit, true));
 }
 
 static inline void
@@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
          * migrating it to run elsewhere (see multi-core and multi-thread
          * support in csched_res_pick()).
          */
-        new_cpu = _csched_cpu_pick(ops, currunit, 0);
+        new_cpu = _csched_cpu_pick(ops, currunit, false);
 
         unit_schedule_unlock_irqrestore(lock, flags, currunit);
 
@@ -1108,7 +1108,7 @@ static void
 csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
-    bool_t migrating;
+    bool migrating;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 264a753116..8646d77343 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -490,10 +490,10 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc)
 static inline bool
 deadline_queue_remove(struct list_head *queue, struct list_head *elem)
 {
-    int pos = 0;
+    bool pos = false;
 
     if ( queue->next != elem )
-        pos = 1;
+        pos = true;
 
     list_del_init(elem);
     return !pos;
@@ -505,14 +505,14 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
                       struct list_head *queue)
 {
     struct list_head *iter;
-    int pos = 0;
+    bool pos = false;
 
     list_for_each ( iter, queue )
     {
         struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
-        pos++;
+        pos = true;
     }
     list_add_tail(elem, iter);
     return !pos;
@@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
     struct rt_unit *rearm_svc = svc;
-    bool_t rearm = 0;
+    bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
 
@@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
     {
         deadline_replq_insert(svc, &svc->replq_elem, replq);
         rearm_svc = replq_elem(replq->next);
-        rearm = 1;
+        rearm = true;
     }
     else
         rearm = deadline_replq_insert(svc, &svc->replq_elem, replq);
@@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct rt_unit * const svc = rt_unit(unit);
     s_time_t now;
-    bool_t missed;
+    bool missed;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index db8ce146ca..3307e88b6c 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -53,7 +53,7 @@ string_param("sched", opt_sched);
  * scheduler will give preferrence to partially idle package compared to
  * the full idle package, when picking pCPU to schedule vCPU.
  */
-bool_t sched_smt_power_savings = 0;
+bool sched_smt_power_savings;
 boolean_param("sched_smt_power_savings", sched_smt_power_savings);
 
 /* Default scheduling rate limit: 1ms
@@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v)
     {
         get_sched_res(v->processor)->curr = unit;
         get_sched_res(v->processor)->sched_unit_idle = unit;
-        v->is_running = 1;
+        v->is_running = true;
         unit->is_running = true;
         unit->state_entry_time = NOW();
     }
@@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
-    bool_t pick_called = 0;
+    bool pick_called = false;
     struct vcpu *v;
 
     /*
@@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
             if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
                  cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
-            pick_called = 1;
+            pick_called = true;
         }
         else
         {
@@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
              * We do not hold the scheduler lock appropriate for this vCPU.
              * Thus we cannot select a new CPU on this iteration. Try again.
              */
-            pick_called = 0;
+            pick_called = false;
         }
 
         sched_spin_unlock_double(old_lock, new_lock, flags);
@@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr,
         vcpu_runstate_change(vnext, vnext->new_state, now);
     }
 
-    vnext->is_running = 1;
+    vnext->is_running = true;
 
     if ( is_idle_vcpu(vnext) )
         vnext->sched_unit = next;
@@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext)
     smp_wmb();
 
     if ( vprev != vnext )
-        vprev->is_running = 0;
+        vprev->is_running = false;
 }
 
 static void unit_context_saved(struct sched_resource *sr)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 55335d6ab3..b2f48a3512 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -557,18 +557,18 @@ static inline bool is_system_domain(const struct domain *d)
  * Use this when you don't have an existing reference to @d. It returns
  * FALSE if @d is being destroyed.
  */
-static always_inline int get_domain(struct domain *d)
+static always_inline bool get_domain(struct domain *d)
 {
     int old, seen = atomic_read(&d->refcnt);
     do
     {
         old = seen;
         if ( unlikely(old & DOMAIN_DESTROYED) )
-            return 0;
+            return false;
         seen = atomic_cmpxchg(&d->refcnt, old, old + 1);
     }
     while ( unlikely(seen != old) );
-    return 1;
+    return true;
 }
 
 /*
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel