From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Julien Grall, Jan Beulich
Date: Mon, 6 May 2019 08:56:34 +0200
Message-Id: <20190506065644.7415-36-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 35/45] xen/sched: add support for multiple vcpus per sched item where missing

Support for multiple vcpus per sched item is missing in several places. Add that missing support (with the exception of initial allocation), along with the helpers needed for it.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
RFC V2: fix vcpu_runstate_helper()
---
 xen/common/schedule.c      | 26 ++++++++++++++--------
 xen/include/xen/sched-if.h | 74 ++++++++++++++++++++++++++++++++++++----------
 2 files changed, 73 insertions(+), 27 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 6ba6e70338..1134733314 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -180,8 +180,9 @@ static inline void vcpu_runstate_change(
     s_time_t delta;
     struct sched_item *item = v->sched_item;
 
-    ASSERT(v->runstate.state != new_state);
     ASSERT(spin_is_locked(per_cpu(sched_res, v->processor)->schedule_lock));
+    if ( v->runstate.state == new_state )
+        return;
 
     vcpu_urgent_count_update(v);
 
@@ -203,15 +204,16 @@ static inline void vcpu_runstate_change(
 static inline void sched_item_runstate_change(struct sched_item *item,
     bool running, s_time_t new_entry_time)
 {
-    struct vcpu *v = item->vcpu;
+    struct vcpu *v;
 
-    if ( running )
-        vcpu_runstate_change(v, v->new_state, new_entry_time);
-    else
-        vcpu_runstate_change(v,
-            ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
-             (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
-            new_entry_time);
+    for_each_sched_item_vcpu( item, v )
+        if ( running )
+            vcpu_runstate_change(v, v->new_state, new_entry_time);
+        else
+            vcpu_runstate_change(v,
+                ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
+                 (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
+                new_entry_time);
 }
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
@@ -1580,7 +1582,7 @@ static void sched_switch_items(struct sched_resource *sd,
                  (next->vcpu->runstate.state == RUNSTATE_runnable) ?
                  (now - next->state_entry_time) : 0, prev->next_time);
 
-    ASSERT(prev->vcpu->runstate.state == RUNSTATE_running);
+    ASSERT(item_running(prev));
 
     TRACE_4D(TRC_SCHED_SWITCH, prev->domain->domain_id, prev->item_id,
              next->domain->domain_id, next->item_id);
@@ -1588,7 +1590,7 @@ static void sched_switch_items(struct sched_resource *sd,
     sched_item_runstate_change(prev, false, now);
     prev->last_run_time = now;
 
-    ASSERT(next->vcpu->runstate.state != RUNSTATE_running);
+    ASSERT(!item_running(next));
     sched_item_runstate_change(next, true, now);
 
     /*
@@ -1703,7 +1705,7 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
         while ( atomic_read(&next->rendezvous_out_cnt) )
             cpu_relax();
     }
-    else if ( vprev != vnext )
+    else if ( vprev != vnext && sched_granularity == 1 )
         context_saved(vprev);
 }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 755b0f8f74..88fbc06860 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -55,29 +55,55 @@ static inline bool is_idle_item(const struct sched_item *item)
     return is_idle_vcpu(item->vcpu);
 }
 
+static inline unsigned int item_running(const struct sched_item *item)
+{
+    return item->runstate_cnt[RUNSTATE_running];
+}
+
 static inline bool item_runnable(const struct sched_item *item)
 {
-    return vcpu_runnable(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        if ( vcpu_runnable(v) )
+            return true;
+
+    return false;
 }
 
 static inline bool item_runnable_state(const struct sched_item *item)
 {
     struct vcpu *v;
-    bool runnable;
+    bool runnable, ret = false;
+
+    for_each_sched_item_vcpu( item, v )
+    {
+        runnable = vcpu_runnable(v);
+
+        v->new_state = runnable ? RUNSTATE_running
+                                : (v->pause_flags & VPF_blocked)
+                                  ? RUNSTATE_blocked : RUNSTATE_offline;
 
-    v = item->vcpu;
-    runnable = vcpu_runnable(v);
+        if ( runnable )
+            ret = true;
+    }
 
-    v->new_state = runnable ? RUNSTATE_running
-                            : (v->pause_flags & VPF_blocked)
-                              ? RUNSTATE_blocked : RUNSTATE_offline;
-    return runnable;
+    return ret;
 }
 
 static inline void sched_set_res(struct sched_item *item,
                                  struct sched_resource *res)
 {
-    item->vcpu->processor = res->processor;
+    int cpu = cpumask_first(res->cpus);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+    {
+        ASSERT(cpu < nr_cpu_ids);
+        v->processor = cpu;
+        cpu = cpumask_next(cpu, res->cpus);
+    }
+
     item->res = res;
 }
 
@@ -89,25 +115,37 @@ static inline unsigned int sched_item_cpu(struct sched_item *item)
 static inline void sched_set_pause_flags(struct sched_item *item,
                                          unsigned int bit)
 {
-    __set_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        __set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags(struct sched_item *item,
                                            unsigned int bit)
 {
-    __clear_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        __clear_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_set_pause_flags_atomic(struct sched_item *item,
                                                 unsigned int bit)
 {
-    set_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags_atomic(struct sched_item *item,
                                                   unsigned int bit)
 {
-    clear_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        clear_bit(bit, &v->pause_flags);
 }
 
 static inline struct sched_item *sched_idle_item(unsigned int cpu)
@@ -468,12 +506,18 @@ static inline int sched_adjust_cpupool(const struct scheduler *s,
 
 static inline void sched_item_pause_nosync(struct sched_item *item)
 {
-    vcpu_pause_nosync(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        vcpu_pause_nosync(v);
 }
 
 static inline void sched_item_unpause(struct sched_item *item)
 {
-    vcpu_unpause(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        vcpu_unpause(v);
 }
 
 #define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel