From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
 Jan Beulich, Dario Faggioli
Date: Sat, 14 Sep 2019 10:52:34 +0200
Message-Id: <20190914085251.18816-31-jgross@suse.com>
In-Reply-To: <20190914085251.18816-1-jgross@suse.com>
References: <20190914085251.18816-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH v3 30/47] xen/sched: add support for multiple
 vcpus per sched unit where missing
List-Id: Xen developer discussion

Support for multiple vcpus per sched unit is still missing in several
places. Add that missing support (with the exception of initial
allocation) and the helpers it requires.
Signed-off-by: Juergen Gross
---
RFC V2:
- fix vcpu_runstate_helper()
V1:
- add special handling for idle unit in unit_runnable() and
  unit_runnable_state()
V2:
- handle affinity_broken correctly (Jan Beulich)
V3:
- type for cpu -> unsigned int (Jan Beulich)
---
 xen/common/domain.c        |  5 +++-
 xen/common/schedule.c      | 37 +++++++++++++-------------
 xen/include/xen/sched-if.h | 65 +++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 79 insertions(+), 28 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index fa4023936b..ea6aee3858 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1259,7 +1259,10 @@ int vcpu_reset(struct vcpu *v)
     v->async_exception_mask = 0;
     memset(v->async_exception_state, 0, sizeof(v->async_exception_state));
 #endif
-    v->affinity_broken = 0;
+    if ( v->affinity_broken & VCPU_AFFINITY_OVERRIDE )
+        vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
+    if ( v->affinity_broken & VCPU_AFFINITY_WAIT )
+        vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_WAIT);
     clear_bit(_VPF_blocked, &v->pause_flags);
     clear_bit(_VPF_in_reset, &v->pause_flags);
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 03bcf796ae..a79065c826 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -243,8 +243,9 @@ static inline void vcpu_runstate_change(
     s_time_t delta;
     struct sched_unit *unit = v->sched_unit;
 
-    ASSERT(v->runstate.state != new_state);
     ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
+    if ( v->runstate.state == new_state )
+        return;
 
     vcpu_urgent_count_update(v);
 
@@ -266,15 +267,16 @@ static inline void vcpu_runstate_change(
 static inline void sched_unit_runstate_change(struct sched_unit *unit,
     bool running, s_time_t new_entry_time)
 {
-    struct vcpu *v = unit->vcpu_list;
+    struct vcpu *v;
 
-    if ( running )
-        vcpu_runstate_change(v, v->new_state, new_entry_time);
-    else
-        vcpu_runstate_change(v,
-            ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
-             (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
-            new_entry_time);
+    for_each_sched_unit_vcpu ( unit, v )
+        if ( running )
+            vcpu_runstate_change(v, v->new_state, new_entry_time);
+        else
+            vcpu_runstate_change(v,
+                ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
+                 (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
+                new_entry_time);
 }
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
@@ -1031,10 +1033,9 @@ int cpu_disable_scheduler(unsigned int cpu)
             if ( cpumask_empty(&online_affinity) &&
                  cpumask_test_cpu(cpu, unit->cpu_hard_affinity) )
             {
-                /* TODO: multiple vcpus per unit. */
-                if ( unit->vcpu_list->affinity_broken )
+                if ( sched_check_affinity_broken(unit) )
                 {
-                    /* The vcpu is temporarily pinned, can't move it. */
+                    /* The unit is temporarily pinned, can't move it. */
                     unit_schedule_unlock_irqrestore(lock, flags, unit);
                     ret = -EADDRINUSE;
                     break;
@@ -1392,17 +1393,17 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
             ret = 0;
             v->affinity_broken &= ~reason;
         }
-        if ( !ret && !v->affinity_broken )
+        if ( !ret && !sched_check_affinity_broken(unit) )
             sched_set_affinity(v, unit->cpu_hard_affinity_saved, NULL);
     }
     else if ( cpu < nr_cpu_ids )
     {
         if ( (v->affinity_broken & reason) ||
-             (v->affinity_broken && v->processor != cpu) )
+             (sched_check_affinity_broken(unit) && v->processor != cpu) )
             ret = -EBUSY;
         else if ( cpumask_test_cpu(cpu, VCPU2ONLINE(v)) )
         {
-            if ( !v->affinity_broken )
+            if ( !sched_check_affinity_broken(unit) )
             {
                 cpumask_copy(unit->cpu_hard_affinity_saved,
                              unit->cpu_hard_affinity);
@@ -1722,14 +1723,14 @@ static void sched_switch_units(struct sched_resource *sd,
                  (next->vcpu_list->runstate.state == RUNSTATE_runnable) ?
                  (now - next->state_entry_time) : 0, prev->next_time);
 
-    ASSERT(prev->vcpu_list->runstate.state == RUNSTATE_running);
+    ASSERT(unit_running(prev));
 
     TRACE_4D(TRC_SCHED_SWITCH, prev->domain->domain_id, prev->unit_id,
              next->domain->domain_id, next->unit_id);
 
     sched_unit_runstate_change(prev, false, now);
 
-    ASSERT(next->vcpu_list->runstate.state != RUNSTATE_running);
+    ASSERT(!unit_running(next));
     sched_unit_runstate_change(next, true, now);
 
     /*
@@ -1851,7 +1852,7 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
         while ( atomic_read(&next->rendezvous_out_cnt) )
             cpu_relax();
     }
-    else if ( vprev != vnext )
+    else if ( vprev != vnext && sched_granularity == 1 )
         context_saved(vprev);
 }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 25ba6f25c9..6a4dbac935 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -68,12 +68,32 @@ static inline bool is_idle_unit(const struct sched_unit *unit)
 
 static inline bool is_unit_online(const struct sched_unit *unit)
 {
-    return is_vcpu_online(unit->vcpu_list);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        if ( is_vcpu_online(v) )
+            return true;
+
+    return false;
+}
+
+static inline unsigned int unit_running(const struct sched_unit *unit)
+{
+    return unit->runstate_cnt[RUNSTATE_running];
 }
 
 static inline bool unit_runnable(const struct sched_unit *unit)
 {
-    return vcpu_runnable(unit->vcpu_list);
+    struct vcpu *v;
+
+    if ( is_idle_unit(unit) )
+        return true;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        if ( vcpu_runnable(v) )
+            return true;
+
+    return false;
 }
 
 static inline bool unit_runnable_state(const struct sched_unit *unit)
@@ -102,7 +122,16 @@ static inline bool unit_runnable_state(const struct sched_unit *unit)
 static inline void sched_set_res(struct sched_unit *unit,
                                  struct sched_resource *res)
 {
-    unit->vcpu_list->processor = res->master_cpu;
+    unsigned int cpu = cpumask_first(res->cpus);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        ASSERT(cpu < nr_cpu_ids);
+        v->processor = cpu;
+        cpu = cpumask_next(cpu, res->cpus);
+    }
+
     unit->res = res;
 }
 
@@ -114,25 +143,37 @@ static inline unsigned int sched_unit_cpu(const struct sched_unit *unit)
 static inline void sched_set_pause_flags(struct sched_unit *unit,
                                          unsigned int bit)
 {
-    __set_bit(bit, &unit->vcpu_list->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        __set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags(struct sched_unit *unit,
                                            unsigned int bit)
 {
-    __clear_bit(bit, &unit->vcpu_list->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        __clear_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_set_pause_flags_atomic(struct sched_unit *unit,
                                                 unsigned int bit)
 {
-    set_bit(bit, &unit->vcpu_list->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags_atomic(struct sched_unit *unit,
                                                   unsigned int bit)
 {
-    clear_bit(bit, &unit->vcpu_list->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        clear_bit(bit, &v->pause_flags);
 }
 
 static inline struct sched_unit *sched_idle_unit(unsigned int cpu)
@@ -458,12 +499,18 @@ static inline int sched_adjust_cpupool(const struct scheduler *s,
 
 static inline void sched_unit_pause_nosync(struct sched_unit *unit)
 {
-    vcpu_pause_nosync(unit->vcpu_list);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        vcpu_pause_nosync(v);
 }
 
 static inline void sched_unit_unpause(struct sched_unit *unit)
 {
-    vcpu_unpause(unit->vcpu_list);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+        vcpu_unpause(v);
 }
 
 #define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel