From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli
Date: Mon, 6 May 2019 08:56:27 +0200
Message-Id: <20190506065644.7415-29-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 28/45] xen: switch from for_each_vcpu() to for_each_sched_item()

Where appropriate switch from for_each_vcpu() to for_each_sched_item()
in order to prepare core scheduling.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/domain.c   |   9 ++---
 xen/common/schedule.c | 107 +++++++++++++++++++++++++----------------------
 2 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 78a838fab3..d0f9e5e86a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -510,7 +510,7 @@ void domain_update_node_affinity(struct domain *d)
     cpumask_var_t dom_cpumask, dom_cpumask_soft;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
-    struct vcpu *v;
+    struct sched_item *item;
     unsigned int cpu;
 
     /* Do we have vcpus already? If not, no need to update node-affinity. */
@@ -543,12 +543,11 @@ void domain_update_node_affinity(struct domain *d)
      * and the full mask of where it would prefer to run (the union of
      * the soft affinity of all its various vcpus). Let's build them.
      */
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
-        cpumask_or(dom_cpumask, dom_cpumask,
-                   v->sched_item->cpu_hard_affinity);
+        cpumask_or(dom_cpumask, dom_cpumask, item->cpu_hard_affinity);
         cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                   v->sched_item->cpu_soft_affinity);
+                   item->cpu_soft_affinity);
     }
     /* Filter out non-online cpus */
     cpumask_and(dom_cpumask, dom_cpumask, online);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 5368d66cfc..bc0554f2da 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -423,16 +423,17 @@ static void sched_move_irqs(struct sched_item *item)
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
+    struct sched_item *item;
     unsigned int new_p;
-    void **vcpu_priv;
+    void **item_priv;
     void *domdata;
-    void *vcpudata;
+    void *itemdata;
     struct scheduler *old_ops;
     void *old_domdata;
 
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
-        if ( v->sched_item->affinity_broken )
+        if ( item->affinity_broken )
             return -EBUSY;
     }
 
@@ -440,22 +441,21 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     if ( IS_ERR(domdata) )
         return PTR_ERR(domdata);
 
-    vcpu_priv = xzalloc_array(void *, d->max_vcpus);
-    if ( vcpu_priv == NULL )
+    item_priv = xzalloc_array(void *, d->max_vcpus);
+    if ( item_priv == NULL )
     {
         sched_free_domdata(c->sched, domdata);
         return -ENOMEM;
     }
 
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
-        vcpu_priv[v->vcpu_id] = sched_alloc_vdata(c->sched, v->sched_item,
-                                                  domdata);
-        if ( vcpu_priv[v->vcpu_id] == NULL )
+        item_priv[item->item_id] = sched_alloc_vdata(c->sched, item, domdata);
+        if ( item_priv[item->item_id] == NULL )
         {
-            for_each_vcpu ( d, v )
-                xfree(vcpu_priv[v->vcpu_id]);
-            xfree(vcpu_priv);
+            for_each_sched_item ( d, item )
+                xfree(item_priv[item->item_id]);
+            xfree(item_priv);
             sched_free_domdata(c->sched, domdata);
             return -ENOMEM;
         }
@@ -466,30 +466,35 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
-        sched_remove_item(old_ops, v->sched_item);
+        sched_remove_item(old_ops, item);
     }
 
     d->cpupool = c;
     d->sched_priv = domdata;
 
     new_p = cpumask_first(c->cpu_valid);
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
         spinlock_t *lock;
+        unsigned int item_p = new_p;
 
-        vcpudata = v->sched_item->priv;
+        itemdata = item->priv;
 
-        migrate_timer(&v->periodic_timer, new_p);
-        migrate_timer(&v->singleshot_timer, new_p);
-        migrate_timer(&v->poll_timer, new_p);
+        for_each_sched_item_vcpu ( item, v )
+        {
+            migrate_timer(&v->periodic_timer, new_p);
+            migrate_timer(&v->singleshot_timer, new_p);
+            migrate_timer(&v->poll_timer, new_p);
+            new_p = cpumask_cycle(new_p, c->cpu_valid);
+        }
 
-        lock = item_schedule_lock_irq(v->sched_item);
+        lock = item_schedule_lock_irq(item);
 
-        sched_set_affinity(v, &cpumask_all, &cpumask_all);
+        sched_set_affinity(item->vcpu, &cpumask_all, &cpumask_all);
 
-        sched_set_res(v->sched_item, per_cpu(sched_res, new_p));
+        sched_set_res(item, per_cpu(sched_res, item_p));
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
@@ -497,15 +502,13 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
          */
         spin_unlock_irq(lock);
 
-        v->sched_item->priv = vcpu_priv[v->vcpu_id];
+        item->priv = item_priv[item->item_id];
         if ( !d->is_dying )
             sched_move_irqs(v->sched_item);
 
-        new_p = cpumask_cycle(new_p, c->cpu_valid);
+        sched_insert_item(c->sched, item);
 
-        sched_insert_item(c->sched, v->sched_item);
-
-        sched_free_vdata(old_ops, vcpudata);
+        sched_free_vdata(old_ops, itemdata);
     }
 
     domain_update_node_affinity(d);
@@ -514,7 +517,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     sched_free_domdata(old_ops, old_domdata);
 
-    xfree(vcpu_priv);
+    xfree(item_priv);
 
     return 0;
 }
@@ -819,15 +822,14 @@ void vcpu_force_reschedule(struct vcpu *v)
 void restore_vcpu_affinity(struct domain *d)
 {
     unsigned int cpu = smp_processor_id();
-    struct vcpu *v;
+    struct sched_item *item;
 
     ASSERT(system_state == SYS_STATE_resume);
 
-    for_each_vcpu ( d, v )
+    for_each_sched_item ( d, item )
     {
         spinlock_t *lock;
-        unsigned int old_cpu = v->processor;
-        struct sched_item *item = v->sched_item;
+        unsigned int old_cpu = sched_item_cpu(item);
         struct sched_resource *res;
 
         ASSERT(!item_runnable(item));
@@ -846,7 +848,8 @@ void restore_vcpu_affinity(struct domain *d)
         {
             if ( item->affinity_broken )
             {
-                sched_set_affinity(v, item->cpu_hard_affinity_saved, NULL);
+                sched_set_affinity(item->vcpu, item->cpu_hard_affinity_saved,
+                                   NULL);
                 item->affinity_broken = 0;
                 cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
                             cpupool_domain_cpumask(d));
@@ -854,8 +857,8 @@ void restore_vcpu_affinity(struct domain *d)
 
             if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
-                sched_set_affinity(v, &cpumask_all, NULL);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", item->vcpu);
+                sched_set_affinity(item->vcpu, &cpumask_all, NULL);
                 cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
                             cpupool_domain_cpumask(d));
             }
@@ -865,12 +868,12 @@ void restore_vcpu_affinity(struct domain *d)
         sched_set_res(item, res);
 
         lock = item_schedule_lock_irq(item);
-        res = sched_pick_resource(vcpu_scheduler(v), item);
+        res = sched_pick_resource(vcpu_scheduler(item->vcpu), item);
         sched_set_res(item, res);
         spin_unlock_irq(lock);
 
-        if ( old_cpu != v->processor )
-            sched_move_irqs(v->sched_item);
+        if ( old_cpu != sched_item_cpu(item) )
+            sched_move_irqs(item);
     }
 
     domain_update_node_affinity(d);
@@ -884,7 +887,6 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
     struct cpupool *c;
     cpumask_t online_affinity;
     int ret = 0;
@@ -895,10 +897,11 @@ int cpu_disable_scheduler(unsigned int cpu)
 
     for_each_domain_in_cpupool ( d, c )
     {
-        for_each_vcpu ( d, v )
+        struct sched_item *item;
+
+        for_each_sched_item ( d, item )
         {
             unsigned long flags;
-            struct sched_item *item = v->sched_item;
             spinlock_t *lock = item_schedule_lock_irqsave(item, &flags);
 
             cpumask_and(&online_affinity, item->cpu_hard_affinity, c->cpu_valid);
@@ -913,14 +916,14 @@ int cpu_disable_scheduler(unsigned int cpu)
                     break;
                 }
 
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", item->vcpu);
 
-                sched_set_affinity(v, &cpumask_all, NULL);
+                sched_set_affinity(item->vcpu, &cpumask_all, NULL);
             }
 
-            if ( v->processor != cpu )
+            if ( sched_item_cpu(item) != sched_get_resource_cpu(cpu) )
             {
-                /* The vcpu is not on this cpu, so we can move on. */
+                /* The item is not on this cpu, so we can move on. */
                 item_schedule_unlock_irqrestore(lock, flags, item);
                 continue;
             }
@@ -933,17 +936,17 @@ int cpu_disable_scheduler(unsigned int cpu)
              *  * the scheduler will always find a suitable solution, or
              *    things would have failed before getting in here.
              */
-            vcpu_migrate_start(v);
+            vcpu_migrate_start(item->vcpu);
             item_schedule_unlock_irqrestore(lock, flags, item);
 
-            vcpu_migrate_finish(v);
+            vcpu_migrate_finish(item->vcpu);
 
             /*
              * The only caveat, in this case, is that if a vcpu active in
              * the hypervisor isn't migratable. In this case, the caller
              * should try again after releasing and reaquiring all locks.
              */
-            if ( v->processor == cpu )
+            if ( sched_item_cpu(item) == sched_get_resource_cpu(cpu) )
                 ret = -EAGAIN;
         }
     }
@@ -954,16 +957,16 @@ int cpu_disable_scheduler(unsigned int cpu)
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
     struct cpupool *c;
+    struct sched_item *item;
 
     c = per_cpu(cpupool, cpu);
     if ( c == NULL )
         return 0;
 
     for_each_domain_in_cpupool ( d, c )
-        for_each_vcpu ( d, v )
-            if ( v->sched_item->affinity_broken )
+        for_each_sched_item ( d, item )
+            if ( item->affinity_broken )
                 return -EADDRINUSE;
 
     return 0;
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel