From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Mon, 30 Sep 2019 07:21:29 +0200
Message-Id: <20190930052135.11257-14-jgross@suse.com>
In-Reply-To: <20190930052135.11257-1-jgross@suse.com>
References: <20190930052135.11257-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v5 13/19] xen/sched: split schedule_cpu_switch()

Instead of letting schedule_cpu_switch() handle moving cpus from and
to cpupools, split it into schedule_cpu_add() and schedule_cpu_rm().

This will allow us to drop allocating/freeing scheduler data for free
cpus, as the idle scheduler doesn't need such data.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch
V4:
- rename sd -> sr (Jan Beulich)
---
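As a quick orientation before the diff: after this patch the two directions
of a cpu move use distinct entry points. The sketch below condenses the
cpupool.c call sites (illustrative only; the example_* names are not part of
the patch, and the surrounding locking and bookkeeping are elided):

/* Illustrative condensation of the cpupool.c hunks below, not patch code. */

/* Free cpu -> pool: the pool's scheduler takes over from sched_idle_ops. */
static int example_assign(struct cpupool *c, unsigned int cpu)
{
    int ret = schedule_cpu_add(cpu, c);  /* was schedule_cpu_switch(cpu, c) */

    if ( ret )
        return ret;
    /* ... cpu becomes valid for c and leaves cpupool_free_cpus ... */
    return 0;
}

/* Pool -> free cpu: the cpu falls back to the idle scheduler. */
static int example_unassign(unsigned int cpu)
{
    int ret = schedule_cpu_rm(cpu);      /* was schedule_cpu_switch(cpu, NULL) */

    if ( ret )
        cpumask_clear_cpu(cpu, &cpupool_free_cpus);  /* roll back the move */
    return ret;
}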
 xen/common/cpupool.c    |   4 +-
 xen/common/schedule.c   | 133 +++++++++++++++++++++++++++---------------------
 xen/include/xen/sched.h |   3 +-
 3 files changed, 78 insertions(+), 62 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 51f0ff0d88..02825e779d 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -271,7 +271,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
         return -EADDRNOTAVAIL;
-    ret = schedule_cpu_switch(cpu, c);
+    ret = schedule_cpu_add(cpu, c);
     if ( ret )
         return ret;
 
@@ -321,7 +321,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_switch(cpu, NULL);
+        ret = schedule_cpu_rm(cpu);
         if ( ret )
             cpumask_clear_cpu(cpu, &cpupool_free_cpus);
         else
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 5257225050..a96fc82282 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -93,15 +93,6 @@ static struct scheduler __read_mostly ops;
 static void sched_set_affinity(
     struct sched_unit *unit, const cpumask_t *hard, const cpumask_t *soft);
 
-static spinlock_t *
-sched_idle_switch_sched(struct scheduler *new_ops, unsigned int cpu,
-                        void *pdata, void *vdata)
-{
-    sched_idle_unit(cpu)->priv = NULL;
-
-    return &sched_free_cpu_lock;
-}
-
 static struct sched_resource *
 sched_idle_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
 {
@@ -141,7 +132,6 @@ static struct scheduler sched_idle_ops = {
 
     .alloc_udata    = sched_idle_alloc_udata,
     .free_udata     = sched_idle_free_udata,
-    .switch_sched   = sched_idle_switch_sched,
 };
 
 static inline struct vcpu *unit2vcpu_cpu(const struct sched_unit *unit,
@@ -2547,36 +2537,22 @@ void __init scheduler_init(void)
 }
 
 /*
- * Move a pCPU outside of the influence of the scheduler of its current
- * cpupool, or subject it to the scheduler of a new cpupool.
- *
- * For the pCPUs that are removed from their cpupool, their scheduler becomes
- * &sched_idle_ops (the idle scheduler).
+ * Move a pCPU from free cpus (running the idle scheduler) to a cpupool
+ * using any "real" scheduler.
+ * The cpu is still marked as "free" and not yet valid for its cpupool.
  */
-int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
 {
     struct vcpu *idle;
-    void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
-    struct scheduler *old_ops = get_sched_res(cpu)->scheduler;
-    struct scheduler *new_ops = (c == NULL) ? &sched_idle_ops : c->sched;
-    struct sched_resource *sd = get_sched_res(cpu);
-    struct cpupool *old_pool = sd->cpupool;
+    void *ppriv, *vpriv;
+    struct scheduler *new_ops = c->sched;
+    struct sched_resource *sr = get_sched_res(cpu);
     spinlock_t *old_lock, *new_lock;
     unsigned long flags;
 
-    /*
-     * pCPUs only move from a valid cpupool to free (i.e., out of any pool),
-     * or from free to a valid cpupool. In the former case (which happens when
-     * c is NULL), we want the CPU to have been marked as free already, as
-     * well as to not be valid for the source pool any longer, when we get to
-     * here.
-     * In the latter case (which happens when c is a valid cpupool), we want
-     * the CPU to still be marked as free, as well as to not yet be valid
-     * for the destination pool.
-     */
-    ASSERT(c != old_pool && (c != NULL || old_pool != NULL));
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
-    ASSERT((c == NULL && !cpumask_test_cpu(cpu, old_pool->cpu_valid)) ||
-           (c != NULL && !cpumask_test_cpu(cpu, c->cpu_valid)));
+    ASSERT(!cpumask_test_cpu(cpu, c->cpu_valid));
+    ASSERT(get_sched_res(cpu)->cpupool == NULL);
 
     /*
      * To setup the cpu for the new scheduler we need:
@@ -2601,52 +2577,91 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
         return -ENOMEM;
     }
 
-    sched_do_tick_suspend(old_ops, cpu);
-
     /*
-     * The actual switch, including (if necessary) the rerouting of the
-     * scheduler lock to whatever new_ops prefers, needs to happen in one
-     * critical section, protected by old_ops' lock, or races are possible.
-     * It is, in fact, the lock of another scheduler that we are taking (the
-     * scheduler of the cpupool that cpu still belongs to). But that is ok
-     * as, anyone trying to schedule on this cpu will spin until when we
-     * release that lock (bottom of this function). When he'll get the lock
-     * --thanks to the loop inside *_schedule_lock() functions-- he'll notice
-     * that the lock itself changed, and retry acquiring the new one (which
-     * will be the correct, remapped one, at that point).
+     * The actual switch, including the rerouting of the scheduler lock to
+     * whatever new_ops prefers, needs to happen in one critical section,
+     * protected by old_ops' lock, or races are possible.
+     * It is, in fact, the lock of the idle scheduler that we are taking.
+     * But that is ok as anyone trying to schedule on this cpu will spin until
+     * when we release that lock (bottom of this function). When he'll get the
+     * lock --thanks to the loop inside *_schedule_lock() functions-- he'll
+     * notice that the lock itself changed, and retry acquiring the new one
+     * (which will be the correct, remapped one, at that point).
      */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle->sched_unit->priv;
-    ppriv_old = sd->sched_priv;
     new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
 
-    sd->scheduler = new_ops;
-    sd->sched_priv = ppriv;
+    sr->scheduler = new_ops;
+    sr->sched_priv = ppriv;
 
     /*
-     * The data above is protected under new_lock, which may be unlocked.
-     * Another CPU can take new_lock as soon as sd->schedule_lock is visible,
-     * and must observe all prior initialisation.
+     * Reroute the lock to the per pCPU lock as /last/ thing. In fact,
+     * if it is free (and it can be) we want that anyone that manages
+     * taking it, finds all the initializations we've done above in place.
      */
     smp_wmb();
-    sd->schedule_lock = new_lock;
+    sr->schedule_lock = new_lock;
 
-    /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
+    /* _Not_ pcpu_schedule_unlock(): schedule_lock has changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
     sched_do_tick_resume(new_ops, cpu);
 
+    sr->granularity = cpupool_get_granularity(c);
+    sr->cpupool = c;
+    /* The cpu is added to a pool, trigger it to go pick up some work */
+    cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+
+    return 0;
+}
+
+/*
+ * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
+ * (the idle scheduler).
+ * The cpu is already marked as "free" and not valid any longer for its
+ * cpupool.
+ */
+int schedule_cpu_rm(unsigned int cpu)
+{
+    struct vcpu *idle;
+    void *ppriv_old, *vpriv_old;
+    struct sched_resource *sr = get_sched_res(cpu);
+    struct scheduler *old_ops = sr->scheduler;
+    spinlock_t *old_lock;
+    unsigned long flags;
+
+    ASSERT(sr->cpupool != NULL);
+    ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
+    ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
+
+    idle = idle_vcpu[cpu];
+
+    sched_do_tick_suspend(old_ops, cpu);
+
+    /* See comment in schedule_cpu_add() regarding lock switching. */
+    old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+
+    vpriv_old = idle->sched_unit->priv;
+    ppriv_old = sr->sched_priv;
+
+    idle->sched_unit->priv = NULL;
+    sr->scheduler = &sched_idle_ops;
+    sr->sched_priv = NULL;
+
+    smp_mb();
+    sr->schedule_lock = &sched_free_cpu_lock;
+
+    /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
+    spin_unlock_irqrestore(old_lock, flags);
+
     sched_deinit_pdata(old_ops, ppriv_old, cpu);
 
     sched_free_udata(old_ops, vpriv_old);
     sched_free_pdata(old_ops, ppriv_old, cpu);
 
-    get_sched_res(cpu)->granularity = cpupool_get_granularity(c);
-    get_sched_res(cpu)->cpupool = c;
-    /* When a cpu is added to a pool, trigger it to go pick up some work */
-    if ( c != NULL )
-        cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+    sr->granularity = 1;
+    sr->cpupool = NULL;
 
     return 0;
 }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index aa8257edc9..a40bd5fb56 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -920,7 +920,8 @@ struct scheduler;
 struct scheduler *scheduler_get_default(void);
 struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
 void scheduler_free(struct scheduler *sched);
-int schedule_cpu_switch(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_rm(unsigned int cpu);
 void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 int cpu_disable_scheduler(unsigned int cpu);
 void sched_setup_dom0_vcpus(struct domain *d);
-- 
2.16.4
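
A note on the locking scheme both new functions rely on: the comment in
schedule_cpu_add() refers to "the loop inside *_schedule_lock() functions".
Those helpers re-check, after acquiring the per-cpu scheduler lock, that it
is still the lock the sched_resource points at, and retry otherwise; that is
what makes the lock rerouting in the patch safe. A minimal sketch of the
pattern, paraphrased rather than copied from Xen's headers
(example_pcpu_schedule_lock is an illustrative name):

/* Illustrative sketch of the retry loop the comments refer to
 * (paraphrased from Xen's sched-if.h helpers, not literal code). */
static inline spinlock_t *example_pcpu_schedule_lock(unsigned int cpu)
{
    for ( ; ; )
    {
        spinlock_t *lock = get_sched_res(cpu)->schedule_lock;

        spin_lock(lock);

        /*
         * If schedule_cpu_add()/schedule_cpu_rm() rerouted the lock while
         * we were spinning, we now hold the stale one: drop it and retry
         * with the new, remapped lock.
         */
        if ( likely(lock == get_sched_res(cpu)->schedule_lock) )
            return lock;

        spin_unlock(lock);
    }
}

This is also why both functions write sr->schedule_lock only after a barrier
(smp_wmb() / smp_mb()): a CPU spinning in the loop above must find all the
scheduler data fully initialized once it observes the new lock pointer.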