From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Tue, 28 May 2019 12:32:33 +0200
Message-Id: <20190528103313.1343-21-jgross@suse.com>
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH 20/60] xen/sched: make null scheduler vcpu agnostic.

Switch null scheduler completely from vcpu to sched_unit usage.
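
The conversion follows a single pattern throughout: per-pCPU and per-unit
state now points at a struct sched_unit, and the pCPU is read and set via
the sched_unit accessors instead of struct vcpu fields. A minimal,
illustrative sketch of that pattern (condensed from the hunks below, not a
literal excerpt of the file):

    /* Before: the null scheduler tracked the assigned vcpu directly. */
    struct null_pcpu {
        struct vcpu *vcpu;
    };
    /* e.g. cpu = v->processor; v->sched_unit->res = get_sched_res(cpu); */

    /* After: it tracks the scheduling unit and goes through its accessors. */
    struct null_pcpu {
        struct sched_unit *unit;
    };
    /* e.g. cpu = sched_unit_cpu(unit); sched_set_res(unit, get_sched_res(cpu)); */
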
Signed-off-by: Juergen Gross
---
 xen/common/sched_null.c | 304 ++++++++++++++++++++++++----------------------
 1 file changed, 149 insertions(+), 155 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index e490b791b8..53a20d73aa 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -18,10 +18,10 @@
 
 /*
  * The 'null' scheduler always choose to run, on each pCPU, either nothing
- * (i.e., the pCPU stays idle) or always the same vCPU.
+ * (i.e., the pCPU stays idle) or always the same Item.
  *
  * It is aimed at supporting static scenarios, where there always are
- * less vCPUs than pCPUs (and the vCPUs don't need to move among pCPUs
+ * less Items than pCPUs (and the Items don't need to move among pCPUs
  * for any reason) with the least possible overhead.
  *
  * Typical usecase are embedded applications, but also HPC, especially
@@ -38,8 +38,8 @@
  * null tracing events. Check include/public/trace.h for more details.
  */
 #define TRC_SNULL_PICKED_CPU    TRC_SCHED_CLASS_EVT(SNULL, 1)
-#define TRC_SNULL_VCPU_ASSIGN   TRC_SCHED_CLASS_EVT(SNULL, 2)
-#define TRC_SNULL_VCPU_DEASSIGN TRC_SCHED_CLASS_EVT(SNULL, 3)
+#define TRC_SNULL_UNIT_ASSIGN   TRC_SCHED_CLASS_EVT(SNULL, 2)
+#define TRC_SNULL_UNIT_DEASSIGN TRC_SCHED_CLASS_EVT(SNULL, 3)
 #define TRC_SNULL_MIGRATE       TRC_SCHED_CLASS_EVT(SNULL, 4)
 #define TRC_SNULL_SCHEDULE      TRC_SCHED_CLASS_EVT(SNULL, 5)
 #define TRC_SNULL_TASKLET       TRC_SCHED_CLASS_EVT(SNULL, 6)
@@ -48,13 +48,13 @@
  * Locking:
  * - Scheduler-lock (a.k.a. runqueue lock):
  *  + is per-pCPU;
- *  + serializes assignment and deassignment of vCPUs to a pCPU.
+ *  + serializes assignment and deassignment of Items to a pCPU.
  * - Private data lock (a.k.a. private scheduler lock):
  *  + is scheduler-wide;
  *  + serializes accesses to the list of domains in this scheduler.
  * - Waitqueue lock:
  *  + is scheduler-wide;
- *  + serialize accesses to the list of vCPUs waiting to be assigned
+ *  + serialize accesses to the list of Items waiting to be assigned
  *    to pCPUs.
  *
  * Ordering is: private lock, runqueue lock, waitqueue lock. Or, OTOH,
@@ -78,25 +78,25 @@
 struct null_private {
     spinlock_t lock;        /* scheduler lock; nests inside cpupool_lock */
     struct list_head ndom;  /* Domains of this scheduler */
-    struct list_head waitq; /* vCPUs not assigned to any pCPU */
+    struct list_head waitq; /* Items not assigned to any pCPU */
     spinlock_t waitq_lock;  /* serializes waitq; nests inside runq locks */
-    cpumask_t cpus_free;    /* CPUs without a vCPU associated to them */
+    cpumask_t cpus_free;    /* CPUs without a Item associated to them */
 };
 
 /*
  * Physical CPU
  */
 struct null_pcpu {
-    struct vcpu *vcpu;
+    struct sched_unit *unit;
 };
 DEFINE_PER_CPU(struct null_pcpu, npc);
 
 /*
- * Virtual CPU
+ * Schedule Item
  */
 struct null_unit {
     struct list_head waitq_elem;
-    struct vcpu *vcpu;
+    struct sched_unit *unit;
 };
 
 /*
@@ -120,13 +120,13 @@ static inline struct null_unit *null_unit(const struct sched_unit *unit)
     return unit->priv;
 }
 
-static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
+static inline bool unit_check_affinity(struct sched_unit *unit,
+                                       unsigned int cpu,
                                        unsigned int balance_step)
 {
-    affinity_balance_cpumask(v->sched_unit, balance_step,
-                             cpumask_scratch_cpu(cpu));
+    affinity_balance_cpumask(unit, balance_step, cpumask_scratch_cpu(cpu));
     cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                cpupool_domain_cpumask(v->domain));
+                cpupool_domain_cpumask(unit->domain));
 
     return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
 }
@@ -161,9 +161,9 @@ static void null_deinit(struct scheduler *ops)
 
 static void init_pdata(struct null_private *prv, unsigned int cpu)
 {
-    /* Mark the pCPU as free, and with no vCPU assigned */
+    /* Mark the pCPU as free, and with no unit assigned */
     cpumask_set_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).vcpu = NULL;
+    per_cpu(npc, cpu).unit = NULL;
 }
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
@@ -191,13 +191,12 @@ static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
     ASSERT(!pcpu);
 
     cpumask_clear_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).vcpu = NULL;
+    per_cpu(npc, cpu).unit = NULL;
 }
 
 static void *null_alloc_vdata(const struct scheduler *ops,
                               struct sched_unit *unit, void *dd)
 {
-    struct vcpu *v = unit->vcpu;
     struct null_unit *nvc;
 
     nvc = xzalloc(struct null_unit);
@@ -205,7 +204,7 @@ static void *null_alloc_vdata(const struct scheduler *ops,
         return NULL;
 
     INIT_LIST_HEAD(&nvc->waitq_elem);
-    nvc->vcpu = v;
+    nvc->unit = unit;
 
     SCHED_STAT_CRANK(unit_alloc);
 
@@ -257,15 +256,15 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
 }
 
 /*
- * vCPU to pCPU assignment and placement. This _only_ happens:
+ * unit to pCPU assignment and placement. This _only_ happens:
  *  - on insert,
  *  - on migrate.
  *
- * Insert occurs when a vCPU joins this scheduler for the first time
+ * Insert occurs when a unit joins this scheduler for the first time
  * (e.g., when the domain it's part of is moved to the scheduler's
  * cpupool).
  *
- * Migration may be necessary if a pCPU (with a vCPU assigned to it)
+ * Migration may be necessary if a pCPU (with a unit assigned to it)
  * is removed from the scheduler's cpupool.
  *
  * So this is not part of any hot path.
@@ -274,9 +273,8 @@ static struct sched_resource *
 pick_res(struct null_private *prv, struct sched_unit *unit)
 {
     unsigned int bs;
-    struct vcpu *v = unit->vcpu;
-    unsigned int cpu = v->processor, new_cpu;
-    cpumask_t *cpus = cpupool_domain_cpumask(v->domain);
+    unsigned int cpu = sched_unit_cpu(unit), new_cpu;
+    cpumask_t *cpus = cpupool_domain_cpumask(unit->domain);
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -291,11 +289,12 @@ pick_res(struct null_private *prv, struct sched_unit *unit)
     /*
      * If our processor is free, or we are assigned to it, and it is also
      * still valid and part of our affinity, just go for it.
-     * (Note that we may call vcpu_check_affinity(), but we deliberately
+     * (Note that we may call unit_check_affinity(), but we deliberately
      * don't, so we get to keep in the scratch cpumask what we have just
      * put in it.)
      */
-    if ( likely((per_cpu(npc, cpu).vcpu == NULL || per_cpu(npc, cpu).vcpu == v)
+    if ( likely((per_cpu(npc, cpu).unit == NULL ||
+                 per_cpu(npc, cpu).unit == unit)
                 && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
     {
         new_cpu = cpu;
@@ -313,13 +312,13 @@ pick_res(struct null_private *prv, struct sched_unit *unit)
 
     /*
      * If we didn't find any free pCPU, just pick any valid pcpu, even if
-     * it has another vCPU assigned. This will happen during shutdown and
+     * it has another Item assigned. This will happen during shutdown and
      * suspend/resume, but it may also happen during "normal operation", if
      * all the pCPUs are busy.
      *
     * In fact, there must always be something sane in v->processor, or
      * unit_schedule_lock() and friends won't work. This is not a problem,
-     * as we will actually assign the vCPU to the pCPU we return from here,
+     * as we will actually assign the Item to the pCPU we return from here,
      * only if the pCPU is free.
      */
    cpumask_and(cpumask_scratch_cpu(cpu), cpus, unit->cpu_hard_affinity);
@@ -329,11 +328,11 @@ pick_res(struct null_private *prv, struct sched_unit *unit)
     if ( unlikely(tb_init_done) )
     {
         struct {
-            uint16_t vcpu, dom;
+            uint16_t unit, dom;
             uint32_t new_cpu;
         } d;
-        d.dom = v->domain->domain_id;
-        d.vcpu = v->vcpu_id;
+        d.dom = unit->domain->domain_id;
+        d.unit = unit->unit_id;
         d.new_cpu = new_cpu;
         __trace_var(TRC_SNULL_PICKED_CPU, 1, sizeof(d), &d);
     }
@@ -341,47 +340,47 @@ pick_res(struct null_private *prv, struct sched_unit *unit)
     return get_sched_res(new_cpu);
 }
 
-static void vcpu_assign(struct null_private *prv, struct vcpu *v,
+static void unit_assign(struct null_private *prv, struct sched_unit *unit,
                         unsigned int cpu)
 {
-    per_cpu(npc, cpu).vcpu = v;
-    v->processor = cpu;
-    v->sched_unit->res = get_sched_res(cpu);
+    per_cpu(npc, cpu).unit = unit;
+    sched_set_res(unit, get_sched_res(cpu));
     cpumask_clear_cpu(cpu, &prv->cpus_free);
 
-    dprintk(XENLOG_G_INFO, "%d <-- %pv\n", cpu, v);
+    dprintk(XENLOG_G_INFO, "%d <-- %pdv%d\n", cpu, unit->domain, unit->unit_id);
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            uint16_t vcpu, dom;
+            uint16_t unit, dom;
             uint32_t cpu;
         } d;
-        d.dom = v->domain->domain_id;
-        d.vcpu = v->vcpu_id;
+        d.dom = unit->domain->domain_id;
+        d.unit = unit->unit_id;
         d.cpu = cpu;
-        __trace_var(TRC_SNULL_VCPU_ASSIGN, 1, sizeof(d), &d);
+        __trace_var(TRC_SNULL_UNIT_ASSIGN, 1, sizeof(d), &d);
     }
 }
 
-static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
+static void unit_deassign(struct null_private *prv, struct sched_unit *unit,
                           unsigned int cpu)
 {
-    per_cpu(npc, cpu).vcpu = NULL;
+    per_cpu(npc, cpu).unit = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
-    dprintk(XENLOG_G_INFO, "%d <-- NULL (%pv)\n", cpu, v);
+    dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
+            unit->unit_id);
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            uint16_t vcpu, dom;
+            uint16_t unit, dom;
             uint32_t cpu;
         } d;
-        d.dom = v->domain->domain_id;
-        d.vcpu = v->vcpu_id;
+        d.dom = unit->domain->domain_id;
+        d.unit = unit->unit_id;
         d.cpu = cpu;
-        __trace_var(TRC_SNULL_VCPU_DEASSIGN, 1, sizeof(d), &d);
+        __trace_var(TRC_SNULL_UNIT_DEASSIGN, 1, sizeof(d), &d);
     }
 }
 
@@ -394,9 +393,9 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
     struct null_private *prv = null_priv(new_ops);
     struct null_unit *nvc = vdata;
 
-    ASSERT(nvc && is_idle_vcpu(nvc->vcpu));
+    ASSERT(nvc && is_idle_unit(nvc->unit));
 
-    idle_vcpu[cpu]->sched_unit->priv = vdata;
+    sched_idle_unit(cpu)->priv = vdata;
 
     /*
      * We are holding the runqueue lock already (it's been taken in
@@ -413,35 +412,34 @@ static void null_unit_insert(const struct scheduler *ops,
 static void null_unit_insert(const struct scheduler *ops,
                              struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu;
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
     unsigned int cpu;
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(v));
+    ASSERT(!is_idle_unit(unit));
 
     lock = unit_schedule_lock_irq(unit);
 retry:
 
-    unit->res = pick_res(prv, unit);
-    cpu = v->processor = unit->res->processor;
+    sched_set_res(unit, pick_res(prv, unit));
+    cpu = sched_unit_cpu(unit);
 
     spin_unlock(lock);
 
     lock = unit_schedule_lock(unit);
 
     cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                cpupool_domain_cpumask(v->domain));
+                cpupool_domain_cpumask(unit->domain));
 
-    /* If the pCPU is free, we assign v to it */
-    if ( likely(per_cpu(npc, cpu).vcpu == NULL) )
+    /* If the pCPU is free, we assign unit to it */
+    if ( likely(per_cpu(npc, cpu).unit == NULL) )
     {
         /*
          * Insert is followed by vcpu_wake(), so there's no need to poke
         * the pcpu with the SCHEDULE_SOFTIRQ, as wake will do that.
         */
-        vcpu_assign(prv, v, cpu);
+        unit_assign(prv, unit, cpu);
     }
     else if ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
    {
@@ -460,7 +458,8 @@ static void null_unit_insert(const struct scheduler *ops,
          */
         spin_lock(&prv->waitq_lock);
         list_add_tail(&nvc->waitq_elem, &prv->waitq);
-        dprintk(XENLOG_G_WARNING, "WARNING: %pv not assigned to any CPU!\n", v);
+        dprintk(XENLOG_G_WARNING, "WARNING: %pdv%d not assigned to any CPU!\n",
+                unit->domain, unit->unit_id);
         spin_unlock(&prv->waitq_lock);
     }
     spin_unlock_irq(lock);
@@ -468,35 +467,34 @@ static void null_unit_insert(const struct scheduler *ops,
     SCHED_STAT_CRANK(unit_insert);
 }
 
-static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
+static void _unit_remove(struct null_private *prv, struct sched_unit *unit)
 {
     unsigned int bs;
-    unsigned int cpu = v->processor;
+    unsigned int cpu = sched_unit_cpu(unit);
     struct null_unit *wvc;
 
-    ASSERT(list_empty(&null_unit(v->sched_unit)->waitq_elem));
+    ASSERT(list_empty(&null_unit(unit)->waitq_elem));
 
-    vcpu_deassign(prv, v, cpu);
+    unit_deassign(prv, unit, cpu);
 
     spin_lock(&prv->waitq_lock);
 
     /*
-     * If v is assigned to a pCPU, let's see if there is someone waiting,
-     * suitable to be assigned to it (prioritizing vcpus that have
+     * If unit is assigned to a pCPU, let's see if there is someone waiting,
+     * suitable to be assigned to it (prioritizing units that have
      * soft-affinity with cpu).
     */
     for_each_affinity_balance_step( bs )
     {
         list_for_each_entry( wvc, &prv->waitq, waitq_elem )
         {
-            if ( bs == BALANCE_SOFT_AFFINITY &&
-                 !has_soft_affinity(wvc->vcpu->sched_unit) )
+            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->unit) )
                 continue;
 
-            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+            if ( unit_check_affinity(wvc->unit, cpu, bs) )
             {
                 list_del_init(&wvc->waitq_elem);
-                vcpu_assign(prv, wvc->vcpu, cpu);
+                unit_assign(prv, wvc->unit, cpu);
                 cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                 spin_unlock(&prv->waitq_lock);
                 return;
@@ -509,16 +507,15 @@ static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
 static void null_unit_remove(const struct scheduler *ops,
                              struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu;
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(v));
+    ASSERT(!is_idle_unit(unit));
 
     lock = unit_schedule_lock_irq(unit);
 
-    /* If v is in waitqueue, just get it out of there and bail */
+    /* If unit is in waitqueue, just get it out of there and bail */
     if ( unlikely(!list_empty(&nvc->waitq_elem)) )
     {
         spin_lock(&prv->waitq_lock);
@@ -528,10 +525,10 @@ static void null_unit_remove(const struct scheduler *ops,
         goto out;
     }
 
-    ASSERT(per_cpu(npc, v->processor).vcpu == v);
-    ASSERT(!cpumask_test_cpu(v->processor, &prv->cpus_free));
+    ASSERT(per_cpu(npc, sched_unit_cpu(unit)).unit == unit);
+    ASSERT(!cpumask_test_cpu(sched_unit_cpu(unit), &prv->cpus_free));
 
-    _vcpu_remove(prv, v);
+    _unit_remove(prv, unit);
 
  out:
     unit_schedule_unlock_irq(lock, unit);
@@ -542,11 +539,9 @@ static void null_unit_remove(const struct scheduler *ops,
 static void null_unit_wake(const struct scheduler *ops,
                            struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu;
+    ASSERT(!is_idle_unit(unit));
 
-    ASSERT(!is_idle_vcpu(v));
-
-    if ( unlikely(curr_on_cpu(v->processor) == unit) )
+    if ( unlikely(curr_on_cpu(sched_unit_cpu(unit)) == unit) )
     {
         SCHED_STAT_CRANK(unit_wake_running);
         return;
@@ -559,25 +554,23 @@ static void null_unit_wake(const struct scheduler *ops,
         return;
     }
 
-    if ( likely(vcpu_runnable(v)) )
+    if ( likely(unit_runnable(unit)) )
         SCHED_STAT_CRANK(unit_wake_runnable);
     else
         SCHED_STAT_CRANK(unit_wake_not_runnable);
 
-    /* Note that we get here only for vCPUs assigned to a pCPU */
-    cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
+    /* Note that we get here only for units assigned to a pCPU */
+    cpu_raise_softirq(sched_unit_cpu(unit), SCHEDULE_SOFTIRQ);
 }
 
 static void null_unit_sleep(const struct scheduler *ops,
                             struct sched_unit *unit)
 {
-    struct vcpu *v = unit->vcpu;
-
-    ASSERT(!is_idle_vcpu(v));
+    ASSERT(!is_idle_unit(unit));
 
-    /* If v is not assigned to a pCPU, or is not running, no need to bother */
-    if ( curr_on_cpu(v->processor) == unit )
-        cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
+    /* If unit isn't assigned to a pCPU, or isn't running, no need to bother */
+    if ( curr_on_cpu(sched_unit_cpu(unit)) == unit )
+        cpu_raise_softirq(sched_unit_cpu(unit), SCHEDULE_SOFTIRQ);
 
     SCHED_STAT_CRANK(unit_sleep);
 }
@@ -585,37 +578,36 @@ static void null_unit_sleep(const struct scheduler *ops,
 static struct sched_resource *
 null_res_pick(const struct scheduler *ops, struct sched_unit *unit)
 {
-    ASSERT(!is_idle_vcpu(unit->vcpu));
+    ASSERT(!is_idle_unit(unit));
     return pick_res(null_priv(ops), unit);
 }
 
 static void null_unit_migrate(const struct scheduler *ops,
                               struct sched_unit *unit, unsigned int new_cpu)
 {
-    struct vcpu *v = unit->vcpu;
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
 
-    ASSERT(!is_idle_vcpu(v));
+    ASSERT(!is_idle_unit(unit));
 
-    if ( v->processor == new_cpu )
+    if ( sched_unit_cpu(unit) == new_cpu )
         return;
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            uint16_t vcpu, dom;
+            uint16_t unit, dom;
             uint16_t cpu, new_cpu;
         } d;
-        d.dom = v->domain->domain_id;
-        d.vcpu = v->vcpu_id;
-        d.cpu = v->processor;
+        d.dom = unit->domain->domain_id;
+        d.unit = unit->unit_id;
+        d.cpu = sched_unit_cpu(unit);
         d.new_cpu = new_cpu;
         __trace_var(TRC_SNULL_MIGRATE, 1, sizeof(d), &d);
     }
 
     /*
-     * v is either assigned to a pCPU, or in the waitqueue.
+     * unit is either assigned to a pCPU, or in the waitqueue.
      *
      * In the former case, the pCPU to which it was assigned would
      * become free, and we, therefore, should check whether there is
@@ -625,7 +617,7 @@ static void null_unit_migrate(const struct scheduler *ops,
      */
     if ( likely(list_empty(&nvc->waitq_elem)) )
     {
-        _vcpu_remove(prv, v);
+        _unit_remove(prv, unit);
         SCHED_STAT_CRANK(migrate_running);
     }
     else
@@ -634,32 +626,34 @@ static void null_unit_migrate(const struct scheduler *ops,
     SCHED_STAT_CRANK(migrated);
 
     /*
-     * Let's now consider new_cpu, which is where v is being sent. It can be
-     * either free, or have a vCPU already assigned to it.
+     * Let's now consider new_cpu, which is where unit is being sent. It can be
+     * either free, or have a unit already assigned to it.
      *
-     * In the former case, we should assign v to it, and try to get it to run,
+     * In the former case we should assign unit to it, and try to get it to run,
      * if possible, according to affinity.
      *
-     * In latter, all we can do is to park v in the waitqueue.
+     * In latter, all we can do is to park unit in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).vcpu == NULL &&
-         vcpu_check_affinity(v, new_cpu, BALANCE_HARD_AFFINITY) )
+    if ( per_cpu(npc, new_cpu).unit == NULL &&
+         unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) )
     {
-        /* v might have been in the waitqueue, so remove it */
+        /* unit might have been in the waitqueue, so remove it */
         spin_lock(&prv->waitq_lock);
         list_del_init(&nvc->waitq_elem);
         spin_unlock(&prv->waitq_lock);
 
-        vcpu_assign(prv, v, new_cpu);
+        unit_assign(prv, unit, new_cpu);
     }
     else
     {
-        /* Put v in the waitqueue, if it wasn't there already */
+        /* Put unit in the waitqueue, if it wasn't there already */
        spin_lock(&prv->waitq_lock);
         if ( list_empty(&nvc->waitq_elem) )
         {
             list_add_tail(&nvc->waitq_elem, &prv->waitq);
-            dprintk(XENLOG_G_WARNING, "WARNING: %pv not assigned to any CPU!\n", v);
+            dprintk(XENLOG_G_WARNING,
+                    "WARNING: %pdv%d not assigned to any CPU!\n", unit->domain,
+                    unit->unit_id);
         }
         spin_unlock(&prv->waitq_lock);
     }
@@ -672,35 +666,34 @@ static void null_unit_migrate(const struct scheduler *ops,
      * at least. In case of suspend, any temporary inconsistency caused
      * by this, will be fixed-up during resume.
      */
-    v->processor = new_cpu;
-    unit->res = get_sched_res(new_cpu);
+    sched_set_res(unit, get_sched_res(new_cpu));
 }
 
 #ifndef NDEBUG
-static inline void null_vcpu_check(struct vcpu *v)
+static inline void null_unit_check(struct sched_unit *unit)
 {
-    struct null_unit * const nvc = null_unit(v->sched_unit);
-    struct null_dom * const ndom = v->domain->sched_priv;
+    struct null_unit * const nvc = null_unit(unit);
+    struct null_dom * const ndom = unit->domain->sched_priv;
 
-    BUG_ON(nvc->vcpu != v);
+    BUG_ON(nvc->unit != unit);
 
     if ( ndom )
-        BUG_ON(is_idle_vcpu(v));
+        BUG_ON(is_idle_unit(unit));
     else
-        BUG_ON(!is_idle_vcpu(v));
+        BUG_ON(!is_idle_unit(unit));
 
     SCHED_STAT_CRANK(unit_check);
 }
-#define NULL_VCPU_CHECK(v)  (null_vcpu_check(v))
+#define NULL_UNIT_CHECK(unit)  (null_unit_check(unit))
 #else
-#define NULL_VCPU_CHECK(v)
+#define NULL_UNIT_CHECK(unit)
 #endif
 
 
 /*
  * The most simple scheduling function of all times! We either return:
- *  - the vCPU assigned to the pCPU, if there's one and it can run;
- *  - the idle vCPU, otherwise.
+ *  - the unit assigned to the pCPU, if there's one and it can run;
+ *  - the idle unit, otherwise.
  */
 static struct task_slice null_schedule(const struct scheduler *ops,
                                        s_time_t now,
@@ -713,24 +706,24 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     struct task_slice ret;
 
     SCHED_STAT_CRANK(schedule);
-    NULL_VCPU_CHECK(current);
+    NULL_UNIT_CHECK(current->sched_unit);
 
     if ( unlikely(tb_init_done) )
     {
         struct {
             uint16_t tasklet, cpu;
-            int16_t vcpu, dom;
+            int16_t unit, dom;
         } d;
         d.cpu = cpu;
         d.tasklet = tasklet_work_scheduled;
-        if ( per_cpu(npc, cpu).vcpu == NULL )
+        if ( per_cpu(npc, cpu).unit == NULL )
         {
-            d.vcpu = d.dom = -1;
+            d.unit = d.dom = -1;
         }
         else
         {
-            d.vcpu = per_cpu(npc, cpu).vcpu->vcpu_id;
-            d.dom = per_cpu(npc, cpu).vcpu->domain->domain_id;
+            d.unit = per_cpu(npc, cpu).unit->unit_id;
+            d.dom = per_cpu(npc, cpu).unit->domain->domain_id;
         }
         __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
     }
@@ -738,16 +731,16 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_SNULL_TASKLET, 1, 0, NULL);
-        ret.task = idle_vcpu[cpu]->sched_unit;
+        ret.task = sched_idle_unit(cpu);
     }
     else
-        ret.task = per_cpu(npc, cpu).vcpu->sched_unit;
+        ret.task = per_cpu(npc, cpu).unit;
     ret.migrated = 0;
     ret.time = -1;
 
     /*
      * We may be new in the cpupool, or just coming back online. In which
-     * case, there may be vCPUs in the waitqueue that we can assign to us
+     * case, there may be units in the waitqueue that we can assign to us
      * and run.
      */
     if ( unlikely(ret.task == NULL) )
@@ -758,10 +751,10 @@ static struct task_slice null_schedule(const struct scheduler *ops,
             goto unlock;
 
         /*
-         * We scan the waitqueue twice, for prioritizing vcpus that have
+         * We scan the waitqueue twice, for prioritizing units that have
          * soft-affinity with cpu. This may look like something expensive to
-         * do here in null_schedule(), but it's actually fine, beceuse we do
-         * it only in cases where a pcpu has no vcpu associated (e.g., as
+         * do here in null_schedule(), but it's actually fine, because we do
+         * it only in cases where a pcpu has no unit associated (e.g., as
          * said above, the cpu has just joined a cpupool).
         */
         for_each_affinity_balance_step( bs )
@@ -769,14 +762,14 @@ static struct task_slice null_schedule(const struct scheduler *ops,
             list_for_each_entry( wvc, &prv->waitq, waitq_elem )
             {
                 if ( bs == BALANCE_SOFT_AFFINITY &&
-                     !has_soft_affinity(wvc->vcpu->sched_unit) )
+                     !has_soft_affinity(wvc->unit) )
                     continue;
 
-                if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+                if ( unit_check_affinity(wvc->unit, cpu, bs) )
                 {
-                    vcpu_assign(prv, wvc->vcpu, cpu);
+                    unit_assign(prv, wvc->unit, cpu);
                     list_del_init(&wvc->waitq_elem);
-                    ret.task = wvc->vcpu->sched_unit;
+                    ret.task = wvc->unit;
                     goto unlock;
                 }
             }
@@ -786,17 +779,17 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     }
 
     if ( unlikely(ret.task == NULL || !unit_runnable(ret.task)) )
-        ret.task = idle_vcpu[cpu]->sched_unit;
+        ret.task = sched_idle_unit(cpu);
 
-    NULL_VCPU_CHECK(ret.task->vcpu);
+    NULL_UNIT_CHECK(ret.task);
     return ret;
 }
 
-static inline void dump_vcpu(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 {
-    printk("[%i.%i] pcpu=%d", nvc->vcpu->domain->domain_id,
-           nvc->vcpu->vcpu_id, list_empty(&nvc->waitq_elem) ?
-           nvc->vcpu->processor : -1);
+    printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
+           nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
+           sched_unit_cpu(nvc->unit) : -1);
 }
 
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
@@ -812,16 +805,17 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
            cpu, nr_cpu_ids, cpumask_bits(per_cpu(cpu_sibling_mask, cpu)),
            nr_cpu_ids, cpumask_bits(per_cpu(cpu_core_mask, cpu)));
-    if ( per_cpu(npc, cpu).vcpu != NULL )
-        printk(", vcpu=%pv", per_cpu(npc, cpu).vcpu);
+    if ( per_cpu(npc, cpu).unit != NULL )
+        printk(", unit=%pdv%d", per_cpu(npc, cpu).unit->domain,
+               per_cpu(npc, cpu).unit->unit_id);
     printk("\n");
 
-    /* current VCPU (nothing to say if that's the idle vcpu) */
+    /* current unit (nothing to say if that's the idle unit) */
     nvc = null_unit(curr_on_cpu(cpu));
-    if ( nvc && !is_idle_vcpu(nvc->vcpu) )
+    if ( nvc && !is_idle_unit(nvc->unit) )
     {
         printk("\trun: ");
-        dump_vcpu(prv, nvc);
+        dump_unit(prv, nvc);
         printk("\n");
     }
 
@@ -844,23 +838,23 @@ static void null_dump(const struct scheduler *ops)
     list_for_each( iter, &prv->ndom )
     {
         struct null_dom *ndom;
-        struct vcpu *v;
+        struct sched_unit *unit;
 
         ndom = list_entry(iter, struct null_dom, ndom_elem);
 
         printk("\tDomain: %d\n", ndom->dom->domain_id);
-        for_each_vcpu( ndom->dom, v )
+        for_each_sched_unit( ndom->dom, unit )
         {
-            struct null_unit * const nvc = null_unit(v->sched_unit);
+            struct null_unit * const nvc = null_unit(unit);
             spinlock_t *lock;
 
-            lock = unit_schedule_lock(nvc->vcpu->sched_unit);
+            lock = unit_schedule_lock(unit);
 
             printk("\t%3d: ", ++loop);
-            dump_vcpu(prv, nvc);
+            dump_unit(prv, nvc);
             printk("\n");
 
-            unit_schedule_unlock(lock, nvc->vcpu->sched_unit);
+            unit_schedule_unlock(lock, unit);
         }
     }
 
@@ -875,7 +869,7 @@ static void null_dump(const struct scheduler *ops)
             printk(", ");
         if ( loop % 24 == 0 )
             printk("\n\t");
-        printk("%pv", nvc->vcpu);
+        printk("%pdv%d", nvc->unit->domain, nvc->unit->unit_id);
     }
     printk("\n");
     spin_unlock(&prv->waitq_lock);
-- 
2.16.4