From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:12 +0200
Message-Id: <20190927070050.12405-9-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 08/46] xen/sched: switch vcpu_schedule_lock to unit_schedule_lock

Rename vcpu_schedule_[un]lock[_irq]() to unit_schedule_[un]lock[_irq]()
and let it take a sched_unit pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched_credit.c  | 17 +++++++++--------
 xen/common/sched_credit2.c | 40 ++++++++++++++++++++--------------------
 xen/common/sched_null.c    | 16 ++++++++--------
 xen/common/sched_rt.c      | 15 +++++++--------
 xen/common/schedule.c      | 45 +++++++++++++++++++++++----------------------
 xen/include/xen/sched-if.h | 12 ++++++------
 6 files changed, 73 insertions(+), 72 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 59a77e874b..d0e4ddc76b 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -926,7 +926,8 @@ __csched_vcpu_acct_stop_locked(struct csched_private *prv,
 static void
 csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 {
-    struct csched_unit * const svc = CSCHED_UNIT(current->sched_unit);
+    struct sched_unit *currunit = current->sched_unit;
+    struct csched_unit * const svc = CSCHED_UNIT(currunit);
     const struct scheduler *ops = per_cpu(scheduler, cpu);
 
     ASSERT( current->processor == cpu );
@@ -962,7 +963,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
     {
         unsigned int new_cpu;
         unsigned long flags;
-        spinlock_t *lock = vcpu_schedule_lock_irqsave(current, &flags);
+        spinlock_t *lock = unit_schedule_lock_irqsave(currunit, &flags);
 
         /*
          * If it's been active a while, check if we'd be better off
@@ -971,7 +972,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
          */
         new_cpu = _csched_cpu_pick(ops, current, 0);
 
-        vcpu_schedule_unlock_irqrestore(lock, flags, current);
+        unit_schedule_unlock_irqrestore(lock, flags, currunit);
 
         if ( new_cpu != cpu )
         {
@@ -1023,19 +1024,19 @@ csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     BUG_ON( is_idle_vcpu(vc) );
 
     /* csched_res_pick() looks in vc->processor's runq, so we need the lock. */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     unit->res = csched_res_pick(ops, unit);
     vc->processor = unit->res->master_cpu;
 
     spin_unlock_irq(lock);
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
         runq_insert(svc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -2133,12 +2134,12 @@ csched_dump(const struct scheduler *ops)
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_vcpu_elem);
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = unit_schedule_lock(svc->vcpu->sched_unit);
 
             printk("\t%3d: ", ++loop);
             csched_dump_vcpu(svc);
 
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            unit_schedule_unlock(lock, svc->vcpu->sched_unit);
         }
     }
 
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index ef0dd1d228..82d03a0683 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -171,7 +171,7 @@
  * - runqueue lock
  *  + it is per-runqueue, so:
  *   * cpus in a runqueue take the runqueue lock, when using
- *     pcpu_schedule_lock() / vcpu_schedule_lock() (and friends),
+ *     pcpu_schedule_lock() / unit_schedule_lock() (and friends),
  *   * a cpu may (try to) take a "remote" runqueue lock, e.g., for
  *     load balancing;
  *  + serializes runqueue operations (removing and inserting vcpus);
@@ -1891,7 +1891,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         unsigned long flags;
         s_time_t now;
 
-        lock = vcpu_schedule_lock_irqsave(svc->vcpu, &flags);
+        lock = unit_schedule_lock_irqsave(svc->vcpu->sched_unit, &flags);
 
         __clear_bit(_VPF_parked, &svc->vcpu->pause_flags);
         if ( unlikely(svc->flags & CSFLAG_scheduled) )
@@ -1924,7 +1924,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         }
         list_del_init(&svc->parked_elem);
 
-        vcpu_schedule_unlock_irqrestore(lock, flags, svc->vcpu);
+        unit_schedule_unlock_irqrestore(lock, flags, svc->vcpu->sched_unit);
     }
 }
 
@@ -2163,7 +2163,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     struct csched2_unit * const svc = csched2_unit(unit);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = unit_schedule_lock_irq(unit);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
 
@@ -2195,7 +2195,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
     else if ( !is_idle_vcpu(vc) )
         update_load(ops, svc->rqd, svc, -1, now);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 
     unpark_parked_vcpus(ops, &were_parked);
 }
@@ -2848,14 +2848,14 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 struct csched2_unit *svc = csched2_unit(v->sched_unit);
-                spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+                spinlock_t *lock = unit_schedule_lock(svc->vcpu->sched_unit);
 
                 ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
 
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
 
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
         }
         /* Cap */
@@ -2886,7 +2886,7 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 svc = csched2_unit(v->sched_unit);
-                lock = vcpu_schedule_lock(svc->vcpu);
+                lock = unit_schedule_lock(svc->vcpu->sched_unit);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
                  * which then won't happen because, in csched2_runtime(),
@@ -2894,7 +2894,7 @@ csched2_dom_cntl(
                  */
                 svc->budget_quota = max(sdom->tot_budget / sdom->nr_vcpus,
                                         CSCHED2_MIN_TIMER);
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
 
             if ( sdom->cap == 0 )
@@ -2929,7 +2929,7 @@ csched2_dom_cntl(
                 for_each_vcpu ( d, v )
                 {
                     svc = csched2_unit(v->sched_unit);
-                    lock = vcpu_schedule_lock(svc->vcpu);
+                    lock = unit_schedule_lock(svc->vcpu->sched_unit);
                     if ( v->is_running )
                     {
                         unsigned int cpu = v->processor;
@@ -2960,7 +2960,7 @@ csched2_dom_cntl(
                             cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                     }
                     svc->budget = 0;
-                    vcpu_schedule_unlock(lock, svc->vcpu);
+                    unit_schedule_unlock(lock, svc->vcpu->sched_unit);
                 }
             }
 
@@ -2976,12 +2976,12 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 struct csched2_unit *svc = csched2_unit(v->sched_unit);
-                spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+                spinlock_t *lock = unit_schedule_lock(svc->vcpu->sched_unit);
 
                 svc->budget = STIME_MAX;
                 svc->budget_quota = 0;
 
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
             sdom->cap = 0;
             /*
@@ -3120,19 +3120,19 @@ csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     ASSERT(list_empty(&svc->runq_elem));
 
     /* csched2_res_pick() expects the pcpu lock to be held */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     unit->res = csched2_res_pick(ops, unit);
     vc->processor = unit->res->master_cpu;
 
     spin_unlock_irq(lock);
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     /* Add vcpu to runqueue of initial processor */
     runq_assign(ops, vc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 
     sdom->nr_vcpus++;
 
@@ -3162,11 +3162,11 @@ csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     SCHED_STAT_CRANK(vcpu_remove);
 
     /* Remove from runqueue */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     runq_deassign(ops, vc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 
     svc->sdom->nr_vcpus--;
 }
@@ -3750,12 +3750,12 @@ csched2_dump(const struct scheduler *ops)
             struct csched2_unit * const svc = csched2_unit(v->sched_unit);
             spinlock_t *lock;
 
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = unit_schedule_lock(svc->vcpu->sched_unit);
 
             printk("\t%3d: ", ++loop);
             csched2_dump_vcpu(prv, svc);
 
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            unit_schedule_unlock(lock, svc->vcpu->sched_unit);
         }
     }
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index b95214601f..47d1b2ab56 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -309,7 +309,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
      * all the pCPUs are busy.
      *
      * In fact, there must always be something sane in v->processor, or
-     * vcpu_schedule_lock() and friends won't work. This is not a problem,
+     * unit_schedule_lock() and friends won't work. This is not a problem,
      * as we will actually assign the vCPU to the pCPU we return from here,
      * only if the pCPU is free.
      */
@@ -450,11 +450,11 @@ static void null_unit_insert(const struct scheduler *ops,
 
     ASSERT(!is_idle_vcpu(v));
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(unit);
 
     if ( unlikely(!is_vcpu_online(v)) )
     {
-        vcpu_schedule_unlock_irq(lock, v);
+        unit_schedule_unlock_irq(lock, unit);
         return;
     }
 
@@ -464,7 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
 
     spin_unlock(lock);
 
-    lock = vcpu_schedule_lock(v);
+    lock = unit_schedule_lock(unit);
 
     cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                 cpupool_domain_cpumask(v->domain));
@@ -513,7 +513,7 @@ static void null_unit_remove(const struct scheduler *ops,
 
     ASSERT(!is_idle_vcpu(v));
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(unit);
 
     /* If offline, the vcpu shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_vcpu_online(v)) )
@@ -536,7 +536,7 @@ static void null_unit_remove(const struct scheduler *ops,
         vcpu_deassign(prv, v);
 
  out:
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, unit);
 
     SCHED_STAT_CRANK(vcpu_remove);
 }
@@ -935,13 +935,13 @@ static void null_dump(const struct scheduler *ops)
             struct null_unit * const nvc = null_unit(v->sched_unit);
             spinlock_t *lock;
 
-            lock = vcpu_schedule_lock(nvc->vcpu);
+            lock = unit_schedule_lock(nvc->vcpu->sched_unit);
 
             printk("\t%3d: ", ++loop);
             dump_vcpu(prv, nvc);
             printk("\n");
 
-            vcpu_schedule_unlock(lock, nvc->vcpu);
+            unit_schedule_unlock(lock, nvc->vcpu->sched_unit);
         }
     }
 
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index a168668a70..da0a9c402f 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -177,7 +177,7 @@ static void repl_timer_handler(void *data);
 /*
  * System-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by sched_res->schedule_lock from all
- * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
+ * physical cpus. It can be grabbed via unit_schedule_lock_irq()
  */
 struct rt_private {
     spinlock_t lock;            /* the global coarse-grained lock */
@@ -895,7 +895,7 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     unit->res = rt_res_pick(ops, unit);
     vc->processor = unit->res->master_cpu;
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
 
     now = NOW();
     if ( now >= svc->cur_deadline )
@@ -908,7 +908,7 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
         if ( !vc->is_running )
             runq_insert(ops, svc);
     }
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -919,7 +919,6 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 static void
 rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
     struct rt_unit * const svc = rt_unit(unit);
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;
@@ -928,14 +927,14 @@ rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 
     BUG_ON( sdom == NULL );
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     if ( vcpu_on_q(svc) )
         q_remove(svc);
 
     if ( vcpu_on_replq(svc) )
         replq_remove(ops,svc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 }
 
 /*
@@ -1330,7 +1329,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     struct rt_unit *svc = rt_unit(unit);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = unit_schedule_lock_irq(unit);
 
     __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle vcpu to runq */
@@ -1347,7 +1346,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
         replq_remove(ops, svc);
 
 out:
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 }
 
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 67ccb78739..6c8fa38052 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -253,7 +253,8 @@ static inline void vcpu_runstate_change(
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current) ? NULL : vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = likely(v == current)
+                       ? NULL : unit_schedule_lock_irq(v->sched_unit);
     s_time_t delta;
 
     memcpy(runstate, &v->runstate, sizeof(*runstate));
@@ -262,7 +263,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
         runstate->time[runstate->state] += delta;
 
     if ( unlikely(lock != NULL) )
-        vcpu_schedule_unlock_irq(lock, v);
+        unit_schedule_unlock_irq(lock, v->sched_unit);
 }
 
 uint64_t get_cpu_idle_time(unsigned int cpu)
@@ -478,7 +479,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         migrate_timer(&v->singleshot_timer, new_p);
         migrate_timer(&v->poll_timer, new_p);
 
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
 
         sched_set_affinity(v, &cpumask_all, &cpumask_all);
 
@@ -487,7 +488,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
-         * - use vcpu_schedule_unlock_irq().
+         * - use unit_schedule_unlock_irq().
          */
         spin_unlock_irq(lock);
 
@@ -586,11 +587,11 @@ void vcpu_sleep_nosync(struct vcpu *v)
 
     TRACE_2D(TRC_SCHED_SLEEP, v->domain->domain_id, v->vcpu_id);
 
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
 
     vcpu_sleep_nosync_locked(v);
 
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
 }
 
 void vcpu_sleep_sync(struct vcpu *v)
@@ -610,7 +611,7 @@ void vcpu_wake(struct vcpu *v)
 
     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
 
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
 
     if ( likely(vcpu_runnable(v)) )
     {
@@ -624,7 +625,7 @@ void vcpu_wake(struct vcpu *v)
             vcpu_runstate_change(v, RUNSTATE_offline, NOW());
     }
 
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
 }
 
 void vcpu_unblock(struct vcpu *v)
@@ -692,9 +693,9 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
  * These steps are encapsulated in the following two functions; they
  * should be called like this:
  *
- *     lock = vcpu_schedule_lock_irq(v);
+ *     lock = unit_schedule_lock_irq(unit);
  *     vcpu_migrate_start(v);
- *     vcpu_schedule_unlock_irq(lock, v)
+ *     unit_schedule_unlock_irq(lock, unit)
  *     vcpu_migrate_finish(v);
 *
 * vcpu_migrate_finish() will do the work now if it can, or simply
@@ -813,7 +814,7 @@ void restore_vcpu_affinity(struct domain *d)
          * set v->processor of each of their vCPUs to something that will
          * make sense for the scheduler of the cpupool in which they are in.
          */
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
 
         cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                     cpupool_domain_cpumask(d));
@@ -842,7 +843,7 @@ void restore_vcpu_affinity(struct domain *d)
         spin_unlock_irq(lock);
 
         /* v->processor might have changed, so reacquire the lock. */
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
         v->sched_unit->res = sched_pick_resource(vcpu_scheduler(v),
                                                  v->sched_unit);
         v->processor = v->sched_unit->res->master_cpu;
@@ -877,7 +878,7 @@ int cpu_disable_scheduler(unsigned int cpu)
         for_each_vcpu ( d, v )
         {
             unsigned long flags;
-            spinlock_t *lock = vcpu_schedule_lock_irqsave(v, &flags);
+            spinlock_t *lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
 
             cpumask_and(&online_affinity, v->cpu_hard_affinity, c->cpu_valid);
             if ( cpumask_empty(&online_affinity) &&
@@ -886,7 +887,7 @@ int cpu_disable_scheduler(unsigned int cpu)
                 if ( v->affinity_broken )
                 {
                     /* The vcpu is temporarily pinned, can't move it. */
-                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
                     ret = -EADDRINUSE;
                     break;
                 }
@@ -899,7 +900,7 @@ int cpu_disable_scheduler(unsigned int cpu)
                 if ( v->processor != cpu )
                 {
                     /* The vcpu is not on this cpu, so we can move on. */
-                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
                     continue;
                 }
 
@@ -912,7 +913,7 @@ int cpu_disable_scheduler(unsigned int cpu)
              * things would have failed before getting in here.
              */
             vcpu_migrate_start(v);
-            vcpu_schedule_unlock_irqrestore(lock, flags, v);
+            unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
 
             vcpu_migrate_finish(v);
 
@@ -976,7 +977,7 @@ static int vcpu_set_affinity(
     spinlock_t *lock;
     int ret = 0;
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(v->sched_unit);
 
     if ( v->affinity_broken )
         ret = -EBUSY;
@@ -998,7 +999,7 @@ static int vcpu_set_affinity(
         vcpu_migrate_start(v);
     }
 
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
 
    domain_update_node_affinity(v->domain);
 
@@ -1130,10 +1131,10 @@ static long do_poll(struct sched_poll *sched_poll)
 long vcpu_yield(void)
 {
     struct vcpu * v=current;
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
 
     sched_yield(vcpu_scheduler(v), v->sched_unit);
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
 
     SCHED_STAT_CRANK(vcpu_yield);
 
@@ -1230,7 +1231,7 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     int ret = -EINVAL;
     bool migrate;
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(v->sched_unit);
 
     if ( cpu == NR_CPUS )
     {
@@ -1263,7 +1264,7 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     if ( migrate )
         vcpu_migrate_start(v);
 
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
 
     if ( migrate )
         vcpu_migrate_finish(v);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 4dbf8f974c..f2c071358f 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -105,22 +105,22 @@ static inline void kind##_schedule_unlock##irq(spinlock_t *lock \
 
 #define EXTRA_TYPE(arg)
 sched_lock(pcpu, unsigned int cpu,     cpu, )
-sched_lock(vcpu, const struct vcpu *v, v->processor, )
+sched_lock(unit, const struct sched_unit *i, i->res->master_cpu, )
 sched_lock(pcpu, unsigned int cpu,     cpu, _irq)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_lock(unit, const struct sched_unit *i, i->res->master_cpu, _irq)
 sched_unlock(pcpu, unsigned int cpu,     cpu, )
-sched_unlock(vcpu, const struct vcpu *v, v->processor, )
+sched_unlock(unit, const struct sched_unit *i, i->res->master_cpu, )
 sched_unlock(pcpu, unsigned int cpu,     cpu, _irq)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_unlock(unit, const struct sched_unit *i, i->res->master_cpu, _irq)
 #undef EXTRA_TYPE
 
 #define EXTRA_TYPE(arg) , unsigned long arg
 #define spin_unlock_irqsave spin_unlock_irqrestore
 sched_lock(pcpu, unsigned int cpu,     cpu, _irqsave, *flags)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irqsave, *flags)
+sched_lock(unit, const struct sched_unit *i, i->res->master_cpu, _irqsave, *flags)
 #undef spin_unlock_irqsave
 sched_unlock(pcpu, unsigned int cpu,     cpu, _irqrestore, flags)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irqrestore, flags)
+sched_unlock(unit, const struct sched_unit *i, i->res->master_cpu, _irqrestore, flags)
 #undef EXTRA_TYPE
 
 #undef sched_unlock
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel