From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Meng Xu, Jan Beulich
Date: Mon, 6 May 2019 08:56:09 +0200
Message-Id: <20190506065644.7415-11-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 10/45] xen/sched: switch vcpu_schedule_lock to item_schedule_lock

Rename vcpu_schedule_[un]lock[_irq]() to item_schedule_[un]lock[_irq]()
and let them take a sched_item pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_credit.c  | 17 +++++++++--------
 xen/common/sched_credit2.c | 40 +++++++++++++++++++--------------------
 xen/common/sched_null.c    | 14 +++++++-------
 xen/common/sched_rt.c      | 15 +++++++--------
 xen/common/schedule.c      | 47 +++++++++++++++++++++++-----------------------
 xen/include/xen/sched-if.h | 12 ++++++------
 6 files changed, 73 insertions(+), 72 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index e8369b3648..de4face2bc 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -940,7 +940,8 @@ __csched_vcpu_acct_stop_locked(struct csched_private *prv,
 static void
 csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 {
-    struct csched_item * const svc = CSCHED_ITEM(current->sched_item);
+    struct sched_item *curritem = current->sched_item;
+    struct csched_item * const svc = CSCHED_ITEM(curritem);
     const struct scheduler *ops = per_cpu(scheduler, cpu);
 
     ASSERT( current->processor == cpu );
@@ -976,7 +977,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
     {
         unsigned int new_cpu;
         unsigned long flags;
-        spinlock_t *lock = vcpu_schedule_lock_irqsave(current, &flags);
+        spinlock_t *lock = item_schedule_lock_irqsave(curritem, &flags);
 
         /*
          * If it's been active a while, check if we'd be better off
@@ -985,7 +986,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
          */
         new_cpu = _csched_cpu_pick(ops, current, 0);
 
-        vcpu_schedule_unlock_irqrestore(lock, flags, current);
+        item_schedule_unlock_irqrestore(lock, flags, curritem);
 
         if ( new_cpu != cpu )
         {
@@ -1037,19 +1038,19 @@ csched_item_insert(const struct scheduler *ops, struct sched_item *item)
     BUG_ON( is_idle_vcpu(vc) );
 
     /* csched_res_pick() looks in vc->processor's runq, so we need the lock. */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     item->res = csched_res_pick(ops, item);
     vc->processor = item->res->processor;
 
     spin_unlock_irq(lock);
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
         runq_insert(svc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -2145,12 +2146,12 @@ csched_dump(const struct scheduler *ops)
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_item, active_vcpu_elem);
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = item_schedule_lock(svc->vcpu->sched_item);
 
             printk("\t%3d: ", ++loop);
             csched_dump_vcpu(svc);
 
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            item_schedule_unlock(lock, svc->vcpu->sched_item);
         }
     }
 
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index df0e7282ce..6106293b3f 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -171,7 +171,7 @@
  * - runqueue lock
  *  + it is per-runqueue, so:
  *     * cpus in a runqueue take the runqueue lock, when using
- *       pcpu_schedule_lock() / vcpu_schedule_lock() (and friends),
+ *       pcpu_schedule_lock() / item_schedule_lock() (and friends),
  *     * a cpu may (try to) take a "remote" runqueue lock, e.g., for
  *       load balancing;
  *  + serializes runqueue operations (removing and inserting vcpus);
@@ -1890,7 +1890,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         unsigned long flags;
         s_time_t now;
 
-        lock = vcpu_schedule_lock_irqsave(svc->vcpu, &flags);
+        lock = item_schedule_lock_irqsave(svc->vcpu->sched_item, &flags);
 
         __clear_bit(_VPF_parked, &svc->vcpu->pause_flags);
         if ( unlikely(svc->flags & CSFLAG_scheduled) )
@@ -1923,7 +1923,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         }
         list_del_init(&svc->parked_elem);
 
-        vcpu_schedule_unlock_irqrestore(lock, flags, svc->vcpu);
+        item_schedule_unlock_irqrestore(lock, flags, svc->vcpu->sched_item);
     }
 }
 
@@ -2162,7 +2162,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_item *item)
 {
     struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = item_schedule_lock_irq(item);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
 
@@ -2194,7 +2194,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_item *item)
     else if ( !is_idle_vcpu(vc) )
         update_load(ops, svc->rqd, svc, -1, now);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 
     unpark_parked_vcpus(ops, &were_parked);
 }
@@ -2847,14 +2847,14 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 struct csched2_item *svc = csched2_item(v->sched_item);
-                spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+                spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
 
                 ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
 
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
 
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                item_schedule_unlock(lock, svc->vcpu->sched_item);
             }
         }
         /* Cap */
@@ -2885,7 +2885,7 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 svc = csched2_item(v->sched_item);
-                lock = vcpu_schedule_lock(svc->vcpu);
+                lock = item_schedule_lock(svc->vcpu->sched_item);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
                  * which then won't happen because, in csched2_runtime(),
@@ -2893,7 +2893,7 @@ csched2_dom_cntl(
                  */
                 svc->budget_quota = max(sdom->tot_budget / sdom->nr_vcpus,
                                         CSCHED2_MIN_TIMER);
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                item_schedule_unlock(lock, svc->vcpu->sched_item);
             }
 
             if ( sdom->cap == 0 )
@@ -2928,7 +2928,7 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 svc = csched2_item(v->sched_item);
-                lock = vcpu_schedule_lock(svc->vcpu);
+                lock = item_schedule_lock(svc->vcpu->sched_item);
                 if ( v->is_running )
                 {
                     unsigned int cpu = v->processor;
@@ -2959,7 +2959,7 @@ csched2_dom_cntl(
                         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                 }
                 svc->budget = 0;
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                item_schedule_unlock(lock, svc->vcpu->sched_item);
             }
         }
 
@@ -2975,12 +2975,12 @@ csched2_dom_cntl(
         for_each_vcpu ( d, v )
         {
             struct csched2_item *svc = csched2_item(v->sched_item);
-            spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+            spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
 
             svc->budget = STIME_MAX;
             svc->budget_quota = 0;
 
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            item_schedule_unlock(lock, svc->vcpu->sched_item);
         }
         sdom->cap = 0;
         /*
@@ -3119,19 +3119,19 @@ csched2_item_insert(const struct scheduler *ops, struct sched_item *item)
     ASSERT(list_empty(&svc->runq_elem));
 
     /* csched2_res_pick() expects the pcpu lock to be held */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     item->res = csched2_res_pick(ops, item);
     vc->processor = item->res->processor;
 
     spin_unlock_irq(lock);
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     /* Add vcpu to runqueue of initial processor */
     runq_assign(ops, vc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 
     sdom->nr_vcpus++;
 
@@ -3161,11 +3161,11 @@ csched2_item_remove(const struct scheduler *ops, struct sched_item *item)
     SCHED_STAT_CRANK(vcpu_remove);
 
     /* Remove from runqueue */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     runq_deassign(ops, vc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 
     svc->sdom->nr_vcpus--;
 }
@@ -3749,12 +3749,12 @@ csched2_dump(const struct scheduler *ops)
             struct csched2_item * const svc = csched2_item(v->sched_item);
             spinlock_t *lock;
 
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = item_schedule_lock(svc->vcpu->sched_item);
 
             printk("\t%3d: ", ++loop);
             csched2_dump_vcpu(prv, svc);
 
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            item_schedule_unlock(lock, svc->vcpu->sched_item);
         }
     }
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index a9cfa163b9..620925e8ce 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -317,7 +317,7 @@ pick_res(struct null_private *prv, struct sched_item *item)
      * all the pCPUs are busy.
      *
      * In fact, there must always be something sane in v->processor, or
-     * vcpu_schedule_lock() and friends won't work. This is not a problem,
+     * item_schedule_lock() and friends won't work. This is not a problem,
      * as we will actually assign the vCPU to the pCPU we return from here,
      * only if the pCPU is free.
      */
@@ -428,7 +428,7 @@ static void null_item_insert(const struct scheduler *ops,
 
     ASSERT(!is_idle_vcpu(v));
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = item_schedule_lock_irq(item);
  retry:
 
     item->res = pick_res(prv, item);
@@ -436,7 +436,7 @@ static void null_item_insert(const struct scheduler *ops,
 
     spin_unlock(lock);
 
-    lock = vcpu_schedule_lock(v);
+    lock = item_schedule_lock(item);
 
     cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                 cpupool_domain_cpumask(v->domain));
@@ -522,7 +522,7 @@ static void null_item_remove(const struct scheduler *ops,
 
     ASSERT(!is_idle_vcpu(v));
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = item_schedule_lock_irq(item);
 
     /* If v is in waitqueue, just get it out of there and bail */
     if ( unlikely(!list_empty(&nvc->waitq_elem)) )
@@ -540,7 +540,7 @@ static void null_item_remove(const struct scheduler *ops,
         _vcpu_remove(prv, v);
 
  out:
-    vcpu_schedule_unlock_irq(lock, v);
+    item_schedule_unlock_irq(lock, item);
 
     SCHED_STAT_CRANK(vcpu_remove);
 }
@@ -860,13 +860,13 @@ static void null_dump(const struct scheduler *ops)
             struct null_item * const nvc = null_item(v->sched_item);
             spinlock_t *lock;
 
-            lock = vcpu_schedule_lock(nvc->vcpu);
+            lock = item_schedule_lock(nvc->vcpu->sched_item);
 
             printk("\t%3d: ", ++loop);
             dump_vcpu(prv, nvc);
             printk("\n");
 
-            vcpu_schedule_unlock(lock, nvc->vcpu);
+            item_schedule_unlock(lock, nvc->vcpu->sched_item);
         }
     }
 
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 0019646b52..a604a0d5a6 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -177,7 +177,7 @@ static void repl_timer_handler(void *data);
 /*
  * System-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by sched_res->schedule_lock from all
- * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
+ * physical cpus. It can be grabbed via item_schedule_lock_irq()
 */
 struct rt_private {
     spinlock_t lock;            /* the global coarse-grained lock */
@@ -904,7 +904,7 @@ rt_item_insert(const struct scheduler *ops, struct sched_item *item)
     item->res = rt_res_pick(ops, item);
     vc->processor = item->res->processor;
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
 
     now = NOW();
     if ( now >= svc->cur_deadline )
@@ -917,7 +917,7 @@ rt_item_insert(const struct scheduler *ops, struct sched_item *item)
         if ( !vc->is_running )
             runq_insert(ops, svc);
     }
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -928,7 +928,6 @@ rt_item_insert(const struct scheduler *ops, struct sched_item *item)
 static void
 rt_item_remove(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct rt_item * const svc = rt_item(item);
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;
@@ -937,14 +936,14 @@ rt_item_remove(const struct scheduler *ops, struct sched_item *item)
 
     BUG_ON( sdom == NULL );
 
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = item_schedule_lock_irq(item);
     if ( vcpu_on_q(svc) )
         q_remove(svc);
 
     if ( vcpu_on_replq(svc) )
         replq_remove(ops,svc);
 
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 }
 
 /*
@@ -1339,7 +1338,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_item *item)
 {
     struct vcpu *vc = item->vcpu;
     struct rt_item *svc = rt_item(item);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = item_schedule_lock_irq(item);
 
     __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle vcpu to runq */
@@ -1356,7 +1355,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_item *item)
             replq_remove(ops, svc);
 
 out:
-    vcpu_schedule_unlock_irq(lock, vc);
+    item_schedule_unlock_irq(lock, item);
 }
 
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 51db98bcaa..464e358f70 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -194,7 +194,8 @@ static inline void vcpu_runstate_change(
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current) ? NULL : vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = likely(v == current)
+                       ? NULL : item_schedule_lock_irq(v->sched_item);
     s_time_t delta;
 
     memcpy(runstate, &v->runstate, sizeof(*runstate));
@@ -203,7 +204,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
         runstate->time[runstate->state] += delta;
 
     if ( unlikely(lock != NULL) )
-        vcpu_schedule_unlock_irq(lock, v);
+        item_schedule_unlock_irq(lock, v->sched_item);
 }
 
 uint64_t get_cpu_idle_time(unsigned int cpu)
@@ -415,7 +416,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         migrate_timer(&v->singleshot_timer, new_p);
         migrate_timer(&v->poll_timer, new_p);
 
-        lock = vcpu_schedule_lock_irq(v);
+        lock = item_schedule_lock_irq(v->sched_item);
 
         sched_set_affinity(v, &cpumask_all, &cpumask_all);
 
@@ -424,7 +425,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
-         * - use vcpu_schedule_unlock_irq().
+         * - use item_schedule_unlock_irq().
          */
         spin_unlock_irq(lock);
 
@@ -523,11 +524,11 @@ void vcpu_sleep_nosync(struct vcpu *v)
 
     TRACE_2D(TRC_SCHED_SLEEP, v->domain->domain_id, v->vcpu_id);
 
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = item_schedule_lock_irqsave(v->sched_item, &flags);
 
     vcpu_sleep_nosync_locked(v);
 
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
 }
 
 void vcpu_sleep_sync(struct vcpu *v)
@@ -547,7 +548,7 @@ void vcpu_wake(struct vcpu *v)
 
     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
 
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = item_schedule_lock_irqsave(v->sched_item, &flags);
 
     if ( likely(vcpu_runnable(v)) )
     {
@@ -561,7 +562,7 @@ void vcpu_wake(struct vcpu *v)
             vcpu_runstate_change(v, RUNSTATE_offline, NOW());
     }
 
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
 }
 
 void vcpu_unblock(struct vcpu *v)
@@ -629,9 +630,9 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
 * These steps are encapsulated in the following two functions; they
 * should be called like this:
 *
- *     lock = vcpu_schedule_lock_irq(v);
+ *     lock = item_schedule_lock_irq(item);
 *     vcpu_migrate_start(v);
- *     vcpu_schedule_unlock_irq(lock, v)
+ *     item_schedule_unlock_irq(lock, item)
 *     vcpu_migrate_finish(v);
 *
 * vcpu_migrate_finish() will do the work now if it can, or simply
@@ -736,12 +737,12 @@ static void vcpu_migrate_finish(struct vcpu *v)
 */
 void vcpu_force_reschedule(struct vcpu *v)
 {
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = item_schedule_lock_irq(v->sched_item);
 
     if ( v->is_running )
         vcpu_migrate_start(v);
 
-    vcpu_schedule_unlock_irq(lock, v);
+    item_schedule_unlock_irq(lock, v->sched_item);
 
     vcpu_migrate_finish(v);
 }
@@ -792,7 +793,7 @@ void restore_vcpu_affinity(struct domain *d)
             v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
             v->sched_item->res = per_cpu(sched_res, v->processor);
 
-        lock = vcpu_schedule_lock_irq(v);
+        lock = item_schedule_lock_irq(v->sched_item);
         v->sched_item->res = sched_pick_resource(vcpu_scheduler(v),
                                                 v->sched_item);
         v->processor = v->sched_item->res->processor;
@@ -827,7 +828,7 @@ int cpu_disable_scheduler(unsigned int cpu)
         for_each_vcpu ( d, v )
         {
             unsigned long flags;
-            spinlock_t *lock = vcpu_schedule_lock_irqsave(v, &flags);
+            spinlock_t *lock = item_schedule_lock_irqsave(v->sched_item, &flags);
 
             cpumask_and(&online_affinity, v->cpu_hard_affinity, c->cpu_valid);
             if ( cpumask_empty(&online_affinity) &&
@@ -836,7 +837,7 @@ int cpu_disable_scheduler(unsigned int cpu)
                 if ( v->affinity_broken )
                 {
                     /* The vcpu is temporarily pinned, can't move it. */
-                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                    item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
                     ret = -EADDRINUSE;
                     break;
                 }
@@ -849,7 +850,7 @@ int cpu_disable_scheduler(unsigned int cpu)
             if ( v->processor != cpu )
             {
                 /* The vcpu is not on this cpu, so we can move on. */
-                vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
                 continue;
             }
 
@@ -862,7 +863,7 @@ int cpu_disable_scheduler(unsigned int cpu)
             * things would have failed before getting in here.
             */
             vcpu_migrate_start(v);
-            vcpu_schedule_unlock_irqrestore(lock, flags, v);
+            item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
 
             vcpu_migrate_finish(v);
 
@@ -926,7 +927,7 @@ static int vcpu_set_affinity(
     spinlock_t *lock;
     int ret = 0;
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = item_schedule_lock_irq(v->sched_item);
 
     if ( v->affinity_broken )
         ret = -EBUSY;
@@ -948,7 +949,7 @@ static int vcpu_set_affinity(
         vcpu_migrate_start(v);
     }
 
-    vcpu_schedule_unlock_irq(lock, v);
+    item_schedule_unlock_irq(lock, v->sched_item);
 
     domain_update_node_affinity(v->domain);
 
@@ -1080,10 +1081,10 @@ static long do_poll(struct sched_poll *sched_poll)
 long vcpu_yield(void)
 {
     struct vcpu * v=current;
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = item_schedule_lock_irq(v->sched_item);
 
     sched_yield(vcpu_scheduler(v), v->sched_item);
-    vcpu_schedule_unlock_irq(lock, v);
+    item_schedule_unlock_irq(lock, v->sched_item);
 
     SCHED_STAT_CRANK(vcpu_yield);
 
@@ -1169,7 +1170,7 @@ int vcpu_pin_override(struct vcpu *v, int cpu)
     spinlock_t *lock;
     int ret = -EINVAL;
 
-    lock = vcpu_schedule_lock_irq(v);
+    lock = item_schedule_lock_irq(v->sched_item);
 
     if ( cpu < 0 )
     {
@@ -1196,7 +1197,7 @@ int vcpu_pin_override(struct vcpu *v, int cpu)
     if ( ret == 0 )
         vcpu_migrate_start(v);
 
-    vcpu_schedule_unlock_irq(lock, v);
+    item_schedule_unlock_irq(lock, v->sched_item);
 
     domain_update_node_affinity(v->domain);
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 93617f0459..17f1ee8887 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -91,22 +91,22 @@ static inline void kind##_schedule_unlock##irq(spinlock_t *lock \
 
 #define EXTRA_TYPE(arg)
 sched_lock(pcpu, unsigned int cpu, cpu, )
-sched_lock(vcpu, const struct vcpu *v, v->processor, )
+sched_lock(item, const struct sched_item *i, i->res->processor, )
 sched_lock(pcpu, unsigned int cpu, cpu, _irq)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_lock(item, const struct sched_item *i, i->res->processor, _irq)
 sched_unlock(pcpu, unsigned int cpu, cpu, )
-sched_unlock(vcpu, const struct vcpu *v, v->processor, )
+sched_unlock(item, const struct sched_item *i, i->res->processor, )
 sched_unlock(pcpu, unsigned int cpu, cpu, _irq)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_unlock(item, const struct sched_item *i, i->res->processor, _irq)
 #undef EXTRA_TYPE
 
 #define EXTRA_TYPE(arg) , unsigned long arg
 #define spin_unlock_irqsave spin_unlock_irqrestore
 sched_lock(pcpu, unsigned int cpu, cpu, _irqsave, *flags)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irqsave, *flags)
+sched_lock(item, const struct sched_item *i, i->res->processor, _irqsave, *flags)
#undef spin_unlock_irqsave
 sched_unlock(pcpu, unsigned int cpu, cpu, _irqrestore, flags)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irqrestore, flags)
+sched_unlock(item, const struct sched_item *i, i->res->processor, _irqrestore, flags)
 #undef EXTRA_TYPE
 
 #undef sched_unlock
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel