From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 9 Aug 2019 16:58:12 +0200
Message-Id: <20190809145833.1020-28-jgross@suse.com>
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
References: <20190809145833.1020-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 27/48] xen/sched: Change vcpu_migrate_*() to operate on schedule unit

Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only to
ensure a vcpu is running on a suitable processor, they can be switched to
operate on schedule units instead of vcpus.

While doing that, rename them accordingly and make the _start() variant
static.

As it is needed anyway, call sync_vcpu_execstate() for each vcpu of the
unit when changing processors.

vcpu_move_locked() is switched to operate on a schedule unit, too.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/schedule.c | 106 ++++++++++++++++++++++++++++++--------------
 1 file changed, 63 insertions(+), 43 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 4c488ddde0..e4d0dd4b65 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -733,35 +733,40 @@ void vcpu_unblock(struct vcpu *v)
 }
 
 /*
- * Do the actual movement of a vcpu from old to new CPU. Locks for *both*
+ * Do the actual movement of an unit from old to new CPU. Locks for *both*
  * CPUs needs to have been taken already when calling this!
  */
-static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
+static void sched_unit_move_locked(struct sched_unit *unit,
+                                   unsigned int new_cpu)
 {
-    unsigned int old_cpu = v->processor;
+    unsigned int old_cpu = unit->res->processor;
+    struct vcpu *v;
 
     /*
      * Transfer urgency status to new CPU before switching CPUs, as
      * once the switch occurs, v->is_urgent is no longer protected by
      * the per-CPU scheduler lock we are holding.
      */
-    if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+    for_each_sched_unit_vcpu ( unit, v )
     {
-        atomic_inc(&get_sched_res(new_cpu)->urgent_count);
-        atomic_dec(&get_sched_res(old_cpu)->urgent_count);
+        if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+        {
+            atomic_inc(&get_sched_res(new_cpu)->urgent_count);
+            atomic_dec(&get_sched_res(old_cpu)->urgent_count);
+        }
     }
 
     /*
      * Actual CPU switch to new CPU. This is safe because the lock
      * pointer can't change while the current lock is held.
      */
-    sched_migrate(vcpu_scheduler(v), v->sched_unit, new_cpu);
+    sched_migrate(unit_scheduler(unit), unit, new_cpu);
 }
 
 /*
  * Initiating migration
  *
- * In order to migrate, we need the vcpu in question to have stopped
+ * In order to migrate, we need the unit in question to have stopped
  * running and had sched_sleep() called (to take it off any
  * runqueues, for instance); and if it is currently running, it needs
  * to be scheduled out.  Finally, we need to hold the scheduling locks
@@ -777,37 +782,45 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
  * should be called like this:
  *
  *     lock = unit_schedule_lock_irq(unit);
- *     vcpu_migrate_start(v);
+ *     sched_unit_migrate_start(unit);
  *     unit_schedule_unlock_irq(lock, unit)
- *     vcpu_migrate_finish(v);
+ *     sched_unit_migrate_finish(unit);
  *
- * vcpu_migrate_finish() will do the work now if it can, or simply
- * return if it can't (because v is still running); in that case
- * vcpu_migrate_finish() will be called by context_saved().
+ * sched_unit_migrate_finish() will do the work now if it can, or simply
+ * return if it can't (because unit is still running); in that case
+ * sched_unit_migrate_finish() will be called by context_saved().
  */
-static void vcpu_migrate_start(struct vcpu *v)
+static void sched_unit_migrate_start(struct sched_unit *unit)
 {
-    set_bit(_VPF_migrating, &v->pause_flags);
-    vcpu_sleep_nosync_locked(v);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        set_bit(_VPF_migrating, &v->pause_flags);
+        vcpu_sleep_nosync_locked(v);
+    }
 }
 
-static void vcpu_migrate_finish(struct vcpu *v)
+static void sched_unit_migrate_finish(struct sched_unit *unit)
 {
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
     bool_t pick_called = 0;
+    struct vcpu *v;
 
     /*
-     * If the vcpu is currently running, this will be handled by
+     * If the unit is currently running, this will be handled by
      * context_saved(); and in any case, if the bit is cleared, then
      * someone else has already done the work so we don't need to.
      */
-    if ( v->sched_unit->is_running ||
-         !test_bit(_VPF_migrating, &v->pause_flags) )
-        return;
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        if ( unit->is_running || !test_bit(_VPF_migrating, &v->pause_flags) )
+            return;
+    }
 
-    old_cpu = new_cpu = v->processor;
+    old_cpu = new_cpu = unit->res->processor;
     for ( ; ; )
     {
         /*
@@ -820,7 +833,7 @@ static void vcpu_migrate_finish(struct vcpu *v)
 
         sched_spin_lock_double(old_lock, new_lock, &flags);
 
-        old_cpu = v->processor;
+        old_cpu = unit->res->processor;
         if ( old_lock == get_sched_res(old_cpu)->schedule_lock )
         {
             /*
@@ -829,15 +842,15 @@
              */
             if ( pick_called &&
                  (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->sched_unit->cpu_hard_affinity) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity) &&
+                 cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
 
             /* Select a new CPU. */
-            new_cpu = sched_pick_resource(vcpu_scheduler(v),
-                                          v->sched_unit)->processor;
+            new_cpu = sched_pick_resource(unit_scheduler(unit),
+                                          unit)->processor;
             if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
             pick_called = 1;
         }
@@ -858,22 +871,30 @@ static void vcpu_migrate_finish(struct vcpu *v)
      * because they both happen in (different) spinlock regions, and those
      * regions are strictly serialised.
      */
-    if ( v->sched_unit->is_running ||
-         !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+    for_each_sched_unit_vcpu ( unit, v )
     {
-        sched_spin_unlock_double(old_lock, new_lock, flags);
-        return;
+        if ( unit->is_running ||
+             !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+        {
+            sched_spin_unlock_double(old_lock, new_lock, flags);
+            return;
+        }
     }
 
-    vcpu_move_locked(v, new_cpu);
+    sched_unit_move_locked(unit, new_cpu);
 
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-        sched_move_irqs(v->sched_unit);
+    {
+        for_each_sched_unit_vcpu ( unit, v )
+            sync_vcpu_execstate(v);
+        sched_move_irqs(unit);
+    }
 
     /* Wake on new CPU. */
-    vcpu_wake(v);
+    for_each_sched_unit_vcpu ( unit, v )
+        vcpu_wake(v);
 }
 
 /*
@@ -1041,10 +1062,9 @@ int cpu_disable_scheduler(unsigned int cpu)
          *  * the scheduler will always find a suitable solution, or
          *    things would have failed before getting in here.
          */
-        vcpu_migrate_start(unit->vcpu_list);
+        sched_unit_migrate_start(unit);
         unit_schedule_unlock_irqrestore(lock, flags, unit);
-
-        vcpu_migrate_finish(unit->vcpu_list);
+        sched_unit_migrate_finish(unit);
 
         /*
          * The only caveat, in this case, is that if a vcpu active in
@@ -1128,14 +1148,14 @@ static int vcpu_set_affinity(
             ASSERT(which == unit->cpu_soft_affinity);
             sched_set_affinity(v, NULL, affinity);
         }
-        vcpu_migrate_start(v);
+        sched_unit_migrate_start(unit);
     }
 
     unit_schedule_unlock_irq(lock, unit);
 
     domain_update_node_affinity(v->domain);
 
-    vcpu_migrate_finish(v);
+    sched_unit_migrate_finish(unit);
 
     return ret;
 }
@@ -1396,12 +1416,12 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
 
     migrate = !ret && !cpumask_test_cpu(v->processor, unit->cpu_hard_affinity);
    if ( migrate )
-        vcpu_migrate_start(v);
+        sched_unit_migrate_start(unit);
 
     unit_schedule_unlock_irq(lock, unit);
 
     if ( migrate )
-        vcpu_migrate_finish(v);
+        sched_unit_migrate_finish(unit);
 
     return ret;
 }
@@ -1794,7 +1814,7 @@ void context_saved(struct vcpu *prev)
 
     sched_context_saved(vcpu_scheduler(prev), prev->sched_unit);
 
-    vcpu_migrate_finish(prev);
+    sched_unit_migrate_finish(prev->sched_unit);
 }
 
 /* The scheduler timer: force a run through the scheduler */
-- 
2.16.4