From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 27 Sep 2019 09:00:26 +0200
Message-Id: <20190927070050.12405-23-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH v4 22/46] xen/sched: switch schedule() from vcpus to sched_units

Use sched_units instead of vcpus in schedule(). This includes the
introduction of sched_unit_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
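
sched_unit_runstate_change() iterates over the vcpus of a unit via the
for_each_sched_unit_vcpu() helper introduced earlier in this series. As
a reminder, its semantics are roughly the following (an illustrative
sketch only, not the exact macro from the Xen headers):

    /*
     * Illustrative sketch: visit every vcpu linked into a scheduling
     * unit through unit->vcpu_list.  At this point in the series each
     * unit still contains exactly one vcpu.
     */
    #define for_each_sched_unit_vcpu(unit, v) \
        for ( (v) = (unit)->vcpu_list; (v) != NULL; (v) = (v)->next_in_list )
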
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli
---
Note that sched_unit_runstate_change() will be subsumed by another
rework in a later patch. (An illustrative stand-alone sketch of the
fan-out pattern it implements is appended after the patch.)

V4:
- loop over vcpus in sched_unit_runstate_change() (Jan Beulich)
---
 xen/common/schedule.c | 73 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 43 insertions(+), 30 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 26ce04bfd8..ce07b2cf99 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -258,6 +258,23 @@ static inline void vcpu_runstate_change(
     v->runstate.state = new_state;
 }
 
+static inline void sched_unit_runstate_change(struct sched_unit *unit,
+    bool running, s_time_t new_entry_time)
+{
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        if ( running )
+            vcpu_runstate_change(v, RUNSTATE_running, new_entry_time);
+        else
+            vcpu_runstate_change(v,
+                ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
+                 (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
+                new_entry_time);
+    }
+}
+
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
     spinlock_t *lock = likely(v == current)
@@ -1629,7 +1646,7 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value)
  */
 static void schedule(void)
 {
-    struct vcpu          *prev = current, *next = NULL;
+    struct sched_unit    *prev = current->sched_unit, *next = NULL;
     s_time_t              now;
     struct scheduler     *sched;
     unsigned long        *tasklet_work = &this_cpu(tasklet_work_to_do);
@@ -1673,9 +1690,9 @@ static void schedule(void)
     sched = this_cpu(scheduler);
     next_slice = sched->do_schedule(sched, now, tasklet_work_scheduled);
 
-    next = next_slice.task->vcpu_list;
+    next = next_slice.task;
 
-    sd->curr = next->sched_unit;
+    sd->curr = next;
 
     if ( next_slice.time >= 0 ) /* -ve means no limit */
         set_timer(&sd->s_timer, now + next_slice.time);
@@ -1684,59 +1701,55 @@ static void schedule(void)
     {
         pcpu_schedule_unlock_irq(lock, cpu);
         TRACE_4D(TRC_SCHED_SWITCH_INFCONT,
-                 next->domain->domain_id, next->vcpu_id,
-                 now - prev->runstate.state_entry_time,
+                 next->domain->domain_id, next->unit_id,
+                 now - prev->state_entry_time,
                  next_slice.time);
-        trace_continue_running(next);
-        return continue_running(prev);
+        trace_continue_running(next->vcpu_list);
+        return continue_running(prev->vcpu_list);
     }
 
     TRACE_3D(TRC_SCHED_SWITCH_INFPREV,
-             prev->domain->domain_id, prev->vcpu_id,
-             now - prev->runstate.state_entry_time);
+             prev->domain->domain_id, prev->unit_id,
+             now - prev->state_entry_time);
     TRACE_4D(TRC_SCHED_SWITCH_INFNEXT,
-             next->domain->domain_id, next->vcpu_id,
-             (next->runstate.state == RUNSTATE_runnable) ?
-             (now - next->runstate.state_entry_time) : 0,
+             next->domain->domain_id, next->unit_id,
+             (next->vcpu_list->runstate.state == RUNSTATE_runnable) ?
+             (now - next->state_entry_time) : 0,
              next_slice.time);
 
-    ASSERT(prev->runstate.state == RUNSTATE_running);
+    ASSERT(prev->vcpu_list->runstate.state == RUNSTATE_running);
 
     TRACE_4D(TRC_SCHED_SWITCH,
-             prev->domain->domain_id, prev->vcpu_id,
-             next->domain->domain_id, next->vcpu_id);
+             prev->domain->domain_id, prev->unit_id,
+             next->domain->domain_id, next->unit_id);
 
-    vcpu_runstate_change(
-        prev,
-        ((prev->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
-         (vcpu_runnable(prev) ? RUNSTATE_runnable : RUNSTATE_offline)),
-        now);
+    sched_unit_runstate_change(prev, false, now);
 
-    ASSERT(next->runstate.state != RUNSTATE_running);
-    vcpu_runstate_change(next, RUNSTATE_running, now);
+    ASSERT(next->vcpu_list->runstate.state != RUNSTATE_running);
+    sched_unit_runstate_change(next, true, now);
 
     /*
      * NB. Don't add any trace records from here until the actual context
      * switch, else lost_records resume will not work properly.
      */
 
-    ASSERT(!next->sched_unit->is_running);
-    next->is_running = 1;
-    next->sched_unit->is_running = true;
-    next->sched_unit->state_entry_time = now;
+    ASSERT(!next->is_running);
+    next->vcpu_list->is_running = 1;
+    next->is_running = true;
+    next->state_entry_time = now;
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
     SCHED_STAT_CRANK(sched_ctx);
 
-    stop_timer(&prev->periodic_timer);
+    stop_timer(&prev->vcpu_list->periodic_timer);
 
     if ( next_slice.migrated )
-        sched_move_irqs(next);
+        sched_move_irqs(next->vcpu_list);
 
-    vcpu_periodic_timer_work(next);
+    vcpu_periodic_timer_work(next->vcpu_list);
 
-    context_switch(prev, next);
+    context_switch(prev->vcpu_list, next->vcpu_list);
 }
 
 void context_saved(struct vcpu *prev)
-- 
2.16.4
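
P.S. As mentioned above the patch notes: here is a stand-alone,
compilable sketch of the fan-out pattern sched_unit_runstate_change()
implements, i.e. a runstate change applied at unit granularity being
propagated to every vcpu of the unit. The types are hypothetical,
heavily simplified stand-ins and not Xen's real structures:

    #include <stdio.h>

    /* Hypothetical, heavily simplified stand-ins for Xen's types. */
    enum runstate { RUNSTATE_running, RUNSTATE_runnable };

    struct vcpu {
        int id;
        enum runstate state;
        struct vcpu *next_in_list;   /* next vcpu in the same unit */
    };

    struct sched_unit {
        struct vcpu *vcpu_list;      /* first vcpu of the unit */
    };

    /* Fan a unit-level runstate change out to every member vcpu,
     * mirroring the shape of sched_unit_runstate_change(). */
    static void unit_runstate_change(struct sched_unit *unit,
                                     enum runstate new_state)
    {
        for ( struct vcpu *v = unit->vcpu_list; v; v = v->next_in_list )
            v->state = new_state;
    }

    int main(void)
    {
        struct vcpu v1 = { 1, RUNSTATE_runnable, NULL };
        struct vcpu v0 = { 0, RUNSTATE_runnable, &v1 };
        struct sched_unit unit = { &v0 };

        /* One unit-level change updates both member vcpus. */
        unit_runstate_change(&unit, RUNSTATE_running);

        for ( struct vcpu *v = unit.vcpu_list; v; v = v->next_in_list )
            printf("vcpu %d: state %d\n", v->id, (int)v->state);

        return 0;
    }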