From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Josh Whitehead, Robert VanVossen,
    Dario Faggioli
Date: Fri, 27 Sep 2019 09:00:22 +0200
Message-Id: <20190927070050.12405-19-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 18/46] xen/sched: make arinc653 scheduler vcpu agnostic.

Switch the arinc653 scheduler completely from vcpu to sched_unit usage.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
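For illustration only (a stand-alone sketch, not part of the applied
diff; the types below are toy stand-ins, not the real Xen structures):
the core of this change is that the ARINC 653 private data now hangs
directly off struct sched_unit's priv field instead of being reached
through a vcpu. Only the AVCPU()/AUNIT() shapes mirror the macros this
patch touches.

/* Toy model of the accessor change; stand-in types, not Xen code. */
#include <stdio.h>

struct sched_unit { void *priv; };               /* stand-in type */
struct vcpu { struct sched_unit *sched_unit; };  /* stand-in type */

typedef struct { int awake; } arinc653_unit_t;

/* Old shape: reach the private data through the vcpu's unit. */
#define AVCPU(vc)   ((arinc653_unit_t *)(vc)->sched_unit->priv)
/* New shape: callbacks receive a unit, so one indirection disappears. */
#define AUNIT(unit) ((arinc653_unit_t *)(unit)->priv)

int main(void)
{
    arinc653_unit_t data = { .awake = 1 };
    struct sched_unit unit = { .priv = &data };
    struct vcpu vc = { .sched_unit = &unit };

    /* Both macros resolve to the same private data. */
    printf("old: %d, new: %d\n", AVCPU(&vc)->awake, AUNIT(&unit)->awake);
    return 0;
}

Compiled with any C99 compiler this prints "old: 1, new: 1", i.e. both
paths see the same per-unit data.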
 xen/common/sched_arinc653.c | 208 +++++++++++++++++++++------------------------
 1 file changed, 101 insertions(+), 107 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 9ff1d7f245..f04d9c9cb1 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -45,15 +45,15 @@
 #define DEFAULT_TIMESLICE MILLISECS(10)
 
 /**
- * Retrieve the idle VCPU for a given physical CPU
+ * Retrieve the idle UNIT for a given physical CPU
  */
-#define IDLETASK(cpu)  (idle_vcpu[cpu])
+#define IDLETASK(cpu)  (sched_idle_unit(cpu))
 
 /**
  * Return a pointer to the ARINC 653-specific scheduler data information
- * associated with the given VCPU (vc)
+ * associated with the given UNIT (unit)
  */
-#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_unit->priv)
+#define AUNIT(unit) ((arinc653_unit_t *)(unit)->priv)
 
 /**
  * Return the global scheduler private data given the scheduler ops pointer
@@ -65,20 +65,20 @@
  *************************************************************************/
 
 /**
- * The arinc653_vcpu_t structure holds ARINC 653-scheduler-specific
- * information for all non-idle VCPUs
+ * The arinc653_unit_t structure holds ARINC 653-scheduler-specific
+ * information for all non-idle UNITs
  */
-typedef struct arinc653_vcpu_s
+typedef struct arinc653_unit_s
 {
-    /* vc points to Xen's struct vcpu so we can get to it from an
-     * arinc653_vcpu_t pointer. */
-    struct vcpu *vc;
-    /* awake holds whether the VCPU has been woken with vcpu_wake() */
+    /* unit points to Xen's struct sched_unit so we can get to it from an
+     * arinc653_unit_t pointer. */
+    struct sched_unit *unit;
+    /* awake holds whether the UNIT has been woken with vcpu_wake() */
     bool_t awake;
-    /* list holds the linked list information for the list this VCPU
+    /* list holds the linked list information for the list this UNIT
      * is stored in */
     struct list_head list;
-} arinc653_vcpu_t;
+} arinc653_unit_t;
 
 /**
  * The sched_entry_t structure holds a single entry of the
@@ -89,14 +89,14 @@ typedef struct sched_entry_s
     /* dom_handle holds the handle ("UUID") for the domain that this
      * schedule entry refers to. */
     xen_domain_handle_t dom_handle;
-    /* vcpu_id holds the VCPU number for the VCPU that this schedule
+    /* unit_id holds the UNIT number for the UNIT that this schedule
      * entry refers to. */
-    int vcpu_id;
-    /* runtime holds the number of nanoseconds that the VCPU for this
+    int unit_id;
+    /* runtime holds the number of nanoseconds that the UNIT for this
      * schedule entry should be allowed to run per major frame. */
     s_time_t runtime;
-    /* vc holds a pointer to the Xen VCPU structure */
-    struct vcpu *vc;
+    /* unit holds a pointer to the Xen sched_unit structure */
+    struct sched_unit *unit;
 } sched_entry_t;
 
 /**
@@ -110,9 +110,9 @@ typedef struct a653sched_priv_s
     /**
      * This array holds the active ARINC 653 schedule.
      *
-     * When the system tries to start a new VCPU, this schedule is scanned
-     * to look for a matching (handle, VCPU #) pair. If both the handle (UUID)
-     * and VCPU number match, then the VCPU is allowed to run. Its run time
+     * When the system tries to start a new UNIT, this schedule is scanned
+     * to look for a matching (handle, UNIT #) pair. If both the handle (UUID)
+     * and UNIT number match, then the UNIT is allowed to run. Its run time
      * (per major frame) is given in the third entry of the schedule.
      */
     sched_entry_t schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
@@ -123,8 +123,8 @@ typedef struct a653sched_priv_s
      *
      * This is not necessarily the same as the number of domains in the
      * schedule. A domain could be listed multiple times within the schedule,
-     * or a domain with multiple VCPUs could have a different
-     * schedule entry for each VCPU.
+     * or a domain with multiple UNITs could have a different
+     * schedule entry for each UNIT.
      */
     unsigned int num_schedule_entries;
 
@@ -139,9 +139,9 @@ typedef struct a653sched_priv_s
     s_time_t next_major_frame;
 
     /**
-     * pointers to all Xen VCPU structures for iterating through
+     * pointers to all Xen UNIT structures for iterating through
      */
-    struct list_head vcpu_list;
+    struct list_head unit_list;
 } a653sched_priv_t;
 
 /**************************************************************************
@@ -167,50 +167,50 @@ static int dom_handle_cmp(const xen_domain_handle_t h1,
 }
 
 /**
- * This function searches the vcpu list to find a VCPU that matches
- * the domain handle and VCPU ID specified.
+ * This function searches the unit list to find a UNIT that matches
+ * the domain handle and UNIT ID specified.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param handle    Pointer to handler
- * @param vcpu_id   VCPU ID
+ * @param unit_id   UNIT ID
  *
  * @return
- *          • Pointer to the matching VCPU if one is found
+ *          • Pointer to the matching UNIT if one is found
  *          • NULL otherwise
  */
-static struct vcpu *find_vcpu(
+static struct sched_unit *find_unit(
     const struct scheduler *ops,
     xen_domain_handle_t handle,
-    int vcpu_id)
+    int unit_id)
 {
-    arinc653_vcpu_t *avcpu;
+    arinc653_unit_t *aunit;
 
-    /* loop through the vcpu_list looking for the specified VCPU */
-    list_for_each_entry ( avcpu, &SCHED_PRIV(ops)->vcpu_list, list )
-        if ( (dom_handle_cmp(avcpu->vc->domain->handle, handle) == 0)
-             && (vcpu_id == avcpu->vc->vcpu_id) )
-            return avcpu->vc;
+    /* loop through the unit_list looking for the specified UNIT */
+    list_for_each_entry ( aunit, &SCHED_PRIV(ops)->unit_list, list )
+        if ( (dom_handle_cmp(aunit->unit->domain->handle, handle) == 0)
+             && (unit_id == aunit->unit->unit_id) )
+            return aunit->unit;
 
     return NULL;
 }
 
 /**
- * This function updates the pointer to the Xen VCPU structure for each entry
+ * This function updates the pointer to the Xen UNIT structure for each entry
  * in the ARINC 653 schedule.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @return
  */
-static void update_schedule_vcpus(const struct scheduler *ops)
+static void update_schedule_units(const struct scheduler *ops)
 {
     unsigned int i, n_entries = SCHED_PRIV(ops)->num_schedule_entries;
 
     for ( i = 0; i < n_entries; i++ )
-        SCHED_PRIV(ops)->schedule[i].vc =
-            find_vcpu(ops,
+        SCHED_PRIV(ops)->schedule[i].unit =
+            find_unit(ops,
                       SCHED_PRIV(ops)->schedule[i].dom_handle,
-                      SCHED_PRIV(ops)->schedule[i].vcpu_id);
+                      SCHED_PRIV(ops)->schedule[i].unit_id);
 }
 
 /**
@@ -268,12 +268,12 @@ arinc653_sched_set(
         memcpy(sched_priv->schedule[i].dom_handle,
                schedule->sched_entries[i].dom_handle,
                sizeof(sched_priv->schedule[i].dom_handle));
-        sched_priv->schedule[i].vcpu_id =
+        sched_priv->schedule[i].unit_id =
             schedule->sched_entries[i].vcpu_id;
         sched_priv->schedule[i].runtime =
             schedule->sched_entries[i].runtime;
     }
-    update_schedule_vcpus(ops);
+    update_schedule_units(ops);
 
     /*
      * The newly-installed schedule takes effect immediately. We do not even
@@ -319,7 +319,7 @@ arinc653_sched_get(
         memcpy(schedule->sched_entries[i].dom_handle,
                sched_priv->schedule[i].dom_handle,
                sizeof(sched_priv->schedule[i].dom_handle));
-        schedule->sched_entries[i].vcpu_id = sched_priv->schedule[i].vcpu_id;
+        schedule->sched_entries[i].vcpu_id = sched_priv->schedule[i].unit_id;
         schedule->sched_entries[i].runtime = sched_priv->schedule[i].runtime;
     }
 
@@ -355,7 +355,7 @@ a653sched_init(struct scheduler *ops)
 
     prv->next_major_frame = 0;
     spin_lock_init(&prv->lock);
-    INIT_LIST_HEAD(&prv->vcpu_list);
+    INIT_LIST_HEAD(&prv->unit_list);
 
     return 0;
 }
@@ -373,7 +373,7 @@ a653sched_deinit(struct scheduler *ops)
 }
 
 /**
- * This function allocates scheduler-specific data for a VCPU
+ * This function allocates scheduler-specific data for a UNIT
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param unit      Pointer to struct sched_unit
@@ -385,35 +385,34 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
                       void *dd)
 {
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
-    struct vcpu *vc = unit->vcpu_list;
-    arinc653_vcpu_t *svc;
+    arinc653_unit_t *svc;
     unsigned int entry;
     unsigned long flags;
 
     /*
      * Allocate memory for the ARINC 653-specific scheduler data information
-     * associated with the given VCPU (vc).
+     * associated with the given UNIT (unit).
      */
-    svc = xmalloc(arinc653_vcpu_t);
+    svc = xmalloc(arinc653_unit_t);
     if ( svc == NULL )
         return NULL;
 
     spin_lock_irqsave(&sched_priv->lock, flags);
 
-    /* 
-     * Add every one of dom0's vcpus to the schedule, as long as there are
+    /*
+     * Add every one of dom0's units to the schedule, as long as there are
      * slots available.
      */
-    if ( vc->domain->domain_id == 0 )
+    if ( unit->domain->domain_id == 0 )
     {
         entry = sched_priv->num_schedule_entries;
 
         if ( entry < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
         {
             sched_priv->schedule[entry].dom_handle[0] = '\0';
-            sched_priv->schedule[entry].vcpu_id = vc->vcpu_id;
+            sched_priv->schedule[entry].unit_id = unit->unit_id;
             sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
-            sched_priv->schedule[entry].vc = vc;
+            sched_priv->schedule[entry].unit = unit;
 
             sched_priv->major_frame += DEFAULT_TIMESLICE;
             ++sched_priv->num_schedule_entries;
@@ -421,16 +420,16 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
     }
 
     /*
-     * Initialize our ARINC 653 scheduler-specific information for the VCPU.
-     * The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
+     * Initialize our ARINC 653 scheduler-specific information for the UNIT.
+     * The UNIT starts "asleep." When Xen is ready for the UNIT to run, it
      * will call the vcpu_wake scheduler callback function and our scheduler
-     * will mark the VCPU awake.
+     * will mark the UNIT awake.
      */
-    svc->vc = vc;
+    svc->unit = unit;
     svc->awake = 0;
-    if ( !is_idle_vcpu(vc) )
-        list_add(&svc->list, &SCHED_PRIV(ops)->vcpu_list);
-    update_schedule_vcpus(ops);
+    if ( !is_idle_unit(unit) )
+        list_add(&svc->list, &SCHED_PRIV(ops)->unit_list);
+    update_schedule_units(ops);
 
     spin_unlock_irqrestore(&sched_priv->lock, flags);
 
@@ -438,7 +437,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
 }
 
 /**
- * This function frees scheduler-specific VCPU data
+ * This function frees scheduler-specific UNIT data
  *
  * @param ops       Pointer to this instance of the scheduler structure
  */
@@ -446,7 +445,7 @@ static void
 a653sched_free_udata(const struct scheduler *ops, void *priv)
 {
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
-    arinc653_vcpu_t *av = priv;
+    arinc653_unit_t *av = priv;
     unsigned long flags;
 
     if (av == NULL)
@@ -454,17 +453,17 @@ a653sched_free_udata(const struct scheduler *ops, void *priv)
 
     spin_lock_irqsave(&sched_priv->lock, flags);
 
-    if ( !is_idle_vcpu(av->vc) )
+    if ( !is_idle_unit(av->unit) )
         list_del(&av->list);
 
     xfree(av);
-    update_schedule_vcpus(ops);
+    update_schedule_units(ops);
 
     spin_unlock_irqrestore(&sched_priv->lock, flags);
 }
 
 /**
- * Xen scheduler callback function to sleep a VCPU
+ * Xen scheduler callback function to sleep a UNIT
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param unit      Pointer to struct sched_unit
@@ -472,21 +471,19 @@ a653sched_free_udata(const struct scheduler *ops, void *priv)
 static void
 a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
-
-    if ( AVCPU(vc) != NULL )
-        AVCPU(vc)->awake = 0;
+    if ( AUNIT(unit) != NULL )
+        AUNIT(unit)->awake = 0;
 
     /*
-     * If the VCPU being put to sleep is the same one that is currently
+     * If the UNIT being put to sleep is the same one that is currently
      * running, raise a softirq to invoke the scheduler to switch domains.
      */
-    if ( get_sched_res(vc->processor)->curr == unit )
-        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    if ( get_sched_res(sched_unit_master(unit))->curr == unit )
+        cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
 
 /**
- * Xen scheduler callback function to wake up a VCPU
+ * Xen scheduler callback function to wake up a UNIT
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param unit      Pointer to struct sched_unit
@@ -494,24 +491,22 @@ a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 static void
 a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
+    if ( AUNIT(unit) != NULL )
+        AUNIT(unit)->awake = 1;
 
-    if ( AVCPU(vc) != NULL )
-        AVCPU(vc)->awake = 1;
-
-    cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
 
 /**
- * Xen scheduler callback function to select a VCPU to run.
+ * Xen scheduler callback function to select a UNIT to run.
  * This is the main scheduler routine.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param now       Current time
  *
- * @return          Address of the VCPU structure scheduled to be run next
- *                  Amount of time to execute the returned VCPU
- *                  Flag for whether the VCPU was migrated
+ * @return          Address of the UNIT structure scheduled to be run next
+ *                  Amount of time to execute the returned UNIT
+ *                  Flag for whether the UNIT was migrated
  */
 static struct task_slice
 a653sched_do_schedule(
@@ -520,7 +515,7 @@ a653sched_do_schedule(
     bool_t tasklet_work_scheduled)
 {
     struct task_slice ret;                      /* hold the chosen domain */
-    struct vcpu * new_task = NULL;
+    struct sched_unit *new_task = NULL;
     static unsigned int sched_index = 0;
     static s_time_t next_switch_time;
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
@@ -565,14 +560,14 @@ a653sched_do_schedule(
      * sched_unit structure.
      */
     new_task = (sched_index < sched_priv->num_schedule_entries)
-        ? sched_priv->schedule[sched_index].vc
+        ? sched_priv->schedule[sched_index].unit
         : IDLETASK(cpu);
 
     /* Check to see if the new task can be run (awake & runnable). */
     if ( !((new_task != NULL)
-           && (AVCPU(new_task) != NULL)
-           && AVCPU(new_task)->awake
-           && vcpu_runnable(new_task)) )
+           && (AUNIT(new_task) != NULL)
+           && AUNIT(new_task)->awake
+           && unit_runnable(new_task)) )
         new_task = IDLETASK(cpu);
     BUG_ON(new_task == NULL);
 
@@ -584,21 +579,21 @@ a653sched_do_schedule(
 
     spin_unlock_irqrestore(&sched_priv->lock, flags);
 
-    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
+    /* Tasklet work (which runs in idle UNIT context) overrides all else. */
     if ( tasklet_work_scheduled )
         new_task = IDLETASK(cpu);
 
     /* Running this task would result in a migration */
-    if ( !is_idle_vcpu(new_task)
-         && (new_task->processor != cpu) )
+    if ( !is_idle_unit(new_task)
+         && (sched_unit_master(new_task) != cpu) )
        new_task = IDLETASK(cpu);
 
     /*
      * Return the amount of time the next domain has to run and the address
-     * of the selected task's VCPU structure.
+     * of the selected task's UNIT structure.
      */
     ret.time = next_switch_time - now;
-    ret.task = new_task->sched_unit;
+    ret.task = new_task;
     ret.migrated = 0;
 
     BUG_ON(ret.time <= 0);
@@ -607,7 +602,7 @@ a653sched_do_schedule(
 }
 
 /**
- * Xen scheduler callback function to select a resource for the VCPU to run on
+ * Xen scheduler callback function to select a resource for the UNIT to run on
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param unit      Pointer to struct sched_unit
@@ -618,21 +613,20 @@ static struct sched_resource *
 a653sched_pick_resource(const struct scheduler *ops,
                         const struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
     cpumask_t *online;
     unsigned int cpu;
 
-    /* 
-     * If present, prefer vc's current processor, else
-     * just find the first valid vcpu .
+    /*
+     * If present, prefer unit's current processor, else
+     * just find the first valid unit.
      */
-    online = cpupool_domain_cpumask(vc->domain);
+    online = cpupool_domain_cpumask(unit->domain);
 
     cpu = cpumask_first(online);
 
-    if ( cpumask_test_cpu(vc->processor, online)
+    if ( cpumask_test_cpu(sched_unit_master(unit), online)
          || (cpu >= nr_cpu_ids) )
-        cpu = vc->processor;
+        cpu = sched_unit_master(unit);
 
     return get_sched_res(cpu);
 }
@@ -643,18 +637,18 @@ a653sched_pick_resource(const struct scheduler *ops,
  * @param new_ops   Pointer to this instance of the scheduler structure
  * @param cpu       The cpu that is changing scheduler
  * @param pdata     scheduler specific PCPU data (we don't have any)
- * @param vdata     scheduler specific VCPU data of the idle vcpu
+ * @param vdata     scheduler specific UNIT data of the idle unit
  */
 static spinlock_t *
 a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                   void *pdata, void *vdata)
 {
     struct sched_resource *sr = get_sched_res(cpu);
-    arinc653_vcpu_t *svc = vdata;
+    arinc653_unit_t *svc = vdata;
 
-    ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
+    ASSERT(!pdata && svc && is_idle_unit(svc->unit));
 
-    idle_vcpu[cpu]->sched_unit->priv = vdata;
+    sched_idle_unit(cpu)->priv = vdata;
 
     return &sr->_lock;
 }
-- 
2.16.4