From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Josh Whitehead, Robert VanVossen, Dario Faggioli
Date: Mon, 6 May 2019 08:56:21 +0200
Message-Id: <20190506065644.7415-23-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 22/45] xen/sched: make arinc653 scheduler vcpu agnostic.
X-Mailer: git-send-email 2.16.4

Switch arinc653 scheduler completely from vcpu to sched_item usage.

Signed-off-by: Juergen Gross
---
 xen/common/sched_arinc653.c | 208 +++++++++++++++++++++-----------------------
 1 file changed, 101 insertions(+), 107 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 5733a2a6b8..61f9ea6824 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -45,15 +45,15 @@
 #define DEFAULT_TIMESLICE MILLISECS(10)
 
 /**
- * Retrieve the idle VCPU for a given physical CPU
+ * Retrieve the idle ITEM for a given physical CPU
  */
-#define IDLETASK(cpu)  (idle_vcpu[cpu])
+#define IDLETASK(cpu)  (sched_idle_item(cpu))
 
 /**
  * Return a pointer to the ARINC 653-specific scheduler data information
- * associated with the given VCPU (vc)
+ * associated with the given ITEM (item)
  */
-#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_item->priv)
+#define AITEM(item) ((arinc653_item_t *)(item)->priv)
 
 /**
  * Return the global scheduler private data given the scheduler ops pointer
@@ -65,20 +65,20 @@
 **************************************************************************/
 
 /**
- * The arinc653_vcpu_t structure holds ARINC 653-scheduler-specific
- * information for all non-idle VCPUs
+ * The arinc653_item_t structure holds ARINC 653-scheduler-specific
+ * information for all non-idle ITEMs
  */
-typedef struct arinc653_vcpu_s
+typedef struct arinc653_item_s
 {
-    /* vc points to Xen's struct vcpu so we can get to it from an
-     * arinc653_vcpu_t pointer. */
-    struct vcpu * vc;
-    /* awake holds whether the VCPU has been woken with vcpu_wake() */
+    /* item points to Xen's struct sched_item so we can get to it from an
+     * arinc653_item_t pointer. */
+    struct sched_item * item;
+    /* awake holds whether the ITEM has been woken with vcpu_wake() */
     bool_t awake;
-    /* list holds the linked list information for the list this VCPU
+    /* list holds the linked list information for the list this ITEM
      * is stored in */
     struct list_head list;
-} arinc653_vcpu_t;
+} arinc653_item_t;
 
 /**
  * The sched_entry_t structure holds a single entry of the
@@ -89,14 +89,14 @@ typedef struct sched_entry_s
     /* dom_handle holds the handle ("UUID") for the domain that this
      * schedule entry refers to. */
     xen_domain_handle_t dom_handle;
-    /* vcpu_id holds the VCPU number for the VCPU that this schedule
+    /* item_id holds the ITEM number for the ITEM that this schedule
      * entry refers to. */
-    int vcpu_id;
-    /* runtime holds the number of nanoseconds that the VCPU for this
+    int item_id;
+    /* runtime holds the number of nanoseconds that the ITEM for this
      * schedule entry should be allowed to run per major frame. */
     s_time_t runtime;
-    /* vc holds a pointer to the Xen VCPU structure */
-    struct vcpu * vc;
+    /* item holds a pointer to the Xen sched_item structure */
+    struct sched_item * item;
 } sched_entry_t;
 
 /**
@@ -110,9 +110,9 @@ typedef struct a653sched_priv_s
     /**
     * This array holds the active ARINC 653 schedule.
     *
-     * When the system tries to start a new VCPU, this schedule is scanned
-     * to look for a matching (handle, VCPU #) pair. If both the handle (UUID)
-     * and VCPU number match, then the VCPU is allowed to run. Its run time
+     * When the system tries to start a new ITEM, this schedule is scanned
+     * to look for a matching (handle, ITEM #) pair. If both the handle (UUID)
+     * and ITEM number match, then the ITEM is allowed to run. Its run time
      * (per major frame) is given in the third entry of the schedule.
      */
     sched_entry_t schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
@@ -123,8 +123,8 @@ typedef struct a653sched_priv_s
      *
      * This is not necessarily the same as the number of domains in the
      * schedule. A domain could be listed multiple times within the schedule,
-     * or a domain with multiple VCPUs could have a different
-     * schedule entry for each VCPU.
+     * or a domain with multiple ITEMs could have a different
+     * schedule entry for each ITEM.
      */
     unsigned int num_schedule_entries;
 
@@ -139,9 +139,9 @@ typedef struct a653sched_priv_s
     s_time_t next_major_frame;
 
     /**
-     * pointers to all Xen VCPU structures for iterating through
+     * pointers to all Xen ITEM structures for iterating through
      */
-    struct list_head vcpu_list;
+    struct list_head item_list;
 } a653sched_priv_t;
 
 /**************************************************************************
@@ -167,50 +167,50 @@ static int dom_handle_cmp(const xen_domain_handle_t h1,
 }
 
 /**
- * This function searches the vcpu list to find a VCPU that matches
- * the domain handle and VCPU ID specified.
+ * This function searches the item list to find a ITEM that matches
+ * the domain handle and ITEM ID specified.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param handle    Pointer to handler
- * @param vcpu_id   VCPU ID
+ * @param item_id   ITEM ID
  *
  * @return          <ul>
- *                  <li> Pointer to the matching VCPU if one is found
+ *                  <li> Pointer to the matching ITEM if one is found
  *                  <li> NULL otherwise
 *                  </ul>
  */
-static struct vcpu *find_vcpu(
+static struct sched_item *find_item(
     const struct scheduler *ops,
     xen_domain_handle_t handle,
-    int vcpu_id)
+    int item_id)
 {
-    arinc653_vcpu_t *avcpu;
+    arinc653_item_t *aitem;
 
-    /* loop through the vcpu_list looking for the specified VCPU */
-    list_for_each_entry ( avcpu, &SCHED_PRIV(ops)->vcpu_list, list )
-        if ( (dom_handle_cmp(avcpu->vc->domain->handle, handle) == 0)
-             && (vcpu_id == avcpu->vc->vcpu_id) )
-            return avcpu->vc;
+    /* loop through the item_list looking for the specified ITEM */
+    list_for_each_entry ( aitem, &SCHED_PRIV(ops)->item_list, list )
+        if ( (dom_handle_cmp(aitem->item->domain->handle, handle) == 0)
+             && (item_id == aitem->item->item_id) )
+            return aitem->item;
 
     return NULL;
 }
 
 /**
- * This function updates the pointer to the Xen VCPU structure for each entry
+ * This function updates the pointer to the Xen ITEM structure for each entry
  * in the ARINC 653 schedule.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @return
  */
-static void update_schedule_vcpus(const struct scheduler *ops)
+static void update_schedule_items(const struct scheduler *ops)
 {
     unsigned int i, n_entries = SCHED_PRIV(ops)->num_schedule_entries;
 
     for ( i = 0; i < n_entries; i++ )
-        SCHED_PRIV(ops)->schedule[i].vc =
-            find_vcpu(ops,
+        SCHED_PRIV(ops)->schedule[i].item =
+            find_item(ops,
                       SCHED_PRIV(ops)->schedule[i].dom_handle,
-                      SCHED_PRIV(ops)->schedule[i].vcpu_id);
+                      SCHED_PRIV(ops)->schedule[i].item_id);
 }
 
 /**
@@ -268,12 +268,12 @@ arinc653_sched_set(
         memcpy(sched_priv->schedule[i].dom_handle,
                schedule->sched_entries[i].dom_handle,
                sizeof(sched_priv->schedule[i].dom_handle));
-        sched_priv->schedule[i].vcpu_id =
+        sched_priv->schedule[i].item_id =
             schedule->sched_entries[i].vcpu_id;
         sched_priv->schedule[i].runtime =
             schedule->sched_entries[i].runtime;
     }
-    update_schedule_vcpus(ops);
+    update_schedule_items(ops);
 
     /*
      * The newly-installed schedule takes effect immediately. We do not even
@@ -319,7 +319,7 @@ arinc653_sched_get(
         memcpy(schedule->sched_entries[i].dom_handle,
                sched_priv->schedule[i].dom_handle,
                sizeof(sched_priv->schedule[i].dom_handle));
-        schedule->sched_entries[i].vcpu_id = sched_priv->schedule[i].vcpu_id;
+        schedule->sched_entries[i].vcpu_id = sched_priv->schedule[i].item_id;
         schedule->sched_entries[i].runtime = sched_priv->schedule[i].runtime;
     }
 
@@ -355,7 +355,7 @@ a653sched_init(struct scheduler *ops)
 
     prv->next_major_frame = 0;
     spin_lock_init(&prv->lock);
-    INIT_LIST_HEAD(&prv->vcpu_list);
+    INIT_LIST_HEAD(&prv->item_list);
 
     return 0;
 }
@@ -373,7 +373,7 @@ a653sched_deinit(struct scheduler *ops)
 }
 
 /**
- * This function allocates scheduler-specific data for a VCPU
+ * This function allocates scheduler-specific data for a ITEM
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param item      Pointer to struct sched_item
@@ -385,35 +385,34 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
                       void *dd)
 {
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
-    struct vcpu *vc = item->vcpu;
-    arinc653_vcpu_t *svc;
+    arinc653_item_t *svc;
     unsigned int entry;
     unsigned long flags;
 
     /*
      * Allocate memory for the ARINC 653-specific scheduler data information
-     * associated with the given VCPU (vc).
+     * associated with the given ITEM (item).
      */
-    svc = xmalloc(arinc653_vcpu_t);
+    svc = xmalloc(arinc653_item_t);
     if ( svc == NULL )
         return NULL;
 
     spin_lock_irqsave(&sched_priv->lock, flags);
 
-    /* 
-     * Add every one of dom0's vcpus to the schedule, as long as there are
+    /*
+     * Add every one of dom0's items to the schedule, as long as there are
      * slots available.
      */
-    if ( vc->domain->domain_id == 0 )
+    if ( item->domain->domain_id == 0 )
     {
         entry = sched_priv->num_schedule_entries;
 
         if ( entry < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
         {
             sched_priv->schedule[entry].dom_handle[0] = '\0';
-            sched_priv->schedule[entry].vcpu_id = vc->vcpu_id;
+            sched_priv->schedule[entry].item_id = item->item_id;
             sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
-            sched_priv->schedule[entry].vc = vc;
+            sched_priv->schedule[entry].item = item;
 
             sched_priv->major_frame += DEFAULT_TIMESLICE;
             ++sched_priv->num_schedule_entries;
@@ -421,16 +420,16 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
     }
 
     /*
-     * Initialize our ARINC 653 scheduler-specific information for the VCPU.
-     * The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
+     * Initialize our ARINC 653 scheduler-specific information for the ITEM.
+     * The ITEM starts "asleep." When Xen is ready for the ITEM to run, it
      * will call the vcpu_wake scheduler callback function and our scheduler
-     * will mark the VCPU awake.
+     * will mark the ITEM awake.
      */
-    svc->vc = vc;
+    svc->item = item;
     svc->awake = 0;
-    if ( !is_idle_vcpu(vc) )
-        list_add(&svc->list, &SCHED_PRIV(ops)->vcpu_list);
-    update_schedule_vcpus(ops);
+    if ( !is_idle_item(item) )
+        list_add(&svc->list, &SCHED_PRIV(ops)->item_list);
+    update_schedule_items(ops);
 
     spin_unlock_irqrestore(&sched_priv->lock, flags);
 
@@ -438,27 +437,27 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
 }
 
 /**
- * This function frees scheduler-specific VCPU data
+ * This function frees scheduler-specific ITEM data
  *
  * @param ops       Pointer to this instance of the scheduler structure
  */
 static void
 a653sched_free_vdata(const struct scheduler *ops, void *priv)
 {
-    arinc653_vcpu_t *av = priv;
+    arinc653_item_t *av = priv;
 
     if (av == NULL)
         return;
 
-    if ( !is_idle_vcpu(av->vc) )
+    if ( !is_idle_item(av->item) )
         list_del(&av->list);
 
     xfree(av);
-    update_schedule_vcpus(ops);
+    update_schedule_items(ops);
 }
 
 /**
- * Xen scheduler callback function to sleep a VCPU
+ * Xen scheduler callback function to sleep a ITEM
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param item      Pointer to struct sched_item
@@ -466,21 +465,19 @@ a653sched_free_vdata(const struct scheduler *ops, void *priv)
 static void
 a653sched_item_sleep(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
-
-    if ( AVCPU(vc) != NULL )
-        AVCPU(vc)->awake = 0;
+    if ( AITEM(item) != NULL )
+        AITEM(item)->awake = 0;
 
     /*
-     * If the VCPU being put to sleep is the same one that is currently
+     * If the ITEM being put to sleep is the same one that is currently
      * running, raise a softirq to invoke the scheduler to switch domains.
      */
-    if ( per_cpu(sched_res, vc->processor)->curr == item )
-        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    if ( per_cpu(sched_res, sched_item_cpu(item))->curr == item )
+        cpu_raise_softirq(sched_item_cpu(item), SCHEDULE_SOFTIRQ);
 }
 
 /**
- * Xen scheduler callback function to wake up a VCPU
+ * Xen scheduler callback function to wake up a ITEM
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param item      Pointer to struct sched_item
@@ -488,24 +485,22 @@ a653sched_item_sleep(const struct scheduler *ops, struct sched_item *item)
 static void
 a653sched_item_wake(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
+    if ( AITEM(item) != NULL )
+        AITEM(item)->awake = 1;
 
-    if ( AVCPU(vc) != NULL )
-        AVCPU(vc)->awake = 1;
-
-    cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    cpu_raise_softirq(sched_item_cpu(item), SCHEDULE_SOFTIRQ);
 }
 
 /**
- * Xen scheduler callback function to select a VCPU to run.
+ * Xen scheduler callback function to select a ITEM to run.
  * This is the main scheduler routine.
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param now       Current time
  *
- * @return          Address of the VCPU structure scheduled to be run next
- *                  Amount of time to execute the returned VCPU
- *                  Flag for whether the VCPU was migrated
+ * @return          Address of the ITEM structure scheduled to be run next
+ *                  Amount of time to execute the returned ITEM
+ *                  Flag for whether the ITEM was migrated
  */
 static struct task_slice
 a653sched_do_schedule(
@@ -514,7 +509,7 @@ a653sched_do_schedule(
     bool_t tasklet_work_scheduled)
 {
     struct task_slice ret;                      /* hold the chosen domain */
-    struct vcpu * new_task = NULL;
+    struct sched_item *new_task = NULL;
     static unsigned int sched_index = 0;
     static s_time_t next_switch_time;
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
@@ -559,14 +554,14 @@ a653sched_do_schedule(
      * sched_item structure.
      */
     new_task = (sched_index < sched_priv->num_schedule_entries)
-        ? sched_priv->schedule[sched_index].vc
+        ? sched_priv->schedule[sched_index].item
         : IDLETASK(cpu);
 
     /* Check to see if the new task can be run (awake & runnable). */
     if ( !((new_task != NULL)
-           && (AVCPU(new_task) != NULL)
-           && AVCPU(new_task)->awake
-           && vcpu_runnable(new_task)) )
+           && (AITEM(new_task) != NULL)
+           && AITEM(new_task)->awake
+           && item_runnable(new_task)) )
         new_task = IDLETASK(cpu);
     BUG_ON(new_task == NULL);
 
@@ -578,21 +573,21 @@ a653sched_do_schedule(
 
     spin_unlock_irqrestore(&sched_priv->lock, flags);
 
-    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
+    /* Tasklet work (which runs in idle ITEM context) overrides all else. */
    if ( tasklet_work_scheduled )
        new_task = IDLETASK(cpu);
 
     /* Running this task would result in a migration */
-    if ( !is_idle_vcpu(new_task)
-         && (new_task->processor != cpu) )
+    if ( !is_idle_item(new_task)
+         && (sched_item_cpu(new_task) != cpu) )
         new_task = IDLETASK(cpu);
 
     /*
      * Return the amount of time the next domain has to run and the address
-     * of the selected task's VCPU structure.
+     * of the selected task's ITEM structure.
      */
     ret.time = next_switch_time - now;
-    ret.task = new_task->sched_item;
+    ret.task = new_task;
     ret.migrated = 0;
 
     BUG_ON(ret.time <= 0);
@@ -601,7 +596,7 @@ a653sched_do_schedule(
 }
 
 /**
- * Xen scheduler callback function to select a resource for the VCPU to run on
+ * Xen scheduler callback function to select a resource for the ITEM to run on
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param item      Pointer to struct sched_item
@@ -611,21 +606,20 @@ a653sched_do_schedule(
 static struct sched_resource *
 a653sched_pick_resource(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     cpumask_t *online;
     unsigned int cpu;
 
-    /* 
-     * If present, prefer vc's current processor, else
-     * just find the first valid vcpu .
+    /*
+     * If present, prefer item's current processor, else
+     * just find the first valid item.
      */
-    online = cpupool_domain_cpumask(vc->domain);
+    online = cpupool_domain_cpumask(item->domain);
 
     cpu = cpumask_first(online);
 
-    if ( cpumask_test_cpu(vc->processor, online)
+    if ( cpumask_test_cpu(sched_item_cpu(item), online)
          || (cpu >= nr_cpu_ids) )
-        cpu = vc->processor;
+        cpu = sched_item_cpu(item);
 
     return per_cpu(sched_res, cpu);
 }
@@ -636,18 +630,18 @@ a653sched_pick_resource(const struct scheduler *ops, struct sched_item *item)
  * @param new_ops   Pointer to this instance of the scheduler structure
  * @param cpu       The cpu that is changing scheduler
  * @param pdata     scheduler specific PCPU data (we don't have any)
- * @param vdata     scheduler specific VCPU data of the idle vcpu
+ * @param vdata     scheduler specific ITEM data of the idle item
  */
 static void
 a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                   void *pdata, void *vdata)
 {
     struct sched_resource *sd = per_cpu(sched_res, cpu);
-    arinc653_vcpu_t *svc = vdata;
+    arinc653_item_t *svc = vdata;
 
-    ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
+    ASSERT(!pdata && svc && is_idle_item(svc->item));
 
-    idle_vcpu[cpu]->sched_item->priv = vdata;
+    sched_idle_item(cpu)->priv = vdata;
 
     per_cpu(scheduler, cpu) = new_ops;
     per_cpu(sched_res, cpu)->sched_priv = NULL; /* no pdata */
-- 
2.16.4