Leaving aside highly disaggregated environments, the control domain is
what will invoke XEN_SYSCTL_SCHEDOP_putinfo. Its vCPU-s therefore need to
be able to run unconditionally, not those of the domain with ID 0 (which
may not exist at all).
Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There being no "else" to the if(), what about other control domain vCPU-s?
And why are they added to all scheduler instances?
---
v2: New.
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -411,10 +411,10 @@ a653sched_alloc_udata(const struct sched
spin_lock_irqsave(&sched_priv->lock, flags);
/*
- * Add every one of dom0's units to the schedule, as long as there are
- * slots available.
+ * Add every one of the control domain's units to the schedule, as long as
+ * there are slots available.
*/
- if ( unit->domain->domain_id == 0 )
+ if ( is_control_domain(unit->domain) )
{
entry = sched_priv->num_schedule_entries;
On 3/25/26 08:54, Jan Beulich wrote:
> Leaving aside highly disaggregated environments, the control domain is
> what will invoke XEN_SYSCTL_SCHEDOP_putinfo. Its vCPU-s therefore need to
> be able to run unconditionally, not those of the domain with ID 0 (which
> may not exist at all).
>
> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> There being no "else" to the if(), what about other control domain vCPU-s?
I wonder if a panic() is appropriate in the else case. If not, at least a
warning or error message should be logged.
> And why are they added to all scheduler instances?
> ---
> v2: New.
>
> --- a/xen/common/sched/arinc653.c
> +++ b/xen/common/sched/arinc653.c
> @@ -411,10 +411,10 @@ a653sched_alloc_udata(const struct sched
> spin_lock_irqsave(&sched_priv->lock, flags);
>
> /*
> - * Add every one of dom0's units to the schedule, as long as there are
> - * slots available.
> + * Add every one of the control domain's units to the schedule, as long as
> + * there are slots available.
> */
> - if ( unit->domain->domain_id == 0 )
> + if ( is_control_domain(unit->domain) )
Sorry, I didn't realize before that is_control_domain() includes the idle
domain. We don't want to include the idle domain in the default schedule here.
I suggest adding '&& !is_idle_domain(unit->domain)' or similar.
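A self-contained sketch of how the combined check would behave. The struct and predicates below are stubbed stand-ins for illustration only (the real definitions live in Xen's sched headers); the point is that is_control_domain() alone also matches the idle domain:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for Xen's struct domain and predicates. In Xen,
 * is_control_domain() is also true for the idle domain during early boot
 * (cf. xsm_set_system_active()). */
struct domain {
    bool is_privileged;  /* control domain flag */
    bool is_idle;        /* idle domain flag */
};

static bool is_control_domain(const struct domain *d)
{
    return d->is_privileged;
}

static bool is_idle_domain(const struct domain *d)
{
    return d->is_idle;
}

/* The suggested condition: control domain units only, idle domain excluded. */
static bool wants_default_slot(const struct domain *d)
{
    return is_control_domain(d) && !is_idle_domain(d);
}
```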
> {
> entry = sched_priv->num_schedule_entries;
>
>
>
On 25.03.2026 22:32, Stewart Hildebrand wrote:
> On 3/25/26 08:54, Jan Beulich wrote:
>> --- a/xen/common/sched/arinc653.c
>> +++ b/xen/common/sched/arinc653.c
>> @@ -411,10 +411,10 @@ a653sched_alloc_udata(const struct sched
>> spin_lock_irqsave(&sched_priv->lock, flags);
>>
>> /*
>> - * Add every one of dom0's units to the schedule, as long as there are
>> - * slots available.
>> + * Add every one of the control domain's units to the schedule, as long as
>> + * there are slots available.
>> */
>> - if ( unit->domain->domain_id == 0 )
>> + if ( is_control_domain(unit->domain) )
>
> Sorry, I didn't realize before that is_control_domain() includes the idle
> domain. We don't want to include the idle domain in the default schedule here.

Hmm, I didn't recall that either (yet seeing xsm_set_system_active() I now
remember having been involved). Will add the extra check, albeit it'll matter
only during boot (i.e. if ARINC is the default scheduler or when
BOOT_TIME_CPUPOOLS=y, aiui).

Jan
On 25.03.26 13:54, Jan Beulich wrote:
> Leaving aside highly disaggregated environments, the control domain is
> what will invoke XEN_SYSCTL_SCHEDOP_putinfo. Its vCPU-s therefore need to
> be able to run unconditionally, not those of the domain with ID 0 (which
> may not exist at all).
>
> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
> ---
> There being no "else" to the if(), what about other control domain vCPU-s?
a653sched_alloc_udata() is called for all units of a domain.
> And why are they added to all scheduler instances?
What do you mean by "added to all scheduler instances"?
a653sched_alloc_udata() will be called only for the units of a domain being
in the instance (i.e. cpupool) designated by the ops parameter.
> ---
> v2: New.
>
> --- a/xen/common/sched/arinc653.c
> +++ b/xen/common/sched/arinc653.c
> @@ -411,10 +411,10 @@ a653sched_alloc_udata(const struct sched
> spin_lock_irqsave(&sched_priv->lock, flags);
>
> /*
> - * Add every one of dom0's units to the schedule, as long as there are
> - * slots available.
> + * Add every one of the control domain's units to the schedule, as long as
> + * there are slots available.
> */
> - if ( unit->domain->domain_id == 0 )
> + if ( is_control_domain(unit->domain) )
> {
> entry = sched_priv->num_schedule_entries;
On 25.03.2026 14:38, Jürgen Groß wrote:
> On 25.03.26 13:54, Jan Beulich wrote:
>> Leaving aside highly disaggregated environments, the control domain is
>> what will invoke XEN_SYSCTL_SCHEDOP_putinfo. Its vCPU-s therefore need to
>> be able to run unconditionally, not those of the domain with ID 0 (which
>> may not exist at all).
>>
>> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>
Thanks.
>> ---
>> There being no "else" to the if(), what about other control domain vCPU-s?
>
> a653sched_alloc_udata() is called for all units of a domain.
Yes, and what if for the last few no vacant slot is available? With one vCPU
per unit and Dom0 having 65 vCPU-s, this would already be a problem aiui.
And ARINC653_MAX_DOMAINS_PER_SCHEDULE can't even be changed easily, as it's
baked into the sysctl public header (when such an upper bound should really
be an implementation detail, maybe a Kconfig setting). Yet then even the
"DOMAINS" in the name is confusing, when it limits the number of units which
can be dealt with.
Imo at the very, very least not being able to deal with all Dom0 / ctldom
vCPU-s should be logged.
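A minimal, self-contained model of that suggestion (the helper name and message text are made up for illustration; in Xen proper this would be a printk(XENLOG_WARNING ...) in a653sched_alloc_udata()):

```c
#include <stdio.h>

/* Mirrors ARINC653_MAX_DOMAINS_PER_SCHEDULE from the sysctl public header. */
#define MAX_SCHEDULE_ENTRIES 64

/* Hypothetical helper: try to claim a default-schedule slot for a control
 * domain unit, warning instead of failing silently when none is vacant.
 * Returns the claimed slot index, or -1 if the schedule is full. */
static int claim_default_slot(unsigned int *num_entries, unsigned int unit_id)
{
    if ( *num_entries >= MAX_SCHEDULE_ENTRIES )
    {
        fprintf(stderr,
                "a653: no vacant default slot for control domain unit %u\n",
                unit_id);
        return -1;
    }

    return (int)(*num_entries)++;
}
```

With 65 units and a 64-entry schedule, the last call would return -1 and emit the warning rather than dropping the unit without a trace.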
>> And why are they added to all scheduler instances?
>
> What do you mean by "added to all scheduler instances"?
>
> a653sched_alloc_udata() will be called only for the units of a domain being
> in the instance (i.e. cpupool) designated by the ops parameter.
Perhaps the question is a result of me being confused. My understanding was
that a653sched_alloc_udata() is supposed to be setting up per-unit data,
not per-scheduler instance stuff. Yet the latter is what looks to be
happening in the Dom0 (now control-domain) specific block of code.
Jan
On 25.03.26 15:18, Jan Beulich wrote:
> On 25.03.2026 14:38, Jürgen Groß wrote:
>> On 25.03.26 13:54, Jan Beulich wrote:
>>> Leaving aside highly disaggregated environments, the control domain is
>>> what will invoke XEN_SYSCTL_SCHEDOP_putinfo. Its vCPU-s therefore need to
>>> be able to run unconditionally, not those of the domain with ID 0 (which
>>> may not exist at all).
>>>
>>> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks.
>
>>> ---
>>> There being no "else" to the if(), what about other control domain vCPU-s?
>>
>> a653sched_alloc_udata() is called for all units of a domain.
>
> Yes, and what if for the last few no vacant slot is available? With one vCPU
> per unit and Dom0 having 65 vCPU-s, this would already be a problem aiui.
> And ARINC653_MAX_DOMAINS_PER_SCHEDULE can't even be changed easily, as it's
> baked into the sysctl public header (when such an upper bound should really
> be an implementation detail, maybe a Kconfig setting). Yet then even the
> "DOMAINS" in the name is confusing, when it limits the number of units which
> can be dealt with.
AFAIUI the arinc653 scheduler is meant to serve a single physical cpu only.
With core scheduling this can be extended to a core or a socket, but this
won't increase the number of units, of course.
I agree that there should be safety checks added to ensure this assumption
isn't violated, but OTOH the arinc653 scheduler is for very special setups
only.
>
> Imo at the very, very least not being able to deal with all Dom0 / ctldom
> vCPU-s should be logged.
I agree.
>
>>> And why are they added to all scheduler instances?
>>
>> What do you mean by "added to all scheduler instances"?
>>
>> a653sched_alloc_udata() will be called only for the units of a domain being
>> in the instance (i.e. cpupool) designated by the ops parameter.
>
> Perhaps the question is a result of me being confused. My understanding was
> that a653sched_alloc_udata() is supposed to be setting up per-unit data,
> not per-scheduler instance stuff. Yet the latter is what looks to be
> happening in the Dom0 (now control-domain) specific block of code.
I don't think so.
Dom0 is special, as it needs to be functional without someone having
uploaded the scheduling table. In case dom0 isn't in the cpupool of the
arinc653 scheduler, there won't be any dom0-specific entries.
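A toy model of why those default entries matter (all names here are hypothetical, not the Xen data structures): before a table is uploaded via XEN_SYSCTL_SCHEDOP_putinfo, the only runnable entries are the control domain units added at allocation time, so an empty schedule would leave nothing to run:

```c
#include <stddef.h>

#define MAX_ENTRIES 64
#define UNIT_NONE   ((unsigned int)-1)

struct a653_model {
    unsigned int schedule[MAX_ENTRIES]; /* unit ids, in slot order */
    unsigned int num_entries;
};

/* Default path: control domain units claim slots at allocation time,
 * so the schedule is non-empty before any table upload. */
static void add_default_entry(struct a653_model *m, unsigned int unit_id)
{
    if ( m->num_entries < MAX_ENTRIES )
        m->schedule[m->num_entries++] = unit_id;
}

/* Pick the unit for a given slot; with no entries, nothing can run. */
static unsigned int pick_unit(const struct a653_model *m, unsigned int slot)
{
    return m->num_entries ? m->schedule[slot % m->num_entries] : UNIT_NONE;
}
```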
Juergen