When that code still lived in a653sched_init(), it was redundant with the
earlier memset() / xzalloc(). Once moved, the full structure field needs
setting, as dom_handle_cmp() uses memcmp().
Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There being no "else" to the if(), what about other Dom0 vCPU-s? And why
is it that domain ID 0 is special here, rather than the hardware and/or
control domain(s)? (Likely the latter as that's what would invoke
XEN_SYSCTL_SCHEDOP_putinfo, and hence needs to be able to run without
that having been issued first.)
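To illustrate why the single-byte clear is insufficient, here is a standalone sketch (not Xen code; HANDLE_SIZE and handle_is_zero() are made-up stand-ins for xen_domain_handle_t and dom_handle_cmp()): a previously written handle with only byte 0 cleared still compares unequal under a full-width memcmp().

```c
#include <string.h>

#define HANDLE_SIZE 16 /* xen_domain_handle_t is a 16-byte array */

/* Stand-in for dom_handle_cmp()'s semantics: a full-width memcmp(). */
static int handle_is_zero(const unsigned char h[HANDLE_SIZE])
{
    static const unsigned char zero[HANDLE_SIZE]; /* all zeroes */

    return memcmp(h, zero, HANDLE_SIZE) == 0;
}
```

Clearing only h[0] of a handle previously filled with non-zero bytes leaves handle_is_zero() false; only a memset() over the whole field restores the all-zero state.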
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -420,7 +420,8 @@ a653sched_alloc_udata(const struct sched
if ( entry < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
{
- sched_priv->schedule[entry].dom_handle[0] = '\0';
+ memset(sched_priv->schedule[entry].dom_handle, '\0',
+ sizeof(sched_priv->schedule[entry].dom_handle));
sched_priv->schedule[entry].unit_id = unit->unit_id;
sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
sched_priv->schedule[entry].unit = unit;
On 3/24/26 11:54, Jan Beulich wrote:
> When that code still lived in a653sched_init(), it was redundant with the
> earlier memset() / xzalloc(). Once moved, the full structure field needs
> setting, as dom_handle_cmp() uses memcmp().
The whole a653sched_priv_t *sched_priv is still allocated in a653sched_init()
with xzalloc(), so it's still redundant post-move. With that said, the code is
only setting the first element (of an already-zeroed array), which is suspicious
and misleading. What we really should be doing here is copy unit->domain->handle
to sched_priv->schedule[entry].dom_handle.
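A minimal sketch of that suggestion (struct sched_entry_sketch and slot_set_handle() are made-up names, not the real Xen types): copy the owning domain's handle into the slot instead of partially clearing it.

```c
#include <string.h>

#define HANDLE_SIZE 16 /* xen_domain_handle_t width; illustrative */

/* Minimal stand-in for one schedule entry (not the real type). */
struct sched_entry_sketch {
    unsigned char dom_handle[HANDLE_SIZE];
};

/* Copy the domain's handle into the slot, overwriting any stale bytes. */
static void slot_set_handle(struct sched_entry_sketch *s,
                            const unsigned char handle[HANDLE_SIZE])
{
    memcpy(s->dom_handle, handle, sizeof(s->dom_handle));
}
```

Unlike a partial clear, this leaves the slot matching the domain's handle regardless of what a prior arinc653_sched_set()-style write left behind.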
> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> There being no "else" to the if(), what about other Dom0 vCPU-s?
The condition checks minor frame entries (i.e. available slots in the
schedule). Once those are exhausted, Dom0 units beyond
ARINC653_MAX_DOMAINS_PER_SCHEDULE would never be scheduled. The scheduler
currently only runs a single unit at a time, so adding units to the schedule
this way results in Dom0 vCPUs being scheduled sequentially. That is
inefficient and unlikely to be what anyone actually wants, but I don't see
other options given the current lack of multicore support.
There was an effort some time ago to introduce multicore scheduling, see [1].
I'd be happy to review if somebody wants to pick that up again.
[1] https://lore.kernel.org/xen-devel/20200916181854.75563-1-jeff.kubascik@dornerworks.com/T/#t
> And why
> is it that domain ID 0 is special here, rather than the hardware and/or
> control domain(s)? (Likely the latter as that's what would invoke
> XEN_SYSCTL_SCHEDOP_putinfo, and hence needs to be able to run without
> that having been issued first.)
This likely should be updated to is_control_domain(unit->domain).
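A hedged sketch of what that would look like (the struct and may_prepopulate_slot() are stand-ins, not the Xen definitions; in Xen, is_control_domain() tests d->is_privileged):

```c
#include <stdbool.h>

/* Minimal stand-ins (assumptions, not the real Xen types). */
struct domain {
    unsigned int domain_id;
    bool is_privileged;
};

static bool is_control_domain(const struct domain *d)
{
    return d->is_privileged;
}

/* The suggested condition, replacing a hard-coded domain-ID-0 check. */
static bool may_prepopulate_slot(const struct domain *d)
{
    return is_control_domain(d); /* was: d->domain_id == 0 */
}
```

This keeps the special-casing tied to the domain that actually issues XEN_SYSCTL_SCHEDOP_putinfo, rather than to whichever domain happens to have ID 0.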
>
> --- a/xen/common/sched/arinc653.c
> +++ b/xen/common/sched/arinc653.c
> @@ -420,7 +420,8 @@ a653sched_alloc_udata(const struct sched
>
> if ( entry < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
> {
> - sched_priv->schedule[entry].dom_handle[0] = '\0';
> + memset(sched_priv->schedule[entry].dom_handle, '\0',
> + sizeof(sched_priv->schedule[entry].dom_handle));
> sched_priv->schedule[entry].unit_id = unit->unit_id;
> sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
> sched_priv->schedule[entry].unit = unit;
>
On 24.03.2026 18:10, Stewart Hildebrand wrote:
> On 3/24/26 11:54, Jan Beulich wrote:
>> When that code still lived in a653sched_init(), it was redundant with the
>> earlier memset() / xzalloc(). Once moved, the full structure field needs
>> setting, as dom_handle_cmp() uses memcmp().
>
> The whole a653sched_priv_t *sched_priv is still allocated in a653sched_init()
> with xzalloc(), so it's still redundant post-move.
No, because arinc653_sched_set() may have "polluted" the entries in the
meantime.
> With that said, the code is
> only setting the first element (of an already-zeroed array), which is suspicious
> and misleading. What we really should be doing here is copy unit->domain->handle
> to sched_priv->schedule[entry].dom_handle.
I can switch to doing that, but aiui for Dom0 that's still going to be
all zeroes. The difference will become relevant when switching to use
of is_control_domain() in the surrounding if().
>> Fixes: 9f0c658baedc ("arinc: add cpu-pool support to scheduler")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> There being no "else" to the if(), what about other Dom0 vCPU-s?
>
> The condition is checking minor frame entries (i.e. available slots in the
> schedule). Once those are exhausted, Dom0 units beyond
> ARINC653_MAX_DOMAINS_PER_SCHEDULE would never be scheduled.
And hence the system overall would likely misbehave, without any halfway
clear indication in the log as to why?
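One possible shape for such a diagnostic (a sketch only: claim_slot(), the return values, and the message text are invented; in Xen this would presumably be a printk(XENLOG_WARNING ...) in the missing else branch, and the limit's value here is illustrative):

```c
#include <stdio.h>

#define ARINC653_MAX_DOMAINS_PER_SCHEDULE 64 /* illustrative value */

/*
 * Sketch only: warn when the schedule is full, rather than silently
 * leaving the unit unschedulable. Returns 0 on success, -1 when full.
 */
static int claim_slot(unsigned int entry, unsigned int unit_id)
{
    if ( entry < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
        return 0; /* slot claimed, as the existing code does */

    fprintf(stderr,
            "a653: no schedule slot left for unit %u; it will never run\n",
            unit_id);
    return -1;
}
```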
Jan