From: Chen Ridong <chenridong@huawei.com>
A kernel warning was observed in the cpuset migration path:
WARNING: CPU: 3 PID: 123 at kernel/cgroup/cpuset.c:3130
cgroup_migrate_execute+0x8df/0xf30
Call Trace:
cgroup_transfer_tasks+0x2f3/0x3b0
cpuset_migrate_tasks_workfn+0x146/0x3b0
process_one_work+0x5ba/0xda0
worker_thread+0x788/0x1220
The issue can be reliably reproduced with:
# Setup test cpuset
mkdir /sys/fs/cgroup/cpuset/test
echo 2-3 > /sys/fs/cgroup/cpuset/test/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/test/cpuset.mems
# Start test process
sleep 100 &
pid=$!
echo $pid > /sys/fs/cgroup/cpuset/test/cgroup.procs
taskset -p 0xC $pid # Bind to CPUs 2-3
# Take CPUs offline
echo 0 > /sys/devices/system/cpu/cpu3/online
echo 0 > /sys/devices/system/cpu/cpu2/online
Root cause analysis:
When tasks are migrated to top_cpuset due to CPUs going offline,
cpuset_attach_task() sets the CPU affinity using cpus_attach which
is initialized from cpu_possible_mask. This mask may include offline
CPUs. When __set_cpus_allowed_ptr() computes the intersection between:
1. cpus_attach (possible CPUs, may include offline)
2. p->user_cpus_ptr (original user-set mask)
The resulting new_mask may contain only offline CPUs, causing the
operation to fail.
The fix changes cpus_attach initialization to use cpu_active_mask
instead of cpu_possible_mask, ensuring we only consider online CPUs
when setting the new affinity. This prevents the scenario where
the intersection would result in an invalid CPU set.
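
The mask arithmetic described above can be sketched in a few lines of userspace C. This is an illustrative model only (bit i stands for CPU i; the helper names are mine, not the kernel's API), roughly mimicking how the allowed mask is resolved against a user-set mask and then checked against the active CPUs:

```c
#include <stdint.h>

/*
 * Rough model of the resolution step: prefer the intersection of the
 * attach mask and the user-set mask; fall back to the attach mask if
 * the intersection is empty.
 */
static uint32_t resolve_mask(uint32_t attach, uint32_t user)
{
	uint32_t m = attach & user;

	return m ? m : attach;
}

/* The request can only succeed if the result overlaps an active CPU. */
static int would_succeed(uint32_t attach, uint32_t user, uint32_t active)
{
	return (resolve_mask(attach, user) & active) != 0;
}
```

With CPUs 2-3 offline (active = 0x3) and a user mask of 0xC, an attach mask built from the possible mask (0xF) resolves to 0xC and misses every active CPU, modeling the warning; an attach mask built from the active mask (0x3) has an empty intersection with the user mask, falls back to 0x3, and succeeds.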
Fixes: da019032819a ("sched: Enforce user requested affinity")
Reported-by: Yang Lijin <yanglijin@huawei.com>
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
kernel/cgroup/cpuset.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index f74d04429a29..5401adbdffa6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3121,7 +3121,7 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
if (cs != &top_cpuset)
guarantee_active_cpus(task, cpus_attach);
else
- cpumask_andnot(cpus_attach, task_cpu_possible_mask(task),
+ cpumask_andnot(cpus_attach, cpu_active_mask,
subpartitions_cpus);
/*
* can_attach beforehand should guarantee that this doesn't
--
2.34.1
On 7/13/25 11:23 PM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> A kernel warning was observed in the cpuset migration path:
>
> [... commit message and diff trimmed ...]

Offline CPUs are explicitly included for tasks in top_cpuset. Can you
try the following patch to see if it works?

Thanks,
Longman

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3bc4301466f3..acd70120228c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3114,6 +3114,10 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 static cpumask_var_t cpus_attach;
 static nodemask_t cpuset_attach_nodemask_to;
 
+/*
+ * Note that tasks in the top cpuset won't get their cpumasks updated when
+ * a hotplug event happens. So we include offline CPUs as well.
+ */
 static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
 {
 	lockdep_assert_held(&cpuset_mutex);
@@ -3127,7 +3131,16 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
 	 * can_attach beforehand should guarantee that this doesn't
 	 * fail. TODO: have a better way to handle failure here
 	 */
-	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+	if (unlikely(set_cpus_allowed_ptr(task, cpus_attach))) {
+		/*
+		 * Since offline CPUs are included for top_cpuset,
+		 * set_cpus_allowed_ptr() can fail if user_cpus_ptr contains
+		 * only offline CPUs. Take out the offline CPUs and retry.
+		 */
+		if (cs == &top_cpuset)
+			cpumask_and(cpus_attach, cpus_attach, cpu_active_mask);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+	}
 
 	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 	cpuset1_update_task_spread_flags(cs, task);
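
The retry fallback in the suggested patch above can be modeled in userspace C (again a bitmask sketch with illustrative names, not the kernel API): try the offline-inclusive mask first, and only strip offline CPUs when that attempt fails.

```c
#include <stdint.h>

/* Prefer attach & user; fall back to attach if the intersection is empty. */
static uint32_t resolve_mask(uint32_t attach, uint32_t user)
{
	uint32_t m = attach & user;

	return m ? m : attach;
}

/*
 * First attempt keeps offline CPUs in the attach mask; only when that
 * fails are offline CPUs taken out and the attach retried.
 * Returns the mask finally applied, or 0 if even the retry fails
 * (the case that would trip the WARN_ON_ONCE()).
 */
static uint32_t attach_with_retry(uint32_t attach, uint32_t user, uint32_t active)
{
	uint32_t m = resolve_mask(attach, user);

	if (m & active)
		return m;		/* first attempt succeeds */

	attach &= active;		/* take out the offline CPUs */
	m = resolve_mask(attach, user);
	return (m & active) ? m : 0;
}
```

In the reproducer's scenario (attach 0xF, user 0xC, active 0x3) the first attempt fails and the retry applies 0x3; when all four CPUs are active, the first attempt already succeeds with 0xC and the retry path is never taken.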
On 2025/7/15 3:46, Waiman Long wrote:
> On 7/13/25 11:23 PM, Chen Ridong wrote:
>> [... patch trimmed ...]
>
> Offline CPUs are explicitly included for tasks in top_cpuset. Can you
> try the following patch to see if it works?
>
> [... suggested patch trimmed ...]

Thank you very much. I tried this patch and it worked. I will resend
the new patch.

Best regards,
Ridong
On Mon, Jul 14, 2025 at 03:23:11AM +0000, Chen Ridong wrote:
> [... commit message trimmed ...]
>
> @@ -3121,7 +3121,7 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
> 	if (cs != &top_cpuset)
> 		guarantee_active_cpus(task, cpus_attach);
> 	else
> -		cpumask_andnot(cpus_attach, task_cpu_possible_mask(task),
> +		cpumask_andnot(cpus_attach, cpu_active_mask,
> 			       subpartitions_cpus);

This breaks things. Any task mask must be a subset of
task_cpu_possible_mask() at all times. It might not be able to run
outside of that mask.
On 2025/7/14 16:41, Peter Zijlstra wrote:
> [... quoted patch trimmed ...]
>
> This breaks things. Any task mask must be a subset of
> task_cpu_possible_mask() at all times. It might not be able to run
> outside of that mask.

Hi Peter,

Thanks for your feedback. I'm afraid I don't fully understand what you
mean by "breaks things". Could you please explain in more detail?

To clarify my current understanding: this patch simply changes the
cpus_attach initialization from task_cpu_possible_mask(task) to
cpu_active_mask. The intention is that when CPUs are offlined and tasks
are migrated to the root cpuset, we shouldn't try to migrate them to
offline CPUs. And since cpu_active_mask is a subset of
cpu_possible_mask, I thought this would be safe. Did I miss anything?

Best regards,
Ridong
On Mon, Jul 14, 2025 at 07:30:39PM +0800, Chen Ridong wrote:
> [... quoted patch trimmed ...]
>
> To clarify my current understanding: this patch simply changes the
> cpus_attach initialization from task_cpu_possible_mask(task) to
> cpu_active_mask. The intention is that when CPUs are offlined and
> tasks get migrated to root cpuset, we shouldn't try to migrate tasks
> to offline CPUs. And since cpu_active_mask is a subset of
> cpu_possible_mask, I thought this would be safe. Did I miss anything?

task_cpu_possible_mask() is the mask a task *MUST* stay inside of.

Specifically, this was introduced for ARMv9, where some CPUs drop the
capability to run ARM32 instructions. Trying to schedule an ARM32 task
on a CPU that does not support that instruction set is an immediate and
fatal failure.

Your change results in something akin to:

	set_cpus_allowed_ptr(task, cpu_active_mask & ~subpartitions_cpus);

Which does not honor the task_cpu_possible_mask() constraint.
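
The constraint Peter describes can be checked with a trivial mask test. This is a userspace sketch with hypothetical values for an ARMv9-style asymmetric system (bit i stands for CPU i; the helper is mine, not a kernel function):

```c
#include <stdint.h>

/*
 * Nonzero if a proposed affinity mask strays outside the task's
 * task_cpu_possible_mask() equivalent -- fatal for e.g. an ARM32 task
 * scheduled on a 64-bit-only CPU.
 */
static int violates_possible(uint32_t proposed, uint32_t task_possible)
{
	return (proposed & ~task_possible) != 0;
}
```

Suppose only CPUs 0-1 (0x3) can run a given 32-bit task while all four CPUs are active (0xF) and there are no subpartition CPUs: the patched expression, cpu_active_mask & ~subpartitions_cpus, yields 0xF and violates the constraint, while the original task_cpu_possible_mask(task) & ~subpartitions_cpus yields 0x3 and stays inside it.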
On 2025/7/14 19:59, Peter Zijlstra wrote:
> task_cpu_possible_mask() is the mask a task *MUST* stay inside of.
>
> [... explanation trimmed ...]

Thanks for your patience. I see now.

Best regards,
Ridong