[cgroup/for-6.19 PATCH v3 4/5] cgroup/cpuset: Ensure domain isolated CPUs stay in root or isolated partition

Posted by Waiman Long 1 month, 2 weeks ago
Commit 4a74e418881f ("cgroup/cpuset: Check partition conflict with
housekeeping setup") is supposed to ensure that domain isolated CPUs
designated by the "isolcpus" boot command line option stay either in
the root partition or in isolated partitions. However, the required
check wasn't performed when a remote partition was created or when an
existing partition changed type from "isolated" to "root".

Even though this is a relatively minor issue, we still need to add the
required prstate_housekeeping_conflict() calls in both places to ensure
that the rule is strictly followed.
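
For reference, the rule that prstate_housekeeping_conflict() enforces
(the helper was added by commit 4a74e418881f) boils down to the sketch
below. This is a simplified illustration rather than the verbatim
kernel source; have_boot_isolcpus and boot_hk_cpus stand for the
cpuset-internal state recording whether "isolcpus" was given at boot
and which CPUs were left over for housekeeping:

  /*
   * Simplified sketch: does the requested partition state conflict
   * with the boot time "isolcpus" setup?
   */
  static bool prstate_housekeeping_conflict(int prstate,
                                            struct cpumask *new_cpus)
  {
          /* Without "isolcpus=", there is nothing to conflict with. */
          if (!have_boot_isolcpus)
                  return false;

          /*
           * Only an isolated partition may contain domain isolated
           * CPUs; any other partition type would pull them back into
           * a load balanced scheduling domain.
           */
          if ((prstate != PRS_ISOLATED) &&
              !cpumask_subset(new_cpus, boot_hk_cpus))
                  return true;

          return false;
  }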

The following steps can be used to reproduce the problem before this
fix.

  # fmt -1 /proc/cmdline | grep isolcpus
  isolcpus=9
  # cd /sys/fs/cgroup/
  # echo +cpuset > cgroup.subtree_control
  # mkdir test
  # echo 9 > test/cpuset.cpus
  # echo isolated > test/cpuset.cpus.partition
  # cat test/cpuset.cpus.partition
  isolated
  # cat test/cpuset.cpus.effective
  9
  # echo root > test/cpuset.cpus.partition
  # cat test/cpuset.cpus.effective
  9
  # cat test/cpuset.cpus.partition
  root

The final "root" state is accepted even though CPU 9 is supposed to be
kept out of any load balanced scheduling domain by "isolcpus=9". With
this fix, the last few steps become:

  # echo root > test/cpuset.cpus.partition
  # cat test/cpuset.cpus.effective
  0-8,10-95
  # cat test/cpuset.cpus.partition
  root invalid (partition config conflicts with housekeeping setup)
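
The reason string in parentheses comes from the table that maps cpuset
partition error codes to human readable text; both call sites patched
below return PERR_HKEEPING, which is reported as shown above. A sketch
of the relevant entry (the real table in kernel/cgroup/cpuset.c has one
string per PERR_* code):

  /*
   * Sketch: the reason string appended to the "invalid" partition
   * state; only the entry relevant to this patch is spelled out.
   */
  static const char * const perr_strings[] = {
          [PERR_HKEEPING] = "partition config conflicts with housekeeping setup",
  };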

Reported-by: Chen Ridong <chenridong@huawei.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index cc9c3402f16b..2daf58bf0bbb 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1610,8 +1610,9 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
 	if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
 	    cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
 		return PERR_INVCPUS;
-	if ((new_prs == PRS_ISOLATED) &&
-	    !isolated_cpus_can_update(tmp->new_cpus, NULL))
+	if (((new_prs == PRS_ISOLATED) &&
+	     !isolated_cpus_can_update(tmp->new_cpus, NULL)) ||
+	    prstate_housekeeping_conflict(new_prs, tmp->new_cpus))
 		return PERR_HKEEPING;
 
 	spin_lock_irq(&callback_lock);
@@ -3062,8 +3063,9 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 		 * A change in load balance state only, no change in cpumasks.
 		 * Need to update isolated_cpus.
 		 */
-		if ((new_prs == PRS_ISOLATED) &&
-		    !isolated_cpus_can_update(cs->effective_xcpus, NULL))
+		if (((new_prs == PRS_ISOLATED) &&
+		     !isolated_cpus_can_update(cs->effective_xcpus, NULL)) ||
+		    prstate_housekeeping_conflict(new_prs, cs->effective_xcpus))
 			err = PERR_HKEEPING;
 		else
 			isolcpus_updated = true;
-- 
2.51.1
Re: [cgroup/for-6.19 PATCH v3 4/5] cgroup/cpuset: Ensure domain isolated CPUs stay in root or isolated partition
Posted by Chen Ridong 1 month, 2 weeks ago

On 2025/11/5 12:38, Waiman Long wrote:
> [...]

Reviewed-by: Chen Ridong <chenridong@huawei.com>

-- 
Best regards,
Ridong