From nobody Fri Oct 3 20:59:24 2025
From: Chen Ridong
To: longman@redhat.com, tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, lujialin4@huawei.com, chenridong@huawei.com
Subject: [PATCH -next v5 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup
Date: Mon, 25 Aug 2025 03:23:50 +0000
Message-Id: <20250825032352.1703602-2-chenridong@huaweicloud.com>
In-Reply-To: <20250825032352.1703602-1-chenridong@huaweicloud.com>
References: <20250825032352.1703602-1-chenridong@huaweicloud.com>
From: Chen Ridong

Currently, free_cpumasks() can free both tmpmasks and cpumasks of a
cpuset (cs). However, these two operations are not logically coupled.

To improve code clarity:

1. Move cpumask freeing to free_cpuset()
2. Rename free_cpumasks() to free_tmpmasks()

This change enforces the single responsibility principle.

Signed-off-by: Chen Ridong
Reviewed-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 32 +++++++++++++-------------------
 1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3466ebbf1016..aebda14cc67f 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -459,23 +459,14 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
 }
 
 /**
- * free_cpumasks - free cpumasks in a tmpmasks structure
- * @cs: the cpuset that have cpumasks to be free.
+ * free_tmpmasks - free cpumasks in a tmpmasks structure
  * @tmp: the tmpmasks structure pointer
  */
-static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline void free_tmpmasks(struct tmpmasks *tmp)
 {
-	if (cs) {
-		free_cpumask_var(cs->cpus_allowed);
-		free_cpumask_var(cs->effective_cpus);
-		free_cpumask_var(cs->effective_xcpus);
-		free_cpumask_var(cs->exclusive_cpus);
-	}
-	if (tmp) {
-		free_cpumask_var(tmp->new_cpus);
-		free_cpumask_var(tmp->addmask);
-		free_cpumask_var(tmp->delmask);
-	}
+	free_cpumask_var(tmp->new_cpus);
+	free_cpumask_var(tmp->addmask);
+	free_cpumask_var(tmp->delmask);
 }
 
 /**
@@ -508,7 +499,10 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
  */
 static inline void free_cpuset(struct cpuset *cs)
 {
-	free_cpumasks(cs, NULL);
+	free_cpumask_var(cs->cpus_allowed);
+	free_cpumask_var(cs->effective_cpus);
+	free_cpumask_var(cs->effective_xcpus);
+	free_cpumask_var(cs->exclusive_cpus);
 	kfree(cs);
 }
 
@@ -2427,7 +2421,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cs->partition_root_state)
 		update_partition_sd_lb(cs, old_prs);
 out_free:
-	free_cpumasks(NULL, &tmp);
+	free_tmpmasks(&tmp);
 	return retval;
 }
 
@@ -2530,7 +2524,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cs->partition_root_state)
 		update_partition_sd_lb(cs, old_prs);
 
-	free_cpumasks(NULL, &tmp);
+	free_tmpmasks(&tmp);
 	return 0;
 }
 
@@ -2983,7 +2977,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	notify_partition_change(cs, old_prs);
 	if (force_sd_rebuild)
 		rebuild_sched_domains_locked();
-	free_cpumasks(NULL, &tmpmask);
+	free_tmpmasks(&tmpmask);
 	return 0;
 }
 
@@ -4006,7 +4000,7 @@ static void cpuset_handle_hotplug(void)
 	if (force_sd_rebuild)
 		rebuild_sched_domains_cpuslocked();
 
-	free_cpumasks(NULL, ptmp);
+	free_tmpmasks(ptmp);
 }
 
 void cpuset_update_active_cpus(void)
-- 
2.34.1
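The ownership rule this patch establishes (free_tmpmasks() touches only the
temporary masks, while free_cpuset() releases the cpuset's own masks together
with the structure itself) can be sketched in ordinary user-space C. The
snippet below is a hypothetical analogue rather than kernel code:
calloc()/free() stand in for zalloc_cpumask_var()/free_cpumask_var(), and
both structures are reduced to a couple of fields.

/*
 * User-space sketch of the split: each free helper owns exactly the
 * masks of its own structure.  Build with: cc -o demo demo.c
 */
#include <stdlib.h>

struct tmpmasks { unsigned long *new_cpus, *addmask, *delmask; };
struct cpuset   { unsigned long *cpus_allowed, *effective_cpus; };

/* Frees only the temporary masks, mirroring free_tmpmasks(). */
static void free_tmpmasks(struct tmpmasks *tmp)
{
	free(tmp->new_cpus);
	free(tmp->addmask);
	free(tmp->delmask);
}

/* Frees the cpuset's own masks and then the struct, mirroring free_cpuset(). */
static void free_cpuset(struct cpuset *cs)
{
	free(cs->cpus_allowed);
	free(cs->effective_cpus);
	free(cs);
}

int main(void)
{
	struct tmpmasks tmp = {
		.new_cpus = calloc(1, sizeof(unsigned long)),
		.addmask  = calloc(1, sizeof(unsigned long)),
		.delmask  = calloc(1, sizeof(unsigned long)),
	};
	struct cpuset *cs = calloc(1, sizeof(*cs));

	if (!cs)
		return 1;
	cs->cpus_allowed   = calloc(1, sizeof(unsigned long));
	cs->effective_cpus = calloc(1, sizeof(unsigned long));

	free_tmpmasks(&tmp);	/* temporary masks only */
	free_cpuset(cs);	/* cpuset masks plus the struct */
	return 0;
}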
From nobody Fri Oct 3 20:59:24 2025
From: Chen Ridong
To: longman@redhat.com, tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, lujialin4@huawei.com, chenridong@huawei.com
Subject: [PATCH -next v5 2/3] cpuset: separate tmpmasks and cpuset allocation logic
Date: Mon, 25 Aug 2025 03:23:51 +0000
Message-Id: <20250825032352.1703602-3-chenridong@huaweicloud.com>
In-Reply-To: <20250825032352.1703602-1-chenridong@huaweicloud.com>
References: <20250825032352.1703602-1-chenridong@huaweicloud.com>
From: Chen Ridong

The original alloc_cpumasks() served dual purposes: allocating cpumasks
for both temporary masks (tmpmasks) and cpuset structures. This patch:

1. Decouples these allocation paths for better code clarity
2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
   functions
3. Maintains symmetric pairing:
   - alloc_tmpmasks() ↔ free_tmpmasks()
   - dup_or_alloc_cpuset() ↔ free_cpuset()

Signed-off-by: Chen Ridong
Reviewed-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 127 ++++++++++++++++++++++-------------------
 1 file changed, 69 insertions(+), 58 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index aebda14cc67f..7b0b81c835bf 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -411,51 +411,47 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 }
 
 /**
- * alloc_cpumasks - allocate three cpumasks for cpuset
- * @cs: the cpuset that have cpumasks to be allocated.
- * @tmp: the tmpmasks structure pointer
+ * alloc_cpumasks - Allocate an array of cpumask variables
+ * @pmasks: Pointer to array of cpumask_var_t pointers
+ * @size: Number of cpumasks to allocate
  * Return: 0 if successful, -ENOMEM otherwise.
  *
- * Only one of the two input arguments should be non-NULL.
+ * Allocates @size cpumasks and initializes them to empty. Returns 0 on
+ * success, -ENOMEM on allocation failure. On failure, any previously
+ * allocated cpumasks are freed.
  */
-static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline int alloc_cpumasks(cpumask_var_t *pmasks[], u32 size)
 {
-	cpumask_var_t *pmask1, *pmask2, *pmask3, *pmask4;
+	int i;
 
-	if (cs) {
-		pmask1 = &cs->cpus_allowed;
-		pmask2 = &cs->effective_cpus;
-		pmask3 = &cs->effective_xcpus;
-		pmask4 = &cs->exclusive_cpus;
-	} else {
-		pmask1 = &tmp->new_cpus;
-		pmask2 = &tmp->addmask;
-		pmask3 = &tmp->delmask;
-		pmask4 = NULL;
+	for (i = 0; i < size; i++) {
+		if (!zalloc_cpumask_var(pmasks[i], GFP_KERNEL)) {
+			while (--i >= 0)
+				free_cpumask_var(*pmasks[i]);
+			return -ENOMEM;
+		}
 	}
-
-	if (!zalloc_cpumask_var(pmask1, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (!zalloc_cpumask_var(pmask2, GFP_KERNEL))
-		goto free_one;
-
-	if (!zalloc_cpumask_var(pmask3, GFP_KERNEL))
-		goto free_two;
-
-	if (pmask4 && !zalloc_cpumask_var(pmask4, GFP_KERNEL))
-		goto free_three;
-
-	return 0;
+}
 
-free_three:
-	free_cpumask_var(*pmask3);
-free_two:
-	free_cpumask_var(*pmask2);
-free_one:
-	free_cpumask_var(*pmask1);
-	return -ENOMEM;
+/**
+ * alloc_tmpmasks - Allocate temporary cpumasks for cpuset operations.
+ * @tmp: Pointer to tmpmasks structure to populate
+ * Return: 0 on success, -ENOMEM on allocation failure
+ */
+static inline int alloc_tmpmasks(struct tmpmasks *tmp)
+{
+	/*
+	 * Array of pointers to the three cpumask_var_t fields in tmpmasks.
+	 * Note: Array size must match actual number of masks (3)
+	 */
+	cpumask_var_t *pmask[3] = {
+		&tmp->new_cpus,
+		&tmp->addmask,
+		&tmp->delmask
+	};
+
+	return alloc_cpumasks(pmask, ARRAY_SIZE(pmask));
 }
 
 /**
@@ -470,26 +466,46 @@ static inline void free_tmpmasks(struct tmpmasks *tmp)
 }
 
 /**
- * alloc_trial_cpuset - allocate a trial cpuset
- * @cs: the cpuset that the trial cpuset duplicates
+ * dup_or_alloc_cpuset - Duplicate or allocate a new cpuset
+ * @cs: Source cpuset to duplicate (NULL for a fresh allocation)
+ *
+ * Creates a new cpuset by either:
+ * 1. Duplicating an existing cpuset (if @cs is non-NULL), or
+ * 2. Allocating a fresh cpuset with zero-initialized masks (if @cs is NULL)
+ *
+ * Return: Pointer to newly allocated cpuset on success, NULL on failure
  */
-static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
+static struct cpuset *dup_or_alloc_cpuset(struct cpuset *cs)
 {
 	struct cpuset *trial;
 
-	trial = kmemdup(cs, sizeof(*cs), GFP_KERNEL);
+	/* Allocate base structure */
+	trial = cs ? kmemdup(cs, sizeof(*cs), GFP_KERNEL) :
+		     kzalloc(sizeof(*cs), GFP_KERNEL);
 	if (!trial)
 		return NULL;
 
-	if (alloc_cpumasks(trial, NULL)) {
+	/* Setup cpumask pointer array */
+	cpumask_var_t *pmask[4] = {
+		&trial->cpus_allowed,
+		&trial->effective_cpus,
+		&trial->effective_xcpus,
+		&trial->exclusive_cpus
+	};
+
+	if (alloc_cpumasks(pmask, ARRAY_SIZE(pmask))) {
 		kfree(trial);
 		return NULL;
 	}
 
-	cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
-	cpumask_copy(trial->effective_cpus, cs->effective_cpus);
-	cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
-	cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	/* Copy masks if duplicating */
+	if (cs) {
+		cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+		cpumask_copy(trial->effective_cpus, cs->effective_cpus);
+		cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
+		cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	}
+
 	return trial;
 }
 
@@ -2332,7 +2348,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
 		return 0;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2476,7 +2492,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (retval)
 		return retval;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2820,7 +2836,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	int spread_flag_changed;
 	int err;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs)
 		return -ENOMEM;
 
@@ -2881,7 +2897,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	if (new_prs && is_prs_invalid(old_prs))
 		old_prs = PRS_MEMBER;
 
-	if (alloc_cpumasks(NULL, &tmpmask))
+	if (alloc_tmpmasks(&tmpmask))
 		return -ENOMEM;
 
 	err = update_partition_exclusive_flag(cs, new_prs);
@@ -3223,7 +3239,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs) {
 		retval = -ENOMEM;
 		goto out_unlock;
@@ -3456,15 +3472,10 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (!parent_css)
 		return &top_cpuset.css;
 
-	cs = kzalloc(sizeof(*cs), GFP_KERNEL);
+	cs = dup_or_alloc_cpuset(NULL);
 	if (!cs)
 		return ERR_PTR(-ENOMEM);
 
-	if (alloc_cpumasks(cs, NULL)) {
-		kfree(cs);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	__set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
 	fmeter_init(&cs->fmeter);
 	cs->relax_domain_level = -1;
@@ -3920,7 +3931,7 @@ static void cpuset_handle_hotplug(void)
 	bool on_dfl = is_in_v2_mode();
 	struct tmpmasks tmp, *ptmp = NULL;
 
-	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
+	if (on_dfl && !alloc_tmpmasks(&tmp))
 		ptmp = &tmp;
 
 	lockdep_assert_cpus_held();
-- 
2.34.1
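The heart of the reworked alloc_cpumasks() is an allocate-N-or-unwind loop.
Below is a small, hypothetical user-space sketch of that pattern:
calloc()/free() replace the kernel cpumask allocators, and the helper name
alloc_masks() is invented for the example (plain cc build, no kernel headers).

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Allocate @size objects; on the first failure, unwind everything
 * allocated so far and report -ENOMEM, like the new alloc_cpumasks().
 */
static int alloc_masks(unsigned long **pmasks[], unsigned int size)
{
	unsigned int i;

	for (i = 0; i < size; i++) {
		*pmasks[i] = calloc(1, sizeof(unsigned long));
		if (!*pmasks[i]) {
			while (i-- > 0) {
				free(*pmasks[i]);
				*pmasks[i] = NULL;
			}
			return -ENOMEM;
		}
	}
	return 0;
}

int main(void)
{
	unsigned long *new_cpus, *addmask, *delmask;
	unsigned long **pmask[] = { &new_cpus, &addmask, &delmask };
	unsigned int n = sizeof(pmask) / sizeof(pmask[0]);	/* ARRAY_SIZE() */

	if (alloc_masks(pmask, n))
		return 1;
	printf("allocated %u masks\n", n);
	free(new_cpus);
	free(addmask);
	free(delmask);
	return 0;
}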
From nobody Fri Oct 3 20:59:24 2025
From: Chen Ridong
To: longman@redhat.com, tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, lujialin4@huawei.com, chenridong@huawei.com
Subject: [PATCH -next v5 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks
Date: Mon, 25 Aug 2025 03:23:52 +0000
Message-Id: <20250825032352.1703602-4-chenridong@huaweicloud.com>
In-Reply-To: <20250825032352.1703602-1-chenridong@huaweicloud.com>
References: <20250825032352.1703602-1-chenridong@huaweicloud.com>
From: Chen Ridong

Replace the repetitive cpus_read_lock()/cpuset_mutex locking pattern
with a pair of new helpers:

- cpuset_full_lock()
- cpuset_full_unlock()

This makes the code cleaner and ensures consistent lock ordering.

Signed-off-by: Chen Ridong
Reviewed-by: Waiman Long
---
 kernel/cgroup/cpuset-internal.h |  2 ++
 kernel/cgroup/cpuset-v1.c       | 12 +++----
 kernel/cgroup/cpuset.c          | 60 +++++++++++++++++++--------------
 3 files changed, 40 insertions(+), 34 deletions(-)

diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
index 75b3aef39231..337608f408ce 100644
--- a/kernel/cgroup/cpuset-internal.h
+++ b/kernel/cgroup/cpuset-internal.h
@@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
 ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 			     char *buf, size_t nbytes, loff_t off);
 int cpuset_common_seq_show(struct seq_file *sf, void *v);
+void cpuset_full_lock(void);
+void cpuset_full_unlock(void);
 
 /*
  * cpuset-v1.c
diff --git a/kernel/cgroup/cpuset-v1.c b/kernel/cgroup/cpuset-v1.c
index b69a7db67090..12e76774c75b 100644
--- a/kernel/cgroup/cpuset-v1.c
+++ b/kernel/cgroup/cpuset-v1.c
@@ -169,8 +169,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 	cpuset_filetype_t type = cft->private;
 	int retval = -ENODEV;
 
-	cpus_read_lock();
-	cpuset_lock();
+	cpuset_full_lock();
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -184,8 +183,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	cpuset_unlock();
-	cpus_read_unlock();
+	cpuset_full_unlock();
 	return retval;
 }
 
@@ -454,8 +452,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 	cpuset_filetype_t type = cft->private;
 	int retval = 0;
 
-	cpus_read_lock();
-	cpuset_lock();
+	cpuset_full_lock();
 	if (!is_cpuset_online(cs)) {
 		retval = -ENODEV;
 		goto out_unlock;
@@ -498,8 +495,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	cpuset_unlock();
-	cpus_read_unlock();
+	cpuset_full_unlock();
 	return retval;
 }
 
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7b0b81c835bf..a78ccd11ce9b 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -250,6 +250,12 @@ static struct cpuset top_cpuset = {
 
 static DEFINE_MUTEX(cpuset_mutex);
 
+/**
+ * cpuset_lock - Acquire the global cpuset mutex
+ *
+ * This locks the global cpuset mutex to prevent modifications to cpuset
+ * hierarchy and configurations. This helper is not enough to make modification.
+ */
 void cpuset_lock(void)
 {
 	mutex_lock(&cpuset_mutex);
@@ -260,6 +266,24 @@ void cpuset_unlock(void)
 	mutex_unlock(&cpuset_mutex);
 }
 
+/**
+ * cpuset_full_lock - Acquire full protection for cpuset modification
+ *
+ * Takes both CPU hotplug read lock (cpus_read_lock()) and cpuset mutex
+ * to safely modify cpuset data.
+ */
+void cpuset_full_lock(void)
+{
+	cpus_read_lock();
+	mutex_lock(&cpuset_mutex);
+}
+
+void cpuset_full_unlock(void)
+{
+	mutex_unlock(&cpuset_mutex);
+	cpus_read_unlock();
+}
+
 static DEFINE_SPINLOCK(callback_lock);
 
 void cpuset_callback_lock_irq(void)
@@ -3234,8 +3258,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	int retval = -ENODEV;
 
 	buf = strstrip(buf);
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
+	cpuset_full_lock();
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -3264,8 +3287,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	if (force_sd_rebuild)
 		rebuild_sched_domains_locked();
 out_unlock:
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpuset_full_unlock();
 	flush_workqueue(cpuset_migrate_mm_wq);
 	return retval ?: nbytes;
 }
@@ -3368,12 +3390,10 @@ static ssize_t cpuset_partition_write(struct kernfs_open_file *of, char *buf,
 	else
 		return -EINVAL;
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
+	cpuset_full_lock();
 	if (is_cpuset_online(cs))
 		retval = update_prstate(cs, val);
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpuset_full_unlock();
 	return retval ?: nbytes;
 }
 
@@ -3498,9 +3518,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	if (!parent)
 		return 0;
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpuset_full_lock();
 	if (is_spread_page(parent))
 		set_bit(CS_SPREAD_PAGE, &cs->flags);
 	if (is_spread_slab(parent))
@@ -3552,8 +3570,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
 	spin_unlock_irq(&callback_lock);
 out_unlock:
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpuset_full_unlock();
 	return 0;
 }
 
@@ -3568,16 +3585,12 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 {
 	struct cpuset *cs = css_cs(css);
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpuset_full_lock();
 	if (!cpuset_v2() && is_sched_load_balance(cs))
 		cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
 
 	cpuset_dec();
-
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpuset_full_unlock();
 }
 
 /*
@@ -3589,16 +3602,11 @@ static void cpuset_css_killed(struct cgroup_subsys_state *css)
 {
 	struct cpuset *cs = css_cs(css);
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpuset_full_lock();
 	/* Reset valid partition back to member */
 	if (is_partition_valid(cs))
 		update_prstate(cs, PRS_MEMBER);
-
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
-
+	cpuset_full_unlock();
 }
 
 static void cpuset_css_free(struct cgroup_subsys_state *css)
-- 
2.34.1
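What the helper pair buys is that every call site now takes the CPU hotplug
read lock and cpuset_mutex in one fixed order and releases them in the
reverse order. A minimal, hypothetical user-space sketch of the same pattern
follows; pthread primitives stand in for cpus_read_lock() and cpuset_mutex,
and the names full_lock()/full_unlock() are illustrative only (build with
cc -pthread).

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Always take the outer read lock first, then the inner mutex. */
static void full_lock(void)
{
	pthread_rwlock_rdlock(&hotplug_lock);
	pthread_mutex_lock(&config_mutex);
}

/* Release in the reverse order of acquisition. */
static void full_unlock(void)
{
	pthread_mutex_unlock(&config_mutex);
	pthread_rwlock_unlock(&hotplug_lock);
}

int main(void)
{
	full_lock();
	puts("configuration updated under both locks");
	full_unlock();
	return 0;
}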