From: Chen Ridong
To: longman@redhat.com, tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, lujialin4@huawei.com, chenridong@huawei.com
Subject: [PATCH -next v3 2/3] cpuset: separate tmpmasks and cpuset allocation logic
Date: Mon, 18 Aug 2025 03:44:29 +0000
Message-Id: <20250818034430.1304069-3-chenridong@huaweicloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250818034430.1304069-1-chenridong@huaweicloud.com>
References: <20250818034430.1304069-1-chenridong@huaweicloud.com>

From: Chen Ridong

The original alloc_cpumasks() served dual purposes: allocating cpumasks
for both temporary masks (tmpmasks) and cpuset structures. This patch:

1. Decouples these allocation paths for better code clarity
2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
   functions
3. Maintains symmetric pairing:
   - alloc_tmpmasks()      ↔ free_tmpmasks()
   - dup_or_alloc_cpuset() ↔ free_cpuset()

Signed-off-by: Chen Ridong
---
 kernel/cgroup/cpuset.c | 128 ++++++++++++++++++++++-------------------
 1 file changed, 69 insertions(+), 59 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index aebda14cc67f..d5588a1fef60 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -411,51 +411,46 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 }
 
 /**
- * alloc_cpumasks - allocate three cpumasks for cpuset
- * @cs: the cpuset that have cpumasks to be allocated.
- * @tmp: the tmpmasks structure pointer
- * Return: 0 if successful, -ENOMEM otherwise.
+ * alloc_cpumasks - Allocate an array of cpumask variables
+ * @pmasks: Pointer to array of cpumask_var_t pointers
+ * @size: Number of cpumasks to allocate
  *
- * Only one of the two input arguments should be non-NULL.
+ * Allocates @size cpumasks and initializes them to empty. Returns 0 on
+ * success, -ENOMEM on allocation failure. On failure, any previously
+ * allocated cpumasks are freed.
  */
-static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline int alloc_cpumasks(cpumask_var_t *pmasks[], u32 size)
 {
-	cpumask_var_t *pmask1, *pmask2, *pmask3, *pmask4;
+	int i;
 
-	if (cs) {
-		pmask1 = &cs->cpus_allowed;
-		pmask2 = &cs->effective_cpus;
-		pmask3 = &cs->effective_xcpus;
-		pmask4 = &cs->exclusive_cpus;
-	} else {
-		pmask1 = &tmp->new_cpus;
-		pmask2 = &tmp->addmask;
-		pmask3 = &tmp->delmask;
-		pmask4 = NULL;
+	for (i = 0; i < size; i++) {
+		if (!zalloc_cpumask_var(pmasks[i], GFP_KERNEL)) {
+			while (--i >= 0)
+				free_cpumask_var(*pmasks[i]);
+			return -ENOMEM;
+		}
 	}
-
-	if (!zalloc_cpumask_var(pmask1, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (!zalloc_cpumask_var(pmask2, GFP_KERNEL))
-		goto free_one;
-
-	if (!zalloc_cpumask_var(pmask3, GFP_KERNEL))
-		goto free_two;
-
-	if (pmask4 && !zalloc_cpumask_var(pmask4, GFP_KERNEL))
-		goto free_three;
-
-	return 0;
+	return 0;
+}
 
-free_three:
-	free_cpumask_var(*pmask3);
-free_two:
-	free_cpumask_var(*pmask2);
-free_one:
-	free_cpumask_var(*pmask1);
-	return -ENOMEM;
+/**
+ * alloc_tmpmasks - Allocate temporary cpumasks for cpuset operations.
+ * @tmp: Pointer to tmpmasks structure to populate
+ * Return: 0 on success, -ENOMEM on allocation failure
+ */
+static inline int alloc_tmpmasks(struct tmpmasks *tmp)
+{
+	/*
+	 * Array of pointers to the three cpumask_var_t fields in tmpmasks.
+	 * Note: Array size must match actual number of masks (3)
+	 */
+	cpumask_var_t *pmask[3] = {
+		&tmp->new_cpus,
+		&tmp->addmask,
+		&tmp->delmask
+	};
+
+	return alloc_cpumasks(pmask, ARRAY_SIZE(pmask));
 }
 
 /**
@@ -470,26 +465,46 @@ static inline void free_tmpmasks(struct tmpmasks *tmp)
 }
 
 /**
- * alloc_trial_cpuset - allocate a trial cpuset
- * @cs: the cpuset that the trial cpuset duplicates
+ * dup_or_alloc_cpuset - Duplicate or allocate a new cpuset
+ * @cs: Source cpuset to duplicate (NULL for a fresh allocation)
+ *
+ * Creates a new cpuset by either:
+ * 1. Duplicating an existing cpuset (if @cs is non-NULL), or
+ * 2. Allocating a fresh cpuset with zero-initialized masks (if @cs is NULL)
+ *
+ * Return: Pointer to newly allocated cpuset on success, NULL on failure
  */
-static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
+static struct cpuset *dup_or_alloc_cpuset(struct cpuset *cs)
 {
 	struct cpuset *trial;
 
-	trial = kmemdup(cs, sizeof(*cs), GFP_KERNEL);
+	/* Allocate base structure */
+	trial = cs ? kmemdup(cs, sizeof(*cs), GFP_KERNEL) :
+		     kzalloc(sizeof(*cs), GFP_KERNEL);
 	if (!trial)
 		return NULL;
 
-	if (alloc_cpumasks(trial, NULL)) {
+	/* Setup cpumask pointer array */
+	cpumask_var_t *pmask[4] = {
+		&trial->cpus_allowed,
+		&trial->effective_cpus,
+		&trial->effective_xcpus,
+		&trial->exclusive_cpus
+	};
+
+	if (alloc_cpumasks(pmask, ARRAY_SIZE(pmask))) {
 		kfree(trial);
 		return NULL;
 	}
 
-	cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
-	cpumask_copy(trial->effective_cpus, cs->effective_cpus);
-	cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
-	cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	/* Copy masks if duplicating */
+	if (cs) {
+		cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+		cpumask_copy(trial->effective_cpus, cs->effective_cpus);
+		cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
+		cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	}
+
 	return trial;
 }
 
@@ -2332,7 +2347,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
 		return 0;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2476,7 +2491,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (retval)
 		return retval;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2820,7 +2835,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	int spread_flag_changed;
 	int err;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs)
 		return -ENOMEM;
 
@@ -2881,7 +2896,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	if (new_prs && is_prs_invalid(old_prs))
 		old_prs = PRS_MEMBER;
 
-	if (alloc_cpumasks(NULL, &tmpmask))
+	if (alloc_tmpmasks(&tmpmask))
 		return -ENOMEM;
 
 	err = update_partition_exclusive_flag(cs, new_prs);
@@ -3223,7 +3238,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs) {
 		retval = -ENOMEM;
 		goto out_unlock;
@@ -3456,15 +3471,10 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (!parent_css)
 		return &top_cpuset.css;
 
-	cs = kzalloc(sizeof(*cs), GFP_KERNEL);
+	cs = dup_or_alloc_cpuset(NULL);
 	if (!cs)
 		return ERR_PTR(-ENOMEM);
 
-	if (alloc_cpumasks(cs, NULL)) {
-		kfree(cs);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	__set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
 	fmeter_init(&cs->fmeter);
 	cs->relax_domain_level = -1;
@@ -3920,7 +3930,7 @@ static void cpuset_handle_hotplug(void)
 	bool on_dfl = is_in_v2_mode();
 	struct tmpmasks tmp, *ptmp = NULL;
 
-	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
+	if (on_dfl && !alloc_tmpmasks(&tmp))
 		ptmp = &tmp;
 
 	lockdep_assert_cpus_held();
-- 
2.34.1
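
For reference, the allocation pattern introduced above can be sketched
outside the kernel as follows. The types and helpers here (mask_t,
mask_alloc, mask_free, alloc_masks) are hypothetical userspace stand-ins
for cpumask_var_t, zalloc_cpumask_var() and free_cpumask_var(); this is a
minimal illustration of the loop-plus-unwind shape, not the kernel code
itself:

#include <stdio.h>
#include <stdlib.h>

typedef unsigned long *mask_t;	/* stand-in for cpumask_var_t */

/* stand-in for zalloc_cpumask_var(): zero-filled mask, 0 on success */
static int mask_alloc(mask_t *m)
{
	*m = calloc(1, sizeof(unsigned long));
	return *m ? 0 : -1;
}

/* stand-in for free_cpumask_var() */
static void mask_free(mask_t m)
{
	free(m);
}

/*
 * Allocate @size masks through an array of pointers; on failure, free
 * whatever was already allocated and report the error to the caller.
 * This mirrors the shape of the reworked alloc_cpumasks().
 */
static int alloc_masks(mask_t *pmasks[], int size)
{
	int i;

	for (i = 0; i < size; i++) {
		if (mask_alloc(pmasks[i])) {
			while (--i >= 0)
				mask_free(*pmasks[i]);
			return -1;
		}
	}
	return 0;
}

/* mirrors the three-field tmpmasks structure */
struct tmpmasks {
	mask_t new_cpus;
	mask_t addmask;
	mask_t delmask;
};

int main(void)
{
	struct tmpmasks tmp;
	mask_t *pmask[3] = { &tmp.new_cpus, &tmp.addmask, &tmp.delmask };

	if (alloc_masks(pmask, 3)) {
		fprintf(stderr, "mask allocation failed\n");
		return 1;
	}

	/* ... use tmp.new_cpus / tmp.addmask / tmp.delmask here ... */

	mask_free(tmp.delmask);
	mask_free(tmp.addmask);
	mask_free(tmp.new_cpus);
	return 0;
}

The same pointer-array-plus-unwind shape is what lets alloc_tmpmasks()
hand a three-entry array to the shared helper while dup_or_alloc_cpuset()
hands it a four-entry one.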