From: Chen Ridong <chenridong@huawei.com>
Subject: [PATCH v2 -next] cgroup/cpuset: add decrease attach_in_progress helpers
Date: Fri, 26 Jul 2024 01:05:02 +0000
Message-ID: <20240726010502.2226806-1-chenridong@huawei.com>

There are several functions that decrease attach_in_progress and wake up
cpuset_attach_wq when attach_in_progress drops to zero. Add helpers to make
this concise.

Signed-off-by: Chen Ridong <chenridong@huawei.com>
Reviewed-by: Kamalesh Babulal
Reviewed-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)
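Note (illustrative only, not part of the patch): the two helpers implement the
usual "decrement under the lock, wake waiters at zero" pattern. Below is a
minimal userspace sketch of that pattern for readers who want to see the
lock/wake interaction in isolation; a pthread mutex and condition variable
stand in for cpuset_mutex and cpuset_attach_wq, and all demo_* names are
hypothetical, they do not exist in the kernel tree.

#include <pthread.h>

static pthread_mutex_t demo_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  demo_wq    = PTHREAD_COND_INITIALIZER;
static int demo_attach_in_progress;	/* analog of cs->attach_in_progress */

/* Caller must already hold demo_mutex; mirrors dec_attach_in_progress_locked(). */
static void demo_dec_locked(void)
{
	demo_attach_in_progress--;
	if (!demo_attach_in_progress)
		pthread_cond_broadcast(&demo_wq);
}

/* Caller does not hold the lock; mirrors dec_attach_in_progress(). */
static void demo_dec(void)
{
	pthread_mutex_lock(&demo_mutex);
	demo_dec_locked();
	pthread_mutex_unlock(&demo_mutex);
}

/* Waiter side: roughly what waiters on cpuset_attach_wq do in the kernel. */
static void demo_wait_until_done(void)
{
	pthread_mutex_lock(&demo_mutex);
	while (demo_attach_in_progress)
		pthread_cond_wait(&demo_wq, &demo_mutex);
	pthread_mutex_unlock(&demo_mutex);
}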
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d4322619e59a..fa0c2fc5d383 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -490,6 +490,26 @@ static inline void check_insane_mems_config(nodemask_t *nodes)
 	}
 }
 
+/*
+ * decrease cs->attach_in_progress.
+ * wake_up cpuset_attach_wq if cs->attach_in_progress==0.
+ */
+static inline void dec_attach_in_progress_locked(struct cpuset *cs)
+{
+	lockdep_assert_held(&cpuset_mutex);
+
+	cs->attach_in_progress--;
+	if (!cs->attach_in_progress)
+		wake_up(&cpuset_attach_wq);
+}
+
+static inline void dec_attach_in_progress(struct cpuset *cs)
+{
+	mutex_lock(&cpuset_mutex);
+	dec_attach_in_progress_locked(cs);
+	mutex_unlock(&cpuset_mutex);
+}
+
 /*
  * Cgroup v2 behavior is used on the "cpus" and "mems" control files when
  * on default hierarchy or when the cpuset_v2_mode flag is set by mounting
@@ -3421,9 +3441,7 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 	cs = css_cs(css);
 
 	mutex_lock(&cpuset_mutex);
-	cs->attach_in_progress--;
-	if (!cs->attach_in_progress)
-		wake_up(&cpuset_attach_wq);
+	dec_attach_in_progress_locked(cs);
 
 	if (cs->nr_migrate_dl_tasks) {
 		int cpu = cpumask_any(cs->effective_cpus);
@@ -3538,9 +3556,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		reset_migrate_dl_data(cs);
 	}
 
-	cs->attach_in_progress--;
-	if (!cs->attach_in_progress)
-		wake_up(&cpuset_attach_wq);
+	dec_attach_in_progress_locked(cs);
 
 	mutex_unlock(&cpuset_mutex);
 }
@@ -4283,11 +4299,7 @@ static void cpuset_cancel_fork(struct task_struct *task, struct css_set *cset)
 	if (same_cs)
 		return;
 
-	mutex_lock(&cpuset_mutex);
-	cs->attach_in_progress--;
-	if (!cs->attach_in_progress)
-		wake_up(&cpuset_attach_wq);
-	mutex_unlock(&cpuset_mutex);
+	dec_attach_in_progress(cs);
 }
 
 /*
@@ -4319,10 +4331,7 @@ static void cpuset_fork(struct task_struct *task)
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 	cpuset_attach_task(cs, task);
 
-	cs->attach_in_progress--;
-	if (!cs->attach_in_progress)
-		wake_up(&cpuset_attach_wq);
-
+	dec_attach_in_progress_locked(cs);
 	mutex_unlock(&cpuset_mutex);
 }
 
-- 
2.34.1