From nobody Sat Nov 23 22:25:38 2024
From: Waiman Long
To: Tejun Heo, Johannes Weiner, Michal Koutný
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Waiman Long
Subject: [PATCH 1/3] cgroup/cpuset: Revert "Allow suppression of sched domain rebuild in update_cpumasks_hier()"
Date: Sat, 9 Nov 2024 21:50:21 -0500
Message-ID: <20241110025023.664487-2-longman@redhat.com>
In-Reply-To: <20241110025023.664487-1-longman@redhat.com>
References: <20241110025023.664487-1-longman@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Revert commit 3ae0b773211e ("cgroup/cpuset: Allow suppression of sched
domain rebuild in update_cpumasks_hier()") to allow for an alternative
way to suppress unnecessary rebuild_sched_domains_locked() calls in
update_cpumasks_hier() and elsewhere in a following commit.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 39 ++++++++++++++-------------------
 1 file changed, 14 insertions(+), 25 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a4dd285cdf39..565280193922 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1922,12 +1922,6 @@ static void compute_partition_effective_cpumask(struct cpuset *cs,
 	rcu_read_unlock();
 }
 
-/*
- * update_cpumasks_hier() flags
- */
-#define HIER_CHECKALL		0x01	/* Check all cpusets with no skipping */
-#define HIER_NO_SD_REBUILD	0x02	/* Don't rebuild sched domains */
-
 /*
  * update_cpumasks_hier - Update effective cpumasks and tasks in the subtree
  * @cs:  the cpuset to consider
@@ -1942,7 +1936,7 @@ static void compute_partition_effective_cpumask(struct cpuset *cs,
  * Called with cpuset_mutex held
  */
 static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
-				 int flags)
+				 bool force)
 {
 	struct cpuset *cp;
 	struct cgroup_subsys_state *pos_css;
@@ -2007,10 +2001,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 		 * Skip the whole subtree if
 		 * 1) the cpumask remains the same,
 		 * 2) has no partition root state,
-		 * 3) HIER_CHECKALL flag not set, and
+		 * 3) force flag not set, and
 		 * 4) for v2 load balance state same as its parent.
		 */
-		if (!cp->partition_root_state && !(flags & HIER_CHECKALL) &&
+		if (!cp->partition_root_state && !force &&
 		    cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
 		    (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
 		    (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
@@ -2112,8 +2106,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 	}
 	rcu_read_unlock();
 
-	if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD) &&
-	    !force_sd_rebuild)
+	if (need_rebuild_sched_domains && !force_sd_rebuild)
 		rebuild_sched_domains_locked();
 }
 
@@ -2141,9 +2134,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
 	 * directly.
 	 *
 	 * The update_cpumasks_hier() function may sleep. So we have to
-	 * release the RCU read lock before calling it. HIER_NO_SD_REBUILD
-	 * flag is used to suppress rebuild of sched domains as the callers
-	 * will take care of that.
+	 * release the RCU read lock before calling it.
 	 */
 	rcu_read_lock();
 	cpuset_for_each_child(sibling, pos_css, parent) {
@@ -2159,7 +2150,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
 			continue;
 
 		rcu_read_unlock();
-		update_cpumasks_hier(sibling, tmp, HIER_NO_SD_REBUILD);
+		update_cpumasks_hier(sibling, tmp, false);
 		rcu_read_lock();
 		css_put(&sibling->css);
 	}
@@ -2179,7 +2170,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	struct tmpmasks tmp;
 	struct cpuset *parent = parent_cs(cs);
 	bool invalidate = false;
-	int hier_flags = 0;
+	bool force = false;
 	int old_prs = cs->partition_root_state;
 
 	/* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
@@ -2240,8 +2231,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	 * Check all the descendants in update_cpumasks_hier() if
 	 * effective_xcpus is to be changed.
	 */
-	if (!cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus))
-		hier_flags = HIER_CHECKALL;
+	force = !cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus);
 
 	retval = validate_change(cs, trialcs);
 
@@ -2309,7 +2299,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	spin_unlock_irq(&callback_lock);
 
 	/* effective_cpus/effective_xcpus will be updated here */
-	update_cpumasks_hier(cs, &tmp, hier_flags);
+	update_cpumasks_hier(cs, &tmp, force);
 
 	/* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */
 	if (cs->partition_root_state)
@@ -2334,7 +2324,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	struct tmpmasks tmp;
 	struct cpuset *parent = parent_cs(cs);
 	bool invalidate = false;
-	int hier_flags = 0;
+	bool force = false;
 	int old_prs = cs->partition_root_state;
 
 	if (!*buf) {
@@ -2357,8 +2347,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	 * Check all the descendants in update_cpumasks_hier() if
 	 * effective_xcpus is to be changed.
 	 */
-	if (!cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus))
-		hier_flags = HIER_CHECKALL;
+	force = !cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus);
 
 	retval = validate_change(cs, trialcs);
 	if (retval)
@@ -2411,8 +2400,8 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	 * of the subtree when it is a valid partition root or effective_xcpus
 	 * is updated.
	 */
-	if (is_partition_valid(cs) || hier_flags)
-		update_cpumasks_hier(cs, &tmp, hier_flags);
+	if (is_partition_valid(cs) || force)
+		update_cpumasks_hier(cs, &tmp, force);
 
 	/* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */
 	if (cs->partition_root_state)
@@ -2853,7 +2842,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 		update_unbound_workqueue_cpumask(new_xcpus_state);
 
 	/* Force update if switching back to member */
-	update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);
+	update_cpumasks_hier(cs, &tmpmask, !new_prs);
 
 	/* Update sched domains and load balance flag */
 	update_partition_sd_lb(cs, old_prs);
-- 
2.47.0

From nobody Sat Nov 23 22:25:38 2024
From: Waiman Long
To: Tejun Heo, Johannes Weiner, Michal Koutný
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Waiman Long
Subject: [PATCH 2/3] cgroup/cpuset: Enforce at most one rebuild_sched_domains_locked() call per operation
Date: Sat, 9 Nov 2024 21:50:22 -0500
Message-ID: <20241110025023.664487-3-longman@redhat.com>
In-Reply-To: <20241110025023.664487-1-longman@redhat.com>
References: <20241110025023.664487-1-longman@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Since commit ff0ce721ec21 ("cgroup/cpuset: Eliminate unncessary sched
domains rebuilds in hotplug"), there is only one
rebuild_sched_domains_locked() call per hotplug operation. However,
writing to the various cpuset control files may still cause more than
one rebuild_sched_domains_locked() call to happen in some cases.

Juri found that two rebuild_sched_domains_locked() calls in
update_prstate(), one from update_cpumasks_hier() and another one from
update_partition_sd_lb(), could cause a cpuset partition to be created
with null total_bw for DL tasks. IOW, DL tasks may not be scheduled
correctly in such a partition.

A sample command sequence that can reproduce null total_bw is as
follows.

  # echo Y >/sys/kernel/debug/sched/verbose
  # echo +cpuset >/sys/fs/cgroup/cgroup.subtree_control
  # mkdir /sys/fs/cgroup/test
  # echo 0-7 > /sys/fs/cgroup/test/cpuset.cpus
  # echo 6-7 > /sys/fs/cgroup/test/cpuset.cpus.exclusive
  # echo root >/sys/fs/cgroup/test/cpuset.cpus.partition

Fix this double rebuild_sched_domains_locked() calls problem by
replacing the existing calls with cpuset_force_rebuild(), except the
rebuild_sched_domains_cpuslocked() call at the end of
cpuset_handle_hotplug().
Checking of the force_sd_rebuild flag is now done at the end of
cpuset_write_resmask() and update_prstate() to determine if
rebuild_sched_domains_locked() should be called or not.

The cpuset v1 code can still call rebuild_sched_domains_locked()
directly, as double rebuild_sched_domains_locked() calls are not
possible there.

Reported-by: Juri Lelli
Closes: https://lore.kernel.org/lkml/ZyuUcJDPBln1BK1Y@jlelli-thinkpadt14gen4.remote.csb/
Signed-off-by: Waiman Long
Tested-by: Juri Lelli
---
 kernel/cgroup/cpuset.c | 49 ++++++++++++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 16 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 565280193922..0d56a226c522 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -84,9 +84,19 @@ static bool have_boot_isolcpus;
 static struct list_head remote_children;
 
 /*
- * A flag to force sched domain rebuild at the end of an operation while
- * inhibiting it in the intermediate stages when set. Currently it is only
- * set in hotplug code.
+ * A flag to force sched domain rebuild at the end of an operation.
+ * It can be set in
+ *  - update_partition_sd_lb()
+ *  - remote_partition_check()
+ *  - update_cpumasks_hier()
+ *  - cpuset_update_flag()
+ *  - cpuset_hotplug_update_tasks()
+ *  - cpuset_handle_hotplug()
+ *
+ * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ *
+ * Note that update_relax_domain_level() in cpuset-v1.c can still call
+ * rebuild_sched_domains_locked() directly without using this flag.
 */
 static bool force_sd_rebuild;
 
@@ -990,6 +1000,7 @@ void rebuild_sched_domains_locked(void)
 
 	lockdep_assert_cpus_held();
 	lockdep_assert_held(&cpuset_mutex);
+	force_sd_rebuild = false;
 
 	/*
 	 * If we have raced with CPU hotplug, return early to avoid
@@ -1164,8 +1175,8 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
 		clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
 	}
 
-	if (rebuild_domains && !force_sd_rebuild)
-		rebuild_sched_domains_locked();
+	if (rebuild_domains)
+		cpuset_force_rebuild();
 }
 
 /*
@@ -1512,8 +1523,8 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
 			remote_partition_disable(child, tmp);
 			disable_cnt++;
 		}
-	if (disable_cnt && !force_sd_rebuild)
-		rebuild_sched_domains_locked();
+	if (disable_cnt)
+		cpuset_force_rebuild();
 }
 
 /*
@@ -2106,8 +2117,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 	}
 	rcu_read_unlock();
 
-	if (need_rebuild_sched_domains && !force_sd_rebuild)
-		rebuild_sched_domains_locked();
+	if (need_rebuild_sched_domains)
+		cpuset_force_rebuild();
 }
 
 /**
@@ -2726,9 +2737,13 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	cs->flags = trialcs->flags;
 	spin_unlock_irq(&callback_lock);
 
-	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed &&
-	    !force_sd_rebuild)
-		rebuild_sched_domains_locked();
+	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed) {
+		if (!IS_ENABLED(CONFIG_CPUSETS_V1) ||
+		    cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
+			cpuset_force_rebuild();
+		else
+			rebuild_sched_domains_locked();
+	}
 
 	if (spread_flag_changed)
 		cpuset1_update_tasks_flags(cs);
@@ -2848,6 +2863,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 		update_partition_sd_lb(cs, old_prs);
 
 	notify_partition_change(cs, old_prs);
+	if (force_sd_rebuild)
+		rebuild_sched_domains_locked();
 	free_cpumasks(NULL, &tmpmask);
 	return 0;
 }
@@ -3141,6 +3158,8 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	}
 
 	free_cpuset(trialcs);
+	if (force_sd_rebuild)
+		rebuild_sched_domains_locked();
 out_unlock:
 	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
@@ -3885,11 +3904,9 @@ static void cpuset_handle_hotplug(void)
 		rcu_read_unlock();
 	}
 
-	/* rebuild sched domains if cpus_allowed has changed */
-	if (force_sd_rebuild) {
-		force_sd_rebuild = false;
+	/* rebuild sched domains if necessary */
+	if (force_sd_rebuild)
 		rebuild_sched_domains_cpuslocked();
-	}
 
 	free_cpumasks(NULL, ptmp);
 }
-- 
2.47.0

From nobody Sat Nov 23 22:25:38 2024
From: Waiman Long
To: Tejun Heo, Johannes Weiner, Michal Koutný
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Waiman Long
Subject: [PATCH 3/3] cgroup/cpuset: Further optimize code if CONFIG_CPUSETS_V1 not set
Date: Sat, 9 Nov 2024 21:50:23 -0500
Message-ID: <20241110025023.664487-4-longman@redhat.com>
In-Reply-To: <20241110025023.664487-1-longman@redhat.com>
References: <20241110025023.664487-1-longman@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently the cpuset code uses cgroup_subsys_on_dfl() to check if we
are running with cgroup v2. If CONFIG_CPUSETS_V1 isn't set, there is
really no need to do this check and we can optimize out some of the
unneeded v1 specific code paths.

Introduce a new cpuset_v2() helper and use it to replace the
cgroup_subsys_on_dfl() check to further optimize the code.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0d56a226c522..655396e75b58 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -293,6 +293,12 @@ static inline void dec_attach_in_progress(struct cpuset *cs)
 	mutex_unlock(&cpuset_mutex);
 }
 
+static inline bool cpuset_v2(void)
+{
+	return !IS_ENABLED(CONFIG_CPUSETS_V1) ||
+		cgroup_subsys_on_dfl(cpuset_cgrp_subsys);
+}
+
 /*
  * Cgroup v2 behavior is used on the "cpus" and "mems" control files when
  * on default hierarchy or when the cpuset_v2_mode flag is set by mounting
@@ -303,7 +309,7 @@ static inline void dec_attach_in_progress(struct cpuset *cs)
  */
 static inline bool is_in_v2_mode(void)
 {
-	return cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
+	return cpuset_v2() ||
 	      (cpuset_cgrp_subsys.root->flags & CGRP_ROOT_CPUSET_V2_MODE);
 }
 
@@ -738,7 +744,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
 	int nslot;		/* next empty doms[] struct cpumask slot */
 	struct cgroup_subsys_state *pos_css;
 	bool root_load_balance = is_sched_load_balance(&top_cpuset);
-	bool cgrpv2 = cgroup_subsys_on_dfl(cpuset_cgrp_subsys);
+	bool cgrpv2 = cpuset_v2();
 	int nslot_update;
 
 	doms = NULL;
@@ -1198,7 +1204,7 @@ static void reset_partition_data(struct cpuset *cs)
 {
 	struct cpuset *parent = parent_cs(cs);
 
-	if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
+	if (!cpuset_v2())
 		return;
 
 	lockdep_assert_held(&callback_lock);
@@ -2017,7 +2023,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 		 */
 		if (!cp->partition_root_state && !force &&
 		    cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
-		    (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
+		    (!cpuset_v2() ||
 		    (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
 			pos_css = css_rightmost_descendant(pos_css);
 			continue;
@@ -2091,8 +2097,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 		 * from parent if current cpuset isn't a valid partition root
 		 * and their load balance states differ.
		 */
-		if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
-		    !is_partition_valid(cp) &&
+		if (cpuset_v2() && !is_partition_valid(cp) &&
 		    (is_sched_load_balance(parent) != is_sched_load_balance(cp))) {
 			if (is_sched_load_balance(parent))
 				set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
@@ -2108,8 +2113,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 		 */
 		if (!cpumask_empty(cp->cpus_allowed) &&
 		    is_sched_load_balance(cp) &&
-		   (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
-		    is_partition_valid(cp)))
+		   (!cpuset_v2() || is_partition_valid(cp)))
 			need_rebuild_sched_domains = true;
 
 		rcu_read_lock();
@@ -2246,7 +2250,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 
 	retval = validate_change(cs, trialcs);
 
-	if ((retval == -EINVAL) && cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) {
+	if ((retval == -EINVAL) && cpuset_v2()) {
 		struct cgroup_subsys_state *css;
 		struct cpuset *cp;
 
@@ -2738,8 +2742,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	spin_unlock_irq(&callback_lock);
 
 	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed) {
-		if (!IS_ENABLED(CONFIG_CPUSETS_V1) ||
-		    cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
+		if (cpuset_v2())
 			cpuset_force_rebuild();
 		else
 			rebuild_sched_domains_locked();
@@ -2925,8 +2928,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
 	 * migration permission derives from hierarchy ownership in
 	 * cgroup_procs_write_permission()).
 	 */
-	if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
-	    (cpus_updated || mems_updated)) {
+	if (!cpuset_v2() || (cpus_updated || mems_updated)) {
 		ret = security_task_setscheduler(task);
 		if (ret)
 			goto out_unlock;
@@ -3040,8 +3042,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	 * in effective cpus and mems. In that case, we can optimize out
 	 * by skipping the task iteration and update.
	 */
-	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
-	    !cpus_updated && !mems_updated) {
+	if (cpuset_v2() && !cpus_updated && !mems_updated) {
 		cpuset_attach_nodemask_to = cs->effective_mems;
 		goto out;
 	}
@@ -3391,7 +3392,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
 	INIT_LIST_HEAD(&cs->remote_sibling);
 
 	/* Set CS_MEMORY_MIGRATE for default hierarchy */
-	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
+	if (cpuset_v2())
 		__set_bit(CS_MEMORY_MIGRATE, &cs->flags);
 
 	return &cs->css;
@@ -3418,8 +3419,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	/*
 	 * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated
 	 */
-	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
-	    !is_sched_load_balance(parent))
+	if (cpuset_v2() && !is_sched_load_balance(parent))
 		clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
 
 	cpuset_inc();
@@ -3489,8 +3489,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 	if (is_partition_valid(cs))
 		update_prstate(cs, 0);
 
-	if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
-	    is_sched_load_balance(cs))
+	if (!cpuset_v2() && is_sched_load_balance(cs))
 		cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
 
 	cpuset_dec();
-- 
2.47.0