From nobody Sat Feb 7 16:05:44 2026
From: Waiman Long
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider,
    Anna-Maria Behnsen, Frederic Weisbecker, Thomas Gleixner, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Waiman Long
Subject: [PATCH/for-next 1/2] cgroup/cpuset: Defer housekeeping_update() call from CPU hotplug to task_work
Date: Tue, 27 Jan 2026 23:42:50 -0500
Message-ID: <20260128044251.1229702-2-longman@redhat.com>
In-Reply-To: <20260128044251.1229702-1-longman@redhat.com>
References: <20260128044251.1229702-1-longman@redhat.com>

The update_isolation_cpumasks() function can be called either directly
from a regular cpuset control file write with cpuset_full_lock() called,
or via the CPU hotplug path with cpus_write_lock and cpuset_mutex held.
As we are going to enable dynamic update of the nohz_full housekeeping
cpumask (HK_TYPE_KERNEL_NOISE) soon with the help of CPU hotplug,
allowing the CPU hotplug path to call into housekeeping_update()
directly from update_isolation_cpumasks() would cause a deadlock. So we
have to defer any call to housekeeping_update() until after the CPU
hotplug operation has finished. This can be done via the
task_work_add(..., TWA_RESUME) API, where the actual
housekeeping_update() call, if needed, will happen right before exiting
back to userspace.

Since the HK_TYPE_DOMAIN housekeeping cpumask should now track the
changes in "cpuset.cpus.isolated", add a check in test_cpuset_prs.sh to
confirm that the CPU hotplug deferral, if needed, is working as
expected.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c                          | 49 ++++++++++++++++++-
 .../selftests/cgroup/test_cpuset_prs.sh        |  9 ++++
 2 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7b7d12ab1006..98c7cb732206 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -84,6 +84,10 @@ static cpumask_var_t isolated_cpus;
  */
 static bool isolated_cpus_updating;
 
+/* Both cpuset_mutex and cpus_read_lock acquired */
+static bool cpuset_full_locked;
+static bool isolation_task_work_queued;
+
 /*
  * A flag to force sched domain rebuild at the end of an operation.
  * It can be set in
@@ -285,10 +289,12 @@ void cpuset_full_lock(void)
 {
 	cpus_read_lock();
 	mutex_lock(&cpuset_mutex);
+	cpuset_full_locked = true;
 }
 
 void cpuset_full_unlock(void)
 {
+	cpuset_full_locked = false;
 	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
@@ -1285,25 +1291,64 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 	return false;
 }
 
+static void __update_isolation_cpumasks(bool twork);
+static void isolation_task_work_fn(struct callback_head *cb)
+{
+	cpuset_full_lock();
+	__update_isolation_cpumasks(true);
+	cpuset_full_unlock();
+}
+
 /*
- * update_isolation_cpumasks - Update external isolation related CPU masks
+ * __update_isolation_cpumasks - Update external isolation related CPU masks
+ * @twork: set if called from isolation_task_work_fn()
  *
  * The following external CPU masks will be updated if necessary:
  * - workqueue unbound cpumask
  */
-static void update_isolation_cpumasks(void)
+static void __update_isolation_cpumasks(bool twork)
 {
 	int ret;
 
+	if (twork)
+		isolation_task_work_queued = false;
+
 	if (!isolated_cpus_updating)
 		return;
 
+	/*
+	 * This function can be reached either directly from a regular cpuset
+	 * control file write (cpuset_full_locked) or via hotplug
+	 * (cpus_write_lock && cpuset_mutex held). In the latter case, we
+	 * defer the housekeeping_update() call to a task_work to avoid
+	 * the possibility of deadlock. The task_work will be run right
+	 * before exiting back to userspace.
+	 */
+	if (!cpuset_full_locked) {
+		static struct callback_head twork_cb;
+
+		if (!isolation_task_work_queued) {
+			init_task_work(&twork_cb, isolation_task_work_fn);
+			if (!task_work_add(current, &twork_cb, TWA_RESUME))
+				isolation_task_work_queued = true;
+			else
+				/* Current task shouldn't be exiting */
+				WARN_ON_ONCE(1);
+		}
+		return;
+	}
+
 	ret = housekeeping_update(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
 
 	isolated_cpus_updating = false;
 }
 
+static inline void update_isolation_cpumasks(void)
+{
+	__update_isolation_cpumasks(false);
+}
+
 /**
  * rm_siblings_excl_cpus - Remove exclusive CPUs that are used by sibling cpusets
  * @parent: Parent cpuset containing all siblings
diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index 5dff3ad53867..af4a2532cb3e 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -773,6 +773,7 @@ check_isolcpus()
 	EXPECTED_ISOLCPUS=$1
 	ISCPUS=${CGROUP2}/cpuset.cpus.isolated
 	ISOLCPUS=$(cat $ISCPUS)
+	HKICPUS=$(cat /sys/devices/system/cpu/isolated)
 	LASTISOLCPU=
 	SCHED_DOMAINS=/sys/kernel/debug/sched/domains
 	if [[ $EXPECTED_ISOLCPUS = . ]]
@@ -810,6 +811,14 @@ check_isolcpus()
 	ISOLCPUS=
 	EXPECTED_ISOLCPUS=$EXPECTED_SDOMAIN
 
+	#
+	# The inverse of the HK_TYPE_DOMAIN cpumask in $HKICPUS should match $ISOLCPUS
+	#
+	[[ "$ISOLCPUS" != "$HKICPUS" ]] && {
+		echo "Housekeeping isolated CPUs mismatch - $HKICPUS"
+		return 1
+	}
+
 	#
 	# Use the sched domain in debugfs to check isolated CPUs, if available
 	#
-- 
2.52.0
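[Editorial aside, not part of the series: the deferral in patch 1 relies on the
generic task_work mechanism. The following minimal sketch shows the
task_work_add(..., TWA_RESUME) pattern in isolation; the names
(defer_work_cb, defer_work_queued, do_deferred_update,
request_deferred_update) are hypothetical stand-ins for twork_cb,
isolation_task_work_queued and the cpuset functions above.]

#include <linux/task_work.h>
#include <linux/sched.h>
#include <linux/bug.h>

/* Hypothetical stand-ins for twork_cb and isolation_task_work_queued. */
static struct callback_head defer_work_cb;
static bool defer_work_queued;

/*
 * Runs in the context of the task that queued it, right before that task
 * returns to userspace, i.e. after the CPU hotplug operation has completed
 * and its locks have been dropped.
 */
static void do_deferred_update(struct callback_head *cb)
{
	defer_work_queued = false;
	/* ... now safe to perform the heavyweight update ... */
}

/*
 * Called from a context (e.g. the CPU hotplug path) where doing the update
 * directly could deadlock; queue it to run later instead.
 */
static void request_deferred_update(void)
{
	if (defer_work_queued)
		return;

	init_task_work(&defer_work_cb, do_deferred_update);
	if (!task_work_add(current, &defer_work_cb, TWA_RESUME))
		defer_work_queued = true;
	else
		WARN_ON_ONCE(1);	/* current should not be exiting here */
}

[The queued flag matters because the callback_head is static: adding it to
the task_work list twice before it runs would corrupt the list, so only one
pending callback is kept per update burst.]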
From nobody Sat Feb 7 16:05:44 2026
From: Waiman Long
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Anna-Maria Behnsen, Frederic Weisbecker,
    Thomas Gleixner, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Waiman Long
Subject: [PATCH/for-next 2/2] cgroup/cpuset: Introduce a new top level isolcpus_update_mutex
Date: Tue, 27 Jan 2026 23:42:51 -0500
Message-ID: <20260128044251.1229702-3-longman@redhat.com>
In-Reply-To: <20260128044251.1229702-1-longman@redhat.com>
References: <20260128044251.1229702-1-longman@redhat.com>

The current cpuset partition code is able to dynamically update the
sched domains of a running system and the corresponding HK_TYPE_DOMAIN
housekeeping cpumask to perform what is essentially the
"isolcpus=domain,..." boot command line feature at run time. The
housekeeping cpumask update requires flushing a number of different
workqueues, which may not be safe with cpus_read_lock() held, as the
workqueue flushing code may acquire cpus_read_lock() or acquire locks
that have a locking dependency with cpus_read_lock() further down the
chain. Below is an example of such a circular locking problem.

======================================================
WARNING: possible circular locking dependency detected
6.18.0-test+ #2 Tainted: G S
------------------------------------------------------
test_cpuset_prs/10971 is trying to acquire lock:
ffff888112ba4958 ((wq_completion)sync_wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x7a/0x180

but task is already holding lock:
ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at: cpuset_partition_write+0x85/0x130

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #4 (cpuset_mutex){+.+.}-{4:4}:
-> #3 (cpu_hotplug_lock){++++}-{0:0}:
-> #2 (rtnl_mutex){+.+.}-{4:4}:
-> #1 ((work_completion)(&arg.work)){+.+.}-{0:0}:
-> #0 ((wq_completion)sync_wq){+.+.}-{0:0}:

Chain exists of:
  (wq_completion)sync_wq --> cpu_hotplug_lock --> cpuset_mutex

5 locks held by test_cpuset_prs/10971:
 #0: ffff88816810e440 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0xf9/0x1d0
 #1: ffff8891ab620890 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x260/0x5f0
 #2: ffff8890a78b83e8 (kn->active#187){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b6/0x5f0
 #3: ffffffffadf32900 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_partition_write+0x77/0x130
 #4: ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at: cpuset_partition_write+0x85/0x130

Call Trace:
 touch_wq_lockdep_map+0x93/0x180
 __flush_workqueue+0x111/0x10b0
 housekeeping_update+0x12d/0x2d0
 update_parent_effective_cpumask+0x595/0x2440
 update_prstate+0x89d/0xce0
 cpuset_partition_write+0xc5/0x130
 cgroup_file_write+0x1a5/0x680
 kernfs_fop_write_iter+0x3df/0x5f0
 vfs_write+0x525/0xfd0
 ksys_write+0xf9/0x1d0
 do_syscall_64+0x95/0x520
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

To avoid such a circular locking dependency problem, we have to call
housekeeping_update() without holding cpus_read_lock() and
cpuset_mutex. One way to do that is to introduce a new top level
isolcpus_update_mutex which will be acquired first if the set of
isolated CPUs may have to be updated. This new isolcpus_update_mutex
provides the needed mutual exclusion without having to hold
cpus_read_lock().

As cpus_read_lock() is now no longer held when
tmigr_isolated_exclude_cpumask() is called, that function needs to
acquire it directly. The lockdep_is_cpuset_held() helper is also
updated to check the new isolcpus_update_mutex.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c        | 79 ++++++++++++++++++++++++-----------
 kernel/sched/isolation.c      |  4 +-
 kernel/time/timer_migration.c |  3 +-
 3 files changed, 57 insertions(+), 29 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 98c7cb732206..96390ceb5122 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -78,7 +78,7 @@ static cpumask_var_t subpartitions_cpus;
 static cpumask_var_t isolated_cpus;
 
 /*
- * isolated_cpus updating flag (protected by cpuset_mutex)
+ * isolated_cpus updating flag (protected by isolcpus_update_mutex)
  * Set if isolated_cpus is going to be updated in the current
  * cpuset_mutex crtical section.
  */
@@ -223,29 +223,46 @@ struct cpuset top_cpuset = {
 };
 
 /*
- * There are two global locks guarding cpuset structures - cpuset_mutex and
- * callback_lock.  The cpuset code uses only cpuset_mutex.  Other kernel
- * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
- * structures. Note that cpuset_mutex needs to be a mutex as it is used in
- * paths that rely on priority inheritance (e.g. scheduler - on RT) for
- * correctness.
+ * CPUSET Locking Convention
+ * -------------------------
  *
- * A task must hold both locks to modify cpusets.  If a task holds
- * cpuset_mutex, it blocks others, ensuring that it is the only task able to
- * also acquire callback_lock and be able to modify cpusets.  It can perform
- * various checks on the cpuset structure first, knowing nothing will change.
- * It can also allocate memory while just holding cpuset_mutex.  While it is
- * performing these checks, various callback routines can briefly acquire
- * callback_lock to query cpusets.  Once it is ready to make the changes, it
- * takes callback_lock, blocking everyone else.
+ * Below are the four global locks guarding cpuset structures in lock
+ * acquisition order:
+ *  - isolcpus_update_mutex (optional)
+ *  - cpu_hotplug_lock (cpus_read_lock/cpus_write_lock)
+ *  - cpuset_mutex
+ *  - callback_lock (raw spinlock)
  *
- * Calls to the kernel memory allocator can not be made while holding
- * callback_lock, as that would risk double tripping on callback_lock
- * from one of the callbacks into the cpuset code from within
- * __alloc_pages().
+ * The first isolcpus_update_mutex should only be held if the existing set of
+ * isolated CPUs (in isolated partition) or any of the partition states may be
+ * changed when some cpuset control files are being written into. Otherwise,
+ * it can be skipped. Holding isolcpus_update_mutex/cpus_read_lock or
+ * cpus_write_lock will ensure mutual exclusion of isolated_cpus updates.
  *
- * If a task is only holding callback_lock, then it has read-only
- * access to cpusets.
+ * As cpuset will now indirectly flush a number of different workqueues in
+ * housekeeping_update() when the set of isolated CPUs is going to be changed,
+ * it may not be safe from the circular locking perspective to hold the
+ * cpus_read_lock. So cpuset_full_lock() will be released before calling
+ * housekeeping_update() and re-acquired afterward.
+ *
+ * A task must hold all the remaining three locks to modify externally visible
+ * or used fields of cpusets, though some of the internally used cpuset fields
+ * can be modified by holding cpu_hotplug_lock and cpuset_mutex only. If only
+ * reliable read access of the externally used fields is needed, a task can
+ * hold either cpuset_mutex or callback_lock.
+ *
+ * If a task holds cpu_hotplug_lock and cpuset_mutex, it blocks others,
+ * ensuring that it is the only task able to also acquire callback_lock and
+ * be able to modify cpusets. It can perform various checks on the cpuset
+ * structure first, knowing nothing will change. It can also allocate memory
+ * without holding callback_lock. While it is performing these checks, various
+ * callback routines can briefly acquire callback_lock to query cpusets.  Once
+ * it is ready to make the changes, it takes callback_lock, blocking everyone
+ * else.
+ *
+ * Calls to the kernel memory allocator cannot be made while holding
+ * callback_lock which is a spinlock, as the memory allocator may sleep or
+ * call back into cpuset code and acquire callback_lock.
  *
  * Now, the task_struct fields mems_allowed and mempolicy may be changed
  * by other task, we use alloc_lock in the task_struct fields to protect
@@ -256,6 +273,7 @@ struct cpuset top_cpuset = {
  * cpumasks and nodemasks.
  */
 
+static DEFINE_MUTEX(isolcpus_update_mutex);
 static DEFINE_MUTEX(cpuset_mutex);
 
 /**
@@ -302,7 +320,7 @@ void cpuset_full_unlock(void)
 #ifdef CONFIG_LOCKDEP
 bool lockdep_is_cpuset_held(void)
 {
-	return lockdep_is_held(&cpuset_mutex);
+	return lockdep_is_held(&isolcpus_update_mutex);
 }
 #endif
 
@@ -1294,9 +1312,8 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 static void __update_isolation_cpumasks(bool twork);
 static void isolation_task_work_fn(struct callback_head *cb)
 {
-	cpuset_full_lock();
+	guard(mutex)(&isolcpus_update_mutex);
 	__update_isolation_cpumasks(true);
-	cpuset_full_unlock();
 }
 
 /*
@@ -1338,8 +1355,18 @@ static void __update_isolation_cpumasks(bool twork)
 		return;
 	}
 
+	lockdep_assert_held(&isolcpus_update_mutex);
+	/*
+	 * Release cpus_read_lock & cpuset_mutex before calling
+	 * housekeeping_update() and re-acquire them afterward if not
+	 * called from task_work.
+	 */
+	if (!twork)
+		cpuset_full_unlock();
 	ret = housekeeping_update(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
+	if (!twork)
+		cpuset_full_lock();
 
 	isolated_cpus_updating = false;
 }
@@ -3196,6 +3223,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 		return -EACCES;
 
 	buf = strstrip(buf);
+	mutex_lock(&isolcpus_update_mutex);
 	cpuset_full_lock();
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
@@ -3226,6 +3254,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 		rebuild_sched_domains_locked();
 out_unlock:
 	cpuset_full_unlock();
+	mutex_unlock(&isolcpus_update_mutex);
 	if (of_cft(of)->private == FILE_MEMLIST)
 		schedule_flush_migrate_mm();
 	return retval ?: nbytes;
@@ -3329,6 +3358,7 @@ static ssize_t cpuset_partition_write(struct kernfs_open_file *of, char *buf,
 	else
 		return -EINVAL;
 
+	guard(mutex)(&isolcpus_update_mutex);
 	cpuset_full_lock();
 	if (is_cpuset_online(cs))
 		retval = update_prstate(cs, val);
@@ -3502,6 +3532,7 @@ static void cpuset_css_killed(struct cgroup_subsys_state *css)
 {
 	struct cpuset *cs = css_cs(css);
 
+	guard(mutex)(&isolcpus_update_mutex);
 	cpuset_full_lock();
 	/* Reset valid partition back to member */
 	if (is_partition_valid(cs))
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 3b725d39c06e..ef152d401fe2 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -123,8 +123,6 @@ int housekeeping_update(struct cpumask *isol_mask)
 	struct cpumask *trial, *old = NULL;
 	int err;
 
-	lockdep_assert_cpus_held();
-
 	trial = kmalloc(cpumask_size(), GFP_KERNEL);
 	if (!trial)
 		return -ENOMEM;
@@ -136,7 +134,7 @@ int housekeeping_update(struct cpumask *isol_mask)
 	}
 
 	if (!housekeeping.flags)
-		static_branch_enable_cpuslocked(&housekeeping_overridden);
+		static_branch_enable(&housekeeping_overridden);
 
 	if (housekeeping.flags & HK_FLAG_DOMAIN)
 		old = housekeeping_cpumask_dereference(HK_TYPE_DOMAIN);
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 6da9cd562b20..244a8d025e78 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1559,8 +1559,6 @@ int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
 	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
 	int cpu;
 
-	lockdep_assert_cpus_held();
-
 	if (!works)
 		return -ENOMEM;
 	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
@@ -1570,6 +1568,7 @@ int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
 	 * First set previously isolated CPUs as available (unisolate).
 	 * This cpumask contains only CPUs that switched to available now.
 	 */
+	guard(cpus_read_lock)();
 	cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
 	cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
 
-- 
2.52.0
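[Editorial aside, not part of the series: to make the locking shape of patch 2
easier to follow, here is a minimal sketch of the ordering it establishes. The
names (outer_update_mutex, inner_state_mutex, inner_full_lock/unlock,
flush_heavy_work, update_with_safe_ordering) are hypothetical stand-ins for
isolcpus_update_mutex, cpuset_full_lock()/unlock() and housekeeping_update();
only the shape, not the code, comes from the patch. The point is that the
heavy flush runs under the outer mutex alone, with the inner locks dropped, so
the workqueue --> cpu_hotplug_lock dependency cannot close a cycle.]

#include <linux/mutex.h>
#include <linux/cleanup.h>
#include <linux/cpu.h>

/* Hypothetical stand-ins for isolcpus_update_mutex and cpuset_mutex. */
static DEFINE_MUTEX(outer_update_mutex);
static DEFINE_MUTEX(inner_state_mutex);

static void inner_full_lock(void)
{
	cpus_read_lock();		/* like cpuset_full_lock() */
	mutex_lock(&inner_state_mutex);
}

static void inner_full_unlock(void)
{
	mutex_unlock(&inner_state_mutex);
	cpus_read_unlock();
}

/* Stand-in for housekeeping_update(); may flush workqueues internally. */
static void flush_heavy_work(void)
{
}

static void update_with_safe_ordering(void)
{
	/* The outer mutex alone serializes concurrent updaters. */
	guard(mutex)(&outer_update_mutex);

	inner_full_lock();
	/* ... compute the new state under the inner locks ... */
	inner_full_unlock();

	/*
	 * The flush runs with only the outer mutex held, so it may take
	 * cpus_read_lock() (directly or via a flushed work item) without
	 * creating a cycle back to the locks just dropped.
	 */
	flush_heavy_work();

	inner_full_lock();
	/* ... publish or clean up under the inner locks again if needed ... */
	inner_full_unlock();
}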