From nobody Sat Feb 7 14:23:41 2026
From: Waiman Long
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Anna-Maria Behnsen, Frederic Weisbecker,
    Thomas Gleixner, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Waiman Long
Subject: [PATCH/for-next v2 1/2] cgroup/cpuset: Defer housekeeping_update() call from CPU hotplug to workqueue
Date: Fri, 30 Jan 2026 10:42:53 -0500
Message-ID: <20260130154254.1422113-2-longman@redhat.com>
In-Reply-To: <20260130154254.1422113-1-longman@redhat.com>
References: <20260130154254.1422113-1-longman@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The update_isolation_cpumasks() function can be called either directly
from a regular cpuset control file write with cpuset_full_lock() called,
or via the CPU hotplug path with cpus_write_lock and cpuset_mutex held.
As we are going to enable dynamic update to the nohz_full housekeeping
cpumask (HK_TYPE_KERNEL_NOISE) soon with the help of CPU hotplug,
allowing the CPU hotplug path to call into housekeeping_update()
directly from update_isolation_cpumasks() will likely cause deadlock.

So we have to defer any call to housekeeping_update() until after the
CPU hotplug operation has finished. This is now done via a workqueue,
where the actual housekeeping_update() call, if needed, will happen
after cpus_write_lock is released. We can't use the synchronous
task_work API because calls from the CPU hotplug path happen in the
per-cpu kthread of the CPU that is being shut down or brought up.
Because of the asynchronous nature of the workqueue, the HK_TYPE_DOMAIN
housekeeping cpumask will be updated a bit later than the
"cpuset.cpus.isolated" control file in this case.

Also add a check in test_cpuset_prs.sh and modify some existing test
cases to confirm that both "cpuset.cpus.isolated" and the HK_TYPE_DOMAIN
housekeeping cpumask are updated.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c                   | 37 +++++++++++++++++--
 .../selftests/cgroup/test_cpuset_prs.sh  | 13 +++++--
 2 files changed, 44 insertions(+), 6 deletions(-)
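Note: as background, the deferral pattern used below can be summarized with a
minimal, self-contained kernel-module sketch. It is purely illustrative and
not part of this patch; all demo_* names are invented. A pending flag plus a
static work item is handed off to system_unbound_wq, and queue_work() silently
skips a work item that is still pending:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only -- not part of this patch. */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/printk.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(demo_lock);		/* stands in for cpuset_full_lock() */
static bool demo_updating;		/* stands in for isolated_cpus_updating */

static void demo_workfn(struct work_struct *work)
{
	mutex_lock(&demo_lock);
	if (demo_updating) {
		pr_info("deferred update performed in workqueue context\n");
		demo_updating = false;
	}
	mutex_unlock(&demo_lock);
}
static DECLARE_WORK(demo_work, demo_workfn);

/* Called with demo_lock held; @can_sync mirrors the cpuset_locked flag. */
static void demo_request_update(bool can_sync)
{
	demo_updating = true;
	if (!can_sync) {
		/*
		 * queue_work() checks WORK_STRUCT_PENDING_BIT and is a
		 * no-op if demo_work is queued but not yet running.
		 */
		queue_work(system_unbound_wq, &demo_work);
		return;
	}
	pr_info("update performed synchronously\n");
	demo_updating = false;
}

static int __init demo_init(void)
{
	mutex_lock(&demo_lock);
	demo_request_update(false);	/* deferred: runs later in a worker */
	mutex_unlock(&demo_lock);
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: deferring an update to system_unbound_wq");

The cpuset change below follows the same shape: isolated_cpus_updating plays
the role of the pending flag, and isolcpus_workfn() re-takes the full cpuset
locks before performing the deferred housekeeping_update() call.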
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7b7d12ab1006..0b0eb1df09d5 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -84,6 +84,9 @@ static cpumask_var_t isolated_cpus;
  */
 static bool isolated_cpus_updating;
 
+/* Both cpuset_mutex and cpus_read_locked acquired */
+static bool cpuset_locked;
+
 /*
  * A flag to force sched domain rebuild at the end of an operation.
  * It can be set in
@@ -285,10 +288,12 @@ void cpuset_full_lock(void)
 {
 	cpus_read_lock();
 	mutex_lock(&cpuset_mutex);
+	cpuset_locked = true;
 }
 
 void cpuset_full_unlock(void)
 {
+	cpuset_locked = false;
 	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
@@ -1285,6 +1290,16 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 	return false;
 }
 
+static void isolcpus_workfn(struct work_struct *work)
+{
+	cpuset_full_lock();
+	if (isolated_cpus_updating) {
+		WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
+		isolated_cpus_updating = false;
+	}
+	cpuset_full_unlock();
+}
+
 /*
  * update_isolation_cpumasks - Update external isolation related CPU masks
  *
@@ -1293,14 +1308,30 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
  */
 static void update_isolation_cpumasks(void)
 {
-	int ret;
+	static DECLARE_WORK(isolcpus_work, isolcpus_workfn);
 
 	if (!isolated_cpus_updating)
 		return;
 
-	ret = housekeeping_update(isolated_cpus);
-	WARN_ON_ONCE(ret < 0);
+	/*
+	 * This function can be reached either directly from regular cpuset
+	 * control file write (cpuset_locked) or via hotplug (cpus_write_lock
+	 * && cpuset_mutex held). In the latter case, we defer the
+	 * housekeeping_update() call to the system_unbound_wq to avoid the
+	 * possibility of deadlock. This also means that there will be a short
+	 * period of time where HK_TYPE_DOMAIN housekeeping cpumask will lag
+	 * behind isolated_cpus.
+	 */
+	if (!cpuset_locked) {
+		/*
+		 * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work
+		 * item that is still pending.
+		 */
+		queue_work(system_unbound_wq, &isolcpus_work);
+		return;
+	}
 
+	WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
 	isolated_cpus_updating = false;
 }
 
diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index 5dff3ad53867..0502b156582b 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -245,8 +245,9 @@ TEST_MATRIX=(
 	"C2-3:P1:S+ C3:P2 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P2"
 	"C2-3:P1:S+ C3:P1 . . O2=0 . . . 0 A1:|A2:3 A1:P1|A2:P1"
 	"C2-3:P1:S+ C3:P1 . . O3=0 . . . 0 A1:2|A2: A1:P1|A2:P1"
-	"C2-3:P1:S+ C3:P1 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-1"
-	"C2-3:P1:S+ C3:P1 . . . T:O3=0 . . 0 A1:2|A2:2 A1:P1|A2:P-1"
+	"C2-3:P1:S+ C3:P2 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-2"
+	"C1-3:P1:S+ C3:P2 . . . T:O3=0 . . 0 A1:1-2|A2:1-2 A1:P1|A2:P-2 3|"
+	"C1-3:P1:S+ C3:P2 . . . T:O3=0 O3=1 . 0 A1:1-2|A2:3 A1:P1|A2:P2 3"
 	"$SETUP_A123_PARTITIONS . O1=0 . . . 0 A1:|A2:2|A3:3 A1:P1|A2:P1|A3:P1"
 	"$SETUP_A123_PARTITIONS . O2=0 . . . 0 A1:1|A2:|A3:3 A1:P1|A2:P1|A3:P1"
 	"$SETUP_A123_PARTITIONS . O3=0 . . . 0 A1:1|A2:2|A3: A1:P1|A2:P1|A3:P1"
@@ -764,7 +765,7 @@ check_cgroup_states()
 # only CPUs in isolated partitions as well as those that are isolated at
 # boot time.
 #
-# $1 - expected isolated cpu list(s) {,}
+# $1 - expected isolated cpu list(s) {|}
 #    - expected sched/domains value
 #    - cpuset.cpus.isolated value = if not defined
 #
@@ -773,6 +774,7 @@ check_isolcpus()
 	EXPECTED_ISOLCPUS=$1
 	ISCPUS=${CGROUP2}/cpuset.cpus.isolated
 	ISOLCPUS=$(cat $ISCPUS)
+	HKICPUS=$(cat /sys/devices/system/cpu/isolated)
 	LASTISOLCPU=
 	SCHED_DOMAINS=/sys/kernel/debug/sched/domains
 	if [[ $EXPECTED_ISOLCPUS = . ]]
@@ -810,6 +812,11 @@ check_isolcpus()
 		ISOLCPUS=
 		EXPECTED_ISOLCPUS=$EXPECTED_SDOMAIN
 
+	#
+	# The inverse of HK_TYPE_DOMAIN cpumask in $HKICPUS should match $ISOLCPUS
+	#
+	[[ "$ISOLCPUS" != "$HKICPUS" ]] && return 1
+
 	#
 	# Use the sched domain in debugfs to check isolated CPUs, if available
 	#
-- 
2.52.0
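As an aside, the consistency that the selftest change above verifies -- that
"cpuset.cpus.isolated" and the HK_TYPE_DOMAIN view in
/sys/devices/system/cpu/isolated end up agreeing -- can also be spot-checked
from user space. Below is a hypothetical helper, not part of the series; it
assumes cgroup v2 is mounted at /sys/fs/cgroup:

/* Illustrative user-space check -- not part of this series. */
#include <stdio.h>
#include <string.h>

/* Read the first line of @path into @buf and strip the trailing newline. */
static int read_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(void)
{
	char cgrp[256], hk[256];

	if (read_line("/sys/fs/cgroup/cpuset.cpus.isolated", cgrp, sizeof(cgrp)) ||
	    read_line("/sys/devices/system/cpu/isolated", hk, sizeof(hk)))
		return 1;

	/*
	 * With the workqueue deferral, the housekeeping view may briefly lag
	 * behind after a CPU hotplug event, so a transient mismatch right
	 * after hotplug is expected; the two views should converge once the
	 * queued work has run.
	 */
	printf("cpuset.cpus.isolated: '%s'\n", cgrp);
	printf("HK isolated CPUs:     '%s'\n", hk);
	return strcmp(cgrp, hk) ? 2 : 0;
}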
From nobody Sat Feb 7 14:23:41 2026
From: Waiman Long
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Anna-Maria Behnsen, Frederic Weisbecker,
    Thomas Gleixner, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Waiman Long
Subject: [PATCH/for-next v2 2/2] cgroup/cpuset: Introduce a new top level cpuset_top_mutex
Date: Fri, 30 Jan 2026 10:42:54 -0500
Message-ID: <20260130154254.1422113-3-longman@redhat.com>
In-Reply-To: <20260130154254.1422113-1-longman@redhat.com>
References: <20260130154254.1422113-1-longman@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The current cpuset partition code is able to dynamically update the
sched domains of a running system and the corresponding HK_TYPE_DOMAIN
housekeeping cpumask to perform what is essentially the
"isolcpus=domain,..." boot command line feature at run time.

The housekeeping cpumask update requires flushing a number of different
workqueues, which may not be safe with cpus_read_lock() held as the
workqueue flushing code may acquire cpus_read_lock() or acquire locks
that have a locking dependency with cpus_read_lock() further down the
chain. Below is an example of such a circular locking problem.

  ======================================================
  WARNING: possible circular locking dependency detected
  6.18.0-test+ #2 Tainted: G S
  ------------------------------------------------------
  test_cpuset_prs/10971 is trying to acquire lock:
  ffff888112ba4958 ((wq_completion)sync_wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x7a/0x180

  but task is already holding lock:
  ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at: cpuset_partition_write+0x85/0x130

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #4 (cpuset_mutex){+.+.}-{4:4}:
  -> #3 (cpu_hotplug_lock){++++}-{0:0}:
  -> #2 (rtnl_mutex){+.+.}-{4:4}:
  -> #1 ((work_completion)(&arg.work)){+.+.}-{0:0}:
  -> #0 ((wq_completion)sync_wq){+.+.}-{0:0}:

  Chain exists of:
    (wq_completion)sync_wq --> cpu_hotplug_lock --> cpuset_mutex

  5 locks held by test_cpuset_prs/10971:
   #0: ffff88816810e440 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0xf9/0x1d0
   #1: ffff8891ab620890 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x260/0x5f0
   #2: ffff8890a78b83e8 (kn->active#187){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b6/0x5f0
   #3: ffffffffadf32900 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_partition_write+0x77/0x130
   #4: ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at: cpuset_partition_write+0x85/0x130

  Call Trace:
   touch_wq_lockdep_map+0x93/0x180
   __flush_workqueue+0x111/0x10b0
   housekeeping_update+0x12d/0x2d0
   update_parent_effective_cpumask+0x595/0x2440
   update_prstate+0x89d/0xce0
   cpuset_partition_write+0xc5/0x130
   cgroup_file_write+0x1a5/0x680
   kernfs_fop_write_iter+0x3df/0x5f0
   vfs_write+0x525/0xfd0
   ksys_write+0xf9/0x1d0
   do_syscall_64+0x95/0x520
   entry_SYSCALL_64_after_hwframe+0x76/0x7e

To avoid such a circular locking dependency problem, we have to call
housekeeping_update() without holding cpus_read_lock() and
cpuset_mutex. The current set of workqueues flushed by
housekeeping_update() may not have work functions that call
cpus_read_lock() directly, but we are likely to extend the list of
flushed workqueues in the future. Moreover, the current set of work
functions may hold locks that have cpu_hotplug_lock further down the
dependency chain.

One way to do that is to introduce a new top level cpuset_top_mutex
which will be acquired first. This new cpuset_top_mutex provides the
needed mutual exclusion without the need to hold cpus_read_lock().

As cpus_read_lock() is now no longer held when
tmigr_isolated_exclude_cpumask() is called, that function needs to
acquire it directly. The lockdep_is_cpuset_held() helper is also
updated to check the new cpuset_top_mutex.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c        | 101 +++++++++++++++++++++++-----------
 kernel/sched/isolation.c      |   4 +-
 kernel/time/timer_migration.c |   3 +-
 3 files changed, 70 insertions(+), 38 deletions(-)
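Note: the lock-reordering idea can be illustrated with a small, hypothetical
sketch (demo_* names are invented; this is not the cpuset code itself). An
outermost mutex serializes updaters on its own, so the locks that conflict
with workqueue flushing can be dropped and re-acquired around the flush:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only -- not part of this patch. */
#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(demo_top_mutex);	/* stands in for cpuset_top_mutex */
static DEFINE_MUTEX(demo_mutex);	/* stands in for cpuset_mutex */

static void demo_workfn(struct work_struct *work)
{
	/* Pretend this work item takes locks that depend on cpu_hotplug_lock. */
}
static DECLARE_WORK(demo_work, demo_workfn);

static void demo_flush(void)
{
	/*
	 * Waiting for the work item may create a dependency chain back to
	 * cpu_hotplug_lock, so neither cpus_read_lock() nor demo_mutex may
	 * be held here; demo_top_mutex alone provides mutual exclusion.
	 */
	flush_work(&demo_work);
}

static void demo_update(void)
{
	mutex_lock(&demo_top_mutex);
	cpus_read_lock();
	mutex_lock(&demo_mutex);

	/* ... modify state that needs all three locks ... */

	/* Drop the locks that conflict with flushing. */
	mutex_unlock(&demo_mutex);
	cpus_read_unlock();

	demo_flush();

	/* Re-acquire them for the remainder of the operation. */
	cpus_read_lock();
	mutex_lock(&demo_mutex);

	/* ... finish up ... */

	mutex_unlock(&demo_mutex);
	cpus_read_unlock();
	mutex_unlock(&demo_top_mutex);
}

static int __init demo_init(void)
{
	schedule_work(&demo_work);
	demo_update();
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: dropping inner locks around a workqueue flush");

This is the role cpuset_top_mutex plays below, with cpuset_partial_lock() and
cpuset_partial_unlock() wrapping the inner cpus_read_lock + cpuset_mutex pair
around the housekeeping_update() call.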
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0b0eb1df09d5..edccfa2df9da 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -78,13 +78,13 @@ static cpumask_var_t subpartitions_cpus;
 static cpumask_var_t isolated_cpus;
 
 /*
- * isolated_cpus updating flag (protected by cpuset_mutex)
+ * isolated_cpus updating flag (protected by cpuset_top_mutex)
  * Set if isolated_cpus is going to be updated in the current
  * cpuset_mutex crtical section.
  */
 static bool isolated_cpus_updating;
 
-/* Both cpuset_mutex and cpus_read_locked acquired */
+/* cpuset_top_mutex acquired */
 static bool cpuset_locked;
 
 /*
@@ -222,29 +222,44 @@ struct cpuset top_cpuset = {
 };
 
 /*
- * There are two global locks guarding cpuset structures - cpuset_mutex and
- * callback_lock. The cpuset code uses only cpuset_mutex. Other kernel
- * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
- * structures. Note that cpuset_mutex needs to be a mutex as it is used in
- * paths that rely on priority inheritance (e.g. scheduler - on RT) for
- * correctness.
+ * CPUSET Locking Convention
+ * -------------------------
  *
- * A task must hold both locks to modify cpusets.  If a task holds
- * cpuset_mutex, it blocks others, ensuring that it is the only task able to
- * also acquire callback_lock and be able to modify cpusets.  It can perform
- * various checks on the cpuset structure first, knowing nothing will change.
- * It can also allocate memory while just holding cpuset_mutex.  While it is
- * performing these checks, various callback routines can briefly acquire
- * callback_lock to query cpusets.  Once it is ready to make the changes, it
- * takes callback_lock, blocking everyone else.
+ * Below are the four global locks guarding cpuset structures in lock
+ * acquisition order:
+ *  - cpuset_top_mutex
+ *  - cpu_hotplug_lock (cpus_read_lock/cpus_write_lock)
+ *  - cpuset_mutex
+ *  - callback_lock (raw spinlock)
  *
- * Calls to the kernel memory allocator can not be made while holding
- * callback_lock, as that would risk double tripping on callback_lock
- * from one of the callbacks into the cpuset code from within
- * __alloc_pages().
+ * The first cpuset_top_mutex will be held except when calling into
+ * cpuset_handle_hotplug() from the CPU hotplug code where cpus_write_lock
+ * and cpuset_mutex will be held instead.
  *
- * If a task is only holding callback_lock, then it has read-only
- * access to cpusets.
+ * As cpuset will now indirectly flush a number of different workqueues in
+ * housekeeping_update() when the set of isolated CPUs is going to be changed,
+ * it may not be safe from the circular locking perspective to hold the
+ * cpus_read_lock. So cpus_read_lock and cpuset_mutex will be released before
+ * calling housekeeping_update() and re-acquired afterward.
+ *
+ * A task must hold all the remaining three locks to modify externally visible
+ * or used fields of cpusets, though some of the internally used cpuset fields
+ * can be modified without holding callback_lock. If only reliable read access
+ * to the externally used fields is needed, a task can hold either
+ * cpuset_mutex or callback_lock, which are exposed to other subsystems.
+ *
+ * If a task holds cpu_hotplug_lock and cpuset_mutex, it blocks others,
+ * ensuring that it is the only task able to also acquire callback_lock and
+ * be able to modify cpusets. It can perform various checks on the cpuset
+ * structure first, knowing nothing will change. It can also allocate memory
+ * without holding callback_lock. While it is performing these checks, various
+ * callback routines can briefly acquire callback_lock to query cpusets. Once
+ * it is ready to make the changes, it takes callback_lock, blocking everyone
+ * else.
+ *
+ * Calls to the kernel memory allocator cannot be made while holding
+ * callback_lock which is a spinlock, as the memory allocator may sleep or
+ * call back into cpuset code and acquire callback_lock.
  *
  * Now, the task_struct fields mems_allowed and mempolicy may be changed
  * by other task, we use alloc_lock in the task_struct fields to protect
@@ -255,6 +270,7 @@ struct cpuset top_cpuset = {
  * cpumasks and nodemasks.
  */
 
+static DEFINE_MUTEX(cpuset_top_mutex);
 static DEFINE_MUTEX(cpuset_mutex);
 
 /**
@@ -278,6 +294,18 @@ void lockdep_assert_cpuset_lock_held(void)
 	lockdep_assert_held(&cpuset_mutex);
 }
 
+static void cpuset_partial_lock(void)
+{
+	cpus_read_lock();
+	mutex_lock(&cpuset_mutex);
+}
+
+static void cpuset_partial_unlock(void)
+{
+	mutex_unlock(&cpuset_mutex);
+	cpus_read_unlock();
+}
+
 /**
  * cpuset_full_lock - Acquire full protection for cpuset modification
  *
@@ -286,22 +314,22 @@ void lockdep_assert_cpuset_lock_held(void)
  */
 void cpuset_full_lock(void)
 {
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
+	mutex_lock(&cpuset_top_mutex);
+	cpuset_partial_lock();
 	cpuset_locked = true;
 }
 
 void cpuset_full_unlock(void)
 {
 	cpuset_locked = false;
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpuset_partial_unlock();
+	mutex_unlock(&cpuset_top_mutex);
 }
 
 #ifdef CONFIG_LOCKDEP
 bool lockdep_is_cpuset_held(void)
 {
-	return lockdep_is_held(&cpuset_mutex);
+	return lockdep_is_held(&cpuset_top_mutex);
 }
 #endif
 
@@ -1292,12 +1320,12 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 
 static void isolcpus_workfn(struct work_struct *work)
 {
-	cpuset_full_lock();
-	if (isolated_cpus_updating) {
-		WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
-		isolated_cpus_updating = false;
-	}
-	cpuset_full_unlock();
+	guard(mutex)(&cpuset_top_mutex);
+	if (!isolated_cpus_updating)
+		return;
+
+	WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
+	isolated_cpus_updating = false;
 }
 
 /*
@@ -1331,8 +1359,15 @@ static void update_isolation_cpumasks(void)
 		return;
 	}
 
+	lockdep_assert_held(&cpuset_top_mutex);
+	/*
+	 * Release cpus_read_lock & cpuset_mutex before calling
+	 * housekeeping_update() and re-acquire them afterward.
+	 */
+	cpuset_partial_unlock();
 	WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
 	isolated_cpus_updating = false;
+	cpuset_partial_lock();
 }
 
 /**
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 3b725d39c06e..ef152d401fe2 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -123,8 +123,6 @@ int housekeeping_update(struct cpumask *isol_mask)
 	struct cpumask *trial, *old = NULL;
 	int err;
 
-	lockdep_assert_cpus_held();
-
 	trial = kmalloc(cpumask_size(), GFP_KERNEL);
 	if (!trial)
 		return -ENOMEM;
@@ -136,7 +134,7 @@ int housekeeping_update(struct cpumask *isol_mask)
 	}
 
 	if (!housekeeping.flags)
-		static_branch_enable_cpuslocked(&housekeeping_overridden);
+		static_branch_enable(&housekeeping_overridden);
 
 	if (housekeeping.flags & HK_FLAG_DOMAIN)
 		old = housekeeping_cpumask_dereference(HK_TYPE_DOMAIN);
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 6da9cd562b20..244a8d025e78 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1559,8 +1559,6 @@ int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
 	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
 	int cpu;
 
-	lockdep_assert_cpus_held();
-
 	if (!works)
 		return -ENOMEM;
 	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
@@ -1570,6 +1568,7 @@ int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
 	 * First set previously isolated CPUs as available (unisolate).
 	 * This cpumask contains only CPUs that switched to available now.
 	 */
+	guard(cpus_read_lock)();
 	cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
 	cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
 
-- 
2.52.0
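For reference, the guard() annotations used in the last two hunks come from
the scope-based cleanup machinery in <linux/cleanup.h>, with lock classes
defined next to the locks themselves (e.g. the cpus_read_lock guard in
<linux/cpu.h>). A tiny, hypothetical example of the pattern, not taken from
this series:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only. */
#include <linux/cleanup.h>
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/printk.h>

static DEFINE_MUTEX(demo_mutex);
static int demo_counter;

static void demo_bump(void)
{
	/* Released automatically when the scope is left, on any return path. */
	guard(mutex)(&demo_mutex);

	if (demo_counter > 100)
		return;		/* no explicit unlock needed */
	demo_counter++;
}

static void demo_online_snapshot(void)
{
	/* Equivalent to cpus_read_lock()/cpus_read_unlock() around the scope. */
	guard(cpus_read_lock)();

	pr_info("%u CPUs online\n", num_online_cpus());
}

static int __init demo_init(void)
{
	demo_bump();
	demo_online_snapshot();
	return 0;
}

module_init(demo_init);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: scope-based lock guards");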