From nobody Tue Dec  2 01:04:13 2025
Date: Mon, 24 Nov 2025 22:31:03 +0000
In-Reply-To: <20251124223111.3616950-1-jstultz@google.com>
References: <20251124223111.3616950-1-jstultz@google.com>
Message-ID: <20251124223111.3616950-12-jstultz@google.com>
X-Mailer: git-send-email 2.52.0.487.g5c8c507ade-goog
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v24 11/11] sched: Migrate whole chain in proxy_migrate_task()
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
 Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
 Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
 K Prateek Nayak, Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal,
 kuyo chang, hupu, kernel-team@android.com
Content-Type: text/plain; charset="utf-8"

Instead of migrating one task each time through find_proxy_task(), we
can walk up the blocked_donor pointers and migrate the entire current
chain in one go.

This was broken out of earlier patches and held back while the series
was being stabilized, but I wanted to re-introduce it.

Signed-off-by: John Stultz <jstultz@google.com>
---
v12:
* Earlier this was re-using blocked_node, but I hit a race with
  activating blocked entities, and to avoid it introduced a new
  migration_node list head
v18:
* Add init_task initialization of migration_node as suggested by
  Suleiman
v22:
* Move migration_node under CONFIG_SCHED_PROXY_EXEC as suggested by
  K Prateek
---
 include/linux/sched.h |  3 +++
 init/init_task.c      |  3 +++
 kernel/fork.c         |  3 +++
 kernel/sched/core.c   | 24 +++++++++++++++++-------
 4 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 178ed37850470..775cc06f756d0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1243,6 +1243,9 @@ struct task_struct {
 	struct mutex		*blocked_on;	/* lock we're blocked on */
 	struct task_struct	*blocked_donor;	/* task that is boosting this task */
 	raw_spinlock_t		blocked_lock;
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	struct list_head	migration_node;
+#endif
 
 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
 	/*
diff --git a/init/init_task.c b/init/init_task.c
index 34853a511b4d8..78fb7cb83fa5d 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -178,6 +178,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 					&init_task.alloc_lock),
 #endif
 	.blocked_donor	= NULL,
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	.migration_node	= LIST_HEAD_INIT(init_task.migration_node),
+#endif
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
 	.pi_top_task	= NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 0a9a17e25b85d..a7561480e879e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2137,6 +2137,9 @@ __latent_entropy struct task_struct *copy_process(
 
 	p->blocked_on = NULL;		/* not blocked yet */
 	p->blocked_donor = NULL;	/* nobody is boosting p yet */
+#ifdef CONFIG_SCHED_PROXY_EXEC
+	INIT_LIST_HEAD(&p->migration_node);
+#endif
 
 #ifdef CONFIG_BCACHE
 	p->sequential_io	= 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f42ec01192dc..0c50d154050a3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6723,6 +6723,7 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 			       struct task_struct *p, int target_cpu)
 {
 	struct rq *target_rq = cpu_rq(target_cpu);
+	LIST_HEAD(migrate_list);
 
 	lockdep_assert_rq_held(rq);
 
@@ -6739,11 +6740,16 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 	 */
 	proxy_resched_idle(rq);
 
-	WARN_ON(p == rq->curr);
-
-	deactivate_task(rq, p, DEQUEUE_NOCLOCK);
-	proxy_set_task_cpu(p, target_cpu);
-
+	for (; p; p = p->blocked_donor) {
+		WARN_ON(p == rq->curr);
+		deactivate_task(rq, p, DEQUEUE_NOCLOCK);
+		proxy_set_task_cpu(p, target_cpu);
+		/*
+		 * We can abuse blocked_node to migrate the thing,
+		 * because @p was still on the rq.
+		 */
+		list_add(&p->migration_node, &migrate_list);
+	}
 	/*
 	 * We have to zap callbacks before unlocking the rq
 	 * as another CPU may jump in and call sched_balance_rq
@@ -6755,8 +6761,12 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 	raw_spin_rq_unlock(rq);
 
 	raw_spin_rq_lock(target_rq);
-	activate_task(target_rq, p, 0);
-	wakeup_preempt(target_rq, p, 0);
+	while (!list_empty(&migrate_list)) {
+		p = list_first_entry(&migrate_list, struct task_struct, migration_node);
+		list_del_init(&p->migration_node);
+		activate_task(target_rq, p, 0);
+		wakeup_preempt(target_rq, p, 0);
+	}
 	raw_spin_rq_unlock(target_rq);
 
 	raw_spin_rq_lock(rq);
-- 
2.52.0.487.g5c8c507ade-goog