Date: Tue, 19 Dec 2023 16:18:32 -0800
In-Reply-To: <20231220001856.3710363-1-jstultz@google.com>
Mime-Version: 1.0
References: <20231220001856.3710363-1-jstultz@google.com>
Message-ID: <20231220001856.3710363-22-jstultz@google.com>
Subject: [PATCH v7 21/23] sched: Add find_exec_ctx helper
From: John Stultz
To: LKML
Cc: "Connor O'Brien", Joel Fernandes, Qais Yousef, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
    Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
    Waiman Long, Boqun Feng, "Paul E. McKenney",
    Metin Kaya, Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
    kernel-team@android.com, John Stultz
Content-Type: text/plain; charset="utf-8"

From: Connor O'Brien

Add a helper to find the runnable owner down a chain of blocked waiters.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Connor O'Brien
[jstultz: split out from larger chain migration patch]
Signed-off-by: John Stultz
---
 kernel/sched/core.c     | 42 ++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/cpupri.c   | 11 ++++++++---
 kernel/sched/deadline.c | 15 +++++++++++++--
 kernel/sched/rt.c       |  9 ++++++++-
 kernel/sched/sched.h    | 10 ++++++++++
 5 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c212dcd4b7a..77a79d5f829a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3896,6 +3896,48 @@ static void activate_blocked_entities(struct rq *target_rq,
 	}
 	raw_spin_unlock_irqrestore(&owner->blocked_lock, flags);
 }
+
+static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *task)
+{
+	if (!task_on_rq_queued(task))
+		return false;
+	smp_rmb();
+	if (task_rq(task) != rq)
+		return false;
+	smp_rmb();
+	if (!task_on_rq_queued(task))
+		return false;
+	return true;
+}
+
+/*
+ * Returns the unblocked task at the end of the blocked chain starting with p
+ * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
+ */
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+	struct task_struct *exec_ctx, *owner;
+	struct mutex *mutex;
+
+	if (!sched_proxy_exec())
+		return p;
+
+	lockdep_assert_rq_held(rq);
+
+	for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
+	     exec_ctx = owner) {
+		mutex = exec_ctx->blocked_on;
+		owner = __mutex_owner(mutex);
+		if (owner == exec_ctx)
+			break;
+
+		if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
+			exec_ctx = NULL;
+			break;
+		}
+	}
+	return exec_ctx;
+}
 #else /* !CONFIG_SCHED_PROXY_EXEC */
 static inline void do_activate_task(struct rq *rq, struct task_struct *p,
 				    int en_flags)
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index 15e947a3ded7..53be78afdd07 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -96,12 +96,17 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
 	if (skip)
 		return 0;
 
-	if (cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids)
+	if ((p && cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids) ||
+	    (!p && cpumask_any(vec->mask) >= nr_cpu_ids))
 		return 0;
 
 	if (lowest_mask) {
-		cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
-		cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
+		if (p) {
+			cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+			cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
+		} else {
+			cpumask_copy(lowest_mask, vec->mask);
+		}
 
 		/*
 		 * We have to ensure that we have at least one bit
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 999bd17f11c4..21e56ac58e32 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1866,6 +1866,8 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
 
 static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 {
+	struct task_struct *exec_ctx;
+
 	/*
 	 * Current can't be migrated, useless to reschedule,
 	 * let's hope p can move out.
@@ -1874,12 +1876,16 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 	    !cpudl_find(&rq->rd->cpudl, rq_selected(rq), rq->curr, NULL))
 		return;
 
+	exec_ctx = find_exec_ctx(rq, p);
+	if (task_current(rq, exec_ctx))
+		return;
+
 	/*
 	 * p is migratable, so let's not schedule it and
 	 * see if it is pushed or pulled somewhere else.
 	 */
 	if (p->nr_cpus_allowed != 1 &&
-	    cpudl_find(&rq->rd->cpudl, p, p, NULL))
+	    cpudl_find(&rq->rd->cpudl, p, exec_ctx, NULL))
 		return;
 
 	resched_curr(rq);
@@ -2169,12 +2175,17 @@ static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec
 /* Locks the rq it finds */
 static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
 {
+	struct task_struct *exec_ctx;
 	struct rq *later_rq = NULL;
 	int tries;
 	int cpu;
 
 	for (tries = 0; tries < DL_MAX_TRIES; tries++) {
-		cpu = find_later_rq(task, task);
+		exec_ctx = find_exec_ctx(rq, task);
+		if (!exec_ctx)
+			break;
+
+		cpu = find_later_rq(task, exec_ctx);
 
 		if ((cpu == -1) || (cpu == rq->cpu))
 			break;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 6371b0fca4ad..f8134d062fa3 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1640,6 +1640,11 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 	    !cpupri_find(&rq->rd->cpupri, rq_selected(rq), rq->curr, NULL))
 		return;
 
+	/* No reason to preempt since rq->curr wouldn't change anyway */
+	exec_ctx = find_exec_ctx(rq, p);
+	if (task_current(rq, exec_ctx))
+		return;
+
 	/*
 	 * p is migratable, so let's not schedule it and
 	 * see if it is pushed or pulled somewhere else.
@@ -1933,12 +1938,14 @@ static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exe
 /* Will lock the rq it finds */
 static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 {
+	struct task_struct *exec_ctx;
 	struct rq *lowest_rq = NULL;
 	int tries;
 	int cpu;
 
 	for (tries = 0; tries < RT_MAX_TRIES; tries++) {
-		cpu = find_lowest_rq(task, task);
+		exec_ctx = find_exec_ctx(rq, task);
+		cpu = find_lowest_rq(task, exec_ctx);
 
 		if ((cpu == -1) || (cpu == rq->cpu))
 			break;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef3d327e267c..6cd473224cfe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3564,6 +3564,16 @@ int task_is_pushable(struct rq *rq, struct task_struct *p, int cpu)
 
 	return 0;
 }
+
+#ifdef CONFIG_SCHED_PROXY_EXEC
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p);
+#else /* !CONFIG_SCHED_PROXY_EXEC */
+static inline
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+	return p;
+}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
 #endif
 
 #endif /* _KERNEL_SCHED_SCHED_H */
-- 
2.43.0.472.g3155946c3a-goog
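
[Editor's illustration, not part of the patch] The core idea of find_exec_ctx()
is to walk blocked_on links from a waiter to the lock owner until a task that
can actually run is reached; that runnable task is the execution context whose
CPU affinity matters for push/pull decisions. The standalone userspace sketch
below models only that walk. The names (toy_task, blocked_on_owner,
toy_find_exec_ctx) are made up for illustration; the real helper additionally
requires every task in the chain to be queued on the given runqueue and returns
NULL when that does not hold.

/*
 * Toy model of the blocked-on chain walk; compiles as plain C, no kernel
 * headers needed. Each "task" may be blocked on a lock whose owner is
 * another task; following the chain from a blocked waiter yields the
 * runnable task that would execute on its behalf.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toy_task {
	const char *name;
	bool blocked;				/* blocked on a lock? */
	struct toy_task *blocked_on_owner;	/* owner of the lock it waits for */
};

/* Follow the chain until a runnable task (or a self-owning cycle) is found. */
static struct toy_task *toy_find_exec_ctx(struct toy_task *p)
{
	struct toy_task *t = p;

	while (t->blocked && t->blocked_on_owner && t->blocked_on_owner != t)
		t = t->blocked_on_owner;

	return t;
}

int main(void)
{
	struct toy_task c = { "C (runnable owner)", false, NULL };
	struct toy_task b = { "B (blocked on C)", true, &c };
	struct toy_task a = { "A (blocked on B)", true, &b };

	/* A is blocked on B, B on C, so C is the execution context for A. */
	printf("exec ctx for A: %s\n", toy_find_exec_ctx(&a)->name);
	return 0;
}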