From nobody Mon Dec 1 22:02:15 2025
From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v4 21/28] sched/rt: Add HCBS migration related checks and function calls
Date: Mon, 1 Dec 2025 13:41:54 +0100
Message-ID: <20251201124205.11169-22-yurand2000@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251201124205.11169-1-yurand2000@gmail.com>
References: <20251201124205.11169-1-yurand2000@gmail.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: luca abeni

Add HCBS-related checks and operations to allow rt-task migration,
distinguishing between tasks that belong to a cgroup and tasks that run
on the global runqueue.
Co-developed-by: Alessio Balsini
Signed-off-by: Alessio Balsini
Co-developed-by: Andrea Parri
Signed-off-by: Andrea Parri
Co-developed-by: Yuri Andriaccio
Signed-off-by: Yuri Andriaccio
Signed-off-by: luca abeni
---
 kernel/sched/rt.c | 61 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 86f5602f30..e2b67f8309 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1,4 +1,3 @@
-#pragma GCC diagnostic ignored "-Wunused-function"
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
@@ -892,6 +891,11 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 	struct rq *rq;
 	bool test;
 
+	/* Just return the task_cpu for processes inside task groups */
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) &&
+	    is_dl_group(rt_rq_of_se(&p->rt)))
+		goto out;
+
 	/* For anything but wake ups, just return the task_cpu */
 	if (!(flags & (WF_TTWU | WF_FORK)))
 		goto out;
@@ -991,7 +995,10 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 	 * not yet started the picking loop.
 	 */
 	rq_unpin_lock(rq, rf);
-	pull_rt_task(rq);
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq_of_se(&p->rt)))
+		group_pull_rt_task(rt_rq_of_se(&p->rt));
+	else
+		pull_rt_task(rq);
 	rq_repin_lock(rq, rf);
 }
 
@@ -1100,7 +1107,10 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
 	if (rq->donor->sched_class != &rt_sched_class)
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
-	rt_queue_push_tasks(rt_rq);
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+		rt_queue_push_from_group(rt_rq);
+	else
+		rt_queue_push_tasks(rt_rq);
 }
 
 static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
@@ -1153,6 +1163,13 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
 	 */
 	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_task(rt_rq, p);
+
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq)) {
+		struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+		if (dl_se->dl_throttled)
+			rt_queue_push_from_group(rt_rq);
+	}
 }
 
 /* Only try algorithms three times */
@@ -2196,6 +2213,7 @@ static void group_push_rt_tasks(struct rt_rq *rt_rq) { }
  */
 static void task_woken_rt(struct rq *rq, struct task_struct *p)
 {
+	struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
 	bool need_to_push = !task_on_cpu(rq, p) &&
 			    !test_tsk_need_resched(rq->curr) &&
 			    p->nr_cpus_allowed > 1 &&
@@ -2203,7 +2221,12 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
 			    (rq->curr->nr_cpus_allowed < 2 ||
 			     rq->donor->prio <= p->prio);
 
-	if (need_to_push)
+	if (!need_to_push)
+		return;
+
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+		group_push_rt_tasks(rt_rq);
+	else
 		push_rt_tasks(rq);
 }
 
@@ -2243,7 +2266,9 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!task_on_rq_queued(p) || rt_rq->rt_nr_running)
 		return;
 
-	if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+	if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+		rt_queue_pull_to_group(rt_rq);
+	else
 		rt_queue_pull_task(rt_rq);
 }
 
@@ -2272,6 +2297,13 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 	 */
 	if (task_current(rq, p)) {
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+
+		if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq_of_se(&p->rt))) {
+			struct sched_dl_entity *dl_se = dl_group_of(rt_rq_of_se(&p->rt));
+
+			p->dl_server = dl_se;
+		}
+
 		return;
 	}
 
@@ -2281,13 +2313,10 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 	 * then see if we can move to another run queue.
 	 */
 	if (task_on_rq_queued(p)) {
-		if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq)) {
-			if (p->prio < rq->curr->prio)
-				resched_curr(rq);
-		} else {
-			if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
-				rt_queue_push_tasks(rt_rq_of_se(&p->rt));
-		}
+		if (!is_dl_group(rt_rq) && p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+			rt_queue_push_tasks(rt_rq);
+		else if (is_dl_group(rt_rq) && rt_rq->overloaded)
+			rt_queue_push_from_group(rt_rq);
 
 		if (p->prio < rq->donor->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
@@ -2311,8 +2340,12 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 	 * If our priority decreases while running, we
 	 * may need to pull tasks to this runqueue.
 	 */
-	if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) && oldprio < p->prio)
-		rt_queue_pull_task(rt_rq);
+	if (oldprio < p->prio) {
+		if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+			rt_queue_pull_to_group(rt_rq);
+		else
+			rt_queue_pull_task(rt_rq);
+	}
 
 	/*
 	 * If there's a higher priority task waiting to run
-- 
2.51.0