From nobody Wed Dec 17 00:32:54 2025
Date: Sat, 22 Apr 2023 07:43:23 -0000
From: "tip-bot2 for Schspa Shi"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/core] sched/rt: Fix bad task migration for rt tasks
Cc: Schspa Shi, "Peter Zijlstra (Intel)", "Steven Rostedt (Google)",
    Dietmar Eggemann, Valentin Schneider, Dwaine Gonyier,
    x86@kernel.org, linux-kernel@vger.kernel.org
MIME-Version: 1.0
Message-ID: <168214940317.404.10361238709723841210.tip-bot2@tip-bot2>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     feffe5bb274dd3442080ef0e4053746091878799
Gitweb:        https://git.kernel.org/tip/feffe5bb274dd3442080ef0e4053746091878799
Author:        Schspa Shi
AuthorDate:    Mon, 29 Aug 2022 01:03:02 +08:00
Committer:     Peter Zijlstra
CommitterDate: Fri, 21 Apr 2023 13:24:21 +02:00

sched/rt: Fix bad task migration for rt tasks

Commit 95158a89dd50 ("sched,rt: Use the full cpumask for balancing")
allows find_lock_lowest_rq() to pick a task with migration disabled.
The purpose of that commit is to push the currently running task away
from a CPU that has a migrate_disable() task queued on it. However,
there is a race window that allows a migrate_disable() task to be
migrated.
Consider:

    CPU0                                  CPU1

    push_rt_task
      check is_migration_disabled(next_task)
                                          task not running and
                                          migration_disabled == 0
      find_lock_lowest_rq(next_task, rq);
        _double_lock_balance(this_rq, busiest);
          raw_spin_rq_unlock(this_rq);
          double_rq_lock(this_rq, busiest);
                                          <>
                                          task become running
                                          migrate_disable();
      deactivate_task(rq, next_task, 0);
      set_task_cpu(next_task, lowest_rq->cpu);
      WARN_ON_ONCE(is_migration_disabled(p));

Fixes: 95158a89dd50 ("sched,rt: Use the full cpumask for balancing")
Signed-off-by: Schspa Shi
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Steven Rostedt (Google)
Reviewed-by: Dietmar Eggemann
Reviewed-by: Valentin Schneider
Tested-by: Dwaine Gonyier
---
 kernel/sched/deadline.c | 1 +
 kernel/sched/rt.c       | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 4cc7e1c..5a9a4b8 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2246,6 +2246,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
 			     !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
 			     task_on_cpu(rq, task) ||
 			     !dl_task(task) ||
+			     is_migration_disabled(task) ||
 			     !task_on_rq_queued(task))) {
 			double_unlock_balance(rq, later_rq);
 			later_rq = NULL;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 9d67dfb..00e0e50 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2000,11 +2000,15 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 		 * the mean time, task could have
 		 * migrated already or had its affinity changed.
 		 * Also make sure that it wasn't scheduled on its rq.
+		 * It is possible the task was scheduled, set
+		 * "migrate_disabled" and then got preempted, so we must
+		 * check the task migration disable flag here too.
 		 */
 		if (unlikely(task_rq(task) != rq ||
 			     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
 			     task_on_cpu(rq, task) ||
 			     !rt_task(task) ||
+			     is_migration_disabled(task) ||
 			     !task_on_rq_queued(task))) {
 
 			double_unlock_balance(rq, lowest_rq);