From: Andrea Righi
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, linux-kernel@vger.kernel.org
Subject: [PATCH v5] sched_ext: Refresh scx idle state during idle-to-idle transitions
Date: Thu, 9 Jan 2025 11:19:52 +0100
Message-ID: <20250109101952.443769-1-arighi@nvidia.com>
X-Mailer: git-send-email 2.47.1
X-Mailing-List: linux-kernel@vger.kernel.org

With the consolidation of put_prev_task/set_next_task(), see commit
436f3eed5c69 ("sched: Combine the last put_prev_task() and the first
set_next_task()"), we are now skipping the transition between these two
functions when the previous and the next tasks are the same.

As a result, the scx idle state is updated only when a CPU transitions
in or out of SCHED_IDLE. While this is generally correct, it can lead to
uneven and inefficient core utilization in certain scenarios [1].

A typical scenario involves proactive wake-ups: scx_bpf_pick_idle_cpu()
selects and marks an idle CPU as busy, followed by a wake-up via
scx_bpf_kick_cpu(), without dispatching any tasks. In this case, the CPU
continues running the idle task, returns to idle, but remains marked as
busy, preventing it from being selected again as an idle CPU (until a
task eventually runs on it and releases the CPU).

For example, running a workload that uses 20% of each CPU, combined with
an scx scheduler using proactive wake-ups, results in the following core
utilization:

 CPU 0: 25.7%
 CPU 1: 29.3%
 CPU 2: 26.5%
 CPU 3: 25.5%
 CPU 4:  0.0%
 CPU 5: 25.5%
 CPU 6:  0.0%
 CPU 7: 10.5%

To address this, refresh the idle state also in pick_task_idle(), during
idle-to-idle transitions, but only trigger ops.update_idle() on actual
state changes to prevent unnecessary updates to the scx scheduler and
maintain balanced state transitions.

With this change in place, the core utilization in the previous example
becomes the following:

 CPU 0: 18.8%
 CPU 1: 19.4%
 CPU 2: 18.0%
 CPU 3: 18.7%
 CPU 4: 19.3%
 CPU 5: 18.9%
 CPU 6: 18.7%
 CPU 7: 19.3%

[1] https://github.com/sched-ext/scx/pull/1139

Fixes: 7c65ae81ea86 ("sched_ext: Don't call put_prev_task_scx() before picking the next task")
Signed-off-by: Andrea Righi
---
 kernel/sched/ext.c  | 20 ++++++++++++++++++--
 kernel/sched/ext.h  |  8 ++++----
 kernel/sched/idle.c | 18 ++++++++++++++++--
 3 files changed, 38 insertions(+), 8 deletions(-)

ChangeLog v4 -> v5:
 - prevent unbalanced ops.update_idle() invocations

ChangeLog v3 -> v4:
 - handle the core-sched case that may ignore the result of pick_task(),
   triggering spurious ops.update_idle() events

ChangeLog v2 -> v3:
 - add a comment to clarify why we need to update the scx idle state in
   pick_task()

ChangeLog v1 -> v2:
 - move the logic from put_prev_set_next_task() to scx_update_idle()
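Note (illustrative only, not part of this patch): the proactive wake-up
pattern described in the changelog looks roughly like the sketch below
on the BPF scheduler side. The callback name, the queueing step and the
surrounding scheduler are hypothetical; scx_bpf_pick_idle_cpu() and
scx_bpf_kick_cpu() are the kfuncs mentioned above, and the scx common
BPF headers are assumed:

#include <scx/common.bpf.h>

void BPF_STRUCT_OPS(example_enqueue, struct task_struct *p, u64 enq_flags)
{
	s32 cpu;

	/* ... queue @p on a shared DSQ that any CPU can consume from ... */

	/*
	 * Reserve an idle CPU: the built-in idle tracking marks it as
	 * busy even though no task has been dispatched to it yet.
	 */
	cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
	if (cpu < 0)
		return;

	/*
	 * Wake the CPU so it can pull work from the shared DSQ; if it
	 * finds nothing to run, it goes straight back to idle.
	 */
	scx_bpf_kick_cpu(cpu, 0);
}

Without this patch, a CPU woken this way and left without work returns
to idle while still marked as busy in the idle masks; with the patch,
pick_task_idle() refreshes the idle state during that idle-to-idle
transition without triggering a spurious ops.update_idle() event.
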
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 96b6d6aea26e..9ed09e5df064 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3651,11 +3651,27 @@ static void reset_idle_masks(void)
 	cpumask_copy(idle_masks.smt, cpu_online_mask);
 }
 
-void __scx_update_idle(struct rq *rq, bool idle)
+/*
+ * Update the idle state of a CPU to @idle.
+ *
+ * If @do_update is true, ops.update_idle() is invoked to notify the scx
+ * scheduler of an actual idle state transition (idle to busy or vice
+ * versa). If @do_update is false, only the idle state in the idle masks is
+ * refreshed without invoking ops.update_idle().
+ *
+ * This distinction is necessary, because an idle CPU can be "reserved" and
+ * awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as
+ * busy even if no tasks are dispatched. In this case, the CPU may return
+ * to idle without a true state transition. Refreshing the idle masks
+ * without invoking ops.update_idle() ensures accurate idle state tracking
+ * while avoiding unnecessary updates and maintaining balanced state
+ * transitions.
+ */
+void __scx_update_idle(struct rq *rq, bool idle, bool do_update)
 {
 	int cpu = cpu_of(rq);
 
-	if (SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) {
+	if (do_update && SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) {
 		SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
 		if (!static_branch_unlikely(&scx_builtin_idle_enabled))
 			return;
diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
index b1675bb59fc4..34e614c1e871 100644
--- a/kernel/sched/ext.h
+++ b/kernel/sched/ext.h
@@ -57,15 +57,15 @@ static inline void init_sched_ext_class(void) {}
 #endif /* CONFIG_SCHED_CLASS_EXT */
 
 #if defined(CONFIG_SCHED_CLASS_EXT) && defined(CONFIG_SMP)
-void __scx_update_idle(struct rq *rq, bool idle);
+void __scx_update_idle(struct rq *rq, bool idle, bool do_update);
 
-static inline void scx_update_idle(struct rq *rq, bool idle)
+static inline void scx_update_idle(struct rq *rq, bool idle, bool do_update)
 {
 	if (scx_enabled())
-		__scx_update_idle(rq, idle);
+		__scx_update_idle(rq, idle, do_update);
 }
 #else
-static inline void scx_update_idle(struct rq *rq, bool idle) {}
+static inline void scx_update_idle(struct rq *rq, bool idle, bool do_update) {}
 #endif
 
 #ifdef CONFIG_CGROUP_SCHED
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 621696269584..ffc636ccd54e 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -452,19 +452,33 @@ static void wakeup_preempt_idle(struct rq *rq, struct task_struct *p, int flags)
 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct task_struct *next)
 {
 	dl_server_update_idle_time(rq, prev);
-	scx_update_idle(rq, false);
+	scx_update_idle(rq, false, true);
 }
 
 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
 {
 	update_idle_core(rq);
-	scx_update_idle(rq, true);
+	scx_update_idle(rq, true, true);
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 }
 
 struct task_struct *pick_task_idle(struct rq *rq)
 {
+	/*
+	 * The scx idle state is updated only when the CPU transitions
+	 * in/out of SCHED_IDLE, see put_prev_task_idle() and
+	 * set_next_task_idle().
+	 *
+	 * However, the CPU may also exit/enter the idle state while
+	 * running the idle task, for example waking up the CPU via
+	 * scx_bpf_kick_cpu() without dispatching a task on it.
+	 *
+	 * In this case we still need to trigger scx_update_idle() to
+	 * ensure a proper management of the scx idle state.
+	 */
+	if (rq->curr == rq->idle)
+		scx_update_idle(rq, true, false);
 	return rq->idle;
 }
 
-- 
2.47.1