Date: Fri, 13 Jan 2023 12:08:48 -0000
From: "tip-bot2 for Qais Yousef"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/urgent] sched/fair: Fixes for capacity inversion detection
Cc: Dietmar Eggemann, "Qais Yousef (Google)", "Peter Zijlstra (Intel)",
    Vincent Guittot, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20230112122708.330667-3-qyousef@layalina.io>
References: <20230112122708.330667-3-qyousef@layalina.io>
Message-ID: <167361172839.4906.18334832591447660001.tip-bot2@tip-bot2>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     da07d2f9c153e457e845d4dcfdd13568d71d18a4
Gitweb:        https://git.kernel.org/tip/da07d2f9c153e457e845d4dcfdd13568d71d18a4
Author:        Qais Yousef
AuthorDate:    Thu, 12 Jan 2023 12:27:08
Committer:     Peter Zijlstra
CommitterDate: Fri, 13 Jan 2023 11:40:21 +01:00

sched/fair: Fixes for capacity inversion detection

Traversing the perf domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). Ensure the right protections are
applied. Also skip capacity inversion detection for our own pd, which
was an error.
Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann
Signed-off-by: Qais Yousef (Google)
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Vincent Guittot
Link: https://lore.kernel.org/r/20230112122708.330667-3-qyousef@layalina.io
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index be43731..0f87369 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8868,16 +8868,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
	 *
	 * Thermal pressure will impact all cpus in this perf domain
	 * equally.
	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();

+		pd = rcu_dereference(rq->rd->pd);
		rq->cpu_capacity_inverted = 0;

		for (; pd; pd = pd->next) {
			struct cpumask *pd_span = perf_domain_span(pd);
			unsigned long pd_cap_orig, pd_cap;

+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
			cpu = cpumask_any(pd_span);
			pd_cap_orig = arch_scale_cpu_capacity(cpu);

@@ -8902,6 +8909,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
				break;
			}
		}
+
+		rcu_read_unlock();
	}

	trace_sched_cpu_capacity_tp(rq);