From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org
Cc: sshegde@linux.ibm.com, yu.c.chen@intel.com, dietmar.eggemann@arm.com,
    linux-kernel@vger.kernel.org, nysal@linux.ibm.com, aboorvad@linux.ibm.com,
    srikar@linux.vnet.ibm.com, vschneid@redhat.com, pierre.gondois@arm.com,
    morten.rasmussen@arm.com, qyousef@layalina.io
Subject: [PATCH v3 1/2] sched/fair: Add EAS checks before updating overutilized
Date: Thu, 29 Feb 2024 16:10:09 +0530
Message-Id: <20240229104010.747411-2-sshegde@linux.ibm.com>
In-Reply-To: <20240229104010.747411-1-sshegde@linux.ibm.com>
References: <20240229104010.747411-1-sshegde@linux.ibm.com>
X-Mailer: git-send-email 2.39.3

The overutilized field of the root domain is used only by EAS (the energy
aware scheduler) to decide whether to do load balancing or not. It is not
used when EAS is not possible.

Currently enqueue_task_fair and task_tick_fair access, and sometimes
update, this field; in update_sd_lb_stats it is updated often. This causes
cache contention due to true sharing and burns a lot of cycles. overload
and overutilized are part of the same cacheline, so updating overutilized
often invalidates the cacheline, which in turn slows down access to
overload due to false sharing.

Hence, add an EAS check before accessing/updating this field. The EAS
check is either optimized out at compile time or is a static branch, so it
shouldn't cost much.

With the patch, neither enqueue_task_fair nor newidle_balance shows up as
a hot routine in the perf profile.

6.8-rc4:
  7.18%  swapper  [kernel.vmlinux]  [k] enqueue_task_fair
  6.78%  s        [kernel.vmlinux]  [k] newidle_balance
+patch:
  0.14%  swapper  [kernel.vmlinux]  [k] enqueue_task_fair
  0.00%  swapper  [kernel.vmlinux]  [k] newidle_balance

Minor change: trace_sched_overutilized_tp expects its second argument to
be bool, so do an int-to-bool conversion there.
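For reference, the check itself is nearly free; sched_energy_enabled() has
roughly the following shape (a sketch from memory of kernel/sched/sched.h,
not part of this patch; the exact config guards may differ):

#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
extern struct static_key_false sched_energy_present;

static inline bool sched_energy_enabled(void)
{
	/* static branch: a patched jump/nop, no memory load on the fast path */
	return static_branch_unlikely(&sched_energy_present);
}
#else
/* EAS compiled out: the check folds to constant false */
static inline bool sched_energy_enabled(void)
{
	return false;
}
#endif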
Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 47 ++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6a16129f9a5c..1f7d62b7c26f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6670,15 +6670,29 @@ static inline bool cpu_overutilized(int cpu)
 	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
 }
 
-static inline void update_overutilized_status(struct rq *rq)
+static inline void set_rd_overutilized_status(struct root_domain *rd,
+					      unsigned int status)
 {
-	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu)) {
-		WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rq->rd, SG_OVERUTILIZED);
-	}
+	WRITE_ONCE(rd->overutilized, status);
+	trace_sched_overutilized_tp(rd, !!status);
+}
+
+static inline void check_update_overutilized_status(struct rq *rq)
+{
+	/*
+	 * overutilized field is used for load balancing decisions only
+	 * if energy aware scheduler is being used
+	 */
+	if (!sched_energy_enabled())
+		return;
+
+	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
+		set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
 }
 #else
-static inline void update_overutilized_status(struct rq *rq) { }
+static inline void check_update_overutilized_status(struct rq *rq) { }
+static inline void set_rd_overutilized_status(struct root_domain *rd,
+					      unsigned int status) { }
 #endif
 
 /* Runqueue only has SCHED_IDLE tasks enqueued */
@@ -6779,7 +6793,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * and the following generally works well enough in practice.
 	 */
 	if (!task_new)
-		update_overutilized_status(rq);
+		check_update_overutilized_status(rq);
 
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
@@ -9902,7 +9916,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	if (nr_running > 1)
 		*sg_status |= SG_OVERLOAD;
 
-	if (cpu_overutilized(i))
+	if (sched_energy_enabled() && cpu_overutilized(i))
 		*sg_status |= SG_OVERUTILIZED;
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -10596,19 +10610,16 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 	env->fbq_type = fbq_classify_group(&sds->busiest_stat);
 
 	if (!env->sd->parent) {
-		struct root_domain *rd = env->dst_rq->rd;
-
 		/* update overload indicator if we are at root domain */
-		WRITE_ONCE(rd->overload, sg_status & SG_OVERLOAD);
+		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
 
 		/* Update over-utilization (tipping point, U >= 0) indicator */
-		WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rd, sg_status & SG_OVERUTILIZED);
+		if (sched_energy_enabled()) {
+			set_rd_overutilized_status(env->dst_rq->rd,
+						   sg_status & SG_OVERUTILIZED);
+		}
 	} else if (sg_status & SG_OVERUTILIZED) {
-		struct root_domain *rd = env->dst_rq->rd;
-
-		WRITE_ONCE(rd->overutilized, SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rd, SG_OVERUTILIZED);
+		set_rd_overutilized_status(env->dst_rq->rd, SG_OVERUTILIZED);
 	}
 
 	update_idle_cpu_scan(env, sum_util);
@@ -12609,7 +12620,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		task_tick_numa(rq, curr);
 
 	update_misfit_status(curr, rq);
-	update_overutilized_status(task_rq(curr));
+	check_update_overutilized_status(task_rq(curr));
 
 	task_tick_core(rq, curr);
 }
-- 
2.39.3
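As an aside, the false-sharing effect described in the changelog is easy to
reproduce in userspace. The stand-alone toy program below is illustrative
only, not kernel code: the field names merely mirror root_domain, a 64-byte
cacheline is assumed, and the file name is arbitrary. Two threads hammer two
flags, first sharing a cacheline and then padded apart; on typical hardware
the shared-line run is markedly slower. Build with: gcc -O2 -pthread demo.c

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000ULL

/* Both flags in one 64-byte cacheline, like rd->overload/rd->overutilized */
struct {
	volatile uint64_t overload;
	volatile uint64_t overutilized;
} same __attribute__((aligned(64)));

/* The same two flags, padded onto separate cachelines */
struct {
	volatile uint64_t overload __attribute__((aligned(64)));
	volatile uint64_t overutilized __attribute__((aligned(64)));
} split;

static void *hammer(void *p)
{
	volatile uint64_t *field = p;

	for (uint64_t i = 0; i < ITERS; i++)
		*field = i & 1;	/* each store dirties the cacheline */
	return NULL;
}

static double run(volatile uint64_t *a, volatile uint64_t *b)
{
	pthread_t t1, t2;
	struct timespec s, e;

	clock_gettime(CLOCK_MONOTONIC, &s);
	pthread_create(&t1, NULL, hammer, (void *)a);
	pthread_create(&t2, NULL, hammer, (void *)b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	clock_gettime(CLOCK_MONOTONIC, &e);
	return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
}

int main(void)
{
	printf("same cacheline:   %.2fs\n",
	       run(&same.overload, &same.overutilized));
	printf("split cachelines: %.2fs\n",
	       run(&split.overload, &split.overutilized));
	return 0;
}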