From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
	rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
	linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
	luis.machado@arm.com, qperret@google.com, Vincent Guittot
Subject: [PATCH 1/7 v4] sched/fair: Filter false overloaded_group case for EAS
Date: Sun, 2 Mar 2025 17:13:15 +0100
Message-ID: <20250302161321.1476139-2-vincent.guittot@linaro.org>
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>

With EAS, a group should be classified as overloaded when at least one
CPU in the group is overutilized. However, a CPU can be fully utilized
by tasks only because its compute capacity has been clamped. In such a
case, the CPU is not overutilized and, as a result, the group should
not be classified as overloaded either. Because group_overloaded has a
higher priority than group_misfit, such a group can be selected as the
busiest group instead of a group with a misfit task, which prevents
load_balance() from selecting the CPU with the misfit task and pulling
that task onto a CPU that fits it.
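The clamped-but-not-overutilized case above can be illustrated with a small
standalone sketch. This is plain Python, not kernel code; the names loosely
mirror the sg_lb_stats fields touched by this patch, and the capacity check
that group_is_overloaded() also performs is omitted for brevity:

```python
# Hypothetical simulation of the new group_is_overloaded() gate: with EAS
# enabled, a group with no overutilized CPU is never reported overloaded,
# even when it runs more tasks than it has CPUs.
from dataclasses import dataclass

@dataclass
class SgLbStats:
    sum_nr_running: int       # runnable tasks summed over the group
    group_weight: int         # number of CPUs in the group
    group_overutilized: bool  # new field: any CPU in the group overutilized?

def group_is_overloaded(sgs: SgLbStats, sched_energy_enabled: bool) -> bool:
    # New early exit: under EAS/uclamp, at least one CPU must be
    # overutilized before the group may be classified overloaded.
    if sched_energy_enabled and not sgs.group_overutilized:
        return False
    # Pre-existing check: more runnable tasks than CPUs in the group.
    return sgs.sum_nr_running > sgs.group_weight

# A uclamp-capped group: 8 tasks on 4 CPUs, but no CPU overutilized.
capped = SgLbStats(sum_nr_running=8, group_weight=4, group_overutilized=False)
print(group_is_overloaded(capped, sched_energy_enabled=True))   # False
print(group_is_overloaded(capped, sched_energy_enabled=False))  # True
```

With EAS the capped group is no longer reported overloaded, so it can no
longer shadow a group_misfit group during busiest-group selection.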
Signed-off-by: Vincent Guittot
Tested-by: Pierre Gondois
---
 kernel/sched/fair.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 857808da23d8..d3d1a2ba6b1a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9931,6 +9931,7 @@ struct sg_lb_stats {
 	unsigned int group_asym_packing;	/* Tasks should be moved to preferred CPU */
 	unsigned int group_smt_balance;		/* Task on busy SMT be moved */
 	unsigned long group_misfit_task_load;	/* A CPU has a task too big for its capacity */
+	unsigned int group_overutilized;	/* At least one CPU is overutilized in the group */
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -10163,6 +10164,13 @@ group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
+	/*
+	 * With EAS and uclamp, at least one CPU in the group must be
+	 * overutilized to consider the group overloaded.
+	 */
+	if (sched_energy_enabled() && !sgs->group_overutilized)
+		return false;
+
 	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;

@@ -10374,8 +10382,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		nr_running = rq->nr_running;
 		sgs->sum_nr_running += nr_running;

-		if (cpu_overutilized(i))
+		if (cpu_overutilized(i)) {
 			*sg_overutilized = 1;
+			sgs->group_overutilized = 1;
+		}

 		/*
 		 * No need to call idle_cpu() if nr_running is not 0
-- 
2.43.0

From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
	rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
	linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
	luis.machado@arm.com, qperret@google.com, Vincent Guittot
Subject: [PATCH 2/7 v4] energy model: Add a get previous state function
Date: Sun, 2 Mar 2025 17:13:16 +0100
Message-ID: <20250302161321.1476139-3-vincent.guittot@linaro.org>
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>

Instead of parsing the entire EM table every time, add a function to
get the previous performance state. It will be used in the scheduler's
feec() function.
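The lookup this patch adds can be sketched as follows. This is a loose
Python model of the table walk (field names are made up; the kernel
function operates on struct em_perf_state and per-domain flags instead):

```python
# Hypothetical model of the previous-state lookup: walk the EM table
# downwards from the current index, skipping states marked inefficient
# when the domain is configured to skip inefficiencies, and return -1
# when no lower usable state exists.

def prev_state(table, idx, min_ps=0, skip_inefficient=True):
    """table: list of dicts with an 'inefficient' flag per perf state."""
    for i in range(idx - 1, min_ps - 1, -1):
        if skip_inefficient and table[i]["inefficient"]:
            continue
        return i
    return -1

states = [
    {"perf": 300, "inefficient": False},
    {"perf": 600, "inefficient": True},   # worse energy/perf than a higher state
    {"perf": 900, "inefficient": False},
]
print(prev_state(states, 2))  # 0 -- index 1 is skipped
print(prev_state(states, 0))  # -1 -- already at the lowest state
```

Returning an index (or -1) rather than rescanning from the bottom is what
lets the caller find the performance range of an OPP in one short walk.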
Signed-off-by: Vincent Guittot
---
 include/linux/energy_model.h | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 78318d49276d..551e243b9c43 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -216,6 +216,26 @@ em_pd_get_efficient_state(struct em_perf_state *table,
 	return max_ps;
 }

+static inline int
+em_pd_get_previous_state(struct em_perf_state *table,
+			 struct em_perf_domain *pd, int idx)
+{
+	unsigned long pd_flags = pd->flags;
+	int min_ps = pd->min_perf_state;
+	struct em_perf_state *ps;
+	int i;
+
+	for (i = idx - 1; i >= min_ps; i--) {
+		ps = &table[i];
+		if (pd_flags & EM_PERF_DOMAIN_SKIP_INEFFICIENCIES &&
+		    ps->flags & EM_PERF_STATE_INEFFICIENT)
+			continue;
+		return i;
+	}
+
+	return -1;
+}
+
 /**
  * em_cpu_energy() - Estimates the energy consumed by the CPUs of a
  * performance domain
@@ -362,6 +382,18 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev)
 {
 	return NULL;
 }
+static inline int
+em_pd_get_efficient_state(struct em_perf_state *table,
+			  struct em_perf_domain *pd, unsigned long max_util)
+{
+	return 0;
+}
+static inline int
+em_pd_get_previous_state(struct em_perf_state *table,
+			 struct em_perf_domain *pd, int idx)
+{
+	return -1;
+}
 static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 				unsigned long max_util, unsigned long sum_util,
 				unsigned long allowed_cpu_cap)
-- 
2.43.0

From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
	rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
	linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
	luis.machado@arm.com, qperret@google.com, Vincent Guittot
Subject: [PATCH 3/7 v4] sched/fair: Rework feec() to use cost instead of spare capacity
Date: Sun, 2 Mar 2025 17:13:17 +0100
Message-ID: <20250302161321.1476139-4-vincent.guittot@linaro.org>
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>

feec() looks for the CPU with the highest spare capacity in a performance
domain (PD), assuming it will be the best CPU from an energy-efficiency
point of view because it will require the smallest increase of OPP.
Although this is generally true, this policy also filters out other CPUs
that would be just as efficient because they would use the same OPP. What
we really care about is the cost of the new OPP that will be selected to
handle the waking task. In many cases, several CPUs end up selecting the
same OPP and, as a result, the same energy cost. In these cases, we can
use other metrics to select the best CPU for the same energy cost.

Rework feec() to look first for the lowest cost in a PD and then for the
most performant CPU among the candidate CPUs.
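The reworked per-domain selection order can be sketched in a few lines.
This is a deliberately simplified Python model, not the kernel
implementation: the fit check, OPP cost, and task count are taken as
given inputs, and only the cost-first ordering with a contention
tie-break is shown:

```python
# Loose sketch of the new ordering: filter candidates on whether the task
# fits, then prefer the lowest OPP cost, and only separate CPUs tied on
# cost by a secondary metric (here, the number of running tasks).

def pick_cpu(candidates):
    """candidates: list of dicts with 'cpu', 'fits', 'cost', 'nr_running'."""
    best = None
    for c in candidates:
        if not c["fits"]:
            continue
        if best is None or c["cost"] < best["cost"]:
            best = c
        elif c["cost"] == best["cost"] and c["nr_running"] < best["nr_running"]:
            # Same OPP cost means same energy; prefer the less contended CPU.
            best = c
    return None if best is None else best["cpu"]

pd = [
    {"cpu": 0, "fits": True,  "cost": 120, "nr_running": 2},
    {"cpu": 1, "fits": True,  "cost": 120, "nr_running": 0},  # same OPP, idler
    {"cpu": 2, "fits": False, "cost": 60,  "nr_running": 0},  # task does not fit
]
print(pick_cpu(pd))  # 1
```

A pure spare-capacity policy could only ever pick one of the tied CPUs;
comparing on cost first makes the whole same-OPP set eligible, so a
secondary metric can break the tie.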
The cost of the OPP remains the only comparison criterion between
performance domains.

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c | 466 +++++++++++++++++++++++---------------------
 1 file changed, 246 insertions(+), 220 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d3d1a2ba6b1a..a9b97bbc085f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8193,29 +8193,37 @@ unsigned long sched_cpu_util(int cpu)
 }

 /*
- * energy_env - Utilization landscape for energy estimation.
- * @task_busy_time: Utilization contribution by the task for which we test the
- *		  placement. Given by eenv_task_busy_time().
- * @pd_busy_time: Utilization of the whole perf domain without the task
- *		  contribution. Given by eenv_pd_busy_time().
- * @cpu_cap: Maximum CPU capacity for the perf domain.
- * @pd_cap: Entire perf domain capacity. (pd->nr_cpus * cpu_cap).
- */
-struct energy_env {
-	unsigned long task_busy_time;
-	unsigned long pd_busy_time;
-	unsigned long cpu_cap;
-	unsigned long pd_cap;
+ * energy_cpu_stat - Utilization landscape for energy estimation.
+ * @idx : Index of the OPP in the performance domain
+ * @cost : Cost of the OPP
+ * @max_perf : Compute capacity of the OPP
+ * @min_perf : Compute capacity of the previous OPP
+ * @capa : Capacity of the CPU
+ * @runnable : runnable_avg of the CPU
+ * @nr_running : Number of runnable CFS tasks
+ * @fits : Fit level of the CPU
+ * @cpu : Current best CPU
+ */
+struct energy_cpu_stat {
+	unsigned long idx;
+	unsigned long cost;
+	unsigned long max_perf;
+	unsigned long min_perf;
+	unsigned long capa;
+	unsigned long util;
+	unsigned long runnable;
+	unsigned int nr_running;
+	int fits;
+	int cpu;
 };

 /*
- * Compute the task busy time for compute_energy(). This time cannot be
- * injected directly into effective_cpu_util() because of the IRQ scaling.
+ * Compute the task busy time for computing its energy impact. This time cannot
+ * be injected directly into effective_cpu_util() because of the IRQ scaling.
  * The latter only makes sense with the most recent CPUs where the task has
  * run.
  */
-static inline void eenv_task_busy_time(struct energy_env *eenv,
-				       struct task_struct *p, int prev_cpu)
+static inline unsigned long task_busy_time(struct task_struct *p, int prev_cpu)
 {
 	unsigned long busy_time, max_cap = arch_scale_cpu_capacity(prev_cpu);
 	unsigned long irq = cpu_util_irq(cpu_rq(prev_cpu));
@@ -8225,124 +8233,153 @@ static inline void eenv_task_busy_time(struct energy_env *eenv,
 	else
 		busy_time = scale_irq_capacity(task_util_est(p), irq, max_cap);

-	eenv->task_busy_time = busy_time;
+	return busy_time;
 }

-/*
- * Compute the perf_domain (PD) busy time for compute_energy(). Based on the
- * utilization for each @pd_cpus, it however doesn't take into account
- * clamping since the ratio (utilization / cpu_capacity) is already enough to
- * scale the EM reported power consumption at the (eventually clamped)
- * cpu_capacity.
- *
- * The contribution of the task @p for which we want to estimate the
- * energy cost is removed (by cpu_util()) and must be calculated
- * separately (see eenv_task_busy_time). This ensures:
- *
- *   - A stable PD utilization, no matter which CPU of that PD we want to place
- *     the task on.
- *
- *   - A fair comparison between CPUs as the task contribution (task_util())
- *     will always be the same no matter which CPU utilization we rely on
- *     (util_avg or util_est).
- *
- * Set @eenv busy time for the PD that spans @pd_cpus. This busy time can't
- * exceed @eenv->pd_cap.
- */
-static inline void eenv_pd_busy_time(struct energy_env *eenv,
-				     struct cpumask *pd_cpus,
-				     struct task_struct *p)
+/* Estimate the utilization of the CPU that is then used to select the OPP */
+static unsigned long find_cpu_max_util(int cpu, struct task_struct *p, int dst_cpu)
 {
-	unsigned long busy_time = 0;
-	int cpu;
+	unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
+	unsigned long eff_util, min, max;
+
+	/*
+	 * Performance domain frequency: utilization clamping
+	 * must be considered since it affects the selection
+	 * of the performance domain frequency.
+	 */
+	eff_util = effective_cpu_util(cpu, util, &min, &max);

-	for_each_cpu(cpu, pd_cpus) {
-		unsigned long util = cpu_util(cpu, p, -1, 0);
+	/* Task's uclamp can modify min and max value */
+	if (uclamp_is_used() && cpu == dst_cpu) {
+		min = max(min, uclamp_eff_value(p, UCLAMP_MIN));

-		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
+		/*
+		 * If there is no active max uclamp constraint,
+		 * directly use task's one, otherwise keep max.
+		 */
+		if (uclamp_rq_is_idle(cpu_rq(cpu)))
+			max = uclamp_eff_value(p, UCLAMP_MAX);
+		else
+			max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
 	}

-	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
+	eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
+	return eff_util;
 }

-/*
- * Compute the maximum utilization for compute_energy() when the task @p
- * is placed on the cpu @dst_cpu.
- *
- * Returns the maximum utilization among @eenv->cpus. This utilization can't
- * exceed @eenv->cpu_cap.
- */
-static inline unsigned long
-eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
-		 struct task_struct *p, int dst_cpu)
+/* Estimate the utilization of the CPU without the task */
+static unsigned long find_cpu_actual_util(int cpu, struct task_struct *p)
 {
-	unsigned long max_util = 0;
-	int cpu;
+	unsigned long util = cpu_util(cpu, p, -1, 0);
+	unsigned long eff_util;

-	for_each_cpu(cpu, pd_cpus) {
-		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
-		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
-		unsigned long eff_util, min, max;
+	eff_util = effective_cpu_util(cpu, util, NULL, NULL);

-		/*
-		 * Performance domain frequency: utilization clamping
-		 * must be considered since it affects the selection
-		 * of the performance domain frequency.
-		 * NOTE: in case RT tasks are running, by default the min
-		 * utilization can be max OPP.
-		 */
-		eff_util = effective_cpu_util(cpu, util, &min, &max);
+	return eff_util;
+}

-		/* Task's uclamp can modify min and max value */
-		if (tsk && uclamp_is_used()) {
-			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
+/* Find the cost of a performance domain for the estimated utilization */
+static inline void find_pd_cost(struct em_perf_domain *pd,
+				unsigned long max_util,
+				struct energy_cpu_stat *stat)
+{
+	struct em_perf_table *em_table;
+	struct em_perf_state *ps;
+	int i;

-			/*
-			 * If there is no active max uclamp constraint,
-			 * directly use task's one, otherwise keep max.
-			 */
-			if (uclamp_rq_is_idle(cpu_rq(cpu)))
-				max = uclamp_eff_value(p, UCLAMP_MAX);
-			else
-				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
-		}
+	/*
+	 * Find the lowest performance state of the Energy Model above the
+	 * requested performance.
+	 */
+	em_table = rcu_dereference(pd->em_table);
+	i = em_pd_get_efficient_state(em_table->state, pd, max_util);
+	ps = &em_table->state[i];

-		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
-		max_util = max(max_util, eff_util);
+	/* Save the cost and performance range of the OPP */
+	stat->max_perf = ps->performance;
+	stat->cost = ps->cost;
+	i = em_pd_get_previous_state(em_table->state, pd, i);
+	if (i < 0)
+		stat->min_perf = 0;
+	else {
+		ps = &em_table->state[i];
+		stat->min_perf = ps->performance;
 	}
+}
+
+/* Check if the CPU can handle the waking task */
+static int check_cpu_with_task(struct task_struct *p, int cpu)
+{
+	unsigned long p_util_min = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MIN) : 0;
+	unsigned long p_util_max = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MAX) : 1024;
+	unsigned long util_min = p_util_min;
+	unsigned long util_max = p_util_max;
+	unsigned long util = cpu_util(cpu, p, cpu, 0);
+	struct rq *rq = cpu_rq(cpu);

-	return min(max_util, eenv->cpu_cap);
+	/*
+	 * Skip CPUs that cannot satisfy the capacity request.
+	 * IOW, placing the task there would make the CPU
+	 * overutilized. Take uclamp into account to see how
+	 * much capacity we can get out of the CPU; this is
+	 * aligned with sched_cpu_util().
+	 */
+	if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
+		unsigned long rq_util_min, rq_util_max;
+		/*
+		 * Open code uclamp_rq_util_with() except for
+		 * the clamp() part. I.e.: apply max aggregation
+		 * only. util_fits_cpu() logic requires to
+		 * operate on non clamped util but must use the
+		 * max-aggregated uclamp_{min, max}.
+		 */
+		rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
+		rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
+		util_min = max(rq_util_min, p_util_min);
+		util_max = max(rq_util_max, p_util_max);
+	}
+	return util_fits_cpu(util, util_min, util_max, cpu);
 }

 /*
- * compute_energy(): Use the Energy Model to estimate the energy that @pd would
- * consume for a given utilization landscape @eenv. When @dst_cpu < 0, the task
- * contribution is ignored.
+ * For the same cost, select the CPU that will provide the best performance
+ * for the task.
  */
-static inline unsigned long
-compute_energy(struct energy_env *eenv, struct perf_domain *pd,
-	       struct cpumask *pd_cpus, struct task_struct *p, int dst_cpu)
+static bool update_best_cpu(struct energy_cpu_stat *target,
+			    struct energy_cpu_stat *min,
+			    int prev, struct sched_domain *sd)
 {
-	unsigned long max_util = eenv_pd_max_util(eenv, pd_cpus, p, dst_cpu);
-	unsigned long busy_time = eenv->pd_busy_time;
-	unsigned long energy;
-
-	if (dst_cpu >= 0)
-		busy_time = min(eenv->pd_cap, busy_time + eenv->task_busy_time);
+	/* Select the one with the least number of running tasks */
+	if (target->nr_running < min->nr_running)
+		return true;
+	if (target->nr_running > min->nr_running)
+		return false;

-	energy = em_cpu_energy(pd->em_pd, max_util, busy_time, eenv->cpu_cap);
+	/* Favor the previous CPU otherwise */
+	if (target->cpu == prev)
+		return true;
+	if (min->cpu == prev)
+		return false;

-	trace_sched_compute_energy_tp(p, dst_cpu, energy, max_util, busy_time);
+	/*
+	 * Choose the CPU with the lowest contention. One might want to consider
+	 * load instead of runnable, but we are supposed to not be overutilized,
+	 * so there is enough compute capacity for everybody.
+	 */
+	if ((target->runnable * min->capa * sd->imbalance_pct) >=
+	    (min->runnable * target->capa * 100))
+		return false;
 
-	return energy;
+	return true;
 }
 
 /*
  * find_energy_efficient_cpu(): Find most energy-efficient target CPU for the
- * waking task. find_energy_efficient_cpu() looks for the CPU with maximum
- * spare capacity in each performance domain and uses it as a potential
- * candidate to execute the task. Then, it uses the Energy Model to figure
- * out which of the CPU candidates is the most energy-efficient.
+ * waking task. find_energy_efficient_cpu() looks for the CPU with the lowest
+ * power cost (usually with maximum spare capacity but not always) in each
+ * performance domain and uses it as a potential candidate to execute the task.
+ * Then, it uses the Energy Model to figure out which of the CPU candidates is
+ * the most energy-efficient.
  *
  * The rationale for this heuristic is as follows. In a performance domain,
  * all the most energy efficient CPU candidates (according to the Energy
@@ -8379,17 +8416,14 @@ compute_energy(struct energy_env *eenv, struct perf_domain *pd,
 static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
-	unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
-	unsigned long p_util_min = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MIN) : 0;
-	unsigned long p_util_max = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MAX) : 1024;
 	struct root_domain *rd = this_rq()->rd;
-	int cpu, best_energy_cpu, target = -1;
-	int prev_fits = -1, best_fits = -1;
-	unsigned long best_actual_cap = 0;
-	unsigned long prev_actual_cap = 0;
+	unsigned long best_nrg = ULONG_MAX;
+	unsigned long task_util;
 	struct sched_domain *sd;
 	struct perf_domain *pd;
-	struct energy_env eenv;
+	int cpu, target = -1;
+	int best_fits = -1;
+	int best_cpu = -1;
 
 	rcu_read_lock();
 	pd = rcu_dereference(rd->pd);
@@ -8409,19 +8443,19 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		target = prev_cpu;
 
 	sync_entity_load_avg(&p->se);
-	if (!task_util_est(p) && p_util_min == 0)
-		goto unlock;
-
-	eenv_task_busy_time(&eenv, p, prev_cpu);
+	task_util = task_busy_time(p, prev_cpu);
 
 	for (; pd; pd = pd->next) {
-		unsigned long util_min = p_util_min, util_max = p_util_max;
-		unsigned long cpu_cap, cpu_actual_cap, util;
-		long prev_spare_cap = -1, max_spare_cap = -1;
-		unsigned long rq_util_min, rq_util_max;
-		unsigned long cur_delta, base_energy;
-		int max_spare_cap_cpu = -1;
-		int fits, max_fits = -1;
+		unsigned long pd_actual_util = 0, delta_nrg = 0;
+		unsigned long cpu_actual_cap, max_cost = 0;
+		struct energy_cpu_stat target_stat;
+		struct energy_cpu_stat min_stat = {
+			.cost = ULONG_MAX,
+			.max_perf = ULONG_MAX,
+			.min_perf = ULONG_MAX,
+			.fits = -2,
+			.cpu = -1,
+		};
 
 		cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
 
@@ -8432,13 +8466,9 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		cpu = cpumask_first(cpus);
 		cpu_actual_cap = get_actual_cpu_capacity(cpu);
 
-		eenv.cpu_cap = cpu_actual_cap;
-		eenv.pd_cap = 0;
-
+		/* In a PD, the CPU with the lowest cost will be the most efficient */
 		for_each_cpu(cpu, cpus) {
-			struct rq *rq = cpu_rq(cpu);
-
-			eenv.pd_cap += cpu_actual_cap;
+			unsigned long target_perf;
 
 			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
 				continue;
@@ -8446,120 +8476,116 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 				continue;
 
-			util = cpu_util(cpu, p, cpu, 0);
-			cpu_cap = capacity_of(cpu);
+			target_stat.fits = check_cpu_with_task(p, cpu);
+
+			if (!target_stat.fits)
+				continue;
+
+			/* 1st select the CPU that fits best */
+			if (target_stat.fits < min_stat.fits)
+				continue;
+
+			/* Then select the CPU with lowest cost */
+
+			/* Get the performance of the CPU w/ the waking task */
+			target_perf = find_cpu_max_util(cpu, p, cpu);
+			target_perf = min(target_perf, cpu_actual_cap);
+
+			/* Needing a higher OPP means a higher cost */
+			if (target_perf > min_stat.max_perf)
+				continue;
 
 			/*
-			 * Skip CPUs that cannot satisfy the capacity request.
-			 * IOW, placing the task there would make the CPU
-			 * overutilized. Take uclamp into account to see how
-			 * much capacity we can get out of the CPU; this is
-			 * aligned with sched_cpu_util().
+			 * At this point, target's cost can be either equal or
+			 * lower than the current minimum cost.
 			 */
-			if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
-				/*
-				 * Open code uclamp_rq_util_with() except for
-				 * the clamp() part. I.e.: apply max aggregation
-				 * only. util_fits_cpu() logic requires to
-				 * operate on non clamped util but must use the
-				 * max-aggregated uclamp_{min, max}.
-				 */
-				rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
-				rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
 
-				util_min = max(rq_util_min, p_util_min);
-				util_max = max(rq_util_max, p_util_max);
-			}
+			/* Gather more statistics */
+			target_stat.cpu = cpu;
+			target_stat.runnable = cpu_runnable(cpu_rq(cpu));
+			target_stat.capa = capacity_of(cpu);
+			target_stat.nr_running = cpu_rq(cpu)->cfs.h_nr_runnable;
 
-			fits = util_fits_cpu(util, util_min, util_max, cpu);
-			if (!fits)
+			/* If the target needs a lower OPP, then look up for
+ * Otherwise at same cost level, select the CPU which + * provides best performance. + */ + if (target_perf < min_stat.min_perf) + find_pd_cost(pd->em_pd, target_perf, &target_stat); + else if (!update_best_cpu(&target_stat, &min_stat, prev_cpu, sd)) continue; =20 - lsub_positive(&cpu_cap, util); - - if (cpu =3D=3D prev_cpu) { - /* Always use prev_cpu as a candidate. */ - prev_spare_cap =3D cpu_cap; - prev_fits =3D fits; - } else if ((fits > max_fits) || - ((fits =3D=3D max_fits) && ((long)cpu_cap > max_spare_cap))) { - /* - * Find the CPU with the maximum spare capacity - * among the remaining CPUs in the performance - * domain. - */ - max_spare_cap =3D cpu_cap; - max_spare_cap_cpu =3D cpu; - max_fits =3D fits; - } + /* Save the new most efficient CPU of the PD */ + min_stat =3D target_stat; } =20 - if (max_spare_cap_cpu < 0 && prev_spare_cap < 0) + if (min_stat.cpu =3D=3D -1) continue; =20 - eenv_pd_busy_time(&eenv, cpus, p); - /* Compute the 'base' energy of the pd, without @p */ - base_energy =3D compute_energy(&eenv, pd, cpus, p, -1); + if (min_stat.fits < best_fits) + continue; =20 - /* Evaluate the energy impact of using prev_cpu. */ - if (prev_spare_cap > -1) { - prev_delta =3D compute_energy(&eenv, pd, cpus, p, - prev_cpu); - /* CPU utilization has changed */ - if (prev_delta < base_energy) - goto unlock; - prev_delta -=3D base_energy; - prev_actual_cap =3D cpu_actual_cap; - best_delta =3D min(best_delta, prev_delta); - } + /* Idle system costs nothing */ + target_stat.max_perf =3D 0; + target_stat.cost =3D 0; =20 - /* Evaluate the energy impact of using max_spare_cap_cpu. */ - if (max_spare_cap_cpu >=3D 0 && max_spare_cap > prev_spare_cap) { - /* Current best energy cpu fits better */ - if (max_fits < best_fits) - continue; + /* Estimate utilization and cost without p */ + for_each_cpu(cpu, cpus) { + unsigned long target_util; =20 - /* - * Both don't fit performance hint (i.e. uclamp_min) - * but best energy cpu has better capacity. 
- */ - if ((max_fits < 0) && - (cpu_actual_cap <=3D best_actual_cap)) - continue; + /* Accumulate actual utilization w/o task p */ + pd_actual_util +=3D find_cpu_actual_util(cpu, p); =20 - cur_delta =3D compute_energy(&eenv, pd, cpus, p, - max_spare_cap_cpu); - /* CPU utilization has changed */ - if (cur_delta < base_energy) - goto unlock; - cur_delta -=3D base_energy; + /* Get the max utilization of the CPU w/o task p */ + target_util =3D find_cpu_max_util(cpu, p, -1); + target_util =3D min(target_util, cpu_actual_cap); =20 - /* - * Both fit for the task but best energy cpu has lower - * energy impact. - */ - if ((max_fits > 0) && (best_fits > 0) && - (cur_delta >=3D best_delta)) + /* Current OPP is enough */ + if (target_util <=3D target_stat.max_perf) continue; =20 - best_delta =3D cur_delta; - best_energy_cpu =3D max_spare_cap_cpu; - best_fits =3D max_fits; - best_actual_cap =3D cpu_actual_cap; + /* Compute and save the cost of the OPP */ + find_pd_cost(pd->em_pd, target_util, &target_stat); + max_cost =3D target_stat.cost; } - } - rcu_read_unlock(); =20 - if ((best_fits > prev_fits) || - ((best_fits > 0) && (best_delta < prev_delta)) || - ((best_fits < 0) && (best_actual_cap > prev_actual_cap))) - target =3D best_energy_cpu; + /* Add the energy cost of p */ + delta_nrg =3D task_util * min_stat.cost; =20 - return target; + /* + * Compute the energy cost of others running at higher OPP + * because of p. + */ + if (min_stat.cost > max_cost) + delta_nrg +=3D pd_actual_util * (min_stat.cost - max_cost); + + /* Delta energy with p */ + trace_sched_compute_energy_tp(p, min_stat.cpu, delta_nrg, + min_stat.max_perf, pd_actual_util + task_util); + + /* + * The probability that delta energies are equals is almost + * null. PDs being sorted by max capacity, keep the one with + * highest max capacity if this happens. + * TODO: add a margin in energy cost and take into account + * other stats. 
+		 */
+		if ((min_stat.fits == best_fits) &&
+		    (delta_nrg >= best_nrg))
+			continue;
+
+		best_fits = min_stat.fits;
+		best_nrg = delta_nrg;
+		best_cpu = min_stat.cpu;
+	}
 
 unlock:
 	rcu_read_unlock();
 
+	if (best_cpu >= 0)
+		target = best_cpu;
+
 	return target;
 }
 
--
2.43.0

From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
 rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
 linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
 luis.machado@arm.com, qperret@google.com,
 Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 4/7 v4] energy model: Remove unused em_cpu_energy()
Date: Sun, 2 Mar 2025 17:13:18 +0100
Message-ID: <20250302161321.1476139-5-vincent.guittot@linaro.org>
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>

Remove the unused function em_cpu_energy().

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/energy_model.h | 99 ------------------------------------
 1 file changed, 99 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 551e243b9c43..d0adabba2c56 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -236,99 +236,6 @@ em_pd_get_previous_state(struct em_perf_state *table,
 	return -1;
 }
 
-/**
- * em_cpu_energy() - Estimates the energy consumed by the CPUs of a
- * performance domain
- * @pd		: performance domain for which energy has to be estimated
- * @max_util	: highest utilization among CPUs of the domain
- * @sum_util	: sum of the utilization of all CPUs in the domain
- * @allowed_cpu_cap	: maximum allowed CPU capacity for the @pd, which
- *			  might reflect reduced frequency (due to thermal)
- *
- * This function must be used only for CPU devices. There is no validation,
- * i.e. if the EM is a CPU type and has cpumask allocated. It is called from
- * the scheduler code quite frequently and that is why there is not checks.
- *
- * Return: the sum of the energy consumed by the CPUs of the domain assuming
- * a capacity state satisfying the max utilization of the domain.
- */
-static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
-				unsigned long max_util, unsigned long sum_util,
-				unsigned long allowed_cpu_cap)
-{
-	struct em_perf_table *em_table;
-	struct em_perf_state *ps;
-	int i;
-
-#ifdef CONFIG_SCHED_DEBUG
-	WARN_ONCE(!rcu_read_lock_held(), "EM: rcu read lock needed\n");
-#endif
-
-	if (!sum_util)
-		return 0;
-
-	/*
-	 * In order to predict the performance state, map the utilization of
-	 * the most utilized CPU of the performance domain to a requested
-	 * performance, like schedutil. Take also into account that the real
-	 * performance might be set lower (due to thermal capping). Thus, clamp
-	 * max utilization to the allowed CPU capacity before calculating
-	 * effective performance.
-	 */
-	max_util = min(max_util, allowed_cpu_cap);
-
-	/*
-	 * Find the lowest performance state of the Energy Model above the
-	 * requested performance.
-	 */
-	em_table = rcu_dereference(pd->em_table);
-	i = em_pd_get_efficient_state(em_table->state, pd, max_util);
-	ps = &em_table->state[i];
-
-	/*
-	 * The performance (capacity) of a CPU in the domain at the performance
-	 * state (ps) can be computed as:
-	 *
-	 *                     ps->freq * scale_cpu
-	 *   ps->performance = --------------------                  (1)
-	 *                         cpu_max_freq
-	 *
-	 * So, ignoring the costs of idle states (which are not available in
-	 * the EM), the energy consumed by this CPU at that performance state
-	 * is estimated as:
-	 *
-	 *             ps->power * cpu_util
-	 *   cpu_nrg = --------------------                          (2)
-	 *               ps->performance
-	 *
-	 * since 'cpu_util / ps->performance' represents its percentage of busy
-	 * time.
-	 *
-	 *   NOTE: Although the result of this computation actually is in
-	 *         units of power, it can be manipulated as an energy value
-	 *         over a scheduling period, since it is assumed to be
-	 *         constant during that interval.
-	 *
-	 * By injecting (1) in (2), 'cpu_nrg' can be re-expressed as a product
-	 * of two terms:
-	 *
-	 *             ps->power * cpu_max_freq
-	 *   cpu_nrg = ------------------------ * cpu_util           (3)
-	 *                ps->freq * scale_cpu
-	 *
-	 * The first term is static, and is stored in the em_perf_state struct
-	 * as 'ps->cost'.
-	 *
-	 * Since all CPUs of the domain have the same micro-architecture, they
-	 * share the same 'ps->cost', and the same CPU capacity. Hence, the
-	 * total energy of the domain (which is the simple sum of the energy of
-	 * all of its CPUs) can be factorized as:
-	 *
-	 *   pd_nrg = ps->cost * \Sum cpu_util                       (4)
-	 */
-	return ps->cost * sum_util;
-}
-
 /**
  * em_pd_nr_perf_states() - Get the number of performance states of a perf.
  * domain
@@ -394,12 +301,6 @@ em_pd_get_previous_state(struct em_perf_state *table,
 {
 	return -1;
 }
-static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
-				unsigned long max_util, unsigned long sum_util,
-				unsigned long allowed_cpu_cap)
-{
-	return 0;
-}
 static inline int em_pd_nr_perf_states(struct em_perf_domain *pd)
 {
 	return 0;
--
2.43.0

From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
 rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
 linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
 luis.machado@arm.com, qperret@google.com,
 Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 5/7 v4] sched/fair: Add push task mechanism for EAS
Date: Sun, 2 Mar 2025 17:13:19 +0100
Message-ID: <20250302161321.1476139-6-vincent.guittot@linaro.org>
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>

EAS is based on wakeup events to efficiently place tasks on the system,
but there are cases where a task doesn't have wakeup events anymore, or
generates them at far too low a pace.
For such situations, we can take advantage of the task being put back
in the enqueued list to check if it should be pushed to another CPU.
When the task is alone on the CPU, it's never put back in the enqueued
list; in this special case, we use the tick to run the check.

Wakeup events remain the main way to migrate tasks, but we now detect
situations where a task is stuck on a CPU by checking that its
utilization is larger than the max available compute capacity (max CPU
capacity or uclamp max setting).

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c  | 220 +++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   2 +
 2 files changed, 222 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a9b97bbc085f..c3e383b86808 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7051,6 +7051,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	hrtick_update(rq);
 }
 
+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p);
 static void set_next_buddy(struct sched_entity *se);
 
 /*
@@ -7081,6 +7082,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		h_nr_idle = task_has_idle_policy(p);
 		if (task_sleep || task_delayed || !se->sched_delayed)
 			h_nr_runnable = 1;
+
+		fair_remove_pushable_task(rq, p);
 	} else {
 		cfs_rq = group_cfs_rq(se);
 		slice = cfs_rq_min_slice(cfs_rq);
@@ -8589,6 +8592,197 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	return target;
 }
 
+static inline bool task_stuck_on_cpu(struct task_struct *p, int cpu)
+{
+	unsigned long max_capa, util;
+
+	max_capa = min(get_actual_cpu_capacity(cpu),
+		       uclamp_eff_value(p, UCLAMP_MAX));
+	util = max(task_util_est(p), task_runnable(p));
+
+	/*
+	 * Return true only if the task might not sleep/wakeup because of a low
+	 * compute capacity. Tasks, which wake up regularly, will be handled by
+	 * feec().
+	 */
+	return (util > max_capa);
+}
+
+static inline bool sched_energy_push_task(struct task_struct *p, struct rq *rq)
+{
+	if (p->nr_cpus_allowed == 1)
+		return false;
+
+	if (is_rd_overutilized(rq->rd))
+		return false;
+
+	if (task_stuck_on_cpu(p, cpu_of(rq)))
+		return true;
+
+	return false;
+}
+
+static int active_load_balance_cpu_stop(void *data);
+
+static inline void check_pushable_task(struct task_struct *p, struct rq *rq)
+{
+	int new_cpu, cpu = cpu_of(rq);
+
+	if (!sched_energy_enabled())
+		return;
+
+	if (WARN_ON(!p))
+		return;
+
+	if (WARN_ON(!task_current(rq, p)))
+		return;
+
+	if (is_migration_disabled(p))
+		return;
+
+	/* If there are several tasks, wait for being put back */
+	if (rq->nr_running > 1)
+		return;
+
+	if (!sched_energy_push_task(p, rq))
+		return;
+
+	new_cpu = find_energy_efficient_cpu(p, cpu);
+
+	if (new_cpu == cpu)
+		return;
+
+	/*
+	 * ->active_balance synchronizes accesses to
+	 * ->active_balance_work. Once set, it's cleared
+	 * only after active load balance is finished.
+	 */
+	if (!rq->active_balance) {
+		rq->active_balance = 1;
+		rq->push_cpu = new_cpu;
+	} else
+		return;
+
+	raw_spin_rq_unlock(rq);
+	stop_one_cpu_nowait(cpu,
+			    active_load_balance_cpu_stop, rq,
+			    &rq->active_balance_work);
+	raw_spin_rq_lock(rq);
+}
+
+static inline int has_pushable_tasks(struct rq *rq)
+{
+	return !plist_head_empty(&rq->cfs.pushable_tasks);
+}
+
+static struct task_struct *pick_next_pushable_fair_task(struct rq *rq)
+{
+	struct task_struct *p;
+
+	if (!has_pushable_tasks(rq))
+		return NULL;
+
+	p = plist_first_entry(&rq->cfs.pushable_tasks,
+			      struct task_struct, pushable_tasks);
+
+	WARN_ON_ONCE(rq->cpu != task_cpu(p));
+	WARN_ON_ONCE(task_current(rq, p));
+	WARN_ON_ONCE(p->nr_cpus_allowed <= 1);
+	WARN_ON_ONCE(!task_on_rq_queued(p));
+
+	/*
+	 * Remove task from the pushable list as we try only once after the
+	 * task has been put back in the enqueued list.
+	 */
+	plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+
+	return p;
+}
+
+/*
+ * See if the non-running fair tasks on this rq can be sent to other CPUs
+ * that fit better with their profile.
+ */
+static bool push_fair_task(struct rq *rq)
+{
+	struct task_struct *next_task;
+	int prev_cpu, new_cpu;
+	struct rq *new_rq;
+
+	next_task = pick_next_pushable_fair_task(rq);
+	if (!next_task)
+		return false;
+
+	if (is_migration_disabled(next_task))
+		return true;
+
+	/* We might release rq lock */
+	get_task_struct(next_task);
+
+	prev_cpu = rq->cpu;
+
+	new_cpu = find_energy_efficient_cpu(next_task, prev_cpu);
+
+	if (new_cpu == prev_cpu)
+		goto out;
+
+	new_rq = cpu_rq(new_cpu);
+
+	if (double_lock_balance(rq, new_rq)) {
+		/* The task has already migrated in between */
+		if (task_cpu(next_task) != rq->cpu) {
+			double_unlock_balance(rq, new_rq);
+			goto out;
+		}
+
+		deactivate_task(rq, next_task, 0);
+		set_task_cpu(next_task, new_cpu);
+		activate_task(new_rq, next_task, 0);
+
+		resched_curr(new_rq);
+
+		double_unlock_balance(rq, new_rq);
+	}
+
+out:
+	put_task_struct(next_task);
+
+	return true;
+}
+
+static void push_fair_tasks(struct rq *rq)
+{
+	/* push_fair_task() will return true if it moved a fair task */
+	while (push_fair_task(rq))
+		;
+}
+
+static DEFINE_PER_CPU(struct balance_callback, fair_push_head);
+
+static inline void fair_queue_pushable_tasks(struct rq *rq)
+{
+	if (!sched_energy_enabled() || !has_pushable_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(fair_push_head, rq->cpu), push_fair_tasks);
+}
+
+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p)
+{
+	if (sched_energy_enabled())
+		plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+}
+
+static void fair_add_pushable_task(struct rq *rq, struct task_struct *p)
+{
+	if (sched_energy_enabled() && task_on_rq_queued(p) && !p->se.sched_delayed) {
+		if (sched_energy_push_task(p, rq)) {
+			plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+			plist_node_init(&p->pushable_tasks, p->prio);
+			plist_add(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+		}
+	}
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the relevant SD flag set. In practice, this is SD_BALANCE_WAKE,
@@ -8758,6 +8952,10 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	return sched_balance_newidle(rq, rf) != 0;
 }
 #else
+static inline void check_pushable_task(struct task_struct *p, struct rq *rq) {}
+static inline void fair_queue_pushable_tasks(struct rq *rq) {}
+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p) {}
+static inline void fair_add_pushable_task(struct rq *rq, struct task_struct *p) {}
 static inline void set_task_max_allowed_capacity(struct task_struct *p) {}
 #endif /* CONFIG_SMP */
 
@@ -8947,6 +9145,12 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		put_prev_entity(cfs_rq, pse);
 		set_next_entity(cfs_rq, se);
 
+		/*
+		 * The previous task might be eligible for being pushed on
+		 * another cpu if it is still active.
+		 */
+		fair_add_pushable_task(rq, prev);
+
 		__set_next_task_fair(rq, p, true);
 	}
 
@@ -9019,6 +9223,13 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev, struct t
 		cfs_rq = cfs_rq_of(se);
 		put_prev_entity(cfs_rq, se);
 	}
+
+	/*
+	 * The previous task might be eligible for being pushed on another cpu
+	 * if it is still active.
+	 */
+	fair_add_pushable_task(rq, prev);
+
 }
 
 /*
@@ -13151,6 +13362,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 	if (static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);
 
+	check_pushable_task(curr, rq);
 	update_misfit_status(curr, rq);
 	check_update_overutilized_status(task_rq(curr));
 
@@ -13303,6 +13515,8 @@ static void __set_next_task_fair(struct rq *rq, struct task_struct *p, bool firs
 {
 	struct sched_entity *se = &p->se;
 
+	fair_remove_pushable_task(rq, p);
+
 #ifdef CONFIG_SMP
 	if (task_on_rq_queued(p)) {
 		/*
@@ -13320,6 +13534,11 @@ static void __set_next_task_fair(struct rq *rq, struct task_struct *p, bool firs
 	if (hrtick_enabled_fair(rq))
 		hrtick_start_fair(rq, p);
 
+	/*
+	 * Try to push the prev task before checking misfit for the next task,
+	 * as the migration of prev can make next fit the CPU.
+	 */
+	fair_queue_pushable_tasks(rq);
 	update_misfit_status(p, rq);
 	sched_fair_update_stop_tick(rq, p);
 }
@@ -13350,6 +13569,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
 	cfs_rq->min_vruntime = (u64)(-(1LL << 20));
 #ifdef CONFIG_SMP
+	plist_head_init(&cfs_rq->pushable_tasks);
 	raw_spin_lock_init(&cfs_rq->removed.lock);
 #endif
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ab16d3d0e51c..2db198dccf21 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -722,6 +722,8 @@ struct cfs_rq {
 	struct list_head	leaf_cfs_rq_list;
 	struct task_group	*tg;	/* group that "owns" this runqueue */
 
+	struct plist_head	pushable_tasks;
+
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;
 
--
2.43.0
From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
	rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
	linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
	luis.machado@arm.com, qperret@google.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 6/7 v4] sched/fair: Add misfit case to push task mechanism for EAS
Date: Sun, 2 Mar 2025 17:13:20 +0100
Message-ID: <20250302161321.1476139-7-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Some task misfit cases can be handled directly by the push mechanism
instead of triggering an idle load balance to pull the task onto a
better CPU.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c3e383b86808..21bd62cf138c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8508,6 +8508,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	target_stat.runnable = cpu_runnable(cpu_rq(cpu));
 	target_stat.capa = capacity_of(cpu);
 	target_stat.nr_running = cpu_rq(cpu)->cfs.h_nr_runnable;
+	if ((p->on_rq) && (!p->se.sched_delayed) && (cpu == prev_cpu))
+		target_stat.nr_running--;
 
 	/* If the target needs a lower OPP, then look up for
 	 * the corresponding OPP and its associated cost.
@@ -8613,6 +8615,9 @@ static inline bool sched_energy_push_task(struct task_struct *p, struct rq *rq)
 	if (p->nr_cpus_allowed == 1)
 		return false;
 
+	if (!task_fits_cpu(p, cpu_of(rq)))
+		return true;
+
 	if (is_rd_overutilized(rq->rd))
 		return false;
 
@@ -8624,33 +8629,33 @@ static inline bool sched_energy_push_task(struct task_struct *p, struct rq *rq)
 
 static int active_load_balance_cpu_stop(void *data);
 
-static inline void check_pushable_task(struct task_struct *p, struct rq *rq)
+static inline bool check_pushable_task(struct task_struct *p, struct rq *rq)
 {
 	int new_cpu, cpu = cpu_of(rq);
 
 	if (!sched_energy_enabled())
-		return;
+		return false;
 
 	if (WARN_ON(!p))
-		return;
+		return false;
 
 	if (WARN_ON(!task_current(rq, p)))
-		return;
+		return false;
 
 	if (is_migration_disabled(p))
-		return;
+		return false;
 
 	/* If there are several tasks, wait for being put back */
 	if (rq->nr_running > 1)
-		return;
+		return false;
 
 	if (!sched_energy_push_task(p, rq))
-		return;
+		return false;
 
 	new_cpu = find_energy_efficient_cpu(p, cpu);
 
 	if (new_cpu == cpu)
-		return;
+		return false;
 
 	/*
 	 * ->active_balance synchronizes accesses to
@@ -8661,13 +8666,15 @@ static inline bool check_pushable_task(struct task_struct *p, struct rq *rq)
 		rq->active_balance = 1;
 		rq->push_cpu = new_cpu;
 	} else
-		return;
+		return false;
 
 	raw_spin_rq_unlock(rq);
 	stop_one_cpu_nowait(cpu, active_load_balance_cpu_stop,
 			    rq, &rq->active_balance_work);
 	raw_spin_rq_lock(rq);
+
+	return true;
 }
 
 static inline int has_pushable_tasks(struct rq *rq)
@@ -8952,7 +8959,11 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	return sched_balance_newidle(rq, rf) != 0;
 }
 #else
-static inline void check_pushable_task(struct task_struct *p, struct rq *rq) {}
+static inline bool check_pushable_task(struct task_struct *p, struct rq *rq)
+{
+	return false;
+}
+
 static inline void fair_queue_pushable_tasks(struct rq *rq) {}
 static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p) {}
 static inline void fair_add_pushable_task(struct rq *rq, struct task_struct *p) {}
@@ -13362,9 +13373,10 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 	if (static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);
 
-	check_pushable_task(curr, rq);
-	update_misfit_status(curr, rq);
-	check_update_overutilized_status(task_rq(curr));
+	if (!check_pushable_task(curr, rq)) {
+		update_misfit_status(curr, rq);
+		check_update_overutilized_status(task_rq(curr));
+	}
 
 	task_tick_core(rq, curr);
 }
-- 
2.43.0

From nobody Wed Feb 11 05:28:56 2026
From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, lukasz.luba@arm.com,
	rafael.j.wysocki@intel.com, pierre.gondois@arm.com,
	linux-kernel@vger.kernel.org
Cc: qyousef@layalina.io, hongyan.xia2@arm.com, christian.loehle@arm.com,
	luis.machado@arm.com, qperret@google.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 7/7 v4] sched/fair: Update overutilized detection
Date: Sun, 2 Mar 2025 17:13:21 +0100
Message-ID: <20250302161321.1476139-8-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250302161321.1476139-1-vincent.guittot@linaro.org>
References: <20250302161321.1476139-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Checking uclamp_min is useless and counterproductive for the
overutilized state, since a misfit can now happen without the system
being overutilized.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 21bd62cf138c..c241d9d57a0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6831,16 +6831,15 @@ static inline void hrtick_update(struct rq *rq)
 #ifdef CONFIG_SMP
 static inline bool cpu_overutilized(int cpu)
 {
-	unsigned long rq_util_min, rq_util_max;
+	unsigned long rq_util_max;
 
 	if (!sched_energy_enabled())
 		return false;
 
-	rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
 	rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
 
 	/* Return true only if the utilization doesn't fit CPU's capacity */
-	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
+	return !util_fits_cpu(cpu_util_cfs(cpu), 0, rq_util_max, cpu);
 }
 
 /*
-- 
2.43.0