From nobody Fri Dec 27 17:12:37 2024
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot
Subject: [PATCH 01/11 v3] sched/fair: Fix sched_can_stop_tick() for fair tasks
Date: Mon, 2 Dec 2024 18:45:56 +0100
Message-ID: <20241202174606.4074512-2-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

We can't stop the tick of a rq if there are at least 2 tasks enqueued in
the whole hierarchy, not only at the root cfs rq: rq->cfs.nr_running
tracks the number of sched_entities at one level, whereas
rq->cfs.h_nr_running tracks all queued tasks in the hierarchy.
Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1dee3f5ef940..ed95861e9887 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1343,7 +1343,7 @@ bool sched_can_stop_tick(struct rq *rq)
 	if (scx_enabled() && !scx_can_stop_tick(rq))
 		return false;
 
-	if (rq->cfs.nr_running > 1)
+	if (rq->cfs.h_nr_running > 1)
 		return false;
 
 	/*
-- 
2.43.0
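Illustrative aside (not part of the series): the fix matters because only
h_nr_running counts hierarchically. With a single task group enqueued at the
root, the root cfs_rq sees one sched_entity (nr_running == 1) even when the
group holds two runnable tasks (h_nr_running == 2). The toy structure and
functions below are hypothetical stand-ins, not kernel APIs; they only model
the two counters.

#include <stdio.h>

/* Toy cfs_rq: only the two counters the patch contrasts. */
struct toy_cfs_rq {
	unsigned int nr_running;	/* entities at this level only */
	unsigned int h_nr_running;	/* tasks in the whole hierarchy */
};

/* Mirrors the fixed check: the tick may only be stopped when at most
 * one task exists anywhere in the hierarchy. */
static int can_stop_tick(const struct toy_cfs_rq *root)
{
	return root->h_nr_running <= 1;
}

int main(void)
{
	/* One group sched_entity at the root, two tasks inside the group. */
	struct toy_cfs_rq root = { .nr_running = 1, .h_nr_running = 2 };

	printf("old check (nr_running > 1)   -> stop tick: %d (wrong)\n",
	       !(root.nr_running > 1));
	printf("new check (h_nr_running > 1) -> stop tick: %d (correct)\n",
	       can_stop_tick(&root));
	return 0;
}

Built with any C compiler, the sketch shows the old check would let the tick
stop while two tasks are queued, whereas the hierarchical check keeps it running.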
From nobody Fri Dec 27 17:12:37 2024
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot
Subject: [PATCH 02/11 v3] sched/eevdf: More PELT vs DELAYED_DEQUEUE
Date: Mon, 2 Dec 2024 18:45:57 +0100
Message-ID: <20241202174606.4074512-3-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

From: Peter Zijlstra

Vincent and Dietmar noted that while commit fc1892becd56 ("sched/eevdf:
Fixup PELT vs DELAYED_DEQUEUE") fixes the entity runnable stats, it does
not adjust the cfs_rq runnable stats, which are based on h_nr_running.

Track h_nr_delayed such that we can discount those and adjust the signal.
Fixes: fc1892becd56 ("sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE")
Reported-by: Dietmar Eggemann
Closes: https://lore.kernel.org/lkml/a9a45193-d0c6-4ba2-a822-464ad30b550e@arm.com/
Reported-by: Vincent Guittot
Closes: https://lore.kernel.org/lkml/CAKfTPtCNUvWE_GX5LyvTF-WdxUT=ZgvZZv-4t=eWntg5uOFqiQ@mail.gmail.com/
Signed-off-by: Peter Zijlstra (Intel)
[ Fixes checkpatch warnings and rebased ]
Signed-off-by: Vincent Guittot
Link: https://lkml.kernel.org/r/20240906104525.GG4928@noisy.programming.kicks-ass.net
Tested-by: K Prateek Nayak
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/debug.c |  1 +
 kernel/sched/fair.c  | 51 +++++++++++++++++++++++++++++++++++++++-----
 kernel/sched/pelt.c  |  2 +-
 kernel/sched/sched.h |  8 +++++--
 4 files changed, 54 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a48b2a701ec2..a1be00a988bf 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -845,6 +845,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
 	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running);
+	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
 	SEQ_printf(m, "  .%-30s: %d\n", "idle_nr_running",
 			cfs_rq->idle_nr_running);
 	SEQ_printf(m, "  .%-30s: %d\n", "idle_h_nr_running",
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4283c818bbd1..fc69aab57870 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5463,9 +5463,33 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 
-static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
+static void set_delayed(struct sched_entity *se)
+{
+	se->sched_delayed = 1;
+	for_each_sched_entity(se) {
+		struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+		cfs_rq->h_nr_delayed++;
+		if (cfs_rq_throttled(cfs_rq))
+			break;
+	}
+}
+
+static void clear_delayed(struct sched_entity *se)
 {
 	se->sched_delayed = 0;
+	for_each_sched_entity(se) {
+		struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+		cfs_rq->h_nr_delayed--;
+		if (cfs_rq_throttled(cfs_rq))
+			break;
+	}
+}
+
+static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
+{
+	clear_delayed(se);
 	if (sched_feat(DELAY_ZERO) && se->vlag > 0)
 		se->vlag = 0;
 }
@@ -5495,7 +5519,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 			if (cfs_rq->next == se)
 				cfs_rq->next = NULL;
 			update_load_avg(cfs_rq, se, 0);
-			se->sched_delayed = 1;
+			set_delayed(se);
 			return false;
 		}
 	}
@@ -5909,7 +5933,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta, dequeue = 1;
+	long task_delta, idle_task_delta, delayed_delta, dequeue = 1;
 	long rq_h_nr_running = rq->cfs.h_nr_running;
 
 	raw_spin_lock(&cfs_b->lock);
@@ -5942,6 +5966,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 
 	task_delta = cfs_rq->h_nr_running;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
+	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
 		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
 		int flags;
@@ -5965,6 +5990,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 
 		qcfs_rq->h_nr_running -= task_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
+		qcfs_rq->h_nr_delayed -= delayed_delta;
 
 		if (qcfs_rq->load.weight) {
 			/* Avoid re-evaluating load for this entity: */
@@ -5987,6 +6013,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 
 		qcfs_rq->h_nr_running -= task_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
+		qcfs_rq->h_nr_delayed -= delayed_delta;
 	}
 
 	/* At this point se is NULL and we are at root level*/
@@ -6012,7 +6039,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta;
+	long task_delta, idle_task_delta, delayed_delta;
 	long rq_h_nr_running = rq->cfs.h_nr_running;
 
 	se = cfs_rq->tg->se[cpu_of(rq)];
@@ -6048,6 +6075,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 
 	task_delta = cfs_rq->h_nr_running;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
+	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
 		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
 
@@ -6065,6 +6093,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 
 		qcfs_rq->h_nr_running += task_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
+		qcfs_rq->h_nr_delayed += delayed_delta;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(qcfs_rq))
@@ -6082,6 +6111,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 
 		qcfs_rq->h_nr_running += task_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
+		qcfs_rq->h_nr_delayed += delayed_delta;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(qcfs_rq))
@@ -6930,7 +6960,7 @@ requeue_delayed_entity(struct sched_entity *se)
 	}
 
 	update_load_avg(cfs_rq, se, 0);
-	se->sched_delayed = 0;
+	clear_delayed(se);
 }
 
 /*
@@ -6944,6 +6974,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
+	int h_nr_delayed = 0;
 	int task_new = !(flags & ENQUEUE_WAKEUP);
 	int rq_h_nr_running = rq->cfs.h_nr_running;
 	u64 slice = 0;
@@ -6970,6 +7001,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (p->in_iowait)
 		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
 
+	if (task_new)
+		h_nr_delayed = !!se->sched_delayed;
+
 	for_each_sched_entity(se) {
 		if (se->on_rq) {
 			if (se->sched_delayed)
@@ -6992,6 +7026,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+		cfs_rq->h_nr_delayed += h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = 1;
@@ -7015,6 +7050,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+		cfs_rq->h_nr_delayed += h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = 1;
@@ -7077,6 +7113,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 	struct task_struct *p = NULL;
 	int idle_h_nr_running = 0;
 	int h_nr_running = 0;
+	int h_nr_delayed = 0;
 	struct cfs_rq *cfs_rq;
 	u64 slice = 0;
 
@@ -7084,6 +7121,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		p = task_of(se);
 		h_nr_running = 1;
 		idle_h_nr_running = task_has_idle_policy(p);
+		if (!task_sleep && !task_delayed)
+			h_nr_delayed = !!se->sched_delayed;
 	} else {
 		cfs_rq = group_cfs_rq(se);
 		slice = cfs_rq_min_slice(cfs_rq);
@@ -7101,6 +7140,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 
 		cfs_rq->h_nr_running -= h_nr_running;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = h_nr_running;
@@ -7139,6 +7179,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 
 		cfs_rq->h_nr_running -= h_nr_running;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
 			idle_h_nr_running = h_nr_running;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index fc07382361a8..fee75cc2c47b 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
 {
 	if (___update_load_sum(now, &cfs_rq->avg,
 				scale_load_down(cfs_rq->load.weight),
-				cfs_rq->h_nr_running,
+				cfs_rq->h_nr_running - cfs_rq->h_nr_delayed,
 				cfs_rq->curr != NULL)) {
 
 		___update_load_avg(&cfs_rq->avg, 1);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5eb2d5b9722f..99d19c605e4f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -649,6 +649,7 @@ struct cfs_rq {
 	unsigned int		h_nr_running;      /* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		idle_nr_running;   /* SCHED_IDLE */
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
+	unsigned int		h_nr_delayed;
 
 	s64			avg_vruntime;
 	u64			avg_load;
@@ -898,8 +899,11 @@ struct dl_rq {
 
 static inline void se_update_runnable(struct sched_entity *se)
 {
-	if (!entity_is_task(se))
-		se->runnable_weight = se->my_q->h_nr_running;
+	if (!entity_is_task(se)) {
+		struct cfs_rq *cfs_rq = se->my_q;
+
+		se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
+	}
 }
 
 static inline long se_runnable(struct sched_entity *se)
-- 
2.43.0
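Illustrative aside (not part of the series): the accounting above boils down
to feeding PELT h_nr_running - h_nr_delayed instead of h_nr_running, so that
delayed-dequeue entities stop contributing to the runnable signal while they
wait for their lag to elapse. The toy types and helpers below are hypothetical,
not kernel code; they only mimic the bookkeeping at a single hierarchy level.

#include <assert.h>
#include <stdio.h>

/* Illustrative counters mirroring the ones touched by the patch. */
struct toy_cfs_rq {
	unsigned int h_nr_running;	/* all queued tasks, delayed included */
	unsigned int h_nr_delayed;	/* tasks kept queued only for their lag */
};

/* What the cfs_rq PELT update now receives as its "runnable" input. */
static unsigned int pelt_runnable(const struct toy_cfs_rq *cfs_rq)
{
	return cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
}

/* Rough single-level analogues of set_delayed()/clear_delayed(). */
static void toy_set_delayed(struct toy_cfs_rq *cfs_rq)   { cfs_rq->h_nr_delayed++; }
static void toy_clear_delayed(struct toy_cfs_rq *cfs_rq) { cfs_rq->h_nr_delayed--; }

int main(void)
{
	struct toy_cfs_rq cfs_rq = { .h_nr_running = 3, .h_nr_delayed = 0 };

	toy_set_delayed(&cfs_rq);	/* one task sleeps but stays queued */
	assert(pelt_runnable(&cfs_rq) == 2);

	toy_clear_delayed(&cfs_rq);	/* it is requeued and runs again */
	assert(pelt_runnable(&cfs_rq) == 3);

	printf("runnable signal tracks %u of %u queued tasks\n",
	       pelt_runnable(&cfs_rq), cfs_rq.h_nr_running);
	return 0;
}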
header.b="F3noEzek" Received: by mail-wm1-f46.google.com with SMTP id 5b1f17b1804b1-4349cc45219so41295675e9.3 for ; Mon, 02 Dec 2024 09:46:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1733161576; x=1733766376; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ytt4msqh2bkvqTCxvMO7lfADd+2Glg781DZC05UgcFA=; b=F3noEzek6RTJff6Q37aMEkCkTZUDe1BOl6LnfGmA6/jlSvYgmP37DbmQCnnA+dZcLI bYzNLXi8Slv8C2Sk/yIFNpLPz/nbGdtPuVJnwzkw3VmIIJhHAg6rhyn4lIq3FTR2MH+m oU8f2Z4U5Z38wI44rCEGb+jVShqHRwqpgiPXmwcCY7XOEVArVFfuxfVJ0yhauI1jw2No VuymL6Yxg/+9ZmMxiltRCOKY7u+DC3r/NjexIEaVIdX9uEyWSylHJHr60Qq4dLtLFCZq QYy7fJVMy2eeXsWPDd0jsDVs8bR9I0f2ed7j2Axz3/IfmMqYAN9UgTOkMTyjnk6NHwJN Yg/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1733161576; x=1733766376; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ytt4msqh2bkvqTCxvMO7lfADd+2Glg781DZC05UgcFA=; b=w4XWfgwLyFpgQAfycfXu9j9TGUxjnp9cs/bNsNIvqqVwfhKY2oJozl0Ymy1YZPmL/9 LtTAsa8+ux12ziPm/yt0dEj6EQoWYhrQ0AI8NiRiEf6h3KeMLHX0c9ZEHyXPNSX+DKFz INJqVYdeVzdCG6f9uyfNP42Q+duP9w9iHxSQnjKoyw6GzPE0IztK5vDNaH484JW6ujKm 6ZTd12ffgAvt9jJfpsckrz60zYVslpfI2zD2A5YuMoZW8u+qH+IvINqlSmM0M6oH4+5v 9RCCE4KELsU/fwd+6Ukl3utJLPzGf/PJxepeKY/ny2ibbBU9IY9OSDFZdNArB/AmW3eF Cb4Q== X-Forwarded-Encrypted: i=1; AJvYcCUOhoRNMBqW2+4E5H0LANlriqMAvYOIYl3tnPFvkULw1/yNCy8rvy9khbQ2b/0A+631Jh01sA3VyI2NyGY=@vger.kernel.org X-Gm-Message-State: AOJu0YyPKq/aF2dCP8Pu/mTyQAjtRqpxL51pi5FMgtoV3odFRoIOywbe 0nxo6MT1F3mPHvLjcpEegYtc1luM8Dov/VQn1HbnE67nkgn+BRSLg/tGeLmI9OY= X-Gm-Gg: ASbGnctDYjgdPV3E9s1H51QLNf2PwZPRB40A933Wby/IQcU4Wp56vn54tPV1LYfLGjt NPzQQLjZyBeqf1LSNqMvg3fNDMQGad4Zh/zRxic34HGAwY0MZbFRgNxRUEEhzmCn1giXe8iVrd8 6OYGLpvflF0Sm+YQwoneepobA4LWSkpunveQozFoTvxE4xm9cYLDVJVs5FIrDzT3rgj5FAMn9KD XJP2pNBeFdQZLre7+zatjzd61zdd1qrRvgBLk9Su0+9QOKFslDyGmbuyVI= X-Google-Smtp-Source: AGHT+IEXqyCpupsGZoouFYsp/z5djNL1sx77rLODReZiDdKFzoF0Tmu8FyrJzn3LhEH3rOKWGduvkQ== X-Received: by 2002:a05:600c:5490:b0:42c:c28c:e477 with SMTP id 5b1f17b1804b1-434a9de55ebmr191951735e9.23.1733161576205; Mon, 02 Dec 2024 09:46:16 -0800 (PST) Received: from vingu-cube.. 
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot
Subject: [PATCH 03/11 v3] sched/fair: Rename h_nr_running into h_nr_queued
Date: Mon, 2 Dec 2024 18:45:58 +0100
Message-ID: <20241202174606.4074512-4-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

With the delayed dequeue feature, a sleeping sched_entity remains queued
in the rq until its lag has elapsed, but it can't run. Rename
h_nr_running into h_nr_queued to reflect this new behavior.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/core.c  |  4 +-
 kernel/sched/debug.c |  6 +--
 kernel/sched/fair.c  | 88 ++++++++++++++++++++++----------------------
 kernel/sched/pelt.c  |  4 +-
 kernel/sched/sched.h |  4 +-
 5 files changed, 53 insertions(+), 53 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ed95861e9887..9ff29c59493a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1343,7 +1343,7 @@ bool sched_can_stop_tick(struct rq *rq)
 	if (scx_enabled() && !scx_can_stop_tick(rq))
 		return false;
 
-	if (rq->cfs.h_nr_running > 1)
+	if (rq->cfs.h_nr_queued > 1)
 		return false;
 
 	/*
@@ -6020,7 +6020,7 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	 * opportunity to pull in more work from other CPUs.
 	 */
 	if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
-		   rq->nr_running == rq->cfs.h_nr_running)) {
+		   rq->nr_running == rq->cfs.h_nr_queued)) {
 
 		p = pick_next_task_fair(rq, prev, rf);
 		if (unlikely(p == RETRY_TASK))
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a1be00a988bf..08d6c2b7caa3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -379,7 +379,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
 			return  -EINVAL;
 		}
 
-		if (rq->cfs.h_nr_running) {
+		if (rq->cfs.h_nr_queued) {
 			update_rq_clock(rq);
 			dl_server_stop(&rq->fair_server);
 		}
@@ -392,7 +392,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
 			printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
 					cpu_of(rq));
 
-		if (rq->cfs.h_nr_running)
+		if (rq->cfs.h_nr_queued)
 			dl_server_start(&rq->fair_server);
 	}
 
@@ -844,7 +844,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	spread = right_vruntime - left_vruntime;
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
-	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running);
+	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued);
 	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
 	SEQ_printf(m, "  .%-30s: %d\n", "idle_nr_running",
 			cfs_rq->idle_nr_running);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fc69aab57870..0f6dc4d9b15f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2128,7 +2128,7 @@ static void update_numa_stats(struct task_numa_env *env,
 		ns->load += cpu_load(rq);
 		ns->runnable += cpu_runnable(rq);
 		ns->util += cpu_util_cfs(cpu);
-		ns->nr_running += rq->cfs.h_nr_running;
+		ns->nr_running += rq->cfs.h_nr_queued;
 		ns->compute_capacity += capacity_of(cpu);
 
 		if (find_idle && idle_core < 0 && !rq->nr_running && idle_cpu(cpu)) {
@@ -5394,7 +5394,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
-	 *     h_nr_running of its group cfs_rq.
+	 *     h_nr_queued of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -5532,7 +5532,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
-	 *     h_nr_running of its group cfs_rq.
+	 *     h_nr_queued of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
@@ -5933,8 +5933,8 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta, delayed_delta, dequeue = 1;
-	long rq_h_nr_running = rq->cfs.h_nr_running;
+	long queued_delta, idle_task_delta, delayed_delta, dequeue = 1;
+	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	raw_spin_lock(&cfs_b->lock);
 	/* This will start the period timer if necessary */
@@ -5964,7 +5964,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
 	rcu_read_unlock();
 
-	task_delta = cfs_rq->h_nr_running;
+	queued_delta = cfs_rq->h_nr_queued;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -5986,9 +5986,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 			dequeue_entity(qcfs_rq, se, flags);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running -= task_delta;
+		qcfs_rq->h_nr_queued -= queued_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 
@@ -6009,18 +6009,18 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 		se_update_runnable(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running -= task_delta;
+		qcfs_rq->h_nr_queued -= queued_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 	}
 
 	/* At this point se is NULL and we are at root level*/
-	sub_nr_running(rq, task_delta);
+	sub_nr_running(rq, queued_delta);
 
 	/* Stop the fair server if throttling resulted in no runnable tasks */
-	if (rq_h_nr_running && !rq->cfs.h_nr_running)
+	if (rq_h_nr_queued && !rq->cfs.h_nr_queued)
 		dl_server_stop(&rq->fair_server);
 done:
 	/*
@@ -6039,8 +6039,8 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta, delayed_delta;
-	long rq_h_nr_running = rq->cfs.h_nr_running;
+	long queued_delta, idle_task_delta, delayed_delta;
+	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	se = cfs_rq->tg->se[cpu_of(rq)];
@@ -6073,7 +6073,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		goto unthrottle_throttle;
 	}
 
-	task_delta = cfs_rq->h_nr_running;
+	queued_delta = cfs_rq->h_nr_queued;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -6089,9 +6089,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 			enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running += task_delta;
+		qcfs_rq->h_nr_queued += queued_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -6107,9 +6107,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		se_update_runnable(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running += task_delta;
+		qcfs_rq->h_nr_queued += queued_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -6119,11 +6119,11 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 
 	/* Start the fair server if un-throttling resulted in new runnable tasks */
-	if (!rq_h_nr_running && rq->cfs.h_nr_running)
+	if (!rq_h_nr_queued && rq->cfs.h_nr_queued)
 		dl_server_start(&rq->fair_server);
 
 	/* At this point se is NULL and we are at root level*/
-	add_nr_running(rq, task_delta);
+	add_nr_running(rq, queued_delta);
 
 unthrottle_throttle:
 	assert_list_leaf_cfs_rq(rq);
@@ -6833,7 +6833,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 
 	SCHED_WARN_ON(task_rq(p) != rq);
 
-	if (rq->cfs.h_nr_running > 1) {
+	if (rq->cfs.h_nr_queued > 1) {
 		u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
 		u64 slice = se->slice;
 		s64 delta = slice - ran;
@@ -6976,7 +6976,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	int idle_h_nr_running = task_has_idle_policy(p);
 	int h_nr_delayed = 0;
 	int task_new = !(flags & ENQUEUE_WAKEUP);
-	int rq_h_nr_running = rq->cfs.h_nr_running;
+	int rq_h_nr_queued = rq->cfs.h_nr_queued;
 	u64 slice = 0;
 
 	/*
@@ -7024,7 +7024,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		enqueue_entity(cfs_rq, se, flags);
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running++;
+		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
 
@@ -7048,7 +7048,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running++;
+		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
 
@@ -7060,7 +7060,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 			goto enqueue_throttle;
 	}
 
-	if (!rq_h_nr_running && rq->cfs.h_nr_running) {
+	if (!rq_h_nr_queued && rq->cfs.h_nr_queued) {
 		/* Account for idle runtime */
 		if (!rq->nr_running)
 			dl_server_update_idle_time(rq, rq->curr);
@@ -7107,19 +7107,19 @@ static void set_next_buddy(struct sched_entity *se);
 static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 {
 	bool was_sched_idle = sched_idle_rq(rq);
-	int rq_h_nr_running = rq->cfs.h_nr_running;
+	int rq_h_nr_queued = rq->cfs.h_nr_queued;
 	bool task_sleep = flags & DEQUEUE_SLEEP;
 	bool task_delayed = flags & DEQUEUE_DELAYED;
 	struct task_struct *p = NULL;
 	int idle_h_nr_running = 0;
-	int h_nr_running = 0;
+	int h_nr_queued = 0;
 	int h_nr_delayed = 0;
 	struct cfs_rq *cfs_rq;
 	u64 slice = 0;
 
 	if (entity_is_task(se)) {
 		p = task_of(se);
-		h_nr_running = 1;
+		h_nr_queued = 1;
 		idle_h_nr_running = task_has_idle_policy(p);
 		if (!task_sleep && !task_delayed)
 			h_nr_delayed = !!se->sched_delayed;
@@ -7138,12 +7138,12 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 			break;
 		}
 
-		cfs_rq->h_nr_running -= h_nr_running;
+		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
-			idle_h_nr_running = h_nr_running;
+			idle_h_nr_running = h_nr_queued;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
@@ -7177,21 +7177,21 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running -= h_nr_running;
+		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
-			idle_h_nr_running = h_nr_running;
+			idle_h_nr_running = h_nr_queued;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
 			return 0;
 	}
 
-	sub_nr_running(rq, h_nr_running);
+	sub_nr_running(rq, h_nr_queued);
 
-	if (rq_h_nr_running && !rq->cfs.h_nr_running)
+	if (rq_h_nr_queued && !rq->cfs.h_nr_queued)
 		dl_server_stop(&rq->fair_server);
 
 	/* balance early to pull high priority tasks */
@@ -10319,7 +10319,7 @@ sched_reduced_capacity(struct rq *rq, struct sched_domain *sd)
 	 * When there is more than 1 task, the group_overloaded case already
 	 * takes care of cpu with reduced capacity
 	 */
-	if (rq->cfs.h_nr_running != 1)
+	if (rq->cfs.h_nr_queued != 1)
 		return false;
 
 	return check_cpu_capacity(rq, sd);
@@ -10354,7 +10354,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->group_load += load;
 		sgs->group_util += cpu_util_cfs(i);
 		sgs->group_runnable += cpu_runnable(rq);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_queued;
 
 		nr_running = rq->nr_running;
 		sgs->sum_nr_running += nr_running;
@@ -10669,7 +10669,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 		sgs->group_util += cpu_util_without(i, p);
 		sgs->group_runnable += cpu_runnable_without(rq, p);
 		local = task_running_on_cpu(i, p);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_queued - local;
 
 		nr_running = rq->nr_running - local;
 		sgs->sum_nr_running += nr_running;
@@ -11451,7 +11451,7 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 		if (rt > env->fbq_type)
 			continue;
 
-		nr_running = rq->cfs.h_nr_running;
+		nr_running = rq->cfs.h_nr_queued;
 		if (!nr_running)
 			continue;
 
@@ -11610,7 +11610,7 @@ static int need_active_balance(struct lb_env *env)
 	 * available on dst_cpu.
 	 */
 	if (env->idle &&
-	    (env->src_rq->cfs.h_nr_running == 1)) {
+	    (env->src_rq->cfs.h_nr_queued == 1)) {
 		if ((check_cpu_capacity(env->src_rq, sd)) &&
 		    (capacity_of(env->src_cpu)*sd->imbalance_pct < capacity_of(env->dst_cpu)*100))
 			return 1;
@@ -12353,7 +12353,7 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * If there's a runnable CFS task and the current CPU has reduced
 		 * capacity, kick the ILB to see if there's a better CPU to run on:
 		 */
-		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
+		if (rq->cfs.h_nr_queued >= 1 && check_cpu_capacity(rq, sd)) {
 			flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
 			goto unlock;
 		}
@@ -12851,11 +12851,11 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 	 * have been enqueued in the meantime. Since we're not going idle,
 	 * pretend we pulled a task.
 	 */
-	if (this_rq->cfs.h_nr_running && !pulled_task)
+	if (this_rq->cfs.h_nr_queued && !pulled_task)
 		pulled_task = 1;
 
 	/* Is there a task of a high priority class? */
-	if (this_rq->nr_running != this_rq->cfs.h_nr_running)
+	if (this_rq->nr_running != this_rq->cfs.h_nr_queued)
 		pulled_task = -1;
 
 out:
@@ -13542,7 +13542,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 				parent_cfs_rq->idle_nr_running--;
 		}
 
-		idle_task_delta = grp_cfs_rq->h_nr_running -
+		idle_task_delta = grp_cfs_rq->h_nr_queued -
 				  grp_cfs_rq->idle_h_nr_running;
 		if (!cfs_rq_is_idle(grp_cfs_rq))
 			idle_task_delta *= -1;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index fee75cc2c47b..2bad0b508dfc 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -275,7 +275,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
 *
 * group: [ see update_cfs_group() ]
 *   se_weight()   = tg->weight * grq->load_avg / tg->load_avg
-*   se_runnable() = grq->h_nr_running
+*   se_runnable() = grq->h_nr_queued
 *
 *   runnable_sum = se_runnable() * runnable = grq->runnable_sum
 *   runnable_avg = runnable_sum
@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
 {
 	if (___update_load_sum(now, &cfs_rq->avg,
 				scale_load_down(cfs_rq->load.weight),
-				cfs_rq->h_nr_running - cfs_rq->h_nr_delayed,
+				cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed,
 				cfs_rq->curr != NULL)) {
 
 		___update_load_avg(&cfs_rq->avg, 1);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 99d19c605e4f..b011081aff97 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -646,7 +646,7 @@ struct balance_callback {
 struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_running;
-	unsigned int		h_nr_running;      /* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		idle_nr_running;   /* SCHED_IDLE */
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
 	unsigned int		h_nr_delayed;
@@ -902,7 +902,7 @@ static inline void se_update_runnable(struct sched_entity *se)
 	if (!entity_is_task(se)) {
 		struct cfs_rq *cfs_rq = se->my_q;
 
-		se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
+		se->runnable_weight = cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed;
 	}
 }
 
-- 
2.43.0
From nobody Fri Dec 27 17:12:37 2024
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot
Subject: [PATCH 04/11 v3] sched/fair: Add new cfs_rq.h_nr_runnable
Date: Mon, 2 Dec 2024 18:45:59 +0100
Message-ID: <20241202174606.4074512-5-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

With the delayed dequeue feature, a sleeping sched_entity remains queued
in the rq until its lag has elapsed. As a result, it also stays visible
in the statistics that are used to balance the system, in particular in
the field cfs.h_nr_queued when the sched_entity is associated with a
task.

Create a new h_nr_runnable that tracks only queued and runnable tasks.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/debug.c |  1 +
 kernel/sched/fair.c  | 20 ++++++++++++++++++--
 kernel/sched/sched.h |  1 +
 3 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 08d6c2b7caa3..fd711cc4d44c 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -844,6 +844,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	spread = right_vruntime - left_vruntime;
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
+	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_runnable", cfs_rq->h_nr_runnable);
 	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued);
 	SEQ_printf(m, "  .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
 	SEQ_printf(m, "  .%-30s: %d\n", "idle_nr_running",
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0f6dc4d9b15f..46cf1c72598c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5469,6 +5469,7 @@ static void set_delayed(struct sched_entity *se)
 	for_each_sched_entity(se) {
 		struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
+		cfs_rq->h_nr_runnable--;
 		cfs_rq->h_nr_delayed++;
 		if (cfs_rq_throttled(cfs_rq))
 			break;
@@ -5481,6 +5482,7 @@ static void clear_delayed(struct sched_entity *se)
 	for_each_sched_entity(se) {
 		struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
+		cfs_rq->h_nr_runnable++;
 		cfs_rq->h_nr_delayed--;
 		if (cfs_rq_throttled(cfs_rq))
 			break;
@@ -5933,7 +5935,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long queued_delta, idle_task_delta, delayed_delta, dequeue = 1;
+	long queued_delta, runnable_delta, idle_task_delta, delayed_delta, dequeue = 1;
 	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	raw_spin_lock(&cfs_b->lock);
@@ -5965,6 +5967,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	rcu_read_unlock();
 
 	queued_delta = cfs_rq->h_nr_queued;
+	runnable_delta = cfs_rq->h_nr_runnable;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -5989,6 +5992,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 			idle_task_delta = cfs_rq->h_nr_queued;
 
 		qcfs_rq->h_nr_queued -= queued_delta;
+		qcfs_rq->h_nr_runnable -= runnable_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 
@@ -6012,6 +6016,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 			idle_task_delta = cfs_rq->h_nr_queued;
 
 		qcfs_rq->h_nr_queued -= queued_delta;
+		qcfs_rq->h_nr_runnable -= runnable_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 	}
@@ -6039,7 +6044,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long queued_delta, idle_task_delta, delayed_delta;
+	long queued_delta, runnable_delta, idle_task_delta, delayed_delta;
 	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	se = cfs_rq->tg->se[cpu_of(rq)];
@@ -6074,6 +6079,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 
 	queued_delta = cfs_rq->h_nr_queued;
+	runnable_delta = cfs_rq->h_nr_runnable;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -6092,6 +6098,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 			idle_task_delta = cfs_rq->h_nr_queued;
 
 		qcfs_rq->h_nr_queued += queued_delta;
+		qcfs_rq->h_nr_runnable += runnable_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -6110,6 +6117,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 			idle_task_delta = cfs_rq->h_nr_queued;
 
 		qcfs_rq->h_nr_queued += queued_delta;
+		qcfs_rq->h_nr_runnable += runnable_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -7024,6 +7032,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		enqueue_entity(cfs_rq, se, flags);
 		slice = cfs_rq_min_slice(cfs_rq);
 
+		if (!h_nr_delayed)
+			cfs_rq->h_nr_runnable++;
 		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
@@ -7048,6 +7058,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
+		if (!h_nr_delayed)
+			cfs_rq->h_nr_runnable++;
 		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
@@ -7138,6 +7150,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 			break;
 		}
 
+		if (!h_nr_delayed)
+			cfs_rq->h_nr_runnable -= h_nr_queued;
 		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
@@ -7177,6 +7191,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
+		if (!h_nr_delayed)
+			cfs_rq->h_nr_runnable -= h_nr_queued;
 		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b011081aff97..869d5d3521f2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -647,6 +647,7 @@ struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_running;
 	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_runnable;     /* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		idle_nr_running;   /* SCHED_IDLE */
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
 	unsigned int		h_nr_delayed;
-- 
2.43.0
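Illustrative aside (not part of the series): the new counter is meant to keep
the invariant h_nr_runnable == h_nr_queued - h_nr_delayed at every level of
the hierarchy. The sketch below uses hypothetical names, not kernel code, to
show that bookkeeping for a single cfs_rq.

#include <assert.h>

/* Illustrative per-cfs_rq counters introduced by this patch. */
struct toy_cfs_rq {
	unsigned int h_nr_queued;	/* queued tasks, delayed included */
	unsigned int h_nr_runnable;	/* queued tasks that can actually run */
	unsigned int h_nr_delayed;	/* queued only until their lag elapses */
};

static void check_invariant(const struct toy_cfs_rq *cfs_rq)
{
	assert(cfs_rq->h_nr_runnable ==
	       cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed);
}

/* Enqueue of a runnable task: both queued and runnable go up. */
static void toy_enqueue(struct toy_cfs_rq *cfs_rq)
{
	cfs_rq->h_nr_queued++;
	cfs_rq->h_nr_runnable++;
	check_invariant(cfs_rq);
}

/* Delayed dequeue: the task stays queued but is no longer runnable,
 * mirroring what set_delayed() does at each level. */
static void toy_delay_dequeue(struct toy_cfs_rq *cfs_rq)
{
	cfs_rq->h_nr_runnable--;
	cfs_rq->h_nr_delayed++;
	check_invariant(cfs_rq);
}

int main(void)
{
	struct toy_cfs_rq cfs_rq = { 0 };

	toy_enqueue(&cfs_rq);
	toy_enqueue(&cfs_rq);
	toy_delay_dequeue(&cfs_rq);

	/* 2 queued, 1 delayed, 1 runnable. */
	assert(cfs_rq.h_nr_queued == 2 && cfs_rq.h_nr_runnable == 1);
	return 0;
}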
From nobody Fri Dec 27 17:12:37 2024
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot
Subject: [PATCH 05/11 v3] sched/fair: Use the new cfs_rq.h_nr_runnable
Date: Mon, 2 Dec 2024 18:46:00 +0100
Message-ID: <20241202174606.4074512-6-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

Use the new h_nr_runnable, which tracks only queued and runnable tasks,
in the statistics that are used to balance the system:
 - PELT runnable_avg
 - deciding if a group is overloaded or has spare capacity
 - NUMA stats
 - reduced capacity management
 - load balance
 - nohz kick

Note that rq->nr_running still counts the delayed-dequeue tasks, since
delayed dequeue is a fair-class feature that is meaningless at the core
level.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/fair.c  | 18 +++++++++---------
 kernel/sched/pelt.c  |  4 ++--
 kernel/sched/sched.h |  7 ++-----
 3 files changed, 13 insertions(+), 16 deletions(-)
* - For group_entity, update its weight to reflect the new share of * its group cfs_rq * - Add its new weight to cfs_rq->load.weight @@ -5534,7 +5534,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_en= tity *se, int flags) * When dequeuing a sched_entity, we must: * - Update loads to have both entity and cfs_rq synced with now. * - For group_entity, update its runnable_weight to reflect the new - * h_nr_queued of its group cfs_rq. + * h_nr_runnable of its group cfs_rq. * - Subtract its previous weight from cfs_rq->load.weight. * - For group entity, update its weight to reflect the new share * of its group cfs_rq. @@ -10335,7 +10335,7 @@ sched_reduced_capacity(struct rq *rq, struct sched_= domain *sd) * When there is more than 1 task, the group_overloaded case already * takes care of cpu with reduced capacity */ - if (rq->cfs.h_nr_queued !=3D 1) + if (rq->cfs.h_nr_runnable !=3D 1) return false; =20 return check_cpu_capacity(rq, sd); @@ -10370,7 +10370,7 @@ static inline void update_sg_lb_stats(struct lb_env= *env, sgs->group_load +=3D load; sgs->group_util +=3D cpu_util_cfs(i); sgs->group_runnable +=3D cpu_runnable(rq); - sgs->sum_h_nr_running +=3D rq->cfs.h_nr_queued; + sgs->sum_h_nr_running +=3D rq->cfs.h_nr_runnable; =20 nr_running =3D rq->nr_running; sgs->sum_nr_running +=3D nr_running; @@ -10685,7 +10685,7 @@ static inline void update_sg_wakeup_stats(struct sc= hed_domain *sd, sgs->group_util +=3D cpu_util_without(i, p); sgs->group_runnable +=3D cpu_runnable_without(rq, p); local =3D task_running_on_cpu(i, p); - sgs->sum_h_nr_running +=3D rq->cfs.h_nr_queued - local; + sgs->sum_h_nr_running +=3D rq->cfs.h_nr_runnable - local; =20 nr_running =3D rq->nr_running - local; sgs->sum_nr_running +=3D nr_running; @@ -11467,7 +11467,7 @@ static struct rq *sched_balance_find_src_rq(struct = lb_env *env, if (rt > env->fbq_type) continue; =20 - nr_running =3D rq->cfs.h_nr_queued; + nr_running =3D rq->cfs.h_nr_runnable; if (!nr_running) continue; =20 @@ -11626,7 +11626,7 @@ static int need_active_balance(struct lb_env *env) * available on dst_cpu. 
*/ if (env->idle && - (env->src_rq->cfs.h_nr_queued =3D=3D 1)) { + (env->src_rq->cfs.h_nr_runnable =3D=3D 1)) { if ((check_cpu_capacity(env->src_rq, sd)) && (capacity_of(env->src_cpu)*sd->imbalance_pct < capacity_of(env->dst_= cpu)*100)) return 1; @@ -12369,7 +12369,7 @@ static void nohz_balancer_kick(struct rq *rq) * If there's a runnable CFS task and the current CPU has reduced * capacity, kick the ILB to see if there's a better CPU to run on: */ - if (rq->cfs.h_nr_queued >=3D 1 && check_cpu_capacity(rq, sd)) { + if (rq->cfs.h_nr_runnable >=3D 1 && check_cpu_capacity(rq, sd)) { flags =3D NOHZ_STATS_KICK | NOHZ_BALANCE_KICK; goto unlock; } diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c index 2bad0b508dfc..7a8534a2deff 100644 --- a/kernel/sched/pelt.c +++ b/kernel/sched/pelt.c @@ -275,7 +275,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long = load) * * group: [ see update_cfs_group() ] * se_weight() =3D tg->weight * grq->load_avg / tg->load_avg - * se_runnable() =3D grq->h_nr_queued + * se_runnable() =3D grq->h_nr_runnable * * runnable_sum =3D se_runnable() * runnable =3D grq->runnable_sum * runnable_avg =3D runnable_sum @@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cf= s_rq) { if (___update_load_sum(now, &cfs_rq->avg, scale_load_down(cfs_rq->load.weight), - cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed, + cfs_rq->h_nr_runnable, cfs_rq->curr !=3D NULL)) { =20 ___update_load_avg(&cfs_rq->avg, 1); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 869d5d3521f2..4374c660f5c7 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -900,11 +900,8 @@ struct dl_rq { =20 static inline void se_update_runnable(struct sched_entity *se) { - if (!entity_is_task(se)) { - struct cfs_rq *cfs_rq =3D se->my_q; - - se->runnable_weight =3D cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed; - } + if (!entity_is_task(se)) + se->runnable_weight =3D se->my_q->h_nr_runnable; } =20 static inline long se_runnable(struct sched_entity *se) --=20 2.43.0 From nobody Fri Dec 27 17:12:37 2024 Received: from mail-wm1-f49.google.com (mail-wm1-f49.google.com [209.85.128.49]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A35481DE2A2 for ; Mon, 2 Dec 2024 17:46:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.49 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161584; cv=none; b=UiHsUvopWZAZB8h3sK3+bBLcqjj+TMjCav6RFIbk5Vc/+dasJSO0JbYCMo4GBbjz/uhH5CIVEfzp6sguZFFJthA9In8NR9BvHIf3UG72kXN3/rJh4eaLmDmvcZCZ/0zxyD/VzMTzoYTdjkpMJJAj4S3rok0hF6wWChwHooxxWbg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161584; c=relaxed/simple; bh=D4g5jqrbanGfWVMem76yHEW8lroqNq946rQxKR1Vbdw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hICfnEA4zh1PeIFqx/oLUQThIUkDvbagcCbCeVDG2G6n6CgpO1V0iaCgMvkwtXULEDQkAevFf/iTIr31QBuoAKQw5ylETviWazVTH5iZYAokj6SVYkHycMv/jmC1kZiFwyBixzTxUNmmed1+0sb1dEpkaPQxSp7MCLt6F233CjQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=o8hZ871S; arc=none smtp.client-ip=209.85.128.49 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass 
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot
Subject: [PATCH 06/11 v3] sched/fair: Remove unused cfs_rq.h_nr_delayed
Date: Mon, 2 Dec 2024 18:46:01 +0100
Message-ID: <20241202174606.4074512-7-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

h_nr_delayed is not used anymore. We now have:

- h_nr_runnable, which tracks tasks ready to run
- h_nr_queued, which tracks enqueued tasks, either ready to run or
  delayed dequeue

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/debug.c |  1 -
 kernel/sched/fair.c  | 40 ++++++++++++----------------------------
 kernel/sched/sched.h |  1 -
 3 files changed, 12 insertions(+), 30 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index fd711cc4d44c..56be3651605d 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -846,7 +846,6 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_runnable", cfs_rq->h_nr_runnable);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued);
-	SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
 	SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running",
 			cfs_rq->idle_nr_running);
 	SEQ_printf(m, " .%-30s: %d\n", "idle_h_nr_running",
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3c89aeda73f..8afd1b548e76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5470,7 +5470,6 @@ static void set_delayed(struct sched_entity *se)
 		struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 		cfs_rq->h_nr_runnable--;
-		cfs_rq->h_nr_delayed++;
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 	}
@@ -5483,7 +5482,6 @@ static void clear_delayed(struct sched_entity *se)
 		struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 		cfs_rq->h_nr_runnable++;
-		cfs_rq->h_nr_delayed--;
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 	}
@@ -5935,7 +5933,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long queued_delta, runnable_delta, idle_task_delta, delayed_delta, dequeue = 1;
+	long queued_delta, runnable_delta, idle_task_delta, dequeue = 1;
 	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	raw_spin_lock(&cfs_b->lock);
@@ -5969,7 +5967,6 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	queued_delta = cfs_rq->h_nr_queued;
 	runnable_delta = cfs_rq->h_nr_runnable;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
-	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
 		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
 		int flags;
@@ -5994,7 +5991,6
@@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) qcfs_rq->h_nr_queued -=3D queued_delta; qcfs_rq->h_nr_runnable -=3D runnable_delta; qcfs_rq->idle_h_nr_running -=3D idle_task_delta; - qcfs_rq->h_nr_delayed -=3D delayed_delta; =20 if (qcfs_rq->load.weight) { /* Avoid re-evaluating load for this entity: */ @@ -6018,7 +6014,6 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) qcfs_rq->h_nr_queued -=3D queued_delta; qcfs_rq->h_nr_runnable -=3D runnable_delta; qcfs_rq->idle_h_nr_running -=3D idle_task_delta; - qcfs_rq->h_nr_delayed -=3D delayed_delta; } =20 /* At this point se is NULL and we are at root level*/ @@ -6044,7 +6039,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) struct rq *rq =3D rq_of(cfs_rq); struct cfs_bandwidth *cfs_b =3D tg_cfs_bandwidth(cfs_rq->tg); struct sched_entity *se; - long queued_delta, runnable_delta, idle_task_delta, delayed_delta; + long queued_delta, runnable_delta, idle_task_delta; long rq_h_nr_queued =3D rq->cfs.h_nr_queued; =20 se =3D cfs_rq->tg->se[cpu_of(rq)]; @@ -6081,7 +6076,6 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) queued_delta =3D cfs_rq->h_nr_queued; runnable_delta =3D cfs_rq->h_nr_runnable; idle_task_delta =3D cfs_rq->idle_h_nr_running; - delayed_delta =3D cfs_rq->h_nr_delayed; for_each_sched_entity(se) { struct cfs_rq *qcfs_rq =3D cfs_rq_of(se); =20 @@ -6100,7 +6094,6 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) qcfs_rq->h_nr_queued +=3D queued_delta; qcfs_rq->h_nr_runnable +=3D runnable_delta; qcfs_rq->idle_h_nr_running +=3D idle_task_delta; - qcfs_rq->h_nr_delayed +=3D delayed_delta; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(qcfs_rq)) @@ -6119,7 +6112,6 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) qcfs_rq->h_nr_queued +=3D queued_delta; qcfs_rq->h_nr_runnable +=3D runnable_delta; qcfs_rq->idle_h_nr_running +=3D idle_task_delta; - qcfs_rq->h_nr_delayed +=3D delayed_delta; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(qcfs_rq)) @@ -6982,7 +6974,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *= p, int flags) struct cfs_rq *cfs_rq; struct sched_entity *se =3D &p->se; int idle_h_nr_running =3D task_has_idle_policy(p); - int h_nr_delayed =3D 0; + int h_nr_runnable =3D 1; int task_new =3D !(flags & ENQUEUE_WAKEUP); int rq_h_nr_queued =3D rq->cfs.h_nr_queued; u64 slice =3D 0; @@ -7009,8 +7001,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *= p, int flags) if (p->in_iowait) cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT); =20 - if (task_new) - h_nr_delayed =3D !!se->sched_delayed; + if (task_new && se->sched_delayed) + h_nr_runnable =3D 0; =20 for_each_sched_entity(se) { if (se->on_rq) { @@ -7032,11 +7024,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct = *p, int flags) enqueue_entity(cfs_rq, se, flags); slice =3D cfs_rq_min_slice(cfs_rq); =20 - if (!h_nr_delayed) - cfs_rq->h_nr_runnable++; + cfs_rq->h_nr_runnable +=3D h_nr_runnable; cfs_rq->h_nr_queued++; cfs_rq->idle_h_nr_running +=3D idle_h_nr_running; - cfs_rq->h_nr_delayed +=3D h_nr_delayed; =20 if (cfs_rq_is_idle(cfs_rq)) idle_h_nr_running =3D 1; @@ -7058,11 +7048,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct = *p, int flags) se->slice =3D slice; slice =3D cfs_rq_min_slice(cfs_rq); =20 - if (!h_nr_delayed) - cfs_rq->h_nr_runnable++; + cfs_rq->h_nr_runnable +=3D h_nr_runnable; cfs_rq->h_nr_queued++; cfs_rq->idle_h_nr_running +=3D idle_h_nr_running; - cfs_rq->h_nr_delayed +=3D h_nr_delayed; =20 if (cfs_rq_is_idle(cfs_rq)) idle_h_nr_running =3D 1; @@ -7125,7 +7113,7 @@ 
static int dequeue_entities(struct rq *rq, struct sch= ed_entity *se, int flags) struct task_struct *p =3D NULL; int idle_h_nr_running =3D 0; int h_nr_queued =3D 0; - int h_nr_delayed =3D 0; + int h_nr_runnable =3D 0; struct cfs_rq *cfs_rq; u64 slice =3D 0; =20 @@ -7133,8 +7121,8 @@ static int dequeue_entities(struct rq *rq, struct sch= ed_entity *se, int flags) p =3D task_of(se); h_nr_queued =3D 1; idle_h_nr_running =3D task_has_idle_policy(p); - if (!task_sleep && !task_delayed) - h_nr_delayed =3D !!se->sched_delayed; + if (task_sleep || task_delayed || !se->sched_delayed) + h_nr_runnable =3D 1; } else { cfs_rq =3D group_cfs_rq(se); slice =3D cfs_rq_min_slice(cfs_rq); @@ -7150,11 +7138,9 @@ static int dequeue_entities(struct rq *rq, struct sc= hed_entity *se, int flags) break; } =20 - if (!h_nr_delayed) - cfs_rq->h_nr_runnable -=3D h_nr_queued; + cfs_rq->h_nr_runnable -=3D h_nr_runnable; cfs_rq->h_nr_queued -=3D h_nr_queued; cfs_rq->idle_h_nr_running -=3D idle_h_nr_running; - cfs_rq->h_nr_delayed -=3D h_nr_delayed; =20 if (cfs_rq_is_idle(cfs_rq)) idle_h_nr_running =3D h_nr_queued; @@ -7191,11 +7177,9 @@ static int dequeue_entities(struct rq *rq, struct sc= hed_entity *se, int flags) se->slice =3D slice; slice =3D cfs_rq_min_slice(cfs_rq); =20 - if (!h_nr_delayed) - cfs_rq->h_nr_runnable -=3D h_nr_queued; + cfs_rq->h_nr_runnable -=3D h_nr_runnable; cfs_rq->h_nr_queued -=3D h_nr_queued; cfs_rq->idle_h_nr_running -=3D idle_h_nr_running; - cfs_rq->h_nr_delayed -=3D h_nr_delayed; =20 if (cfs_rq_is_idle(cfs_rq)) idle_h_nr_running =3D h_nr_queued; diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 4374c660f5c7..d3ce5e99b025 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -650,7 +650,6 @@ struct cfs_rq { unsigned int h_nr_runnable; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int idle_nr_running; /* SCHED_IDLE */ unsigned int idle_h_nr_running; /* SCHED_IDLE */ - unsigned int h_nr_delayed; =20 s64 avg_vruntime; u64 avg_load; --=20 2.43.0 From nobody Fri Dec 27 17:12:37 2024 Received: from mail-lf1-f46.google.com (mail-lf1-f46.google.com [209.85.167.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 937331DE2D4 for ; Mon, 2 Dec 2024 17:46:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.167.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161586; cv=none; b=D5zwUhbG3vnvX1/phT5rZb8xTORyGXjod+RwTtm3F1pFXTbnMznIRO12LbCEkoU3bSGurRMd2I+LfzUt42hrYqaUozmatShBDHlX1gWXbHhqTPzbC4dCRWSwjGzQHMJaSRHg3WPYUOQbuvhlRP2BBHNZkgnUG+WQ7LCkTe3p2+s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161586; c=relaxed/simple; bh=DYN6Lk1itAAawvdGEpfPUHXf/lGIrZr0P7wVPRit1kc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=K6vJ5sZD7ADrB10H3l7jqdLwPgUVW7vAlE/UYD1CsTV1os+am/bxI1RTsMnRFRv1cZUZzbv0zsJKYfJLCwgOaqwp2Yy3lssSsIlza8eeITWFkqpIwWCFMf3LA9dVcZHXANuBBnm92jtMsJYl52JFRMYJe7nB0fR9wd0t0Q2J+RE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=aMdEdPiX; arc=none smtp.client-ip=209.85.167.46 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org 
([2a01:e0a:f:6020:f271:ff3b:369e:33b6]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-434aa7d29fbsm193275855e9.29.2024.12.02.09.46.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 02 Dec 2024 09:46:21 -0800 (PST) From: Vincent Guittot To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot Subject: [PATCH 07/11 v3] sched/fair: Rename cfs_rq.idle_h_nr_running into h_nr_idle Date: Mon, 2 Dec 2024 18:46:02 +0100 Message-ID: <20241202174606.4074512-8-vincent.guittot@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org> References: <20241202174606.4074512-1-vincent.guittot@linaro.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use same naming convention as others starting with h_nr_* and rename idle_h_nr_running into h_nr_idle. The "running" is not correct anymore as it includes delayed dequeue tasks as well. Signed-off-by: Vincent Guittot Reviewed-by: Dietmar Eggemann Tested-By: Luis Machado --- kernel/sched/debug.c | 3 +-- kernel/sched/fair.c | 52 ++++++++++++++++++++++---------------------- kernel/sched/sched.h | 2 +- 3 files changed, 28 insertions(+), 29 deletions(-) diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 56be3651605d..e21b66b6ee10 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -848,8 +848,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct c= fs_rq *cfs_rq) SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued); SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running", cfs_rq->idle_nr_running); - SEQ_printf(m, " .%-30s: %d\n", "idle_h_nr_running", - cfs_rq->idle_h_nr_running); + SEQ_printf(m, " .%-30s: %d\n", "h_nr_idle", cfs_rq->h_nr_idle); SEQ_printf(m, " .%-30s: %ld\n", "load", cfs_rq->load.weight); #ifdef CONFIG_SMP SEQ_printf(m, " .%-30s: %lu\n", "load_avg", diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 8afd1b548e76..7234206b9981 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5933,7 +5933,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) struct rq *rq =3D rq_of(cfs_rq); struct cfs_bandwidth *cfs_b =3D tg_cfs_bandwidth(cfs_rq->tg); struct sched_entity *se; - long queued_delta, runnable_delta, idle_task_delta, dequeue =3D 1; + long queued_delta, runnable_delta, idle_delta, dequeue =3D 1; long rq_h_nr_queued =3D rq->cfs.h_nr_queued; =20 raw_spin_lock(&cfs_b->lock); @@ -5966,7 +5966,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) =20 queued_delta =3D cfs_rq->h_nr_queued; runnable_delta =3D cfs_rq->h_nr_runnable; - idle_task_delta =3D cfs_rq->idle_h_nr_running; + idle_delta =3D cfs_rq->h_nr_idle; for_each_sched_entity(se) { struct cfs_rq *qcfs_rq =3D cfs_rq_of(se); int flags; @@ -5986,11 +5986,11 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) dequeue_entity(qcfs_rq, se, flags); =20 if (cfs_rq_is_idle(group_cfs_rq(se))) - idle_task_delta =3D cfs_rq->h_nr_queued; + idle_delta =3D cfs_rq->h_nr_queued; =20 qcfs_rq->h_nr_queued -=3D queued_delta; qcfs_rq->h_nr_runnable -=3D runnable_delta; - qcfs_rq->idle_h_nr_running -=3D idle_task_delta; + qcfs_rq->h_nr_idle -=3D idle_delta; 
=20 if (qcfs_rq->load.weight) { /* Avoid re-evaluating load for this entity: */ @@ -6009,11 +6009,11 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) se_update_runnable(se); =20 if (cfs_rq_is_idle(group_cfs_rq(se))) - idle_task_delta =3D cfs_rq->h_nr_queued; + idle_delta =3D cfs_rq->h_nr_queued; =20 qcfs_rq->h_nr_queued -=3D queued_delta; qcfs_rq->h_nr_runnable -=3D runnable_delta; - qcfs_rq->idle_h_nr_running -=3D idle_task_delta; + qcfs_rq->h_nr_idle -=3D idle_delta; } =20 /* At this point se is NULL and we are at root level*/ @@ -6039,7 +6039,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) struct rq *rq =3D rq_of(cfs_rq); struct cfs_bandwidth *cfs_b =3D tg_cfs_bandwidth(cfs_rq->tg); struct sched_entity *se; - long queued_delta, runnable_delta, idle_task_delta; + long queued_delta, runnable_delta, idle_delta; long rq_h_nr_queued =3D rq->cfs.h_nr_queued; =20 se =3D cfs_rq->tg->se[cpu_of(rq)]; @@ -6075,7 +6075,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) =20 queued_delta =3D cfs_rq->h_nr_queued; runnable_delta =3D cfs_rq->h_nr_runnable; - idle_task_delta =3D cfs_rq->idle_h_nr_running; + idle_delta =3D cfs_rq->h_nr_idle; for_each_sched_entity(se) { struct cfs_rq *qcfs_rq =3D cfs_rq_of(se); =20 @@ -6089,11 +6089,11 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP); =20 if (cfs_rq_is_idle(group_cfs_rq(se))) - idle_task_delta =3D cfs_rq->h_nr_queued; + idle_delta =3D cfs_rq->h_nr_queued; =20 qcfs_rq->h_nr_queued +=3D queued_delta; qcfs_rq->h_nr_runnable +=3D runnable_delta; - qcfs_rq->idle_h_nr_running +=3D idle_task_delta; + qcfs_rq->h_nr_idle +=3D idle_delta; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(qcfs_rq)) @@ -6107,11 +6107,11 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) se_update_runnable(se); =20 if (cfs_rq_is_idle(group_cfs_rq(se))) - idle_task_delta =3D cfs_rq->h_nr_queued; + idle_delta =3D cfs_rq->h_nr_queued; =20 qcfs_rq->h_nr_queued +=3D queued_delta; qcfs_rq->h_nr_runnable +=3D runnable_delta; - qcfs_rq->idle_h_nr_running +=3D idle_task_delta; + qcfs_rq->h_nr_idle +=3D idle_delta; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(qcfs_rq)) @@ -6921,7 +6921,7 @@ static inline void check_update_overutilized_status(s= truct rq *rq) { } /* Runqueue only has SCHED_IDLE tasks enqueued */ static int sched_idle_rq(struct rq *rq) { - return unlikely(rq->nr_running =3D=3D rq->cfs.idle_h_nr_running && + return unlikely(rq->nr_running =3D=3D rq->cfs.h_nr_idle && rq->nr_running); } =20 @@ -6973,7 +6973,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *= p, int flags) { struct cfs_rq *cfs_rq; struct sched_entity *se =3D &p->se; - int idle_h_nr_running =3D task_has_idle_policy(p); + int h_nr_idle =3D task_has_idle_policy(p); int h_nr_runnable =3D 1; int task_new =3D !(flags & ENQUEUE_WAKEUP); int rq_h_nr_queued =3D rq->cfs.h_nr_queued; @@ -7026,10 +7026,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct= *p, int flags) =20 cfs_rq->h_nr_runnable +=3D h_nr_runnable; cfs_rq->h_nr_queued++; - cfs_rq->idle_h_nr_running +=3D idle_h_nr_running; + cfs_rq->h_nr_idle +=3D h_nr_idle; =20 if (cfs_rq_is_idle(cfs_rq)) - idle_h_nr_running =3D 1; + h_nr_idle =3D 1; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(cfs_rq)) @@ -7050,10 +7050,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct= *p, int flags) =20 cfs_rq->h_nr_runnable +=3D h_nr_runnable; cfs_rq->h_nr_queued++; - cfs_rq->idle_h_nr_running +=3D idle_h_nr_running; + 
cfs_rq->h_nr_idle +=3D h_nr_idle; =20 if (cfs_rq_is_idle(cfs_rq)) - idle_h_nr_running =3D 1; + h_nr_idle =3D 1; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(cfs_rq)) @@ -7111,7 +7111,7 @@ static int dequeue_entities(struct rq *rq, struct sch= ed_entity *se, int flags) bool task_sleep =3D flags & DEQUEUE_SLEEP; bool task_delayed =3D flags & DEQUEUE_DELAYED; struct task_struct *p =3D NULL; - int idle_h_nr_running =3D 0; + int h_nr_idle =3D 0; int h_nr_queued =3D 0; int h_nr_runnable =3D 0; struct cfs_rq *cfs_rq; @@ -7120,7 +7120,7 @@ static int dequeue_entities(struct rq *rq, struct sch= ed_entity *se, int flags) if (entity_is_task(se)) { p =3D task_of(se); h_nr_queued =3D 1; - idle_h_nr_running =3D task_has_idle_policy(p); + h_nr_idle =3D task_has_idle_policy(p); if (task_sleep || task_delayed || !se->sched_delayed) h_nr_runnable =3D 1; } else { @@ -7140,10 +7140,10 @@ static int dequeue_entities(struct rq *rq, struct s= ched_entity *se, int flags) =20 cfs_rq->h_nr_runnable -=3D h_nr_runnable; cfs_rq->h_nr_queued -=3D h_nr_queued; - cfs_rq->idle_h_nr_running -=3D idle_h_nr_running; + cfs_rq->h_nr_idle -=3D h_nr_idle; =20 if (cfs_rq_is_idle(cfs_rq)) - idle_h_nr_running =3D h_nr_queued; + h_nr_idle =3D h_nr_queued; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(cfs_rq)) @@ -7179,10 +7179,10 @@ static int dequeue_entities(struct rq *rq, struct s= ched_entity *se, int flags) =20 cfs_rq->h_nr_runnable -=3D h_nr_runnable; cfs_rq->h_nr_queued -=3D h_nr_queued; - cfs_rq->idle_h_nr_running -=3D idle_h_nr_running; + cfs_rq->h_nr_idle -=3D h_nr_idle; =20 if (cfs_rq_is_idle(cfs_rq)) - idle_h_nr_running =3D h_nr_queued; + h_nr_idle =3D h_nr_queued; =20 /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(cfs_rq)) @@ -13543,7 +13543,7 @@ int sched_group_set_idle(struct task_group *tg, lon= g idle) } =20 idle_task_delta =3D grp_cfs_rq->h_nr_queued - - grp_cfs_rq->idle_h_nr_running; + grp_cfs_rq->h_nr_idle; if (!cfs_rq_is_idle(grp_cfs_rq)) idle_task_delta *=3D -1; =20 @@ -13553,7 +13553,7 @@ int sched_group_set_idle(struct task_group *tg, lon= g idle) if (!se->on_rq) break; =20 - cfs_rq->idle_h_nr_running +=3D idle_task_delta; + cfs_rq->h_nr_idle +=3D idle_task_delta; =20 /* Already accounted at parent level and above. 
*/ if (cfs_rq_is_idle(cfs_rq)) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index d3ce5e99b025..afe5cb93db89 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -649,7 +649,7 @@ struct cfs_rq { unsigned int h_nr_queued; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int h_nr_runnable; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int idle_nr_running; /* SCHED_IDLE */ - unsigned int idle_h_nr_running; /* SCHED_IDLE */ + unsigned int h_nr_idle; /* SCHED_IDLE */ =20 s64 avg_vruntime; u64 avg_load; --=20 2.43.0 From nobody Fri Dec 27 17:12:37 2024 Received: from mail-wm1-f52.google.com (mail-wm1-f52.google.com [209.85.128.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EBDB21DE3D5 for ; Mon, 2 Dec 2024 17:46:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.52 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161588; cv=none; b=POMTUPyaJ3FrEZ4RKT84STRPvA+VcrRoz+YU0HvcN01kBRl0JAtWkTxsTxuNDQ+bptQJUzDJswftRellmGOAm8YckgEsHjmpR0oWKHRi6eklGP1dFIh7XNk3dTLCV/UYWJEW+VNerP9aRqlGbQPJOS2RlJFZXNQdbZILFj+SGCo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161588; c=relaxed/simple; bh=T65mAueW4jYy6mZM1Pq158iZdfII223WhEIXJbMYhgE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Al93OzuJW44qR6UvqX6tfzKff/Ugn3Lef9V92ZIWB49k63Uns/Kitupa0xSg4xZzm9gLN1qk3q7srZY+WcE5qF9WG4b5CTm52e6AIq9TuTbESfpjDEFS4f6DtVtm8t9CnRde3EE58bVn7wIrVIVallAOKT2/xFmMAmYpAiXWdj4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=ciFJ5mFO; arc=none smtp.client-ip=209.85.128.52 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="ciFJ5mFO" Received: by mail-wm1-f52.google.com with SMTP id 5b1f17b1804b1-4349fd77b33so37173765e9.2 for ; Mon, 02 Dec 2024 09:46:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1733161584; x=1733766384; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KaiK4vvXS+8NUDhX4vOVsSr5dTIQjigRktyW3HjBCiQ=; b=ciFJ5mFOXYyquHjfW5Xz+aSADr3/83lUkw+d/ubyCuP+SuDTJ3JUymI1Myw9Pwqp6W owxMM9ZmpXZzzF6HAirjClLeBAZQUclwtyqZHzrTlry+BlzuCeYO0vyQu68oxqW2F8AV 5Ru+ptpsPVOLbadnsZT+2BEdUOlkZ4n6GG21aSoZpP9soS283f9fKhW7CFjhWEI+VzTy EXmZoP8GqjXMQnCaGRg5EfUIAkM+ZlJGhQQNfQT9BfdVq8hTXpSPq1wwwfVO10aTcZnN d/bxxcTerVJ8vT1/xnphu2+YmUdfFAskH983LWc3uSsmigvDC0Bk2YFfxOS4MkPnn1qQ /yXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1733161584; x=1733766384; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KaiK4vvXS+8NUDhX4vOVsSr5dTIQjigRktyW3HjBCiQ=; b=Z07NrPfrvoOppv9Yct4OAXayYNTGdDHpV8xXEBmZj2b7Z5gNVFg0a/wC5924T1bC55 RKq7SNWjUaCovBVwIOMIjr4hdgEro80fA2bZURKPIurlJtOvFsbC3o9UEgynPfBGCDZ/ 
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot
Subject: [PATCH 08/11 v3] sched/fair: Remove unused cfs_rq.idle_nr_running
Date: Mon, 2 Dec 2024 18:46:03 +0100
Message-ID: <20241202174606.4074512-9-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The cfs_rq.idle_nr_running field is not used anywhere, so we can remove
the useless associated computation.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-by: Luis Machado
---
 kernel/sched/debug.c |  2 --
 kernel/sched/fair.c  | 14 +-------------
 kernel/sched/sched.h |  1 -
 3 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index e21b66b6ee10..e300ee4d7956 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -846,8 +846,6 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_runnable", cfs_rq->h_nr_runnable);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued);
-	SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running",
-			cfs_rq->idle_nr_running);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_idle", cfs_rq->h_nr_idle);
 	SEQ_printf(m, " .%-30s: %ld\n", "load", cfs_rq->load.weight);
 #ifdef CONFIG_SMP
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7234206b9981..46a6e49b4f1c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3674,8 +3674,6 @@ account_entity_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	}
 #endif
 	cfs_rq->nr_running++;
-	if (se_is_idle(se))
-
cfs_rq->idle_nr_running++; } =20 static void @@ -3689,8 +3687,6 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct = sched_entity *se) } #endif cfs_rq->nr_running--; - if (se_is_idle(se)) - cfs_rq->idle_nr_running--; } =20 /* @@ -13523,7 +13519,7 @@ int sched_group_set_idle(struct task_group *tg, lon= g idle) for_each_possible_cpu(i) { struct rq *rq =3D cpu_rq(i); struct sched_entity *se =3D tg->se[i]; - struct cfs_rq *parent_cfs_rq, *grp_cfs_rq =3D tg->cfs_rq[i]; + struct cfs_rq *grp_cfs_rq =3D tg->cfs_rq[i]; bool was_idle =3D cfs_rq_is_idle(grp_cfs_rq); long idle_task_delta; struct rq_flags rf; @@ -13534,14 +13530,6 @@ int sched_group_set_idle(struct task_group *tg, lo= ng idle) if (WARN_ON_ONCE(was_idle =3D=3D cfs_rq_is_idle(grp_cfs_rq))) goto next_cpu; =20 - if (se->on_rq) { - parent_cfs_rq =3D cfs_rq_of(se); - if (cfs_rq_is_idle(grp_cfs_rq)) - parent_cfs_rq->idle_nr_running++; - else - parent_cfs_rq->idle_nr_running--; - } - idle_task_delta =3D grp_cfs_rq->h_nr_queued - grp_cfs_rq->h_nr_idle; if (!cfs_rq_is_idle(grp_cfs_rq)) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index afe5cb93db89..9a9220aad9fc 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -648,7 +648,6 @@ struct cfs_rq { unsigned int nr_running; unsigned int h_nr_queued; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int h_nr_runnable; /* SCHED_{NORMAL,BATCH,IDLE} */ - unsigned int idle_nr_running; /* SCHED_IDLE */ unsigned int h_nr_idle; /* SCHED_IDLE */ =20 s64 avg_vruntime; --=20 2.43.0 From nobody Fri Dec 27 17:12:37 2024 Received: from mail-wm1-f47.google.com (mail-wm1-f47.google.com [209.85.128.47]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 553331DE4EB for ; Mon, 2 Dec 2024 17:46:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.47 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161591; cv=none; b=l5HhSKpxo6n8J7yUwxQbHR82rAysB9AfjDPGdcI0DAq2HM1QZbFHftClfwFYj/MOpui3xvFAoEqSslMffmM7KorzA0jiXUe8xooEo0wVzrVCizE7NKWPICJGh3o7Q4wfTohB3TbpOnmp3UHUiwjLjt/8FdN2l3Drcz4vpmh/ST8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161591; c=relaxed/simple; bh=XyfUa9YUfSVbElesWZeKwyIgT+WmILg+rHDvUh0HSrE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=HCWKABbVLKnJCouQ2y4jLqiJG4GV12mFLBvKkXOg94s5qxUDeQWifCmLeH1lBuJdZpUZoK3BXXAZxeveBCWyRz2AdxxA6OxD0CxHx1OzzxT3swOS8FpJM8FfMjsl5aMy5smr+k+kOtOFuZzGLKToH60cWmXEJC1Ipo9KEHIzPPY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=xoNWz0mx; arc=none smtp.client-ip=209.85.128.47 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="xoNWz0mx" Received: by mail-wm1-f47.google.com with SMTP id 5b1f17b1804b1-434a742481aso37161235e9.3 for ; Mon, 02 Dec 2024 09:46:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1733161585; x=1733766385; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot
Subject: [PATCH 09/11 v3] sched/fair: Rename cfs_rq.nr_running into nr_queued
Date: Mon, 2 Dec 2024 18:46:04 +0100
Message-ID: <20241202174606.4074512-10-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Rename cfs_rq.nr_running into cfs_rq.nr_queued, which better reflects
reality: the value includes both the ready-to-run tasks and the delayed
dequeue tasks.
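
For reference, here is a simplified sketch of how the per-cfs_rq counters
relate after this series (illustrative only: the struct name below is made
up for this summary, and the real struct cfs_rq carries many more fields):

	/* Illustrative sketch, not the actual struct cfs_rq definition. */
	struct cfs_rq_counters {
		unsigned int nr_queued;     /* sched_entities enqueued at this
					     * level, including delayed-dequeue
					     * entities */
		unsigned int h_nr_queued;   /* tasks queued in the whole
					     * hierarchy, ready to run or
					     * delayed dequeue */
		unsigned int h_nr_runnable; /* tasks in the hierarchy that are
					     * actually ready to run */
		unsigned int h_nr_idle;     /* SCHED_IDLE tasks in the
					     * hierarchy */
	};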
Signed-off-by: Vincent Guittot Reviewed-by: Dietmar Eggemann Tested-By: Luis Machado --- kernel/sched/debug.c | 2 +- kernel/sched/fair.c | 38 +++++++++++++++++++------------------- kernel/sched/sched.h | 4 ++-- 3 files changed, 22 insertions(+), 22 deletions(-) diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index e300ee4d7956..5e8e84a2bcb1 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -843,7 +843,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct c= fs_rq *cfs_rq) SPLIT_NS(right_vruntime)); spread =3D right_vruntime - left_vruntime; SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread)); - SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running); + SEQ_printf(m, " .%-30s: %d\n", "nr_queued", cfs_rq->nr_queued); SEQ_printf(m, " .%-30s: %d\n", "h_nr_runnable", cfs_rq->h_nr_runnable); SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued); SEQ_printf(m, " .%-30s: %d\n", "h_nr_idle", cfs_rq->h_nr_idle); diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 46a6e49b4f1c..95c51604e1ba 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -915,7 +915,7 @@ static struct sched_entity *pick_eevdf(struct cfs_rq *c= fs_rq) * We can safely skip eligibility check if there is only one entity * in this cfs_rq, saving some cycles. */ - if (cfs_rq->nr_running =3D=3D 1) + if (cfs_rq->nr_queued =3D=3D 1) return curr && curr->on_rq ? curr : se; =20 if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr))) @@ -1247,7 +1247,7 @@ static void update_curr(struct cfs_rq *cfs_rq) =20 account_cfs_rq_runtime(cfs_rq, delta_exec); =20 - if (cfs_rq->nr_running =3D=3D 1) + if (cfs_rq->nr_queued =3D=3D 1) return; =20 if (resched || did_preempt_short(cfs_rq, curr)) { @@ -3673,7 +3673,7 @@ account_entity_enqueue(struct cfs_rq *cfs_rq, struct = sched_entity *se) list_add(&se->group_node, &rq->cfs_tasks); } #endif - cfs_rq->nr_running++; + cfs_rq->nr_queued++; } =20 static void @@ -3686,7 +3686,7 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct = sched_entity *se) list_del_init(&se->group_node); } #endif - cfs_rq->nr_running--; + cfs_rq->nr_queued--; } =20 /* @@ -5220,7 +5220,7 @@ static inline void update_misfit_status(struct task_s= truct *p, struct rq *rq) =20 static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq) { - return !cfs_rq->nr_running; + return !cfs_rq->nr_queued; } =20 #define UPDATE_TG 0x0 @@ -5276,7 +5276,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_enti= ty *se, int flags) * * EEVDF: placement strategy #1 / #2 */ - if (sched_feat(PLACE_LAG) && cfs_rq->nr_running && se->vlag) { + if (sched_feat(PLACE_LAG) && cfs_rq->nr_queued && se->vlag) { struct sched_entity *curr =3D cfs_rq->curr; unsigned long load; =20 @@ -5423,7 +5423,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_en= tity *se, int flags) __enqueue_entity(cfs_rq, se); se->on_rq =3D 1; =20 - if (cfs_rq->nr_running =3D=3D 1) { + if (cfs_rq->nr_queued =3D=3D 1) { check_enqueue_throttle(cfs_rq); if (!throttled_hierarchy(cfs_rq)) { list_add_leaf_cfs_rq(cfs_rq); @@ -5568,7 +5568,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_en= tity *se, int flags) if (flags & DEQUEUE_DELAYED) finish_delayed_dequeue_entity(se); =20 - if (cfs_rq->nr_running =3D=3D 0) + if (cfs_rq->nr_queued =3D=3D 0) update_idle_cfs_rq_clock_pelt(cfs_rq); =20 return true; @@ -5916,7 +5916,7 @@ static int tg_throttle_down(struct task_group *tg, vo= id *data) list_del_leaf_cfs_rq(cfs_rq); =20 SCHED_WARN_ON(cfs_rq->throttled_clock_self); - if (cfs_rq->nr_running) + if 
(cfs_rq->nr_queued) cfs_rq->throttled_clock_self =3D rq_clock(rq); } cfs_rq->throttle_count++; @@ -6025,7 +6025,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) */ cfs_rq->throttled =3D 1; SCHED_WARN_ON(cfs_rq->throttled_clock); - if (cfs_rq->nr_running) + if (cfs_rq->nr_queued) cfs_rq->throttled_clock =3D rq_clock(rq); return true; } @@ -6125,7 +6125,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) assert_list_leaf_cfs_rq(rq); =20 /* Determine whether we need to wake up potentially idle CPU: */ - if (rq->curr =3D=3D rq->idle && rq->cfs.nr_running) + if (rq->curr =3D=3D rq->idle && rq->cfs.nr_queued) resched_curr(rq); } =20 @@ -6426,7 +6426,7 @@ static __always_inline void return_cfs_rq_runtime(str= uct cfs_rq *cfs_rq) if (!cfs_bandwidth_used()) return; =20 - if (!cfs_rq->runtime_enabled || cfs_rq->nr_running) + if (!cfs_rq->runtime_enabled || cfs_rq->nr_queued) return; =20 __return_cfs_rq_runtime(cfs_rq); @@ -6944,14 +6944,14 @@ requeue_delayed_entity(struct sched_entity *se) if (sched_feat(DELAY_ZERO)) { update_entity_lag(cfs_rq, se); if (se->vlag > 0) { - cfs_rq->nr_running--; + cfs_rq->nr_queued--; if (se !=3D cfs_rq->curr) __dequeue_entity(cfs_rq, se); se->vlag =3D 0; place_entity(cfs_rq, se, 0); if (se !=3D cfs_rq->curr) __enqueue_entity(cfs_rq, se); - cfs_rq->nr_running++; + cfs_rq->nr_queued++; } } =20 @@ -8876,7 +8876,7 @@ static struct task_struct *pick_task_fair(struct rq *= rq) =20 again: cfs_rq =3D &rq->cfs; - if (!cfs_rq->nr_running) + if (!cfs_rq->nr_queued) return NULL; =20 do { @@ -8993,7 +8993,7 @@ static struct task_struct *__pick_next_task_fair(stru= ct rq *rq, struct task_stru =20 static bool fair_server_has_tasks(struct sched_dl_entity *dl_se) { - return !!dl_se->rq->cfs.nr_running; + return !!dl_se->rq->cfs.nr_queued; } =20 static struct task_struct *fair_server_pick_task(struct sched_dl_entity *d= l_se) @@ -9783,7 +9783,7 @@ static bool __update_blocked_fair(struct rq *rq, bool= *done) if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) { update_tg_load_avg(cfs_rq); =20 - if (cfs_rq->nr_running =3D=3D 0) + if (cfs_rq->nr_queued =3D=3D 0) update_idle_cfs_rq_clock_pelt(cfs_rq); =20 if (cfs_rq =3D=3D &rq->cfs) @@ -12965,7 +12965,7 @@ static inline void task_tick_core(struct rq *rq, st= ruct task_struct *curr) * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check * if we need to give up the CPU. 
*/ - if (rq->core->core_forceidle_count && rq->cfs.nr_running =3D=3D 1 && + if (rq->core->core_forceidle_count && rq->cfs.nr_queued =3D=3D 1 && __entity_slice_used(&curr->se, MIN_NR_TASKS_DURING_FORCEIDLE)) resched_curr(rq); } @@ -13109,7 +13109,7 @@ prio_changed_fair(struct rq *rq, struct task_struct= *p, int oldprio) if (!task_on_rq_queued(p)) return; =20 - if (rq->cfs.nr_running =3D=3D 1) + if (rq->cfs.nr_queued =3D=3D 1) return; =20 /* diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 9a9220aad9fc..aef716c41edb 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -645,7 +645,7 @@ struct balance_callback { /* CFS-related fields in a runqueue */ struct cfs_rq { struct load_weight load; - unsigned int nr_running; + unsigned int nr_queued; unsigned int h_nr_queued; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int h_nr_runnable; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int h_nr_idle; /* SCHED_IDLE */ @@ -2565,7 +2565,7 @@ static inline bool sched_rt_runnable(struct rq *rq) =20 static inline bool sched_fair_runnable(struct rq *rq) { - return rq->cfs.nr_running > 0; + return rq->cfs.nr_queued > 0; } =20 extern struct task_struct *pick_next_task_fair(struct rq *rq, struct task_= struct *prev, struct rq_flags *rf); --=20 2.43.0 From nobody Fri Dec 27 17:12:37 2024 Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com [209.85.128.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DA02F1DE4FD for ; Mon, 2 Dec 2024 17:46:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.41 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161591; cv=none; b=UWZH6Im1jGp4QoJzPG3hOTEEpItmnc7TJrGp5Uyvp6arW//91hKE3XknX1DYo671qzzd+Z1+0TAg8wxqUHByp3mAMTkwZvCWQVfnve/psg1bU7tm1rauHCsxi/qyy5THFs/1dw6wZoL6ycf+2WEL1DMeD+vf8hJvLJExLrS9t1o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733161591; c=relaxed/simple; bh=m/hgL1xLhV8k+CAYnNCYojygX/yF8uKSzeNw5fq0AzQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=V33RNxi66VUw52Umazmphp51cRIoWL2NH/SEWEYdXC71wAS4hqmL94NH8MkwMM/nWuuQVonrwhVZugcGps8tZruwRkHv/Fh2rsq9QuYLH9d0rCB3Fitpah7S6VdvgohSiKQ2lsSGSKJSn+tjYSeOX/k2lZ7uuRlvyH9ngQ3W/RQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=EmM+QgYH; arc=none smtp.client-ip=209.85.128.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="EmM+QgYH" Received: by mail-wm1-f41.google.com with SMTP id 5b1f17b1804b1-4349e1467fbso39803745e9.1 for ; Mon, 02 Dec 2024 09:46:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1733161587; x=1733766387; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Py6vIgzd70aBYUcFacLlH6wRilMj5qKT5n2w86MfaBw=; b=EmM+QgYHZDcyFOtskxuIJTrCWn+I+5EkxeEw2c8fy3mhsEXUmKDCgmGwJTDjzZT6nL fBhZkrjndj9ePS5hNyWpmSDTlYU54CnvWsgPGIOSadzN9f1WaztnV8G/PZMyw/G/S4Ll 
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot
Subject: [PATCH 10/11 v3] sched/fair: Do not try to migrate delayed dequeue task
Date: Mon, 2 Dec 2024 18:46:05 +0100
Message-ID: <20241202174606.4074512-11-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Migrating a delayed-dequeue task doesn't help in balancing the number
of runnable tasks in the system.
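
As an illustration of the intent, the filter added below in
can_migrate_task() amounts to something like the following helper (sketch
only: the helper name is made up here, and it relies on the kernel-internal
lb_env/migration_type definitions already used in fair.c):

	/*
	 * Sketch: a delayed-dequeue task still carries load but is not
	 * runnable, so skip it unless the balancer is explicitly trying
	 * to move load rather than runnable tasks.
	 */
	static inline bool lb_should_consider(struct task_struct *p,
					      struct lb_env *env)
	{
		if (p->se.sched_delayed && env->migration_type != migrate_load)
			return false;

		return true;
	}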
Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-By: Luis Machado
---
 kernel/sched/fair.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95c51604e1ba..555a9eba5486 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9394,11 +9394,15 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 
 	/*
 	 * We do not migrate tasks that are:
-	 * 1) throttled_lb_pair, or
-	 * 2) cannot be migrated to this CPU due to cpus_ptr, or
-	 * 3) running (obviously), or
-	 * 4) are cache-hot on their current CPU.
+	 * 1) delayed dequeued unless we migrate load, or
+	 * 2) throttled_lb_pair, or
+	 * 3) cannot be migrated to this CPU due to cpus_ptr, or
+	 * 4) running (obviously), or
+	 * 5) are cache-hot on their current CPU.
 	 */
+	if ((p->se.sched_delayed) && (env->migration_type != migrate_load))
+		return 0;
+
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
 
-- 
2.43.0

From nobody Fri Dec 27 17:12:37 2024
From: Vincent Guittot
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de, luis.machado@arm.com, tj@kernel.org, void@manifault.com, Vincent Guittot
Subject: [PATCH 11/11 v3] sched/fair: Fix variable declaration position
Date: Mon, 2 Dec 2024 18:46:06 +0100
Message-ID: <20241202174606.4074512-12-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Move the variable declaration to the beginning of the function.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Tested-By: Luis Machado
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 555a9eba5486..fa2edb59d009 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5494,6 +5494,7 @@ static bool
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
 	bool sleep = flags & DEQUEUE_SLEEP;
+	int action = UPDATE_TG;
 
 	update_curr(cfs_rq);
 
@@ -5520,7 +5521,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		}
 	}
 
-	int action = UPDATE_TG;
 	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
 		action |= DO_DETACH;
 
@@ -5630,6 +5630,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags);
 static struct sched_entity *
 pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
 {
+	struct sched_entity *se;
 	/*
 	 * Enabling NEXT_BUDDY will affect latency but not fairness.
 	 */
@@ -5640,7 +5641,7 @@ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
 		return cfs_rq->next;
 	}
 
-	struct sched_entity *se = pick_eevdf(cfs_rq);
+	se = pick_eevdf(cfs_rq);
 	if (se->sched_delayed) {
 		dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
 		/*
-- 
2.43.0
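(Illustration only, not part of the patch: the style rule being applied,
shown on a generic standalone function rather than kernel code. Declarations
go at the top of the block; the assignment stays where the old mid-function
declaration used to be.)

/* Generic example of the declaration-at-top style; names are made up. */
#include <stdio.h>

static int pick_value(int use_fallback)
{
	int value;		/* declared at the top of the function ... */

	if (use_fallback)
		return -1;

	value = 42;		/* ... and assigned where it was previously declared */
	return value;
}

int main(void)
{
	printf("%d %d\n", pick_value(0), pick_value(1));
	return 0;
}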