From: Vincent Guittot <vincent.guittot@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org
Cc: kprateek.nayak@amd.com, pauld@redhat.com, efault@gmx.de,
	luis.machado@arm.com, tj@kernel.org, void@manifault.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 03/11 v3] sched/fair: Rename h_nr_running into h_nr_queued
Date: Mon, 2 Dec 2024 18:45:58 +0100
Message-ID: <20241202174606.4074512-4-vincent.guittot@linaro.org>
In-Reply-To: <20241202174606.4074512-1-vincent.guittot@linaro.org>
References: <20241202174606.4074512-1-vincent.guittot@linaro.org>

With the delayed dequeue feature, a sleeping sched_entity remains queued
in the rq until its lag has elapsed, but it can't run. Rename
h_nr_running into h_nr_queued to reflect this new behavior.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/core.c  |  4 +-
 kernel/sched/debug.c |  6 +--
 kernel/sched/fair.c  | 88 ++++++++++++++++++++++----------------------
 kernel/sched/pelt.c  |  4 +-
 kernel/sched/sched.h |  4 +-
 5 files changed, 53 insertions(+), 53 deletions(-)
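[ Illustrative note, not part of the applied patch: a minimal sketch of
  the intended semantics after the rename, assuming the cfs_rq counters
  used by this series (h_nr_queued and h_nr_delayed). h_nr_queued counts
  every hierarchically queued entity, including delayed-dequeue ones
  that cannot run; the strictly runnable count is obtained by
  subtracting h_nr_delayed, as the pelt.c and se_update_runnable()
  hunks below do. The helper name is hypothetical. ]

static inline unsigned int cfs_h_nr_runnable(const struct cfs_rq *cfs_rq)
{
	/* queued entities minus delayed-dequeue ones = entities that can run */
	return cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed;
}
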
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ed95861e9887..9ff29c59493a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1343,7 +1343,7 @@ bool sched_can_stop_tick(struct rq *rq)
 	if (scx_enabled() && !scx_can_stop_tick(rq))
 		return false;
 
-	if (rq->cfs.h_nr_running > 1)
+	if (rq->cfs.h_nr_queued > 1)
 		return false;
 
 	/*
@@ -6020,7 +6020,7 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	 * opportunity to pull in more work from other CPUs.
 	 */
 	if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
-		   rq->nr_running == rq->cfs.h_nr_running)) {
+		   rq->nr_running == rq->cfs.h_nr_queued)) {
 
 		p = pick_next_task_fair(rq, prev, rf);
 		if (unlikely(p == RETRY_TASK))
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a1be00a988bf..08d6c2b7caa3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -379,7 +379,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
 		return -EINVAL;
 	}
 
-	if (rq->cfs.h_nr_running) {
+	if (rq->cfs.h_nr_queued) {
 		update_rq_clock(rq);
 		dl_server_stop(&rq->fair_server);
 	}
@@ -392,7 +392,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
 		printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
 				cpu_of(rq));
 
-		if (rq->cfs.h_nr_running)
+		if (rq->cfs.h_nr_queued)
 			dl_server_start(&rq->fair_server);
 	}
 
@@ -844,7 +844,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	spread = right_vruntime - left_vruntime;
 	SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
 	SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
-	SEQ_printf(m, " .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running);
+	SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued);
 	SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
 	SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running", cfs_rq->idle_nr_running);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fc69aab57870..0f6dc4d9b15f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2128,7 +2128,7 @@ static void update_numa_stats(struct task_numa_env *env,
 		ns->load += cpu_load(rq);
 		ns->runnable += cpu_runnable(rq);
 		ns->util += cpu_util_cfs(cpu);
-		ns->nr_running += rq->cfs.h_nr_running;
+		ns->nr_running += rq->cfs.h_nr_queued;
 		ns->compute_capacity += capacity_of(cpu);
 
 		if (find_idle && idle_core < 0 && !rq->nr_running && idle_cpu(cpu)) {
@@ -5394,7 +5394,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
-	 *     h_nr_running of its group cfs_rq.
+	 *     h_nr_queued of its group cfs_rq.
 	 *   - For group_entity, update its weight to reflect the new share of
 	 *     its group cfs_rq
 	 *   - Add its new weight to cfs_rq->load.weight
@@ -5532,7 +5532,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * When dequeuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
-	 *     h_nr_running of its group cfs_rq.
+	 *     h_nr_queued of its group cfs_rq.
 	 *   - Subtract its previous weight from cfs_rq->load.weight.
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
@@ -5933,8 +5933,8 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta, delayed_delta, dequeue = 1;
-	long rq_h_nr_running = rq->cfs.h_nr_running;
+	long queued_delta, idle_task_delta, delayed_delta, dequeue = 1;
+	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	raw_spin_lock(&cfs_b->lock);
 	/* This will start the period timer if necessary */
@@ -5964,7 +5964,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
 	rcu_read_unlock();
 
-	task_delta = cfs_rq->h_nr_running;
+	queued_delta = cfs_rq->h_nr_queued;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -5986,9 +5986,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 		dequeue_entity(qcfs_rq, se, flags);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running -= task_delta;
+		qcfs_rq->h_nr_queued -= queued_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 
@@ -6009,18 +6009,18 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 		se_update_runnable(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running -= task_delta;
+		qcfs_rq->h_nr_queued -= queued_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 		qcfs_rq->h_nr_delayed -= delayed_delta;
 	}
 
 	/* At this point se is NULL and we are at root level*/
-	sub_nr_running(rq, task_delta);
+	sub_nr_running(rq, queued_delta);
 
 	/* Stop the fair server if throttling resulted in no runnable tasks */
-	if (rq_h_nr_running && !rq->cfs.h_nr_running)
+	if (rq_h_nr_queued && !rq->cfs.h_nr_queued)
 		dl_server_stop(&rq->fair_server);
 done:
 	/*
@@ -6039,8 +6039,8 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	long task_delta, idle_task_delta, delayed_delta;
-	long rq_h_nr_running = rq->cfs.h_nr_running;
+	long queued_delta, idle_task_delta, delayed_delta;
+	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
 	se = cfs_rq->tg->se[cpu_of(rq)];
 
@@ -6073,7 +6073,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		goto unthrottle_throttle;
 	}
 
-	task_delta = cfs_rq->h_nr_running;
+	queued_delta = cfs_rq->h_nr_queued;
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	delayed_delta = cfs_rq->h_nr_delayed;
 	for_each_sched_entity(se) {
@@ -6089,9 +6089,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running += task_delta;
+		qcfs_rq->h_nr_queued += queued_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -6107,9 +6107,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		se_update_runnable(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
-			idle_task_delta = cfs_rq->h_nr_running;
+			idle_task_delta = cfs_rq->h_nr_queued;
 
-		qcfs_rq->h_nr_running += task_delta;
+		qcfs_rq->h_nr_queued += queued_delta;
 		qcfs_rq->idle_h_nr_running += idle_task_delta;
 		qcfs_rq->h_nr_delayed += delayed_delta;
 
@@ -6119,11 +6119,11 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 
 	/* Start the fair server if un-throttling resulted in new runnable tasks */
-	if (!rq_h_nr_running && rq->cfs.h_nr_running)
+	if (!rq_h_nr_queued && rq->cfs.h_nr_queued)
 		dl_server_start(&rq->fair_server);
 
 	/* At this point se is NULL and we are at root level*/
-	add_nr_running(rq, task_delta);
+	add_nr_running(rq, queued_delta);
 
 unthrottle_throttle:
 	assert_list_leaf_cfs_rq(rq);
@@ -6833,7 +6833,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 
 	SCHED_WARN_ON(task_rq(p) != rq);
 
-	if (rq->cfs.h_nr_running > 1) {
+	if (rq->cfs.h_nr_queued > 1) {
 		u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
 		u64 slice = se->slice;
 		s64 delta = slice - ran;
@@ -6976,7 +6976,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	int idle_h_nr_running = task_has_idle_policy(p);
 	int h_nr_delayed = 0;
 	int task_new = !(flags & ENQUEUE_WAKEUP);
-	int rq_h_nr_running = rq->cfs.h_nr_running;
+	int rq_h_nr_queued = rq->cfs.h_nr_queued;
 	u64 slice = 0;
 
 	/*
@@ -7024,7 +7024,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		enqueue_entity(cfs_rq, se, flags);
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running++;
+		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
 
@@ -7048,7 +7048,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running++;
+		cfs_rq->h_nr_queued++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 		cfs_rq->h_nr_delayed += h_nr_delayed;
 
@@ -7060,7 +7060,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 			goto enqueue_throttle;
 	}
 
-	if (!rq_h_nr_running && rq->cfs.h_nr_running) {
+	if (!rq_h_nr_queued && rq->cfs.h_nr_queued) {
 		/* Account for idle runtime */
 		if (!rq->nr_running)
 			dl_server_update_idle_time(rq, rq->curr);
@@ -7107,19 +7107,19 @@ static void set_next_buddy(struct sched_entity *se);
 static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 {
 	bool was_sched_idle = sched_idle_rq(rq);
-	int rq_h_nr_running = rq->cfs.h_nr_running;
+	int rq_h_nr_queued = rq->cfs.h_nr_queued;
 	bool task_sleep = flags & DEQUEUE_SLEEP;
 	bool task_delayed = flags & DEQUEUE_DELAYED;
 	struct task_struct *p = NULL;
 	int idle_h_nr_running = 0;
-	int h_nr_running = 0;
+	int h_nr_queued = 0;
 	int h_nr_delayed = 0;
 	struct cfs_rq *cfs_rq;
 	u64 slice = 0;
 
 	if (entity_is_task(se)) {
 		p = task_of(se);
-		h_nr_running = 1;
+		h_nr_queued = 1;
 		idle_h_nr_running = task_has_idle_policy(p);
 		if (!task_sleep && !task_delayed)
 			h_nr_delayed = !!se->sched_delayed;
@@ -7138,12 +7138,12 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 			break;
 		}
 
-		cfs_rq->h_nr_running -= h_nr_running;
+		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
-			idle_h_nr_running = h_nr_running;
+			idle_h_nr_running = h_nr_queued;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
@@ -7177,21 +7177,21 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		se->slice = slice;
 		slice = cfs_rq_min_slice(cfs_rq);
 
-		cfs_rq->h_nr_running -= h_nr_running;
+		cfs_rq->h_nr_queued -= h_nr_queued;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 		cfs_rq->h_nr_delayed -= h_nr_delayed;
 
 		if (cfs_rq_is_idle(cfs_rq))
-			idle_h_nr_running = h_nr_running;
+			idle_h_nr_running = h_nr_queued;
 
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
 			return 0;
 	}
 
-	sub_nr_running(rq, h_nr_running);
+	sub_nr_running(rq, h_nr_queued);
 
-	if (rq_h_nr_running && !rq->cfs.h_nr_running)
+	if (rq_h_nr_queued && !rq->cfs.h_nr_queued)
 		dl_server_stop(&rq->fair_server);
 
 	/* balance early to pull high priority tasks */
@@ -10319,7 +10319,7 @@ sched_reduced_capacity(struct rq *rq, struct sched_domain *sd)
 	 * When there is more than 1 task, the group_overloaded case already
 	 * takes care of cpu with reduced capacity
 	 */
-	if (rq->cfs.h_nr_running != 1)
+	if (rq->cfs.h_nr_queued != 1)
 		return false;
 
 	return check_cpu_capacity(rq, sd);
@@ -10354,7 +10354,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->group_load += load;
 		sgs->group_util += cpu_util_cfs(i);
 		sgs->group_runnable += cpu_runnable(rq);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_queued;
 
 		nr_running = rq->nr_running;
 		sgs->sum_nr_running += nr_running;
@@ -10669,7 +10669,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 		sgs->group_util += cpu_util_without(i, p);
 		sgs->group_runnable += cpu_runnable_without(rq, p);
 		local = task_running_on_cpu(i, p);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_queued - local;
 
 		nr_running = rq->nr_running - local;
 		sgs->sum_nr_running += nr_running;
@@ -11451,7 +11451,7 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 		if (rt > env->fbq_type)
 			continue;
 
-		nr_running = rq->cfs.h_nr_running;
+		nr_running = rq->cfs.h_nr_queued;
 		if (!nr_running)
 			continue;
 
@@ -11610,7 +11610,7 @@ static int need_active_balance(struct lb_env *env)
 	 * available on dst_cpu.
 	 */
 	if (env->idle &&
-	    (env->src_rq->cfs.h_nr_running == 1)) {
+	    (env->src_rq->cfs.h_nr_queued == 1)) {
 		if ((check_cpu_capacity(env->src_rq, sd)) &&
 		    (capacity_of(env->src_cpu)*sd->imbalance_pct < capacity_of(env->dst_cpu)*100))
 			return 1;
@@ -12353,7 +12353,7 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * If there's a runnable CFS task and the current CPU has reduced
 		 * capacity, kick the ILB to see if there's a better CPU to run on:
 		 */
-		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
+		if (rq->cfs.h_nr_queued >= 1 && check_cpu_capacity(rq, sd)) {
 			flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
 			goto unlock;
 		}
@@ -12851,11 +12851,11 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 	 * have been enqueued in the meantime. Since we're not going idle,
 	 * pretend we pulled a task.
 	 */
-	if (this_rq->cfs.h_nr_running && !pulled_task)
+	if (this_rq->cfs.h_nr_queued && !pulled_task)
 		pulled_task = 1;
 
 	/* Is there a task of a high priority class? */
-	if (this_rq->nr_running != this_rq->cfs.h_nr_running)
+	if (this_rq->nr_running != this_rq->cfs.h_nr_queued)
 		pulled_task = -1;
 
 out:
@@ -13542,7 +13542,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 			parent_cfs_rq->idle_nr_running--;
 		}
 
-		idle_task_delta = grp_cfs_rq->h_nr_running -
+		idle_task_delta = grp_cfs_rq->h_nr_queued -
 				  grp_cfs_rq->idle_h_nr_running;
 		if (!cfs_rq_is_idle(grp_cfs_rq))
 			idle_task_delta *= -1;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index fee75cc2c47b..2bad0b508dfc 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -275,7 +275,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
  *
  * group: [ see update_cfs_group() ]
  *   se_weight()   = tg->weight * grq->load_avg / tg->load_avg
- *   se_runnable() = grq->h_nr_running
+ *   se_runnable() = grq->h_nr_queued
  *
  *   runnable_sum = se_runnable() * runnable = grq->runnable_sum
  *   runnable_avg = runnable_sum
@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
 {
 	if (___update_load_sum(now, &cfs_rq->avg,
 				scale_load_down(cfs_rq->load.weight),
-				cfs_rq->h_nr_running - cfs_rq->h_nr_delayed,
+				cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed,
 				cfs_rq->curr != NULL)) {
 
 		___update_load_avg(&cfs_rq->avg, 1);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 99d19c605e4f..b011081aff97 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -646,7 +646,7 @@ struct balance_callback {
 struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_running;
-	unsigned int		h_nr_running;      /* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		idle_nr_running;   /* SCHED_IDLE */
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
 	unsigned int		h_nr_delayed;
@@ -902,7 +902,7 @@ static inline void se_update_runnable(struct sched_entity *se)
 	if (!entity_is_task(se)) {
 		struct cfs_rq *cfs_rq = se->my_q;
 
-		se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
+		se->runnable_weight = cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed;
 	}
 }
 
-- 
2.43.0