[PATCH] sched/fair: Add helper to handle leaf cfs_rq addition

From: Shubhang Kaushik <shubhang@os.amperecomputing.com>

Refactor the logic for adding a cfs_rq to the leaf list into a helper
function.

The existing code open-coded the same sequence twice: check whether the
cfs_rq's PELT clock is throttled and, if not, add the cfs_rq to the
leaf list. Extract that logic into the static inline helper
`__cfs_rq_maybe_add_leaf()`, whose double-underscore prefix follows the
kernel's naming convention for internal helpers.

This refactoring removes code duplication and makes the parent function,
`propagate_entity_cfs_rq()`, cleaner and easier to read.
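
Condensed from the diff below, each of the two call sites goes from the
open-coded check

	if (!cfs_rq_pelt_clock_throttled(cfs_rq))
		list_add_leaf_cfs_rq(cfs_rq);

to a single call

	__cfs_rq_maybe_add_leaf(cfs_rq);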

Signed-off-by: Shubhang Kaushik <shubhang@os.amperecomputing.com>
---
Refactor repeated cfs_rq leaf logic in `propagate_entity_cfs_rq()`
into a new helper function, `__cfs_rq_maybe_add_leaf()`, to
remove code duplication and improve readability.
---
 kernel/sched/fair.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc0b7ce8a65d6bbe616953f530f7a02bb619537c..fe714c68f0ef52c08c552ce93741f29df95d7d1c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13159,6 +13159,18 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * If a task gets attached to this cfs_rq and, before being queued,
+ * it gets migrated to another CPU (e.g., due to an affinity change),
+ * this cfs_rq must remain on the leaf cfs_rq list. This allows the
+ * removed load to decay properly; otherwise, it can cause a fairness problem.
+ */
+static inline void __cfs_rq_maybe_add_leaf(struct cfs_rq *cfs_rq)
+{
+	if (!cfs_rq_pelt_clock_throttled(cfs_rq))
+		list_add_leaf_cfs_rq(cfs_rq);
+}
+
 /*
  * Propagate the changes of the sched_entity across the tg tree to make it
  * visible to the root
@@ -13167,14 +13179,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-	/*
-	 * If a task gets attached to this cfs_rq and before being queued,
-	 * it gets migrated to another CPU due to reasons like affinity
-	 * change, make sure this cfs_rq stays on leaf cfs_rq list to have
-	 * that removed load decayed or it can cause faireness problem.
-	 */
-	if (!cfs_rq_pelt_clock_throttled(cfs_rq))
-		list_add_leaf_cfs_rq(cfs_rq);
+	__cfs_rq_maybe_add_leaf(cfs_rq);
 
 	/* Start to propagate at parent */
 	se = se->parent;
@@ -13184,8 +13189,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 
-		if (!cfs_rq_pelt_clock_throttled(cfs_rq))
-			list_add_leaf_cfs_rq(cfs_rq);
+		__cfs_rq_maybe_add_leaf(cfs_rq);
 	}
 }
 #else /* !CONFIG_FAIR_GROUP_SCHED: */

---
base-commit: 0d97f2067c166eb495771fede9f7b73999c67f66
change-id: 20251010-sched-cfs-refactor-propagate-e6345a291f3c

Best regards,
-- 
Shubhang Kaushik <shubhang@os.amperecomputing.com>