Date: Fri, 13 Jun 2025 07:37:34 -0000
From: "tip-bot2 for Ingo Molnar"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/core] sched: Clean up and standardize #if/#else/#endif markers in sched/fair.c
Cc: Ingo Molnar, Peter Zijlstra, Dietmar Eggemann, Juri Lelli, Linus Torvalds,
    Mel Gorman, Sebastian Andrzej
    Siewior, Shrikanth Hegde, Steven Rostedt, Valentin Schneider,
    Vincent Guittot, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20250528080924.2273858-10-mingo@kernel.org>
References: <20250528080924.2273858-10-mingo@kernel.org>
Message-ID: <174980025458.406.16658529902407667827.tip-bot2@tip-bot2>

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     416d5f78e4d3b010734248cb0aad9dc54b7589fa
Gitweb:        https://git.kernel.org/tip/416d5f78e4d3b010734248cb0aad9dc54b7589fa
Author:        Ingo Molnar
AuthorDate:    Wed, 28 May 2025 10:08:50 +02:00
Committer:     Ingo Molnar
CommitterDate: Fri, 13 Jun 2025 08:47:16 +02:00

sched: Clean up and standardize #if/#else/#endif markers in sched/fair.c

 - Use the standard #ifdef marker format for larger blocks, where appropriate:

     #if CONFIG_FOO
     ...
     #else /* !CONFIG_FOO: */
     ...
     #endif /* !CONFIG_FOO */

 - Fix whitespace noise and other inconsistencies.

Signed-off-by: Ingo Molnar
Acked-by: Peter Zijlstra
Cc: Dietmar Eggemann
Cc: Juri Lelli
Cc: Linus Torvalds
Cc: Mel Gorman
Cc: Sebastian Andrzej Siewior
Cc: Shrikanth Hegde
Cc: Steven Rostedt
Cc: Valentin Schneider
Cc: Vincent Guittot
Link: https://lore.kernel.org/r/20250528080924.2273858-10-mingo@kernel.org
---
 kernel/sched/fair.c | 111 +++++++++++++++++++++----------------------
 1 file changed, 56 insertions(+), 55 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 83157de..1fabbe0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -111,7 +111,7 @@ int __weak arch_asym_cpu_priority(int cpu)
  * (default: ~5%)
  */
 #define capacity_greater(cap1, cap2) ((cap1) * 1024 > (cap2) * 1078)
-#endif
+#endif /* CONFIG_SMP */
 
 #ifdef CONFIG_CFS_BANDWIDTH
 /*
@@ -162,7 +162,7 @@ static int __init sched_fair_sysctl_init(void)
 	return 0;
 }
 late_initcall(sched_fair_sysctl_init);
-#endif
+#endif /* CONFIG_SYSCTL */
 
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
@@ -471,7 +471,7 @@ static int se_is_idle(struct sched_entity *se)
 	return cfs_rq_is_idle(group_cfs_rq(se));
 }
 
-#else /* !CONFIG_FAIR_GROUP_SCHED */
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 
 #define for_each_sched_entity(se) \
 		for (; se; se = NULL)
@@ -517,7 +517,7 @@ static int se_is_idle(struct sched_entity *se)
 	return task_has_idle_policy(task_of(se));
 }
 
-#endif /* CONFIG_FAIR_GROUP_SCHED */
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 static __always_inline
 void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec);
@@ -1008,7 +1008,7 @@ int sched_update_scaling(void)
 
 	return 0;
 }
-#endif
+#endif /* CONFIG_SMP */
 
 static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se);
 
@@ -1041,6 +1041,7 @@ static bool update_deadline(struct cfs_rq *cfs_rq, struct sched_entity *se)
 }
 
 #include "pelt.h"
+
 #ifdef CONFIG_SMP
 
 static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
@@ -1131,7 +1132,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		sa->runnable_avg = sa->util_avg;
 }
 
-#else /* !CONFIG_SMP */
+#else /* !CONFIG_SMP: */
 void init_entity_runnable_average(struct sched_entity *se)
 {
 }
@@ -1141,7 +1142,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 static void update_tg_load_avg(struct cfs_rq *cfs_rq)
 {
 }
-#endif /* CONFIG_SMP */
+#endif /* !CONFIG_SMP */
 
 static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
 {
@@ -2114,12 +2115,12 @@ static inline int numa_idle_core(int idle_core, int cpu)
 
 	return idle_core;
 }
-#else
+#else /* !CONFIG_SCHED_SMT: */
 static inline int numa_idle_core(int idle_core, int cpu)
 {
 	return idle_core;
 }
-#endif
+#endif /* !CONFIG_SCHED_SMT */
 
 /*
  * Gather all necessary information to make NUMA balancing placement
@@ -3673,7 +3674,8 @@ static void update_scan_period(struct task_struct *p, int new_cpu)
 		p->numa_scan_period = task_scan_start(p);
 }
 
-#else
+#else /* !CONFIG_NUMA_BALANCING: */
+
 static void task_tick_numa(struct rq *rq, struct task_struct *curr)
 {
 }
@@ -3690,7 +3692,7 @@ static inline void update_scan_period(struct task_struct *p, int new_cpu)
 {
 }
 
-#endif /* CONFIG_NUMA_BALANCING */
+#endif /* !CONFIG_NUMA_BALANCING */
 
 static void
 account_entity_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se)
@@ -3785,12 +3787,12 @@ dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	cfs_rq->avg.load_sum = max_t(u32, cfs_rq->avg.load_sum,
 					  cfs_rq->avg.load_avg * PELT_MIN_DIVIDER);
 }
-#else
+#else /* !CONFIG_SMP: */
 static inline void
 enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
 static inline void
 dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
-#endif
+#endif /* !CONFIG_SMP */
 
 static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags);
 
@@ -4000,11 +4002,11 @@ static void update_cfs_group(struct sched_entity *se)
 	reweight_entity(cfs_rq_of(se), se, shares);
 }
 
-#else /* CONFIG_FAIR_GROUP_SCHED */
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 static inline void update_cfs_group(struct sched_entity *se)
 {
 }
-#endif /* CONFIG_FAIR_GROUP_SCHED */
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
 {
@@ -4481,7 +4483,7 @@ static inline bool skip_blocked_update(struct sched_entity *se)
 	return true;
 }
 
-#else /* CONFIG_FAIR_GROUP_SCHED */
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 
 static inline void update_tg_load_avg(struct cfs_rq *cfs_rq) {}
 
@@ -4494,7 +4496,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 
 static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum) {}
 
-#endif /* CONFIG_FAIR_GROUP_SCHED */
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 #ifdef CONFIG_NO_HZ_COMMON
 static inline void migrate_se_pelt_lag(struct sched_entity *se)
@@ -4575,9 +4577,9 @@ static inline void migrate_se_pelt_lag(struct sched_entity *se)
 
 	__update_load_avg_blocked_se(now, se);
 }
-#else
+#else /* !CONFIG_NO_HZ_COMMON: */
 static void migrate_se_pelt_lag(struct sched_entity *se) {}
-#endif
+#endif /* !CONFIG_NO_HZ_COMMON */
 
 /**
  * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
@@ -5144,7 +5146,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
 	rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
 }
 
-#else /* CONFIG_SMP */
+#else /* !CONFIG_SMP: */
 
 static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 {
@@ -5184,7 +5186,7 @@ util_est_update(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
 {}
 static inline void update_misfit_status(struct task_struct *p, struct rq *rq) {}
 
-#endif /* CONFIG_SMP */
+#endif /* !CONFIG_SMP */
 
 void __setparam_fair(struct task_struct *p, const struct sched_attr *attr)
 {
@@ -5685,7 +5687,7 @@ void cfs_bandwidth_usage_dec(void)
 {
 	static_key_slow_dec_cpuslocked(&__cfs_bandwidth_used);
 }
-#else /* CONFIG_JUMP_LABEL */
+#else /* !CONFIG_JUMP_LABEL: */
 static bool cfs_bandwidth_used(void)
 {
 	return true;
@@ -5693,7 +5695,7 @@ static bool cfs_bandwidth_used(void)
 
 void cfs_bandwidth_usage_inc(void) {}
 void cfs_bandwidth_usage_dec(void) {}
-#endif /* CONFIG_JUMP_LABEL */
+#endif /* !CONFIG_JUMP_LABEL */
 
 /*
  * default period for cfs group bandwidth.
@@ -6147,12 +6149,12 @@ static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
 	if (first)
 		smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
 }
-#else
+#else /* !CONFIG_SMP: */
 static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
 {
 	unthrottle_cfs_rq(cfs_rq);
 }
-#endif
+#endif /* !CONFIG_SMP */
 
 static void unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
 {
@@ -6733,9 +6735,9 @@ static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
 	if (cfs_task_bw_constrained(p))
 		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
 }
-#endif
+#endif /* CONFIG_NO_HZ_FULL */
 
-#else /* CONFIG_CFS_BANDWIDTH */
+#else /* !CONFIG_CFS_BANDWIDTH: */
 
 static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) {}
 static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq) { return false; }
@@ -6777,7 +6779,7 @@ bool cfs_task_bw_constrained(struct task_struct *p)
 	return false;
 }
 #endif
-#endif /* CONFIG_CFS_BANDWIDTH */
+#endif /* !CONFIG_CFS_BANDWIDTH */
 
 #if !defined(CONFIG_CFS_BANDWIDTH) || !defined(CONFIG_NO_HZ_FULL)
 static inline void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p) {}
@@ -6822,7 +6824,7 @@ static void hrtick_update(struct rq *rq)
 
 	hrtick_start_fair(rq, donor);
 }
-#else /* !CONFIG_SCHED_HRTICK */
+#else /* !CONFIG_SCHED_HRTICK: */
 static inline void
 hrtick_start_fair(struct rq *rq, struct task_struct *p)
 {
@@ -6831,7 +6833,7 @@ hrtick_start_fair(struct rq *rq, struct task_struct *p)
 static inline void hrtick_update(struct rq *rq)
 {
 }
-#endif
+#endif /* !CONFIG_SCHED_HRTICK */
 
 #ifdef CONFIG_SMP
 static inline bool cpu_overutilized(int cpu)
@@ -6875,9 +6877,9 @@ static inline void check_update_overutilized_status(struct rq *rq)
 	if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
 		set_rd_overutilized(rq->rd, 1);
 }
-#else
+#else /* !CONFIG_SMP: */
 static inline void check_update_overutilized_status(struct rq *rq) { }
-#endif
+#endif /* !CONFIG_SMP */
 
 /* Runqueue only has SCHED_IDLE tasks enqueued */
 static int sched_idle_rq(struct rq *rq)
@@ -7677,7 +7679,7 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 	return -1;
 }
 
-#else /* CONFIG_SCHED_SMT */
+#else /* !CONFIG_SCHED_SMT: */
 
 static inline void set_idle_cores(int cpu, int val)
 {
@@ -7698,7 +7700,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
 	return -1;
 }
 
-#endif /* CONFIG_SCHED_SMT */
+#endif /* !CONFIG_SCHED_SMT */
 
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
@@ -8743,9 +8745,9 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	return sched_balance_newidle(rq, rf) != 0;
 }
-#else
+#else /* !CONFIG_SMP: */
 static inline void set_task_max_allowed_capacity(struct task_struct *p) {}
-#endif /* CONFIG_SMP */
+#endif /* !CONFIG_SMP */
 
 static void set_next_buddy(struct sched_entity *se)
 {
@@ -8939,7 +8941,7 @@ again:
 	return p;
 
 simple:
-#endif
+#endif /* CONFIG_FAIR_GROUP_SCHED */
 	put_prev_set_next_task(rq, prev, p);
 	return p;
 
@@ -9357,13 +9359,13 @@ static long migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	return src_weight - dst_weight;
 }
 
-#else
+#else /* !CONFIG_NUMA_BALANCING: */
 static inline long migrate_degrades_locality(struct task_struct *p,
 					     struct lb_env *env)
 {
 	return 0;
 }
-#endif
+#endif /* !CONFIG_NUMA_BALANCING */
 
 /*
  * Check whether the task is ineligible on the destination cpu
@@ -9772,12 +9774,12 @@ static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
 	if (!has_blocked)
 		rq->has_blocked_load = 0;
 }
-#else
+#else /* !CONFIG_NO_HZ_COMMON: */
 static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
 static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
-#endif
+#endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
 {
@@ -9886,7 +9888,7 @@ static unsigned long task_h_load(struct task_struct *p)
 	return div64_ul(p->se.avg.load_avg * cfs_rq->h_load,
 			cfs_rq_load_avg(cfs_rq) + 1);
 }
-#else
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq = &rq->cfs;
@@ -9903,7 +9905,7 @@ static unsigned long task_h_load(struct task_struct *p)
 {
 	return p->se.avg.load_avg;
 }
-#endif
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 static void sched_balance_update_blocked_averages(int cpu)
 {
@@ -10616,7 +10618,7 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 		return remote;
 	return all;
 }
-#else
+#else /* !CONFIG_NUMA_BALANCING: */
 static inline enum fbq_type fbq_classify_group(struct sg_lb_stats *sgs)
 {
 	return all;
@@ -10626,7 +10628,7 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 {
 	return regular;
 }
-#endif /* CONFIG_NUMA_BALANCING */
+#endif /* !CONFIG_NUMA_BALANCING */
 
 
 struct sg_lb_stats;
@@ -12772,7 +12774,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
 	atomic_or(NOHZ_NEWILB_KICK, nohz_flags(this_cpu));
 }
 
-#else /* !CONFIG_NO_HZ_COMMON */
+#else /* !CONFIG_NO_HZ_COMMON: */
 static inline void nohz_balancer_kick(struct rq *rq) { }
 
 static inline bool nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
@@ -12781,7 +12783,7 @@ static inline bool nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle
 }
 
 static inline void nohz_newidle_balance(struct rq *this_rq) { }
-#endif /* CONFIG_NO_HZ_COMMON */
+#endif /* !CONFIG_NO_HZ_COMMON */
 
 /*
  * sched_balance_newidle is called by schedule() if this_cpu is about to become
@@ -13076,10 +13078,10 @@ bool cfs_prio_less(const struct task_struct *a, const struct task_struct *b,
 
 	cfs_rqa = sea->cfs_rq;
 	cfs_rqb = seb->cfs_rq;
-#else
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 	cfs_rqa = &task_rq(a)->cfs;
 	cfs_rqb = &task_rq(b)->cfs;
-#endif
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 	/*
 	 * Find delta after normalizing se's vruntime with its cfs_rq's
@@ -13103,9 +13105,9 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 #endif
 	return throttled_hierarchy(cfs_rq);
 }
-#else
+#else /* !CONFIG_SCHED_CORE: */
 static inline void task_tick_core(struct rq *rq, struct task_struct *curr) {}
-#endif
+#endif /* !CONFIG_SCHED_CORE */
 
 /*
  * scheduler tick hitting a task of our scheduling class.
@@ -13199,9 +13201,9 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 			list_add_leaf_cfs_rq(cfs_rq);
 	}
 }
-#else
+#else /* !CONFIG_FAIR_GROUP_SCHED: */
 static void propagate_entity_cfs_rq(struct sched_entity *se) { }
-#endif
+#endif /* !CONFIG_FAIR_GROUP_SCHED */
 
 static void detach_entity_cfs_rq(struct sched_entity *se)
 {
@@ -13737,6 +13739,5 @@ __init void init_sched_fair_class(void)
 	nohz.next_blocked = jiffies;
 	zalloc_cpumask_var(&nohz.idle_cpus_mask, GFP_NOWAIT);
 #endif
-#endif /* SMP */
-
+#endif /* CONFIG_SMP */
 }
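
For readers skimming the diff, the rule being applied is easiest to see in a small,
self-contained sketch; CONFIG_FOO and foo_active() are made-up placeholders rather
than symbols from fair.c. The #else and #endif of a larger conditional block carry a
comment naming the branch they close, with the negated form on the alternative branch:

    #ifdef CONFIG_FOO

    /* Real implementation, built only when CONFIG_FOO is enabled: */
    static inline int foo_active(void)
    {
    	return 1;
    }

    #else /* !CONFIG_FOO: */

    /* Stub used when the feature is compiled out: */
    static inline int foo_active(void)
    {
    	return 0;
    }

    #endif /* !CONFIG_FOO */

With the tail comments in place, a reader who lands on an #endif thousands of lines
into the file can tell which conditional it closes without scrolling back to the
matching #if.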