From nobody Wed Dec 17 13:46:03 2025
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra, Shrikanth Hegde, Thomas Gleixner, Valentin Schneider, Steven Rostedt, Mel Gorman, Vincent Guittot
Subject: [PATCH 1/5] sched/debug: Change SCHED_WARN_ON() to WARN_ON_ONCE()
Date: Mon, 17 Mar 2025 11:42:52 +0100
Message-ID: <20250317104257.3496611-2-mingo@kernel.org>
In-Reply-To: <20250317104257.3496611-1-mingo@kernel.org>
References: <20250317104257.3496611-1-mingo@kernel.org>

The scheduler has a special SCHED_WARN_ON() facility that depends on
CONFIG_SCHED_DEBUG. Since CONFIG_SCHED_DEBUG is getting removed, convert
SCHED_WARN_ON() to WARN_ON_ONCE().

Note that the warning output isn't 100% equivalent:

  #define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)

SCHED_WARN_ON() would output the 'x' condition as well, while
WARN_ON_ONCE() will only show a backtrace. Hopefully these warnings are
rare enough to not really matter. If they do, we should probably
introduce a new WARN_ON() variant that outputs the condition in
stringified form, or improve WARN_ON() itself.
Signed-off-by: Ingo Molnar
---
 kernel/sched/core.c       | 24 ++++++++++++------------
 kernel/sched/core_sched.c |  2 +-
 kernel/sched/deadline.c   | 12 ++++++------
 kernel/sched/ext.c        |  2 +-
 kernel/sched/fair.c       | 58 +++++++++++++++++++++++-----------------------
 kernel/sched/rt.c         |  2 +-
 kernel/sched/sched.h      | 16 +++++-----------
 kernel/sched/stats.h      |  2 +-
 8 files changed, 56 insertions(+), 62 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 03d7b63dc3e5..2da197b2968b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -801,7 +801,7 @@ void update_rq_clock(struct rq *rq)

 #ifdef CONFIG_SCHED_DEBUG
 	if (sched_feat(WARN_DOUBLE_CLOCK))
-		SCHED_WARN_ON(rq->clock_update_flags & RQCF_UPDATED);
+		WARN_ON_ONCE(rq->clock_update_flags & RQCF_UPDATED);
 	rq->clock_update_flags |= RQCF_UPDATED;
 #endif
 	clock = sched_clock_cpu(cpu_of(rq));
@@ -1719,7 +1719,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,

 	bucket = &uc_rq->bucket[uc_se->bucket_id];

-	SCHED_WARN_ON(!bucket->tasks);
+	WARN_ON_ONCE(!bucket->tasks);
 	if (likely(bucket->tasks))
 		bucket->tasks--;

@@ -1739,7 +1739,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
 	 * Defensive programming: this should never happen. If it happens,
 	 * e.g. due to future modification, warn and fix up the expected value.
	 */
-	SCHED_WARN_ON(bucket->value > rq_clamp);
+	WARN_ON_ONCE(bucket->value > rq_clamp);
 	if (bucket->value >= rq_clamp) {
 		bkt_clamp = uclamp_rq_max_value(rq, clamp_id, uc_se->value);
 		uclamp_rq_set(rq, clamp_id, bkt_clamp);
@@ -2121,7 +2121,7 @@ void activate_task(struct rq *rq, struct task_struct *p, int flags)

 void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 {
-	SCHED_WARN_ON(flags & DEQUEUE_SLEEP);
+	WARN_ON_ONCE(flags & DEQUEUE_SLEEP);

 	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
 	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
@@ -2726,7 +2726,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 	 * XXX do further audits, this smells like something putrid.
 	 */
 	if (ctx->flags & SCA_MIGRATE_DISABLE)
-		SCHED_WARN_ON(!p->on_cpu);
+		WARN_ON_ONCE(!p->on_cpu);
 	else
 		lockdep_assert_held(&p->pi_lock);

@@ -4195,7 +4195,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 *  - we're serialized against set_special_state() by virtue of
 	 *    it disabling IRQs (this allows not taking ->pi_lock).
 	 */
-	SCHED_WARN_ON(p->se.sched_delayed);
+	WARN_ON_ONCE(p->se.sched_delayed);
 	if (!ttwu_state_match(p, state, &success))
 		goto out;

@@ -4489,7 +4489,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	INIT_LIST_HEAD(&p->se.group_node);

 	/* A delayed task cannot be in clone(). */
-	SCHED_WARN_ON(p->se.sched_delayed);
+	WARN_ON_ONCE(p->se.sched_delayed);

 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq = NULL;
@@ -5745,7 +5745,7 @@ static void sched_tick_remote(struct work_struct *work)
 	 * we are always sure that there is no proxy (only a
 	 * single task is running).
	 */
-	SCHED_WARN_ON(rq->curr != rq->donor);
+	WARN_ON_ONCE(rq->curr != rq->donor);
 	update_rq_clock(rq);

 	if (!is_idle_task(curr)) {
@@ -5965,7 +5965,7 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
 		preempt_count_set(PREEMPT_DISABLED);
 	}
 	rcu_sleep_check();
-	SCHED_WARN_ON(ct_state() == CT_STATE_USER);
+	WARN_ON_ONCE(ct_state() == CT_STATE_USER);

 	profile_hit(SCHED_PROFILING, __builtin_return_address(0));

@@ -6811,7 +6811,7 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	 * deadlock if the callback attempts to acquire a lock which is
 	 * already acquired.
 	 */
-	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
+	WARN_ON_ONCE(current->__state & TASK_RTLOCK_WAIT);

 	/*
 	 * If we are going to sleep and we have plugged IO queued,
@@ -9202,7 +9202,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
 	unsigned int clamps;

 	lockdep_assert_held(&uclamp_mutex);
-	SCHED_WARN_ON(!rcu_read_lock_held());
+	WARN_ON_ONCE(!rcu_read_lock_held());

 	css_for_each_descendant_pre(css, top_css) {
 		uc_parent = css_tg(css)->parent
@@ -10537,7 +10537,7 @@ static void task_mm_cid_work(struct callback_head *work)
 	struct mm_struct *mm;
 	int weight, cpu;

-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
+	WARN_ON_ONCE(t != container_of(work, struct task_struct, cid_work));

 	work->next = work;	/* Prevent double-add */
 	if (t->flags & PF_EXITING)
diff --git a/kernel/sched/core_sched.c b/kernel/sched/core_sched.c
index 1ef98a93eb1d..c4606ca89210 100644
--- a/kernel/sched/core_sched.c
+++ b/kernel/sched/core_sched.c
@@ -65,7 +65,7 @@ static unsigned long sched_core_update_cookie(struct task_struct *p,
 	 * a cookie until after we've removed it, we must have core scheduling
 	 * enabled here.
	 */
-	SCHED_WARN_ON((p->core_cookie || cookie) && !sched_core_enabled(rq));
+	WARN_ON_ONCE((p->core_cookie || cookie) && !sched_core_enabled(rq));

 	if (sched_core_enqueued(p))
 		sched_core_dequeue(rq, p, DEQUEUE_SAVE);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ff4df16b5186..b18c80272f86 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -249,8 +249,8 @@ void __add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)

 	lockdep_assert_rq_held(rq_of_dl_rq(dl_rq));
 	dl_rq->running_bw += dl_bw;
-	SCHED_WARN_ON(dl_rq->running_bw < old);	/* overflow */
-	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	WARN_ON_ONCE(dl_rq->running_bw < old);	/* overflow */
+	WARN_ON_ONCE(dl_rq->running_bw > dl_rq->this_bw);
 	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
 	cpufreq_update_util(rq_of_dl_rq(dl_rq), 0);
 }
@@ -262,7 +262,7 @@ void __sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)

 	lockdep_assert_rq_held(rq_of_dl_rq(dl_rq));
 	dl_rq->running_bw -= dl_bw;
-	SCHED_WARN_ON(dl_rq->running_bw > old);	/* underflow */
+	WARN_ON_ONCE(dl_rq->running_bw > old);	/* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
 	/* kick cpufreq (see the comment in kernel/sched/sched.h).
	 */
@@ -276,7 +276,7 @@ void __add_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)

 	lockdep_assert_rq_held(rq_of_dl_rq(dl_rq));
 	dl_rq->this_bw += dl_bw;
-	SCHED_WARN_ON(dl_rq->this_bw < old);	/* overflow */
+	WARN_ON_ONCE(dl_rq->this_bw < old);	/* overflow */
 }

 static inline
@@ -286,10 +286,10 @@ void __sub_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)

 	lockdep_assert_rq_held(rq_of_dl_rq(dl_rq));
 	dl_rq->this_bw -= dl_bw;
-	SCHED_WARN_ON(dl_rq->this_bw > old);	/* underflow */
+	WARN_ON_ONCE(dl_rq->this_bw > old);	/* underflow */
 	if (dl_rq->this_bw > old)
 		dl_rq->this_bw = 0;
-	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	WARN_ON_ONCE(dl_rq->running_bw > dl_rq->this_bw);
 }

 static inline
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 0f1da199cfc7..953a5b9ec0cd 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2341,7 +2341,7 @@ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
 {
 	int cpu = cpu_of(rq);

-	SCHED_WARN_ON(task_cpu(p) == cpu);
+	WARN_ON_ONCE(task_cpu(p) == cpu);

 	/*
 	 * If @p has migration disabled, @p->cpus_ptr is updated to contain only
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9dafb374d76d..89609ebd4904 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -399,7 +399,7 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)

 static inline void assert_list_leaf_cfs_rq(struct rq *rq)
 {
-	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
+	WARN_ON_ONCE(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
 }

 /* Iterate through all leaf cfs_rq's on a runqueue */
@@ -696,7 +696,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	s64 vlag, limit;

-	SCHED_WARN_ON(!se->on_rq);
+	WARN_ON_ONCE(!se->on_rq);

 	vlag = avg_vruntime(cfs_rq) - se->vruntime;
 	limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
@@ -3317,7 +3317,7 @@ static void task_numa_work(struct callback_head *work)
 	bool vma_pids_skipped;
 	bool vma_pids_forced = false;

-	SCHED_WARN_ON(p != container_of(work, struct task_struct, numa_work));
+	WARN_ON_ONCE(p != container_of(work, struct task_struct, numa_work));

 	work->next = work;
 	/*
@@ -4036,7 +4036,7 @@ static inline bool load_avg_is_decayed(struct sched_avg *sa)
 	 * Make sure that rounding and/or propagation of PELT values never
 	 * break this.
 	 */
-	SCHED_WARN_ON(sa->load_avg ||
+	WARN_ON_ONCE(sa->load_avg ||
 		      sa->util_avg ||
 		      sa->runnable_avg);

@@ -5460,7 +5460,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	clear_buddies(cfs_rq, se);

 	if (flags & DEQUEUE_DELAYED) {
-		SCHED_WARN_ON(!se->sched_delayed);
+		WARN_ON_ONCE(!se->sched_delayed);
 	} else {
 		bool delay = sleep;
 		/*
@@ -5470,7 +5470,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		if (flags & DEQUEUE_SPECIAL)
 			delay = false;

-		SCHED_WARN_ON(delay && se->sched_delayed);
+		WARN_ON_ONCE(delay && se->sched_delayed);

 		if (sched_feat(DELAY_DEQUEUE) && delay &&
 		    !entity_eligible(cfs_rq, se)) {
@@ -5551,7 +5551,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	}

 	update_stats_curr_start(cfs_rq, se);
-	SCHED_WARN_ON(cfs_rq->curr);
+	WARN_ON_ONCE(cfs_rq->curr);
 	cfs_rq->curr = se;

 	/*
@@ -5592,7 +5592,7 @@ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
 	if (sched_feat(PICK_BUDDY) &&
 	    cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) {
 		/* ->next will never be delayed */
-		SCHED_WARN_ON(cfs_rq->next->sched_delayed);
+		WARN_ON_ONCE(cfs_rq->next->sched_delayed);
 		return cfs_rq->next;
 	}

@@ -5628,7 +5628,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
 		/* in !on_rq case, update occurred at dequeue */
 		update_load_avg(cfs_rq, prev, 0);
 	}
-	SCHED_WARN_ON(cfs_rq->curr != prev);
+	WARN_ON_ONCE(cfs_rq->curr != prev);
 	cfs_rq->curr = NULL;
 }

@@ -5851,7 +5851,7 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)

 		cfs_rq->throttled_clock_self = 0;

-		if (SCHED_WARN_ON((s64)delta < 0))
+		if (WARN_ON_ONCE((s64)delta < 0))
 			delta = 0;

 		cfs_rq->throttled_clock_self_time += delta;
@@ -5871,7 +5871,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
 		cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
 		list_del_leaf_cfs_rq(cfs_rq);

-		SCHED_WARN_ON(cfs_rq->throttled_clock_self);
+		WARN_ON_ONCE(cfs_rq->throttled_clock_self);
 		if (cfs_rq->nr_queued)
 			cfs_rq->throttled_clock_self = rq_clock(rq);
 	}
@@ -5980,7 +5980,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	 * throttled-list.  rq->lock protects completion.
 	 */
 	cfs_rq->throttled = 1;
-	SCHED_WARN_ON(cfs_rq->throttled_clock);
+	WARN_ON_ONCE(cfs_rq->throttled_clock);
 	if (cfs_rq->nr_queued)
 		cfs_rq->throttled_clock = rq_clock(rq);
 	return true;
@@ -6136,7 +6136,7 @@ static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
 	}

 	/* Already enqueued */
-	if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
+	if (WARN_ON_ONCE(!list_empty(&cfs_rq->throttled_csd_list)))
 		return;

 	first = list_empty(&rq->cfsb_csd_list);
@@ -6155,7 +6155,7 @@ static void unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
 {
 	lockdep_assert_rq_held(rq_of(cfs_rq));

-	if (SCHED_WARN_ON(!cfs_rq_throttled(cfs_rq) ||
+	if (WARN_ON_ONCE(!cfs_rq_throttled(cfs_rq) ||
 			  cfs_rq->runtime_remaining <= 0))
 		return;

@@ -6191,7 +6191,7 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 			goto next;

 		/* By the above checks, this should never be true */
-		SCHED_WARN_ON(cfs_rq->runtime_remaining > 0);
+		WARN_ON_ONCE(cfs_rq->runtime_remaining > 0);

 		raw_spin_lock(&cfs_b->lock);
 		runtime = -cfs_rq->runtime_remaining + 1;
@@ -6212,7 +6212,7 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 			 * We currently only expect to be unthrottling
 			 * a single cfs_rq locally.
 			 */
-			SCHED_WARN_ON(!list_empty(&local_unthrottle));
+			WARN_ON_ONCE(!list_empty(&local_unthrottle));
 			list_add_tail(&cfs_rq->throttled_csd_list,
 				      &local_unthrottle);
 		}
@@ -6237,7 +6237,7 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)

 		rq_unlock_irqrestore(rq, &rf);
 	}
-	SCHED_WARN_ON(!list_empty(&local_unthrottle));
+	WARN_ON_ONCE(!list_empty(&local_unthrottle));

 	rcu_read_unlock();

@@ -6789,7 +6789,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;

-	SCHED_WARN_ON(task_rq(p) != rq);
+	WARN_ON_ONCE(task_rq(p) != rq);

 	if (rq->cfs.h_nr_queued > 1) {
 		u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
@@ -6900,8 +6900,8 @@ requeue_delayed_entity(struct sched_entity *se)
 	 * Because a delayed entity is one that is still on
 	 * the runqueue competing until elegibility.
 	 */
-	SCHED_WARN_ON(!se->sched_delayed);
-	SCHED_WARN_ON(!se->on_rq);
+	WARN_ON_ONCE(!se->sched_delayed);
+	WARN_ON_ONCE(!se->on_rq);

 	if (sched_feat(DELAY_ZERO)) {
 		update_entity_lag(cfs_rq, se);
@@ -7161,8 +7161,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		rq->next_balance = jiffies;

 	if (p && task_delayed) {
-		SCHED_WARN_ON(!task_sleep);
-		SCHED_WARN_ON(p->on_rq != 1);
+		WARN_ON_ONCE(!task_sleep);
+		WARN_ON_ONCE(p->on_rq != 1);

 		/* Fix-up what dequeue_task_fair() skipped */
 		hrtick_update(rq);
@@ -8740,7 +8740,7 @@ static inline void set_task_max_allowed_capacity(struct task_struct *p) {}
 static void set_next_buddy(struct sched_entity *se)
 {
 	for_each_sched_entity(se) {
-		if (SCHED_WARN_ON(!se->on_rq))
+		if (WARN_ON_ONCE(!se->on_rq))
 			return;
 		if (se_is_idle(se))
 			return;
@@ -12484,7 +12484,7 @@ static void set_cpu_sd_state_busy(int cpu)

 void nohz_balance_exit_idle(struct rq *rq)
 {
-	SCHED_WARN_ON(rq != this_rq());
+	WARN_ON_ONCE(rq != this_rq());

 	if (likely(!rq->nohz_tick_stopped))
 		return;
@@ -12520,7 +12520,7 @@ void nohz_balance_enter_idle(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);

-	SCHED_WARN_ON(cpu != smp_processor_id());
+	WARN_ON_ONCE(cpu != smp_processor_id());

 	/* If this CPU is going down, then nothing needs to be done: */
 	if (!cpu_active(cpu))
@@ -12603,7 +12603,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	int balance_cpu;
 	struct rq *rq;

-	SCHED_WARN_ON((flags & NOHZ_KICK_MASK) == NOHZ_BALANCE_KICK);
+	WARN_ON_ONCE((flags & NOHZ_KICK_MASK) == NOHZ_BALANCE_KICK);

 	/*
 	 * We assume there will be no idle load after this update and clear
@@ -13043,7 +13043,7 @@ bool cfs_prio_less(const struct task_struct *a, const struct task_struct *b,
 	struct cfs_rq *cfs_rqb;
 	s64 delta;

-	SCHED_WARN_ON(task_rq(b)->core != rq->core);
+	WARN_ON_ONCE(task_rq(b)->core != rq->core);

 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/*
@@ -13246,7 +13246,7 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)

 static void switched_to_fair(struct rq *rq, struct task_struct *p)
 {
-	SCHED_WARN_ON(p->se.sched_delayed);
+	WARN_ON_ONCE(p->se.sched_delayed);

 	attach_task_cfs_rq(p);

@@ -13281,7 +13281,7 @@ static void __set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 	if (!first)
 		return;

-	SCHED_WARN_ON(se->sched_delayed);
+	WARN_ON_ONCE(se->sched_delayed);

 	if (hrtick_enabled_fair(rq))
 		hrtick_start_fair(rq, p);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4b8e33c615b1..926281ac3ac0 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1713,7 +1713,7 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
 	BUG_ON(idx >= MAX_RT_PRIO);

 	queue = array->queue + idx;
-	if (SCHED_WARN_ON(list_empty(queue)))
+	if (WARN_ON_ONCE(list_empty(queue)))
 		return NULL;
 	next = list_entry(queue->next, struct sched_rt_entity, run_list);

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0212a0c5534a..189f7b033dab 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -91,12 +91,6 @@ struct cpuidle_state;
 #include "cpupri.h"
 #include "cpudeadline.h"

-#ifdef CONFIG_SCHED_DEBUG
-# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
-#else
-# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
-#endif
-
 /* task_struct::on_rq states: */
 #define TASK_ON_RQ_QUEUED	1
 #define TASK_ON_RQ_MIGRATING	2
@@ -1571,7 +1565,7 @@ static inline void update_idle_core(struct rq *rq) { }

 static inline struct task_struct *task_of(struct sched_entity *se)
 {
-	SCHED_WARN_ON(!entity_is_task(se));
+	WARN_ON_ONCE(!entity_is_task(se));
 	return container_of(se, struct task_struct, se);
 }

@@ -1652,7 +1646,7 @@ static inline void assert_clock_updated(struct rq *rq)
 	 * The only reason for not seeing a clock update since the
 	 * last rq_pin_lock() is if we're currently skipping updates.
 	 */
-	SCHED_WARN_ON(rq->clock_update_flags < RQCF_ACT_SKIP);
+	WARN_ON_ONCE(rq->clock_update_flags < RQCF_ACT_SKIP);
 }

 static inline u64 rq_clock(struct rq *rq)
@@ -1699,7 +1693,7 @@ static inline void rq_clock_cancel_skipupdate(struct rq *rq)
 static inline void rq_clock_start_loop_update(struct rq *rq)
 {
 	lockdep_assert_rq_held(rq);
-	SCHED_WARN_ON(rq->clock_update_flags & RQCF_ACT_SKIP);
+	WARN_ON_ONCE(rq->clock_update_flags & RQCF_ACT_SKIP);
 	rq->clock_update_flags |= RQCF_ACT_SKIP;
 }

@@ -1774,7 +1768,7 @@ static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 	rf->clock_update_flags = 0;
 # ifdef CONFIG_SMP
-	SCHED_WARN_ON(rq->balance_callback && rq->balance_callback != &balance_push_callback);
+	WARN_ON_ONCE(rq->balance_callback && rq->balance_callback != &balance_push_callback);
 # endif
 #endif
 }
@@ -2685,7 +2679,7 @@ static inline void idle_set_state(struct rq *rq,

 static inline struct cpuidle_state *idle_get_state(struct rq *rq)
 {
-	SCHED_WARN_ON(!rcu_read_lock_held());
+	WARN_ON_ONCE(!rcu_read_lock_held());

 	return rq->idle_state;
 }
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index 19cdbe96f93d..452826df6ae1 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -144,7 +144,7 @@ static inline void psi_enqueue(struct task_struct *p, int flags)

 	if (p->se.sched_delayed) {
 		/* CPU migration of "sleeping" task */
-		SCHED_WARN_ON(!(flags & ENQUEUE_MIGRATED));
+		WARN_ON_ONCE(!(flags & ENQUEUE_MIGRATED));
 		if (p->in_memstall)
 			set |= TSK_MEMSTALL;
 		if (p->in_iowait)
-- 
2.45.2

From nobody Wed Dec 17 13:46:03 2025
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra, Shrikanth Hegde, Thomas Gleixner, Valentin Schneider, Steven Rostedt, Mel Gorman, Vincent Guittot
Subject: [PATCH 2/5] sched/debug: Make 'const_debug' tunables unconditional __read_mostly
Date: Mon, 17 Mar 2025 11:42:53 +0100
Message-ID: <20250317104257.3496611-3-mingo@kernel.org>
In-Reply-To: <20250317104257.3496611-1-mingo@kernel.org>
References: <20250317104257.3496611-1-mingo@kernel.org>

With CONFIG_SCHED_DEBUG becoming unconditional, remove the extra
'const_debug' indirection towards __read_mostly.
Signed-off-by: Ingo Molnar
---
 kernel/sched/core.c  |  4 ++--
 kernel/sched/fair.c  |  2 +-
 kernel/sched/sched.h | 15 +++++----------
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2da197b2968b..d6833a85e561 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -128,7 +128,7 @@ DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
  */
 #define SCHED_FEAT(name, enabled)	\
 	(1UL << __SCHED_FEAT_##name) * enabled |
-const_debug unsigned int sysctl_sched_features =
+__read_mostly unsigned int sysctl_sched_features =
 #include "features.h"
 	0;
 #undef SCHED_FEAT
@@ -148,7 +148,7 @@ __read_mostly int sysctl_resched_latency_warn_once = 1;
  * Number of tasks to iterate in a single balance run.
  * Limited because this is done with IRQs disabled.
  */
-const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
+__read_mostly unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;

 __read_mostly int scheduler_running;

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89609ebd4904..35ee8d9d78d5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -79,7 +79,7 @@ unsigned int sysctl_sched_tunable_scaling = SCHED_TUNABLESCALING_LOG;
 unsigned int sysctl_sched_base_slice			= 700000ULL;
 static unsigned int normalized_sysctl_sched_base_slice	= 700000ULL;

-const_debug unsigned int sysctl_sched_migration_cost	= 500000UL;
+__read_mostly unsigned int sysctl_sched_migration_cost	= 500000UL;

 static int __init setup_sched_thermal_decay_shift(char *str)
 {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 189f7b033dab..187a22800577 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2194,13 +2194,8 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
 }

 /*
- * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
+ * Tunables:
  */
-#ifdef CONFIG_SCHED_DEBUG
-# define const_debug __read_mostly
-#else
-# define const_debug const
-#endif

 #define SCHED_FEAT(name, enabled)	\
 	__SCHED_FEAT_##name ,
@@ -2218,7 +2213,7 @@ enum {
  * To support run-time toggling of sched features, all the translation units
  * (but core.c) reference the sysctl_sched_features defined in core.c.
  */
-extern const_debug unsigned int sysctl_sched_features;
+extern __read_mostly unsigned int sysctl_sched_features;

 #ifdef CONFIG_JUMP_LABEL

@@ -2249,7 +2244,7 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
  */
 #define SCHED_FEAT(name, enabled)	\
 	(1UL << __SCHED_FEAT_##name) * enabled |
-static const_debug __maybe_unused unsigned int sysctl_sched_features =
+static __read_mostly __maybe_unused unsigned int sysctl_sched_features =
 #include "features.h"
 	0;
 #undef SCHED_FEAT
@@ -2837,8 +2832,8 @@ extern void wakeup_preempt(struct rq *rq, struct task_struct *p, int flags);
 # define SCHED_NR_MIGRATE_BREAK 32
 #endif

-extern const_debug unsigned int sysctl_sched_nr_migrate;
-extern const_debug unsigned int sysctl_sched_migration_cost;
+extern __read_mostly unsigned int sysctl_sched_nr_migrate;
+extern __read_mostly unsigned int sysctl_sched_migration_cost;

 extern unsigned int sysctl_sched_base_slice;

-- 
2.45.2
Sender: Ingo Molnar
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra, Shrikanth Hegde,
    Thomas Gleixner, Valentin Schneider, Steven Rostedt, Mel Gorman,
    Vincent Guittot
Subject: [PATCH 3/5] sched/debug: Make CONFIG_SCHED_DEBUG functionality unconditional
Date: Mon, 17 Mar 2025 11:42:54 +0100
Message-ID: <20250317104257.3496611-4-mingo@kernel.org>
In-Reply-To: <20250317104257.3496611-1-mingo@kernel.org>
References: <20250317104257.3496611-1-mingo@kernel.org>

All the big Linux distros enable CONFIG_SCHED_DEBUG, because the various
features it provides help not just with kernel development, but with
system administration and user-space software development as well.

Reflect this reality and enable this functionality unconditionally.
Signed-off-by: Ingo Molnar
---
 fs/proc/base.c                 |  7 -------
 include/linux/energy_model.h   |  2 --
 include/linux/sched/debug.h    |  2 --
 include/linux/sched/topology.h |  4 ----
 include/trace/events/sched.h   |  2 --
 kernel/sched/build_utility.c   |  4 +---
 kernel/sched/core.c            | 18 +++---------------
 kernel/sched/deadline.c        |  2 --
 kernel/sched/fair.c            |  4 ----
 kernel/sched/rt.c              |  5 +----
 kernel/sched/sched.h           | 54 ++++--------------------------------------------------
 kernel/sched/topology.c        | 13 -------------
 12 files changed, 9 insertions(+), 108 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index cd89e956c322..61526420d0ee 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -1489,7 +1489,6 @@ static const struct file_operations proc_fail_nth_operations = {
 #endif
 
 
-#ifdef CONFIG_SCHED_DEBUG
 /*
  * Print out various scheduling related per-task fields:
  */
@@ -1539,8 +1538,6 @@ static const struct file_operations proc_pid_sched_operations = {
 	.release	= single_release,
 };
 
-#endif
-
 #ifdef CONFIG_SCHED_AUTOGROUP
 /*
  * Print out autogroup related information:
@@ -3331,9 +3328,7 @@ static const struct pid_entry tgid_base_stuff[] = {
 	ONE("status",      S_IRUGO, proc_pid_status),
 	ONE("personality", S_IRUSR, proc_pid_personality),
 	ONE("limits",	   S_IRUGO, proc_pid_limits),
-#ifdef CONFIG_SCHED_DEBUG
 	REG("sched",       S_IRUGO|S_IWUSR, proc_pid_sched_operations),
-#endif
 #ifdef CONFIG_SCHED_AUTOGROUP
 	REG("autogroup",   S_IRUGO|S_IWUSR, proc_pid_sched_autogroup_operations),
 #endif
@@ -3682,9 +3677,7 @@ static const struct pid_entry tid_base_stuff[] = {
 	ONE("status",      S_IRUGO, proc_pid_status),
 	ONE("personality", S_IRUSR, proc_pid_personality),
 	ONE("limits",	   S_IRUGO, proc_pid_limits),
-#ifdef CONFIG_SCHED_DEBUG
 	REG("sched",       S_IRUGO|S_IWUSR, proc_pid_sched_operations),
-#endif
 	NOD("comm",        S_IFREG|S_IRUGO|S_IWUSR,
 			  &proc_tid_comm_inode_operations,
 			  &proc_pid_set_comm_operations, {}),
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 78318d49276d..65efc0f5ea2e 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -240,9 +240,7 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 	struct em_perf_state *ps;
 	int i;
 
-#ifdef CONFIG_SCHED_DEBUG
 	WARN_ONCE(!rcu_read_lock_held(), "EM: rcu read lock needed\n");
-#endif
 
 	if (!sum_util)
 		return 0;
diff --git a/include/linux/sched/debug.h b/include/linux/sched/debug.h
index b5035afa2396..35ed4577a6cc 100644
--- a/include/linux/sched/debug.h
+++ b/include/linux/sched/debug.h
@@ -35,12 +35,10 @@ extern void show_stack(struct task_struct *task, unsigned long *sp,
 
 extern void sched_show_task(struct task_struct *p);
 
-#ifdef CONFIG_SCHED_DEBUG
 struct seq_file;
 extern void proc_sched_show_task(struct task_struct *p,
 				 struct pid_namespace *ns, struct seq_file *m);
 extern void proc_sched_set_task(struct task_struct *p);
-#endif
 
 /* Attach to any functions which should be ignored in wchan output. */
 #define __sched __section(".sched.text")
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 7f3dbafe1817..7894653bc70b 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -25,16 +25,12 @@ enum {
 };
 #undef SD_FLAG
 
-#ifdef CONFIG_SCHED_DEBUG
-
 struct sd_flag_debug {
 	unsigned int meta_flags;
 	char *name;
 };
 extern const struct sd_flag_debug sd_flag_debug[];
 
-#endif
-
 #ifdef CONFIG_SCHED_SMT
 static inline int cpu_smt_flags(void)
 {
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 9ea4c404bd4e..bfd97cce40a1 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -193,9 +193,7 @@ static inline long __trace_sched_switch_state(bool preempt,
 {
 	unsigned int state;
 
-#ifdef CONFIG_SCHED_DEBUG
 	BUG_ON(p != current);
-#endif /* CONFIG_SCHED_DEBUG */
 
 	/*
 	 * Preemption ignores task state, therefore preempted tasks are always
diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
index 80a3df49ab47..bf9d8db94b70 100644
--- a/kernel/sched/build_utility.c
+++ b/kernel/sched/build_utility.c
@@ -68,9 +68,7 @@
 # include "cpufreq_schedutil.c"
 #endif
 
-#ifdef CONFIG_SCHED_DEBUG
-# include "debug.c"
-#endif
+#include "debug.c"
 
 #ifdef CONFIG_SCHEDSTATS
 # include "stats.c"
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d6833a85e561..598b7f241dda 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -118,7 +118,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
 
 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 
-#ifdef CONFIG_SCHED_DEBUG
 /*
  * Debugging: various feature bits
  *
@@ -142,7 +141,6 @@ __read_mostly unsigned int sysctl_sched_features =
  */
 __read_mostly int sysctl_resched_latency_warn_ms = 100;
 __read_mostly int sysctl_resched_latency_warn_once = 1;
-#endif /* CONFIG_SCHED_DEBUG */
 
 /*
  * Number of tasks to iterate in a single balance run.
@@ -799,11 +797,10 @@ void update_rq_clock(struct rq *rq)
 	if (rq->clock_update_flags & RQCF_ACT_SKIP)
 		return;
 
-#ifdef CONFIG_SCHED_DEBUG
 	if (sched_feat(WARN_DOUBLE_CLOCK))
 		WARN_ON_ONCE(rq->clock_update_flags & RQCF_UPDATED);
 	rq->clock_update_flags |= RQCF_UPDATED;
-#endif
+
 	clock = sched_clock_cpu(cpu_of(rq));
 	scx_rq_clock_update(rq, clock);
 
@@ -3291,7 +3288,6 @@ void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 {
-#ifdef CONFIG_SCHED_DEBUG
 	unsigned int state = READ_ONCE(p->__state);
 
 	/*
@@ -3329,7 +3325,6 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 	WARN_ON_ONCE(!cpu_online(new_cpu));
 
 	WARN_ON_ONCE(is_migration_disabled(p));
-#endif
 
 	trace_sched_migrate_task(p, new_cpu);
 
@@ -5577,7 +5572,6 @@ unsigned long long task_sched_runtime(struct task_struct *p)
 	return ns;
 }
 
-#ifdef CONFIG_SCHED_DEBUG
 static u64 cpu_resched_latency(struct rq *rq)
 {
 	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
@@ -5622,9 +5616,6 @@ static int __init setup_resched_latency_warn_ms(char *str)
 	return 1;
 }
 __setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
-#else
-static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-#endif /* CONFIG_SCHED_DEBUG */
 
 /*
  * This function gets called by the timer code, with HZ frequency.
@@ -6718,9 +6709,7 @@ static void __sched notrace __schedule(int sched_mode)
 picked:
 	clear_tsk_need_resched(prev);
 	clear_preempt_need_resched();
-#ifdef CONFIG_SCHED_DEBUG
 	rq->last_seen_need_resched_ns = 0;
-#endif
 
 	if (likely(prev != next)) {
 		rq->nr_switches++;
@@ -7094,7 +7083,7 @@ asmlinkage __visible void __sched preempt_schedule_irq(void)
 int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
 			  void *key)
 {
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
+	WARN_ON_ONCE(wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
 	return try_to_wake_up(curr->private, mode, wake_flags);
 }
 EXPORT_SYMBOL(default_wake_function);
@@ -7764,10 +7753,9 @@ void show_state_filter(unsigned int state_filter)
 			sched_show_task(p);
 	}
 
-#ifdef CONFIG_SCHED_DEBUG
 	if (!state_filter)
 		sysrq_sched_debug_show();
-#endif
+
 	rcu_read_unlock();
 	/*
 	 * Only show locks if all tasks are dumped:
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b18c80272f86..d352b57f31cf 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3567,9 +3567,7 @@ void dl_bw_free(int cpu, u64 dl_bw)
 }
 #endif
 
-#ifdef CONFIG_SCHED_DEBUG
 void print_dl_stats(struct seq_file *m, int cpu)
 {
 	print_dl_rq(m, cpu, &cpu_rq(cpu)->dl);
 }
-#endif /* CONFIG_SCHED_DEBUG */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35ee8d9d78d5..a0c4cd26ee07 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -983,7 +983,6 @@ static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq)
 	return best;
 }
 
-#ifdef CONFIG_SCHED_DEBUG
 struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 {
 	struct rb_node *last = rb_last(&cfs_rq->tasks_timeline.rb_root);
@@ -1010,7 +1009,6 @@ int sched_update_scaling(void)
 	return 0;
 }
 #endif
-#endif
 
 static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se);
 
@@ -13668,7 +13666,6 @@ DEFINE_SCHED_CLASS(fair) = {
 #endif
 };
 
-#ifdef CONFIG_SCHED_DEBUG
 void print_cfs_stats(struct seq_file *m, int cpu)
 {
 	struct cfs_rq *cfs_rq, *pos;
@@ -13702,7 +13699,6 @@ void show_numa_stats(struct task_struct *p, struct seq_file *m)
 	rcu_read_unlock();
 }
 #endif /* CONFIG_NUMA_BALANCING */
-#endif /* CONFIG_SCHED_DEBUG */
 
 __init void init_sched_fair_class(void)
 {
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 926281ac3ac0..8f7c3bfb49ef 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -169,9 +169,8 @@ static void destroy_rt_bandwidth(struct rt_bandwidth *rt_b)
 
 static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
 {
-#ifdef CONFIG_SCHED_DEBUG
 	WARN_ON_ONCE(!rt_entity_is_task(rt_se));
-#endif
+
 	return container_of(rt_se, struct task_struct, rt);
 }
 
@@ -2967,7 +2966,6 @@ static int sched_rr_handler(const struct ctl_table *table, int write, void *buffer)
 }
 #endif /* CONFIG_SYSCTL */
 
-#ifdef CONFIG_SCHED_DEBUG
 void print_rt_stats(struct seq_file *m, int cpu)
 {
 	rt_rq_iter_t iter;
@@ -2978,4 +2976,3 @@ void print_rt_stats(struct seq_file *m, int cpu)
 		print_rt_rq(m, cpu, rt_rq);
 	rcu_read_unlock();
 }
-#endif /* CONFIG_SCHED_DEBUG */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 187a22800577..ac68db706b7c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1174,10 +1174,8 @@ struct rq {
 
 	atomic_t		nr_iowait;
 
-#ifdef CONFIG_SCHED_DEBUG
 	u64			last_seen_need_resched_ns;
 	int			ticks_without_resched;
-#endif
 
 #ifdef CONFIG_MEMBARRIER
 	int membarrier_state;
@@ -1706,14 +1704,12 @@ static inline void rq_clock_stop_loop_update(struct rq *rq)
 struct rq_flags {
 	unsigned long		flags;
 	struct pin_cookie	cookie;
-#ifdef CONFIG_SCHED_DEBUG
 	/*
 	 * A copy of (rq::clock_update_flags & RQCF_UPDATED) for the
 	 * current pin context is stashed here in case it needs to be
 	 * restored in rq_repin_lock().
 	 */
 	unsigned int		clock_update_flags;
-#endif
 };
 
 extern struct balance_callback balance_push_callback;
@@ -1764,21 +1760,18 @@ static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 {
 	rf->cookie = lockdep_pin_lock(__rq_lockp(rq));
 
-#ifdef CONFIG_SCHED_DEBUG
 	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 	rf->clock_update_flags = 0;
-# ifdef CONFIG_SMP
+#ifdef CONFIG_SMP
 	WARN_ON_ONCE(rq->balance_callback && rq->balance_callback != &balance_push_callback);
-# endif
 #endif
 }
 
 static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)
 {
-#ifdef CONFIG_SCHED_DEBUG
 	if (rq->clock_update_flags > RQCF_ACT_SKIP)
 		rf->clock_update_flags = RQCF_UPDATED;
-#endif
+
 	scx_rq_clock_invalidate(rq);
 	lockdep_unpin_lock(__rq_lockp(rq), rf->cookie);
 }
@@ -1787,12 +1780,10 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 {
 	lockdep_repin_lock(__rq_lockp(rq), rf->cookie);
 
-#ifdef CONFIG_SCHED_DEBUG
 	/*
 	 * Restore the value we stashed in @rf for this pin context.
 	 */
 	rq->clock_update_flags |= rf->clock_update_flags;
-#endif
 }
 
 extern
@@ -2066,9 +2057,7 @@ struct sched_group_capacity {
 	unsigned long		next_update;
 	int			imbalance;	/* XXX unrelated to capacity but shared group state */
 
-#ifdef CONFIG_SCHED_DEBUG
 	int			id;
-#endif
 
 	unsigned long		cpumask[];	/* Balance mask */
 };
@@ -2108,13 +2097,8 @@ static inline struct cpumask *group_balance_mask(struct sched_group *sg)
 
 extern int group_balance_cpu(struct sched_group *sg);
 
-#ifdef CONFIG_SCHED_DEBUG
 extern void update_sched_domain_debugfs(void);
 extern void dirty_sched_domain_sysctl(int cpu);
-#else
-static inline void update_sched_domain_debugfs(void) { }
-static inline void dirty_sched_domain_sysctl(int cpu) { }
-#endif
 
 extern int sched_update_scaling(void);
 
@@ -2207,8 +2191,6 @@ enum {
 
 #undef SCHED_FEAT
 
-#ifdef CONFIG_SCHED_DEBUG
-
 /*
  * To support run-time toggling of sched features, all the translation units
  * (but core.c) reference the sysctl_sched_features defined in core.c.
@@ -2235,24 +2217,6 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 
 #endif /* !CONFIG_JUMP_LABEL */
 
-#else /* !SCHED_DEBUG: */
-
-/*
- * Each translation unit has its own copy of sysctl_sched_features to allow
- * constants propagation at compile time and compiler optimization based on
- * features default.
- */
-#define SCHED_FEAT(name, enabled)	\
-	(1UL << __SCHED_FEAT_##name) * enabled |
-static __read_mostly __maybe_unused unsigned int sysctl_sched_features =
-#include "features.h"
-	0;
-#undef SCHED_FEAT
-
-#define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
-
-#endif /* !SCHED_DEBUG */
-
 extern struct static_key_false sched_numa_balancing;
 extern struct static_key_false sched_schedstats;
 
@@ -2837,7 +2801,6 @@ extern __read_mostly unsigned int sysctl_sched_migration_cost;
 
 extern unsigned int sysctl_sched_base_slice;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern int sysctl_resched_latency_warn_ms;
 extern int sysctl_resched_latency_warn_once;
 
@@ -2848,7 +2811,6 @@ extern unsigned int sysctl_numa_balancing_scan_period_min;
 extern unsigned int sysctl_numa_balancing_scan_period_max;
 extern unsigned int sysctl_numa_balancing_scan_size;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
-#endif
 
 #ifdef CONFIG_SCHED_HRTICK
 
@@ -2921,7 +2883,6 @@ unsigned long arch_scale_freq_capacity(int cpu)
 }
 #endif
 
-#ifdef CONFIG_SCHED_DEBUG
 /*
  * In double_lock_balance()/double_rq_lock(), we use raw_spin_rq_lock() to
  * acquire rq lock instead of rq_lock(). So at the end of these two functions
@@ -2936,9 +2897,6 @@ static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
 	rq2->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 #endif
 }
-#else
-static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2) { }
-#endif
 
 #define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...)			\
 	__DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__)	\
@@ -3151,7 +3109,6 @@ extern struct sched_entity *__pick_root_entity(struct cfs_rq *cfs_rq);
 extern struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq);
 extern struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq);
 
-#ifdef CONFIG_SCHED_DEBUG
 extern bool sched_debug_verbose;
 
 extern void print_cfs_stats(struct seq_file *m, int cpu);
@@ -3162,15 +3119,12 @@ extern void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq);
 extern void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq);
 
 extern void resched_latency_warn(int cpu, u64 latency);
-# ifdef CONFIG_NUMA_BALANCING
+#ifdef CONFIG_NUMA_BALANCING
 extern void show_numa_stats(struct task_struct *p, struct seq_file *m);
 extern void
 print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
		 unsigned long tpf, unsigned long gsf, unsigned long gpf);
-# endif /* CONFIG_NUMA_BALANCING */
-#else /* !CONFIG_SCHED_DEBUG: */
-static inline void resched_latency_warn(int cpu, u64 latency) { }
-#endif /* !CONFIG_SCHED_DEBUG */
+#endif /* CONFIG_NUMA_BALANCING */
 
 extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index c49aea8c1025..cb0769820b0b 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -11,8 +11,6 @@ DEFINE_MUTEX(sched_domains_mutex);
 static cpumask_var_t sched_domains_tmpmask;
 static cpumask_var_t sched_domains_tmpmask2;
 
-#ifdef CONFIG_SCHED_DEBUG
-
 static int __init sched_debug_setup(char *str)
 {
 	sched_debug_verbose = true;
@@ -151,15 +149,6 @@ static void sched_domain_debug(struct sched_domain *sd, int cpu)
 		break;
 	}
 }
-#else /* !CONFIG_SCHED_DEBUG */
-
-# define sched_debug_verbose 0
-# define sched_domain_debug(sd, cpu) do { } while (0)
-static inline bool sched_debug(void)
-{
-	return false;
-}
-#endif /* CONFIG_SCHED_DEBUG */
 
 /* Generate a mask of SD flags with the SDF_NEEDS_GROUPS metaflag */
 #define SD_FLAG(name, mflags) (name * !!((mflags) & SDF_NEEDS_GROUPS)) |
@@ -2275,9 +2264,7 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 		if (!sgc)
 			return -ENOMEM;
 
-#ifdef CONFIG_SCHED_DEBUG
 		sgc->id = j;
-#endif
 
 		*per_cpu_ptr(sdd->sgc, j) = sgc;
 	}
-- 
2.45.2
Sender: Ingo Molnar
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra, Shrikanth Hegde,
    Thomas Gleixner, Valentin Schneider, Steven Rostedt, Mel Gorman,
    Vincent Guittot
Subject: [PATCH 4/5] sched/debug, Documentation: Remove (most) CONFIG_SCHED_DEBUG references from documentation
Date: Mon, 17 Mar 2025 11:42:55 +0100
Message-ID: <20250317104257.3496611-5-mingo@kernel.org>
In-Reply-To: <20250317104257.3496611-1-mingo@kernel.org>
References: <20250317104257.3496611-1-mingo@kernel.org>

Since the CONFIG_SCHED_DEBUG functionality is enabled unconditionally now,
remove all references to it from the documentation.

(Translations in languages I cannot read were left out.)
Signed-off-by: Ingo Molnar
---
 Documentation/scheduler/sched-debug.rst                         | 2 +-
 Documentation/scheduler/sched-design-CFS.rst                    | 2 +-
 Documentation/scheduler/sched-domains.rst                       | 5 ++---
 Documentation/scheduler/sched-ext.rst                           | 3 +--
 Documentation/scheduler/sched-stats.rst                         | 2 +-
 Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst | 2 +-
 6 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/Documentation/scheduler/sched-debug.rst b/Documentation/scheduler/sched-debug.rst
index 4d3d24f2a439..b5a92a39eccd 100644
--- a/Documentation/scheduler/sched-debug.rst
+++ b/Documentation/scheduler/sched-debug.rst
@@ -2,7 +2,7 @@
 Scheduler debugfs
 =================
 
-Booting a kernel with CONFIG_SCHED_DEBUG=y will give access to
+Booting a kernel with debugfs enabled will give access to
 scheduler specific debug files under /sys/kernel/debug/sched. Some of
 those files are described below.
 
diff --git a/Documentation/scheduler/sched-design-CFS.rst b/Documentation/scheduler/sched-design-CFS.rst
index 8786f219fc73..b574a2644c77 100644
--- a/Documentation/scheduler/sched-design-CFS.rst
+++ b/Documentation/scheduler/sched-design-CFS.rst
@@ -96,7 +96,7 @@ picked and the current task is preempted.
 CFS uses nanosecond granularity accounting and does not rely on any jiffies
 or other HZ detail.  Thus the CFS scheduler has no notion of "timeslices" in the
 way the previous scheduler had, and has no heuristics whatsoever.  There is
-only one central tunable (you have to switch on CONFIG_SCHED_DEBUG):
+only one central tunable:
 
    /sys/kernel/debug/sched/base_slice_ns
 
diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 5e996fe973b1..15e3a4cb304a 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -73,9 +73,8 @@ Architectures may override the generic domain builder and the default SD flags
 for a given topology level by creating a sched_domain_topology_level array and
 calling set_sched_topology() with this array as the parameter.
 
-The sched-domains debugging infrastructure can be enabled by enabling
-CONFIG_SCHED_DEBUG and adding 'sched_verbose' to your cmdline. If you
-forgot to tweak your cmdline, you can also flip the
+The sched-domains debugging infrastructure can be enabled by adding 'sched_verbose'
+to your cmdline. If you forgot to tweak your cmdline, you can also flip the
 /sys/kernel/debug/sched/verbose knob. This enables an error checking parse of
 the sched domains which should catch most possible errors (described above). It
 also prints out the domain structure in a visual format.
diff --git a/Documentation/scheduler/sched-ext.rst b/Documentation/scheduler/sched-ext.rst
index c4672d7df2f7..5788a3319630 100644
--- a/Documentation/scheduler/sched-ext.rst
+++ b/Documentation/scheduler/sched-ext.rst
@@ -107,8 +107,7 @@ detailed information:
   nr_rejected   : 0
   enable_seq    : 1
 
-If ``CONFIG_SCHED_DEBUG`` is set, whether a given task is on sched_ext can
-be determined as follows:
+Whether a given task is on sched_ext can be determined as follows:
 
 .. code-block:: none
 
diff --git a/Documentation/scheduler/sched-stats.rst b/Documentation/scheduler/sched-stats.rst
index caea83d91c67..08b6bc9a315c 100644
--- a/Documentation/scheduler/sched-stats.rst
+++ b/Documentation/scheduler/sched-stats.rst
@@ -88,7 +88,7 @@ One of these is produced per domain for each cpu described. (Note that if
 CONFIG_SMP is not defined, *no* domains are utilized and these lines
 will not appear in the output. <name> is an extension to the domain field
 that prints the name of the corresponding sched domain. It can appear in
-schedstat version 17 and above, and requires CONFIG_SCHED_DEBUG.)
+schedstat version 17 and above.)
 
 domain<N> <name> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
 
diff --git a/Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst b/Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst
index dc728c739e28..b35d24464be9 100644
--- a/Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst
+++ b/Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst
@@ -112,7 +112,7 @@ CFS usa una granularidad de nanosegundos y no depende de ningún jiffy o
 detalles como HZ. De este modo, el gestor de tareas CFS no tiene
 noción de "ventanas de tiempo" de la forma en que tenía el gestor de
 tareas previo, y tampoco tiene heurísticos. Únicamente hay un parámetro
-central ajustable (se ha de cambiar en CONFIG_SCHED_DEBUG):
+central ajustable:
 
    /sys/kernel/debug/sched/base_slice_ns
 
-- 
2.45.2
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Linus Torvalds, Peter Zijlstra, Shrikanth Hegde, Thomas Gleixner, Valentin Schneider, Steven Rostedt, Mel Gorman, Vincent Guittot
Subject: [PATCH 5/5] sched/debug: Remove CONFIG_SCHED_DEBUG
Date: Mon, 17 Mar 2025 11:42:56 +0100
Message-ID: <20250317104257.3496611-6-mingo@kernel.org>
In-Reply-To: <20250317104257.3496611-1-mingo@kernel.org>
References: <20250317104257.3496611-1-mingo@kernel.org>

For more than a decade, CONFIG_SCHED_DEBUG=y has been enabled in all the
major Linux distributions:

  /boot/config-6.11.0-19-generic:CONFIG_SCHED_DEBUG=y

The reason is that while CONFIG_SCHED_DEBUG originally started out as a
debugging feature, over the years (decades ...) it has grown various bits
of statistics, instrumentation and control knobs that are useful for
sysadmin and general software development purposes as well.

But within the kernel we still pretend that there's a choice, and
sometimes code that is seemingly 'debug only' creates overhead that
should in reality be optimized.

So make it all official and make CONFIG_SCHED_DEBUG unconditional.

Now that all uses of CONFIG_SCHED_DEBUG have been removed from the code
by the previous patches, remove the Kconfig option as well.
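The distro-config claim above can be spot-checked from a shell. The sketch below runs against a hypothetical `config.sample` fragment so it is self-contained; on a real system you would grep `/boot/config-$(uname -r)` (the changelog shows `/boot/config-6.11.0-19-generic` as one concrete instance):

```shell
# Spot-check whether CONFIG_SCHED_DEBUG is enabled in a kernel config.
# "config.sample" is a hypothetical stand-in for /boot/config-$(uname -r).
printf 'CONFIG_SCHED_INFO=y\nCONFIG_SCHED_DEBUG=y\n' > config.sample

# grep exits 0 and prints the matching line when the option is set:
grep '^CONFIG_SCHED_DEBUG=y' config.sample
```

With the patch applied, this check becomes moot: the option no longer exists as a Kconfig symbol, so it can appear in neither distro configs nor `depends on` clauses.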
Signed-off-by: Ingo Molnar
---
 lib/Kconfig.debug | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1af972a92d06..a2ab693d008d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1301,15 +1301,6 @@ endmenu # "Debug lockups and hangs"
 
 menu "Scheduler Debugging"
 
-config SCHED_DEBUG
-	bool "Collect scheduler debugging info"
-	depends on DEBUG_KERNEL && DEBUG_FS
-	default y
-	help
-	  If you say Y here, the /sys/kernel/debug/sched file will be provided
-	  that can help debug the scheduler. The runtime overhead of this
-	  option is minimal.
-
 config SCHED_INFO
 	bool
 	default n
-- 
2.45.2