On an x86 system under test with 1780 CPUs, topology_span_sane() takes
around 8 seconds cumulatively across all of its iterations. It is an
expensive operation which sanity checks the non-NUMA topology masks.
CPU topology does not change frequently, so make this check optional for
systems where the topology is trusted and faster bootup is needed.

Gate the check behind the sched_verbose kernel cmdline option so that
systems that want to avoid the penalty can do so.
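For context, the cost comes from the pairwise cpumask comparisons the
check performs across CPUs at each non-NUMA topology level. Below is a
purely illustrative userspace model of that loop structure (the
constants and helper names here are made up for the example and are not
the kernel implementation):

/*
 * Toy model of the pairwise span check: for each CPU, compare its
 * topology mask against every later CPU's mask. With N CPUs this is
 * O(N^2) mask comparisons per topology level, which is how the time
 * adds up on a ~1780-CPU machine.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NR_CPUS 16			/* kept small for illustration */
#define MASK_WORDS (NR_CPUS / 8 + 1)

struct cpumask { unsigned char bits[MASK_WORDS]; };

static bool masks_equal(const struct cpumask *a, const struct cpumask *b)
{
	return memcmp(a->bits, b->bits, MASK_WORDS) == 0;
}

static bool masks_intersect(const struct cpumask *a, const struct cpumask *b)
{
	for (int i = 0; i < MASK_WORDS; i++)
		if (a->bits[i] & b->bits[i])
			return true;
	return false;
}

/* Non-NUMA masks must be either identical or completely disjoint. */
static bool span_sane(const struct cpumask mask[], int cpu)
{
	for (int i = cpu + 1; i < NR_CPUS; i++)
		if (!masks_equal(&mask[cpu], &mask[i]) &&
		    masks_intersect(&mask[cpu], &mask[i]))
			return false;
	return true;
}

int main(void)
{
	struct cpumask mask[NR_CPUS];

	memset(mask, 0, sizeof(mask));

	/* Two 8-CPU "cores": CPUs 0-7 share one mask, CPUs 8-15 another. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		mask[cpu].bits[cpu / 8] = 0xff;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (!span_sane(mask, cpu))
			printf("CPU%d: partial overlap detected\n", cpu);

	puts("span check done");
	return 0;
}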
Cc: stable@vger.kernel.org
Fixes: ccf74128d66c ("sched/topology: Assert non-NUMA topology masks don't (partially) overlap")
Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
---
[V2]
- Use kernel cmdline param instead of compile time flag.
kernel/sched/topology.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..4ca63bff321d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2363,6 +2363,13 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
{
int i = cpu + 1;
+ /* Skip the topology sanity check for non-debug, as it is a time-consuming operation */
+ if (!sched_debug_verbose) {
+ pr_info_once("%s: Skipping topology span sanity check. Use `sched_verbose` boot parameter to enable it.\n",
+ __func__);
+ return true;
+ }
+
/* NUMA levels are allowed to overlap */
if (tl->flags & SDTL_OVERLAP)
return true;
--
2.43.0
(+ Steve)

Hello Saurabh,

On 11/18/2024 3:09 PM, Saurabh Sengar wrote:
> On an x86 system under test with 1780 CPUs, topology_span_sane() takes
> around 8 seconds cumulatively across all of its iterations. It is an
> expensive operation which sanity checks the non-NUMA topology masks.

Steve too was optimizing this path. I believe his latest version can be
found at:
https://lore.kernel.org/lkml/20241031200431.182443-1-steve.wahl@hpe.com/

Does that approach help improve bootup time for you?

Valentin suggested the same approach as yours on a previous version of
Steve's optimization, but Steve believed returning true could have other
implications in the sched-domain building path. The thread can be found at:
https://lore.kernel.org/lkml/Zw_k_WFeYFli87ck@swahl-home.5wahls.com/

>
> CPU topology does not change frequently, so make this check optional for
> systems where the topology is trusted and faster bootup is needed.
>
> Gate the check behind the sched_verbose kernel cmdline option so that
> systems that want to avoid the penalty can do so.
>
> Cc: stable@vger.kernel.org
> Fixes: ccf74128d66c ("sched/topology: Assert non-NUMA topology masks don't (partially) overlap")
> Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
> ---
> [V2]
> - Use kernel cmdline param instead of compile time flag.
>
> kernel/sched/topology.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 9748a4c8d668..4ca63bff321d 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2363,6 +2363,13 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
> {
> int i = cpu + 1;
>
> + /* Skip the topology sanity check for non-debug, as it is a time-consuming operation */
> + if (!sched_debug_verbose) {

nit. I think the convention in topology.c is to call "sched_debug()" and
not check "sched_debug_verbose" directly.

> + pr_info_once("%s: Skipping topology span sanity check. Use `sched_verbose` boot parameter to enable it.\n",
> + __func__);
> + return true;
> + }
> +
> /* NUMA levels are allowed to overlap */
> if (tl->flags & SDTL_OVERLAP)
> return true;

--
Thanks and Regards,
Prateek
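For reference, the sched_debug() convention Prateek mentions above comes
from the existing helpers near the top of kernel/sched/topology.c. From
memory of mainline (exact details may differ between kernel versions),
they look roughly like:

static int __init sched_debug_setup(char *str)
{
	sched_debug_verbose = true;

	return 0;
}
early_param("sched_verbose", sched_debug_setup);

static inline bool sched_debug(void)
{
	return sched_debug_verbose;
}

/* ... and for !CONFIG_SCHED_DEBUG builds: */
static inline bool sched_debug(void)
{
	return false;
}

Using the sched_debug() helper keeps the style consistent with the rest
of the file and leaves the !CONFIG_SCHED_DEBUG handling in one place.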
On Tue, Nov 19, 2024 at 11:54:57AM +0530, K Prateek Nayak wrote:
> (+ Steve)
>
> Hello Saurabh,
> On 11/18/2024 3:09 PM, Saurabh Sengar wrote:
> > On an x86 system under test with 1780 CPUs, topology_span_sane() takes
> > around 8 seconds cumulatively across all of its iterations. It is an
> > expensive operation which sanity checks the non-NUMA topology masks.
>
> Steve too was optimizing this path. I believe his latest version can be
> found at:
> https://lore.kernel.org/lkml/20241031200431.182443-1-steve.wahl@hpe.com/

Yes, Saurabh, I'd be very interested in whether my current patch relieves
your situation.

Thanks,

--> Steve Wahl

--
Steve Wahl, Hewlett Packard Enterprise