On 2026.02.10 10:13 Peter Zijlstra wrote:
> On Tue, Feb 03, 2026 at 08:36:41AM -0800, Doug Smythies wrote:
>
>> Further to my email from the other day, where all was good [1],
>> I have continued to test, in particular the severe overload conditions
>> from [2].
>
>> Conditions:
>> Greater than 12,500 X (yes > /dev/null) tasks
>> But less than 15,000 X (yes > /dev/null) tasks
>>
>> I have tested up to 20,000 X (yes > /dev/null) tasks
>> with previous kernels, including mainline 6.19-rc1.
>>
>> I would not disagree if you say my operating conditions
>> are ridiculous.
>
> They absolutely are; however!, people do crazy things so I doubt you are
> alone.
>
>> System:
>> Processor: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz, 6 cores 12 CPUs.
>> CPU frequency scaling driver: intel_pstate; Governor powersave.
>
> Right, so I was too lazy to find a matching test machine, but instead
> used taskset to limit myself to 6 cores/12 threads and let it rip.
>
> # taskset -c -p 0-5,24-29 $$
> # for ((i=0; i<20000; i++)) do yes > /dev/null & done
>
> ... a *LONG* while later ...
>
> And I have reached 15k.
>
> ... this is *SLOW* ...

Thanks for trying it. And yes, it gets very slow; I should have warned
readers. With the first version of this patch set it took my computer
20 minutes to spin out 18,000 tasks (a timing sketch is at the end of
this mail). A graph is attached. Note that, more typically, I could not
get to 18,000 tasks. I don't know a predictable way to create the hang.

By the way, I had no issue with 80,000 tasks if they contained some
regular sleep time (a sketch of such a workload is also at the end of
this mail). A graph is attached. The load average was 79,400.

> So I reached 20000 and figured what the heck and went for another 5k.
>
> Eventually I managed to reach 21160, and then boom.
>
> It is one of those pick_next_task_fair() NULL pointer derefs that are so
> very indicative of math overflow.
>
> I'll try and have a poke, if only this were a faster thing ;-)
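
For anyone who wants to watch the spawn rate, here is a rough timing
sketch (illustrative only, not my exact script; the batch size of 1000
is arbitrary):

    #!/bin/bash
    # Spawn the busy-loop tasks and timestamp progress in batches.
    for ((i = 1; i <= 18000; i++)); do
        yes > /dev/null &
        if (( i % 1000 == 0 )); then
            echo "$(date +%H:%M:%S) spawned $i, loadavg $(cut -d' ' -f1 /proc/loadavg)"
        fi
    done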
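
And this is roughly what I mean by tasks with "some regular sleep
time" (the work/sleep ratio below is just an example, not my actual
test script; adjust to taste):

    #!/bin/bash
    # Each task alternates a burst of busy work with a short sleep, so,
    # unlike a bare 'yes > /dev/null', it is not runnable 100% of the
    # time. With 80,000 tasks at a ~99% duty cycle the load average
    # should settle near, but below, 80,000.
    for ((i = 0; i < 80000; i++)); do
        while :; do
            timeout 1 yes > /dev/null   # busy for about 1 second
            sleep 0.01                  # then sleep briefly
        done &
    done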