The main purpose is to avoid too many unnecessary cross-CPU wakeups.
Frequent cross-CPU wakeups can hurt some workloads significantly,
especially on systems with high core counts.

Inhibit the cross-CPU wakeup by placing the wakee on the waking CPU
if both the waker and the wakee are short-duration tasks. A
short-duration task can become a troublemaker on a highly loaded
system, because it causes frequent context switches. This strategy
only takes effect when the system is busy, because it is unreasonable
to inhibit the idle CPU scan while there are still idle CPUs.

The first patch introduces the definition of a short-duration task.
The second patch leverages the first to choose a local CPU for the
wakee.

Overall there is a performance improvement in some overloaded cases,
such as will-it-scale and netperf, and no noticeable impact on
schbench, hackbench, tbench or an OLTP workload with a commercial
RDBMS, tested on an Intel Xeon 2 x 56C machine.

Per the tests on Zen3 from Prateek, most benchmark results saw small
wins or are comparable to sched:tip. SpecJBB Critical-jOPS improved
while Max-jOPS saw a small hit, but it might be within the expected
range. ycsb-mongodb saw a small uplift in NPS1 mode.

A throughput improvement for netperf (localhost) was observed on a
Rome 2 x 64C machine when the number of clients equals the number of
CPUs.

Abel reported a latency regression from Redis on an overloaded
system. Inspired by his description, v5 added a check of wakee_flips
to mitigate task stacking.

Changes since v5:
1. Check the wakee_flips of the waker/wakee. If the wakee_flips of
   both the waker and the wakee are 0, it indicates that the waker and
   the wakee are waking up each other. In this case, put them together
   on the same CPU. This avoids stacking too many wakees on one CPU,
   which might cause a regression on Redis.

Changes since v4:
1. Dietmar commented on the task duration calculation, so the commit
   log was refined to reduce confusion.
2. Change [PATCH 1/2] to only record the average duration of a task,
   so this change could also benefit UTIL_EST_FASTER[1].
3. As v4 reported regressions on Zen3 and Kunpeng Arm64, add back the
   system average utilization restriction: if the system is not busy,
   do not enable the short-task wakeup. This logic has shown
   improvement on Zen3[2].
4. Restrict the wakeup target to the current CPU, rather than both the
   current CPU and the task's previous CPU. This could also benefit
   wakeup optimization from interrupts in the future, as suggested by
   Yicong.

Changes since v3:
1. Honglei and Josh were concerned that the threshold for short task
   duration could be too long. Decrease the threshold from
   sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8);
   the '8' comes from get_update_sysctl_factor().
2. Export p->se.dur_avg to /proc/{pid}/sched, per Yicong's suggestion.
3. Move the calculation of the average duration from
   put_prev_task_fair() to dequeue_task_fair(), because in v3
   put_prev_task_fair() is not invoked by pick_next_task_fair() in the
   fast path, so dur_avg could not be updated in time.
4. Fix the comment in PATCH 2/2 that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
   on CPU0 happens earlier than CPU1 reading "ttwu_list->p0", per
   Tianchen.
5. Move the scan for a CPU with a short-duration task from
   select_idle_cpu() to select_idle_sibling(), because no CPU scan is
   involved, per Yicong.

Changes since v2:

1. Peter suggested comparing the duration of the waker with the cost
   to scan for an idle CPU: if the cost is higher than the task
   duration, do not waste time finding an idle CPU and choose the
   local or previous CPU directly. A prototype was created based on
   this suggestion. However, according to the test results, the
   prototype does not inhibit the cross-CPU wakeup and did not bring
   improvement, because the cost to find an idle CPU is small in the
   problematic scenario. The root cause of the problem is a race
   condition between scanning for an idle CPU and task enqueue
   (please refer to the commit log in PATCH 2/2). So v3 does not
   change the core logic of v2, with some refinement based on Peter's
   suggestion.

2. Simplify the logic to record the task duration, per Peter's and
   Abel's suggestions.


[1] https://lore.kernel.org/lkml/c56855a7-14fd-4737-fc8b-8ea21487c5f6@arm.com/
[2] https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/

v5: https://lore.kernel.org/lkml/cover.1675361144.git.yu.c.chen@intel.com/
v4: https://lore.kernel.org/lkml/cover.1671158588.git.yu.c.chen@intel.com/
v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/

Chen Yu (2):
  sched/fair: Record the average duration of a task
  sched/fair: Introduce SIS_SHORT to wake up short task on current CPU

 include/linux/sched.h   |  3 +++
 kernel/sched/core.c     |  2 ++
 kernel/sched/debug.c    |  1 +
 kernel/sched/fair.c     | 49 +++++++++++++++++++++++++++++++++++++++++
 kernel/sched/features.h |  1 +
 5 files changed, 56 insertions(+)

--
2.25.1
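To make the heuristic described in the cover letter concrete, here is a
small stand-alone C model of it. This is only a sketch reconstructed
from the cover letter, not the kernel patch itself: the EWMA weighting,
the struct and field names, and the explicit 'busy' flag are
illustrative assumptions; the real implementation lives in
kernel/sched/fair.c in the two patches of this series.

/*
 * Stand-alone model of the SIS_SHORT heuristic described above -- NOT
 * the kernel patch itself.  The EWMA weighting, the struct/field names
 * and the explicit 'busy' flag are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_MSEC			1000000ULL
/* assume a 3ms scaled sysctl_sched_min_granularity for the example */
#define SCHED_MIN_GRANULARITY_NS	(3 * NSEC_PER_MSEC)

struct task_model {
	uint64_t dur_avg;	/* average run time per wakeup, in ns */
	unsigned wakee_flips;	/* how often the task switches wakees */
};

/* PATCH 1/2 idea: fold the latest run duration into a running average. */
static void record_duration(struct task_model *p, uint64_t last_run_ns)
{
	/* update_avg()-style EWMA: avg += (sample - avg) / 8 */
	int64_t diff = (int64_t)(last_run_ns - p->dur_avg);

	p->dur_avg += diff / 8;
}

/* PATCH 2/2 idea: "short" == rarely flips wakees and runs very briefly. */
static bool is_short_task(const struct task_model *p)
{
	return p->wakee_flips == 0 && p->dur_avg > 0 &&
	       p->dur_avg < SCHED_MIN_GRANULARITY_NS / 8;
}

/*
 * Wakeup placement: only when the system is busy and both waker and
 * wakee are short is the wakee kept on the waking CPU; otherwise fall
 * back to whatever the idle-CPU scan picked.
 */
static int pick_wakeup_cpu(const struct task_model *waker,
			   const struct task_model *wakee,
			   int waking_cpu, int scanned_cpu, bool busy)
{
	if (busy && is_short_task(waker) && is_short_task(wakee))
		return waking_cpu;

	return scanned_cpu;
}

int main(void)
{
	struct task_model waker = { 0 }, wakee = { 0 };

	/* two ping-pong tasks that each run ~100us per wakeup */
	for (int i = 0; i < 32; i++) {
		record_duration(&waker, 100 * 1000);
		record_duration(&wakee, 100 * 1000);
	}

	printf("waker dur_avg = %llu ns, short = %d\n",
	       (unsigned long long)waker.dur_avg, is_short_task(&waker));
	printf("busy system -> wakee placed on CPU %d\n",
	       pick_wakeup_cpu(&waker, &wakee, 0, 7, true));
	printf("idle CPUs   -> wakee placed on CPU %d\n",
	       pick_wakeup_cpu(&waker, &wakee, 0, 7, false));

	return 0;
}

The point the model illustrates is that the wakee stays on the waking
CPU only when the system is already busy, both tasks have an observed
average run time below sysctl_sched_min_granularity / 8, and their
wakee_flips are 0 (i.e. they wake only each other).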
Hi Chenyu,

On 2023/2/22 22:09, Chen Yu wrote:
> The main purpose is to avoid too many unnecessary cross-CPU wakeups.
> Frequent cross-CPU wakeups can hurt some workloads significantly,
> especially on systems with high core counts.
>
> Inhibit the cross-CPU wakeup by placing the wakee on the waking CPU
> if both the waker and the wakee are short-duration tasks. A
> short-duration task can become a troublemaker on a highly loaded
> system, because it causes frequent context switches. This strategy
> only takes effect when the system is busy, because it is unreasonable
> to inhibit the idle CPU scan while there are still idle CPUs.
>
> The first patch introduces the definition of a short-duration task.
> The second patch leverages the first to choose a local CPU for the
> wakee.
>
> Overall there is a performance improvement in some overloaded cases,
> such as will-it-scale and netperf, and no noticeable impact on
> schbench, hackbench, tbench or an OLTP workload with a commercial
> RDBMS, tested on an Intel Xeon 2 x 56C machine.
>
> Per the tests on Zen3 from Prateek, most benchmark results saw small
> wins or are comparable to sched:tip. SpecJBB Critical-jOPS improved
> while Max-jOPS saw a small hit, but it might be within the expected
> range. ycsb-mongodb saw a small uplift in NPS1 mode.
>
> A throughput improvement for netperf (localhost) was observed on a
> Rome 2 x 64C machine when the number of clients equals the number of
> CPUs.
>
> Abel reported a latency regression from Redis on an overloaded
> system. Inspired by his description, v5 added a check of wakee_flips
> to mitigate task stacking.
>
> Changes since v5:
> 1. Check the wakee_flips of the waker/wakee. If the wakee_flips of
>    both the waker and the wakee are 0, it indicates that the waker and
>    the wakee are waking up each other. In this case, put them together
>    on the same CPU. This avoids stacking too many wakees on one CPU,
>    which might cause a regression on Redis.
>

The patch looks good to me. And for the v6 version there's no
significant regression on our server. :)

Detailed results below. The setup is the same as the one used for v4.
There are some gains for UDP_RR. For mysql there is no significant
regression; there is a ~2% loss in the 128-thread case, but it is
within the fluctuation range, so it should be OK.
Thanks,
Yicong

tbench results (node 0):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:              322.2940        324.6750 (  0.74%)
  4:             1293.4000       1294.2900 (  0.07%)
  8:             2568.7700       2606.9200 (  1.49%)
 16:             5158.0800       5254.5500 (  1.87%)
 32:            10074.2000      10286.9000 (  2.11%)
 64:             7910.5000       7969.1000 (  0.74%)
128:             6670.3900       6699.7600 (  0.44%)

tbench results (node 0-1):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:              324.5650        330.9840 (  1.98%)
  4:             1302.9000       1311.3300 (  0.65%)
  8:             2573.7200       2615.5500 (  1.63%)
 16:             5092.7900       5178.4200 (  1.68%)
 32:             8766.8000       9466.6700 (  7.98%)
 64:            16859.5000      16535.2000 ( -1.92%)
128:            13673.6000      11960.7000 (-12.53%)
128:            13673.6000      13066.6000 ( -4.43%)  [verification]

netperf results TCP_RR (node 0):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:            74952.4633      74964.7067 (  0.02%)
  4:            76020.1192      75984.1158 ( -0.05%)
  8:            76698.5804      76846.1312 (  0.19%)
 16:            77713.0350      77858.9752 (  0.19%)
 32:            65947.7090      66124.5758 (  0.27%)
 64:            25195.1539      25374.4948 (  0.71%)
128:            10602.2112      10703.6223 (  0.96%)

netperf results TCP_RR (node 0-1):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:            76229.5167      76178.0700 ( -0.07%)
  4:            77196.7558      76546.3333 ( -0.84%)
  8:            76340.5733      76612.0829 (  0.36%)
 16:            75808.3848      75132.6454 ( -0.89%)
 32:            75178.5653      71464.7628 ( -4.94%)
 64:            75222.3510      76028.7949 (  1.07%)
128:            28552.5946      28830.6498 (  0.97%)

netperf results UDP_RR (node 0):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:            90744.2900      94082.3967 (  3.68%)
  4:            92323.3100      95160.6525 (  3.07%)
  8:            92951.3996      96303.9783 (  3.61%)
 16:            93418.8117      97358.5925 (  4.22%)
 32:            78725.5393      84363.7384 (  7.16%)
 64:            30069.7350      30929.4379 (  2.86%)
128:            12501.6386      12695.0588 (  1.55%)

netperf results UDP_RR (node 0-1):
Compare      6.3-rc1-vanilla    sis-short-v6
  1:            92030.0200      95568.9767 (  3.85%)
  4:            90041.0042      94265.9775 (  4.69%)
  8:            90273.7354      94055.1283 (  4.19%)
 16:            90404.9869      95233.8633 (  5.34%)
 32:            74813.0918      83538.5368 ( 11.66%)  [*]
 64:            57494.5351      74612.3866 ( 29.77%)  [*]
128:            24803.3562      16420.1439 (-33.80%)  [*]

[*] untrustworthy due to large fluctuation

mysql results:
              6.3.0-rc1-vanilla    6.3.0-rc1-sis-short-v6
              Avg                  Avg (%)
TPS-8               4496.25             4520.99 (0.55%)
QPS-8              71940.01            72335.93 (0.55%)
avg_lat-8              1.78                1.77 (0.38%)
95th_lat-8             2.10                2.11 (-0.48%)
TPS-16              8905.41             8844.63 (-0.68%)
QPS-16            142486.59           141514.08 (-0.68%)
avg_lat-16             1.80                1.81 (-0.56%)
95th_lat-16            2.18                2.23 (-2.45%)
TPS-32             14214.88            14378.30 (1.15%)
QPS-32            227438.14           230052.76 (1.15%)
avg_lat-32             2.25                2.22 (1.19%)
95th_lat-32            3.06                2.95 (3.70%)
TPS-64             14697.57            14801.04 (0.70%)
QPS-64            235161.20           236816.57 (0.70%)
avg_lat-64             4.35                4.32 (0.69%)
95th_lat-64            6.51                6.39 (1.79%)
TPS-128            18417.83            18010.42 (-2.21%)
QPS-128           294685.24           288166.68 (-2.21%)
avg_lat-128            6.97                7.13 (-2.30%)
95th_lat-128          10.22               10.46 (-2.35%)
TPS-256            29940.82            30491.62 (1.84%)
QPS-256           479053.21           487865.89 (1.84%)
avg_lat-256            8.54                8.41 (1.60%)
95th_lat-256          13.22               13.55 (-2.50%)

> [..snip..]
On 2023-03-15 at 17:34:43 +0800, Yicong Yang wrote:
> Hi Chenyu,
>
> On 2023/2/22 22:09, Chen Yu wrote:
> > [..snip..]
>
> The patch looks good to me. And for the v6 version there's no
> significant regression on our server. :)
>
> Detailed results below. The setup is the same as the one used for v4.
> There are some gains for UDP_RR. For mysql there is no significant
> regression; there is a ~2% loss in the 128-thread case, but it is
> within the fluctuation range, so it should be OK.
>
Thanks Yicong for the test!

thanks,
Chenyu
Hello Chenyu,

I did not observe any regression when testing v6 :)
Most of the benchmark results are comparable to the tip, with minor
gains in some benchmarks with a single sender and a single receiver.

Following are the results from testing the series on a dual socket
Zen3 machine (2 x 64C/128T):

NPS modes are used to logically divide a single socket into multiple
NUMA regions. Following is the NUMA configuration for each NPS mode
on the system:

NPS1: Each socket is a NUMA node.
    Total 2 NUMA nodes in the dual socket machine.

    Node 0: 0-63, 128-191
    Node 1: 64-127, 192-255

NPS2: Each socket is further logically divided into 2 NUMA regions.
    Total 4 NUMA nodes exist over 2 sockets.

    Node 0: 0-31, 128-159
    Node 1: 32-63, 160-191
    Node 2: 64-95, 192-223
    Node 3: 96-127, 224-255

NPS4: Each socket is logically divided into 4 NUMA regions.
    Total 8 NUMA nodes exist over 2 sockets.

    Node 0: 0-15, 128-143
    Node 1: 16-31, 144-159
    Node 2: 32-47, 160-175
    Node 3: 48-63, 176-191
    Node 4: 64-79, 192-207
    Node 5: 80-95, 208-223
    Node 6: 96-111, 224-239
    Node 7: 112-127, 240-255

Benchmark Results:

Kernel versions:
- tip:       6.2.0-rc6 tip sched/core
- sis_short: 6.2.0-rc6 tip sched/core + this series

When the testing started, the tip was at:
commit 7c4a5b89a0b5 "sched/rt: pick_next_rt_entity(): check list_entry"

~~~~~~~~~~~~~
~ hackbench ~
~~~~~~~~~~~~~

o NPS1

Test:              tip                 sis-short
 1-groups:     4.63 (0.00 pct)     4.47 (3.45 pct)
 2-groups:     4.42 (0.00 pct)     4.41 (0.22 pct)
 4-groups:     4.21 (0.00 pct)     4.24 (-0.71 pct)
 8-groups:     4.95 (0.00 pct)     5.06 (-2.22 pct)
16-groups:     5.43 (0.00 pct)     5.36 (1.28 pct)

o NPS2

Test:              tip                 sis-short
 1-groups:     4.68 (0.00 pct)     4.58 (2.13 pct)
 2-groups:     4.45 (0.00 pct)     4.37 (1.79 pct)
 4-groups:     4.19 (0.00 pct)     4.30 (-2.62 pct)
 8-groups:     4.80 (0.00 pct)     5.22 (-8.75 pct)  *
 8-groups:     4.91 (0.00 pct)     5.01 (-2.03 pct)  [Verification Run]
16-groups:     5.60 (0.00 pct)     5.66 (-1.07 pct)

o NPS4

Test:              tip                 sis-short
 1-groups:     4.68 (0.00 pct)     4.66 (0.42 pct)
 2-groups:     4.56 (0.00 pct)     4.52 (0.87 pct)
 4-groups:     4.50 (0.00 pct)     4.62 (-2.66 pct)
 8-groups:     5.76 (0.00 pct)     5.64 (2.08 pct)
16-groups:     5.60 (0.00 pct)     5.79 (-3.39 pct)

~~~~~~~~~~~~
~ schbench ~
~~~~~~~~~~~~

o NPS1

#workers:         tip                   sis-short
  1:          36.00 (0.00 pct)      34.00 (5.55 pct)
  2:          37.00 (0.00 pct)      36.00 (2.70 pct)
  4:          38.00 (0.00 pct)      40.00 (-5.26 pct)
  8:          52.00 (0.00 pct)      46.00 (11.53 pct)
 16:          66.00 (0.00 pct)      68.00 (-3.03 pct)
 32:         111.00 (0.00 pct)     111.00 (0.00 pct)
 64:         213.00 (0.00 pct)     214.00 (-0.46 pct)
128:         502.00 (0.00 pct)     497.00 (0.99 pct)
256:       45632.00 (0.00 pct)   46784.00 (-2.52 pct)
512:       78720.00 (0.00 pct)   75136.00 (4.55 pct)

o NPS2

#workers:         tip                   sis-short
  1:          31.00 (0.00 pct)      31.00 (0.00 pct)
  2:          32.00 (0.00 pct)      32.00 (0.00 pct)
  4:          39.00 (0.00 pct)      39.00 (0.00 pct)
  8:          52.00 (0.00 pct)      47.00 (9.61 pct)
 16:          67.00 (0.00 pct)      69.00 (-2.98 pct)
 32:         113.00 (0.00 pct)     118.00 (-4.42 pct)
 64:         213.00 (0.00 pct)     231.00 (-8.45 pct)  *
 64:         225.00 (0.00 pct)     214.00 (4.88 pct)   [Verification Run]
128:         508.00 (0.00 pct)     513.00 (-0.98 pct)
256:       46912.00 (0.00 pct)   45888.00 (2.18 pct)
512:       76672.00 (0.00 pct)   79232.00 (-3.33 pct)

o NPS4

#workers:         tip                   sis-short
  1:          33.00 (0.00 pct)      29.00 (12.12 pct)
  2:          40.00 (0.00 pct)      35.00 (12.50 pct)
  4:          44.00 (0.00 pct)      40.00 (9.09 pct)
  8:          73.00 (0.00 pct)      60.00 (17.80 pct)
 16:          71.00 (0.00 pct)      69.00 (2.81 pct)
 32:         111.00 (0.00 pct)     119.00 (-7.20 pct)
 64:         217.00 (0.00 pct)     208.00 (4.14 pct)
128:         509.00 (0.00 pct)     889.00 (-74.65 pct) *
128:         525.90 (0.00 pct)     542.00 (-3.23 pct)  [Verification Run]
256:       44352.00 (0.00 pct)   46528.00 (-4.90 pct)
512:       75392.00 (0.00 pct)   78720.00 (-4.41 pct)

~~~~~~~~~~
~ tbench ~
~~~~~~~~~~

o NPS1

Clients:        tip                   sis-short
    1       483.10 (0.00 pct)      479.99 (-0.64 pct)
    2       956.03 (0.00 pct)      961.15 (0.53 pct)
    4      1786.36 (0.00 pct)     1793.16 (0.38 pct)
    8      3304.47 (0.00 pct)     3224.76 (-2.41 pct)
   16      5440.44 (0.00 pct)     5584.12 (2.64 pct)
   32     10462.02 (0.00 pct)    10667.21 (1.96 pct)
   64     18995.99 (0.00 pct)    19802.51 (4.24 pct)
  128     27896.44 (0.00 pct)    28509.96 (2.19 pct)
  256     49742.89 (0.00 pct)    50404.44 (1.32 pct)
  512     49583.01 (0.00 pct)    49362.40 (-0.44 pct)
 1024     48467.75 (0.00 pct)    49393.34 (1.90 pct)

o NPS2

Clients:        tip                   sis-short
    1       472.57 (0.00 pct)      491.88 (4.08 pct)
    2       938.27 (0.00 pct)      962.87 (2.62 pct)
    4      1764.34 (0.00 pct)     1782.85 (1.04 pct)
    8      3043.57 (0.00 pct)     3275.90 (7.63 pct)
   16      5103.53 (0.00 pct)     5098.77 (-0.09 pct)
   32      9767.22 (0.00 pct)     9730.51 (-0.37 pct)
   64     18712.65 (0.00 pct)    19153.47 (2.35 pct)
  128     27691.95 (0.00 pct)    28738.51 (3.77 pct)
  256     47939.24 (0.00 pct)    48571.73 (1.31 pct)
  512     47843.70 (0.00 pct)    49224.01 (2.88 pct)
 1024     48412.05 (0.00 pct)    48662.85 (0.51 pct)

o NPS4

Clients:        tip                   sis-short
    1       486.74 (0.00 pct)      487.21 (0.09 pct)
    2       950.50 (0.00 pct)      944.87 (-0.59 pct)
    4      1778.58 (0.00 pct)     1785.67 (0.39 pct)
    8      3106.36 (0.00 pct)     3269.40 (5.24 pct)
   16      5139.81 (0.00 pct)     5346.01 (4.01 pct)
   32      9911.04 (0.00 pct)     9961.37 (0.50 pct)
   64     18201.46 (0.00 pct)    18755.21 (3.04 pct)
  128     27284.67 (0.00 pct)    27372.54 (0.32 pct)
  256     46793.72 (0.00 pct)    47277.47 (1.03 pct)
  512     48841.96 (0.00 pct)    47736.63 (-2.26 pct)
 1024     48811.99 (0.00 pct)    48066.96 (-1.52 pct)

~~~~~~~~~~
~ stream ~
~~~~~~~~~~

o NPS1

- 10 Runs:

Test:          tip                    sis-short
 Copy:    321229.54 (0.00 pct)    332046.15 (3.36 pct)
Scale:    207471.32 (0.00 pct)    209724.40 (1.08 pct)
  Add:    234962.15 (0.00 pct)    238593.38 (1.54 pct)
Triad:    246256.00 (0.00 pct)    259065.38 (5.20 pct)

- 100 Runs:

Test:          tip                    sis-short
 Copy:    332714.94 (0.00 pct)    330868.23 (-0.55 pct)
Scale:    216140.84 (0.00 pct)    218881.39 (1.26 pct)
  Add:    239605.00 (0.00 pct)    243423.22 (1.59 pct)
Triad:    258580.84 (0.00 pct)    257857.39 (-0.27 pct)

o NPS2

- 10 Runs:

Test:          tip                    sis-short
 Copy:    324423.92 (0.00 pct)    314693.42 (-2.99 pct)
Scale:    215993.56 (0.00 pct)    216081.04 (0.04 pct)
  Add:    250590.28 (0.00 pct)    250786.87 (0.07 pct)
Triad:    261284.44 (0.00 pct)    258434.05 (-1.09 pct)

- 100 Runs:

Test:          tip                    sis-short
 Copy:    325993.72 (0.00 pct)    321152.67 (-1.48 pct)
Scale:    227201.27 (0.00 pct)    224454.35 (-1.20 pct)
  Add:    256601.84 (0.00 pct)    253548.96 (-1.18 pct)
Triad:    260222.19 (0.00 pct)    259141.98 (-0.41 pct)

o NPS4

- 10 Runs:

Test:          tip                    sis-short
 Copy:    356850.80 (0.00 pct)    355198.15 (-0.46 pct)
Scale:    247219.39 (0.00 pct)    240196.59 (-2.84 pct)
  Add:    268588.78 (0.00 pct)    265259.51 (-1.23 pct)
Triad:    272932.59 (0.00 pct)    275791.62 (1.04 pct)

- 100 Runs:

Test:          tip                    sis-short
 Copy:    365965.18 (0.00 pct)    364556.47 (-0.38 pct)
Scale:    246068.58 (0.00 pct)    249613.08 (1.44 pct)
  Add:    263677.73 (0.00 pct)    267118.22 (1.30 pct)
Triad:    273701.36 (0.00 pct)    275324.29 (0.59 pct)

~~~~~~~~~~~~~
~ unixbench ~
~~~~~~~~~~~~~

o NPS1

Test                Metric  Parallelism                   tip                        sis_short
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-1      49077561.21 (  0.00%)      48958154.65 ( -0.24%)
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-512  6276672225.10 (  0.00%)    6282377092.30 (  0.09%)
unixbench-syscall   Amean   unixbench-syscall-1        2664815.40 (  0.00%)       2682364.37 * -0.66%*
unixbench-syscall   Amean   unixbench-syscall-512      7848462.70 (  0.00%)       7935735.97 * -1.11%*
unixbench-pipe      Hmean   unixbench-pipe-1           2531131.89 (  0.00%)       2510761.89 * -0.80%*
unixbench-pipe      Hmean   unixbench-pipe-512       305244521.98 (  0.00%)     302210856.64 * -0.99%*
unixbench-spawn     Hmean   unixbench-spawn-1             4058.05 (  0.00%)          4060.15 (  0.05%)
unixbench-spawn     Hmean   unixbench-spawn-512          80162.90 (  0.00%)         80337.40 (  0.22%)
unixbench-execl     Hmean   unixbench-execl-1             4148.64 (  0.00%)          4150.92 (  0.05%)
unixbench-execl     Hmean   unixbench-execl-512          11077.20 (  0.00%)         11124.06 (  0.42%)

o NPS2

Test                Metric  Parallelism                   tip                        sis_short
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-1      49394822.56 (  0.00%)      49562225.47 (  0.34%)
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-512  6262917314.00 (  0.00%)    6270269390.20 (  0.12%)
unixbench-syscall   Amean   unixbench-syscall-1        2663675.03 (  0.00%)       2685044.77 * -0.80%*
unixbench-syscall   Amean   unixbench-syscall-512      7342392.90 (  0.00%)       7369717.10 * -0.37%*
unixbench-pipe      Hmean   unixbench-pipe-1           2533194.04 (  0.00%)       2508985.37 * -0.96%*
unixbench-pipe      Hmean   unixbench-pipe-512       303588239.03 (  0.00%)     301439936.90 * -0.71%*
unixbench-spawn     Hmean   unixbench-spawn-1             5141.40 (  0.00%)          4840.60 ( -5.85%)  *
unixbench-spawn     Hmean   unixbench-spawn-1             4780.20 (  0.00%)          5235.90 *  9.53%*  [Verification Run]
unixbench-spawn     Hmean   unixbench-spawn-512          82993.79 (  0.00%)         77573.59 * -6.53%*  *
unixbench-spawn     Hmean   unixbench-spawn-512          79664.40 (  0.00%)         81747.60 *  2.61%*  [Verification Run]
unixbench-execl     Hmean   unixbench-execl-1             4140.15 (  0.00%)          4134.94 ( -0.13%)
unixbench-execl     Hmean   unixbench-execl-512          12229.25 (  0.00%)         12392.40 (  1.33%)

o NPS4

Test                Metric  Parallelism                   tip                        sis_short
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-1      48970677.27 (  0.00%)      48906123.72 ( -0.13%)
unixbench-dhry2reg  Hmean   unixbench-dhry2reg-512  6294483486.30 (  0.00%)    6284127003.20 ( -0.16%)
unixbench-syscall   Amean   unixbench-syscall-1        2664715.13 (  0.00%)       2685194.30 * -0.77%*
unixbench-syscall   Amean   unixbench-syscall-512      7938670.70 (  0.00%)       7824901.77 *  1.43%*
unixbench-pipe      Hmean   unixbench-pipe-1           2527605.54 (  0.00%)       2503782.85 * -0.94%*
unixbench-pipe      Hmean   unixbench-pipe-512       305068507.23 (  0.00%)     302815020.95 * -0.74%*
unixbench-spawn     Hmean   unixbench-spawn-1             5207.34 (  0.00%)          5221.99 (  0.28%)
unixbench-spawn     Hmean   unixbench-spawn-512          81352.38 (  0.00%)         82374.89 *  1.26%*
unixbench-execl     Hmean   unixbench-execl-1             4131.37 (  0.00%)          4130.76 ( -0.01%)
unixbench-execl     Hmean   unixbench-execl-512          13025.56 (  0.00%)         12816.98 ( -1.60%)

~~~~~~~~~~~~~~~~
~ ycsb-mongodb ~
~~~~~~~~~~~~~~~~

o NPS1

tip       : 130249.00 (var: 1.16%)
sis_short : 133626.00 (var: 1.09%)  (2.59%)

o NPS2

tip       : 131100.00 (var: 1.07%)
sis_short : 133713.00 (var: 3.17%)  (1.99%)

o NPS4

tip       : 136446.00 (var: 1.97%)
sis_short : 136700.00 (var: 3.16%)  (0.18%)

~~~~~~~~~~~~~~~~~
~ SpecJBB & DSB ~
~~~~~~~~~~~~~~~~~

- SpecJBB numbers see small improvements of ~2%.
- DeathStarBench numbers remain similar to tip.
~~~~~~~~~~~
~ netperf ~
~~~~~~~~~~~

o NPS1

                    tip                    sis_short
  1-clients:    107932.22 (0.00 pct)    110175.53 (2.07 pct)
  2-clients:    106887.99 (0.00 pct)    108626.93 (1.62 pct)
  4-clients:    106676.11 (0.00 pct)    107736.87 (0.99 pct)
  8-clients:     98645.45 (0.00 pct)     97700.99 (-0.95 pct)
 16-clients:     88881.23 (0.00 pct)     88800.03 (-0.09 pct)
 32-clients:     86654.28 (0.00 pct)     87252.74 (0.69 pct)
 64-clients:     81431.90 (0.00 pct)     79703.33 (-2.12 pct)
128-clients:     55993.77 (0.00 pct)     55681.20 (-0.55 pct)
256-clients:     43865.59 (0.00 pct)     42588.18 (-2.91 pct)

o NPS4

                    tip                    sis_short
  1-clients:    106711.81 (0.00 pct)    109905.29 (2.99 pct)
  2-clients:    106987.79 (0.00 pct)    108469.16 (1.38 pct)
  4-clients:    105275.37 (0.00 pct)    106707.13 (1.36 pct)
  8-clients:    103028.31 (0.00 pct)    103106.39 (0.07 pct)
 16-clients:     87382.43 (0.00 pct)     88974.74 (1.82 pct)
 32-clients:     86578.14 (0.00 pct)     87616.81 (1.19 pct)
 64-clients:     81470.63 (0.00 pct)     81519.56 (0.06 pct)
128-clients:     54803.35 (0.00 pct)     55102.65 (0.54 pct)
256-clients:     42910.29 (0.00 pct)     41887.09 (-2.38 pct)

On 2/22/2023 7:39 PM, Chen Yu wrote:
> [..snip..]
>
> Changes since v5:
> 1. Check the wakee_flips of the waker/wakee. If the wakee_flips of
>    both the waker and the wakee are 0, it indicates that the waker and
>    the wakee are waking up each other. In this case, put them together
>    on the same CPU. This avoids stacking too many wakees on one CPU,
>    which might cause a regression on Redis.
>
> [..snip..]

With the introduction of the wakee_flips condition in is_short_task(),
most benchmarks that have multiple tasks interacting are now comparable
to tip. There may be other avenues for optimization using SIS_SHORT, as
Abel pointed out, but this series is a good start in that direction.

Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>

--
Thanks and Regards,
Prateek
On 2023-03-14 at 08:43:17 +0530, K Prateek Nayak wrote:
> Hello Chenyu,
>
> I did not observe any regression when testing v6 :)
> Most of the benchmark results are comparable to the tip, with minor
> gains in some benchmarks with a single sender and a single receiver.
>
> With the introduction of the wakee_flips condition in is_short_task(),
> most benchmarks that have multiple tasks interacting are now comparable
> to tip. There may be other avenues for optimization using SIS_SHORT, as
> Abel pointed out, but this series is a good start in that direction.
>
> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Thanks Prateek!

thanks,
Chenyu