From: HariKrishna Sagala <hariconscious@gmail.com>
Syzbot reported an uninit-value bug at kmalloc_reserve() on commit
320475fbd590 ("Merge tag 'mtd/fixes-for-6.17-rc6' of
git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux").

The KMSAN report shows use of uninitialized memory originating from
kmalloc_reserve(), where memory allocated via kmem_cache_alloc_node() or
kmalloc_node_track_caller() is not explicitly initialized. This can lead
to undefined behavior when the allocated buffer is later read before it
is written.

Fix this by requesting zeroed memory, i.e. by adding __GFP_ZERO to the
gfp flags passed to the allocator.
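For reference, the intended effect of the one-line change is simply to OR
__GFP_ZERO into the gfp mask before either allocation path runs, so that
whichever allocator is chosen hands back already-zeroed memory. A minimal
sketch of that behavior (illustrative only; the surrounding logic and extra
gfp bits of the real kmalloc_reserve() body are omitted here):

	/* Ask the slab allocator for zeroed memory on both paths. */
	flags |= __GFP_ZERO;

	if (obj_size <= SKB_SMALL_HEAD_CACHE_SIZE &&
	    !(flags & KMALLOC_NOT_NORMAL_BITS))
		obj = kmem_cache_alloc_node(net_hotdata.skb_small_head_cache,
					    flags, node);
	else
		obj = kmalloc_node_track_caller(obj_size, flags, node);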
Reported-by: syzbot+9a4fbb77c9d4aacd3388@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=9a4fbb77c9d4aacd3388
Fixes: 915d975b2ffa ("net: deal with integer overflows in kmalloc_reserve()")
Tested-by: syzbot+9a4fbb77c9d4aacd3388@syzkaller.appspotmail.com
Cc: <stable@vger.kernel.org> # 6.16
Signed-off-by: HariKrishna Sagala <hariconscious@gmail.com>
---
RESEND:
- added Cc stable as suggested by the kernel test robot
net/core/skbuff.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ee0274417948..2308ebf99bbd 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -573,6 +573,7 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
 	void *obj;
 
 	obj_size = SKB_HEAD_ALIGN(*size);
+	flags |= __GFP_ZERO;
 	if (obj_size <= SKB_SMALL_HEAD_CACHE_SIZE &&
 	    !(flags & KMALLOC_NOT_NORMAL_BITS)) {
 		obj = kmem_cache_alloc_node(net_hotdata.skb_small_head_cache,
--
2.43.0
Hello,

kernel test robot noticed a 33.9% regression of netperf.Throughput_Mbps on:

commit: 5cde54f8220b582bda9c34ef86e04ec00be4ce4a ("[PATCH net RESEND] net/core : fix KMSAN: uninit value in tipc_rcv")
url: https://github.com/intel-lab-lkp/linux/commits/hariconscious-gmail-com/net-core-fix-KMSAN-uninit-value-in-tipc_rcv/20250920-023232
base: https://git.kernel.org/cgit/linux/kernel/git/davem/net.git cbf658dd09419f1ef9de11b9604e950bdd5c170b
patch link: https://lore.kernel.org/all/20250919183146.4933-1-hariconscious@gmail.com/
patch subject: [PATCH net RESEND] net/core : fix KMSAN: uninit value in tipc_rcv

testcase: netperf
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 192 threads 2 sockets Intel(R) Xeon(R) 6740E CPU @ 2.4GHz (Sierra Forest) with 256G memory
parameters:

	ip: ipv4
	runtime: 300s
	nr_threads: 100%
	cluster: cs-localhost
	test: SCTP_STREAM
	cpufreq_governor: performance

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202509241629.3d135124-lkp@intel.com

Details are as below:
-------------------------------------------------------------------------------------------------->

The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20250924/202509241629.3d135124-lkp@intel.com

=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase:
  cs-localhost/gcc-14/performance/ipv4/x86_64-rhel-9.4/100%/debian-13-x86_64-20250902.cgz/300s/lkp-srf-2sp3/SCTP_STREAM/netperf

commit:
  cbf658dd09 ("Merge tag 'net-6.17-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
  5cde54f822 ("net/core : fix KMSAN: uninit value in tipc_rcv")

cbf658dd09419f1e 5cde54f8220b582bda9c34ef86e
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
 5.164e+09           -14.4%  4.419e+09        cpuidle..time
   1504152           -12.9%    1310310 ±  4%  cpuidle..usage
    125501 ±  2%     +19.6%     150094 ±  3%  meminfo.Mapped
    401239 ±  4%     +18.3%     474475        meminfo.Shmem
      8.63            -1.3        7.31 ±  2%  mpstat.cpu.all.idle%
      0.42            +0.1        0.52        mpstat.cpu.all.irq%
  1.96e+08           -34.2%   1.29e+08        numa-numastat.node0.local_node
 1.961e+08           -34.2%  1.291e+08        numa-numastat.node0.numa_hit
 1.943e+08           -33.3%  1.297e+08        numa-numastat.node1.local_node
 1.944e+08           -33.2%  1.298e+08        numa-numastat.node1.numa_hit
 1.961e+08           -34.2%  1.291e+08        numa-vmstat.node0.numa_hit
  1.96e+08           -34.2%   1.29e+08        numa-vmstat.node0.numa_local
 1.944e+08           -33.2%  1.298e+08        numa-vmstat.node1.numa_hit
 1.943e+08           -33.3%  1.297e+08        numa-vmstat.node1.numa_local
      1386           -33.9%     916.08        netperf.ThroughputBoth_Mbps
    266242           -33.9%     175886        netperf.ThroughputBoth_total_Mbps
      1386           -33.9%     916.08        netperf.Throughput_Mbps
    266242           -33.9%     175886        netperf.Throughput_total_Mbps
    476071           -18.8%     386450 ±  2%  netperf.time.involuntary_context_switches
      6916           +26.0%       8715        netperf.time.percent_of_cpu_this_job_got
     20872           +26.3%      26369        netperf.time.system_time
    117319           -33.4%      78105        netperf.time.voluntary_context_switches
    115788           -33.9%      76498        netperf.workload
   1005775            +1.8%    1024121        proc-vmstat.nr_file_pages
     31705 ±  2%     +18.6%      37602 ±  4%  proc-vmstat.nr_mapped
    100398 ±  4%     +18.3%     118736        proc-vmstat.nr_shmem
   9211977            -4.0%    8843488        proc-vmstat.nr_slab_unreclaimable
 3.905e+08           -33.7%  2.589e+08        proc-vmstat.numa_hit
    226148           +11.7%     252687        proc-vmstat.numa_huge_pte_updates
 3.903e+08           -33.7%  2.587e+08        proc-vmstat.numa_local
 1.164e+08           +11.6%  1.299e+08        proc-vmstat.numa_pte_updates
 1.243e+10           -33.8%  8.227e+09        proc-vmstat.pgalloc_normal
 1.243e+10           -33.8%  8.226e+09        proc-vmstat.pgfree
      4.20 ± 13%     -50.0%       2.10 ± 11%  perf-sched.sch_delay.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      4.20 ± 13%     -50.0%       2.10 ± 11%  perf-sched.total_sch_delay.average.ms
    111.28 ±  3%     +10.7%     123.18 ±  5%  perf-sched.total_wait_and_delay.average.ms
     32792 ±  4%     -16.3%      27444 ±  6%  perf-sched.total_wait_and_delay.count.ms
      3631 ± 10%     +24.8%       4531 ±  5%  perf-sched.total_wait_and_delay.max.ms
    107.08 ±  3%     +13.1%     121.09 ±  5%  perf-sched.total_wait_time.average.ms
      3631 ± 10%     +24.8%       4531 ±  5%  perf-sched.total_wait_time.max.ms
    111.28 ±  3%     +10.7%     123.18 ±  5%  perf-sched.wait_and_delay.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
     32792 ±  4%     -16.3%      27444 ±  6%  perf-sched.wait_and_delay.count.[unknown].[unknown].[unknown].[unknown].[unknown]
      3631 ± 10%     +24.8%       4531 ±  5%  perf-sched.wait_and_delay.max.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
    107.08 ±  3%     +13.1%     121.09 ±  5%  perf-sched.wait_time.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      3631 ± 10%     +24.8%       4531 ±  5%  perf-sched.wait_time.max.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
    201981 ±  5%     +34.9%     272400 ± 13%  sched_debug.cfs_rq:/.avg_vruntime.stddev
      2.36 ±  6%     -14.1%       2.03 ±  5%  sched_debug.cfs_rq:/.h_nr_queued.max
      0.40 ±  4%     -23.5%       0.30 ±  4%  sched_debug.cfs_rq:/.h_nr_queued.stddev
      2.25 ±  3%     -11.1%       2.00 ±  4%  sched_debug.cfs_rq:/.h_nr_runnable.max
      0.38 ±  5%     -24.0%       0.29 ±  4%  sched_debug.cfs_rq:/.h_nr_runnable.stddev
     20700 ± 27%     -42.3%      11950 ± 12%  sched_debug.cfs_rq:/.load.avg
    201981 ±  5%     +34.9%     272400 ± 13%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.28 ±  7%     -18.1%       0.23 ±  5%  sched_debug.cfs_rq:/.nr_queued.stddev
    350.87 ±  3%     -12.8%     306.09 ±  4%  sched_debug.cfs_rq:/.runnable_avg.stddev
    335.58 ±  3%     -20.3%     267.63 ±  5%  sched_debug.cfs_rq:/.util_est.stddev
     21.08 ±  8%     +47.3%      31.05 ±  3%  sched_debug.cpu.clock.stddev
      2128 ±  7%     -15.5%       1799 ±  6%  sched_debug.cpu.curr->pid.stddev
      0.43 ±  3%     -27.4%       0.32 ±  5%  sched_debug.cpu.nr_running.stddev
      7223 ±  2%     -11.7%       6379 ±  3%  sched_debug.cpu.nr_switches.avg
      4597           -17.4%       3798 ±  2%  sched_debug.cpu.nr_switches.min
    129.49           +86.1%     241.00        perf-stat.i.MPKI
 8.424e+09           -42.4%  4.852e+09        perf-stat.i.branch-instructions
      0.29            +0.2        0.45        perf-stat.i.branch-miss-rate%
  17058270           -23.6%   13026512 ±  2%  perf-stat.i.branch-misses
     88.13            +2.9       91.01        perf-stat.i.cache-miss-rate%
 3.228e+09            -6.3%  3.023e+09        perf-stat.i.cache-misses
 3.654e+09            -9.3%  3.315e+09        perf-stat.i.cache-references
      6871 ±  2%     -16.7%       5721 ±  4%  perf-stat.i.context-switches
     22.00          +103.7%      44.81        perf-stat.i.cpi
 5.596e+11            +1.8%  5.697e+11        perf-stat.i.cpu-cycles
      1398           -20.9%       1105        perf-stat.i.cpu-migrations
    188.57            +5.6%     199.20        perf-stat.i.cycles-between-cache-misses
 3.807e+10           -40.8%  2.255e+10        perf-stat.i.instructions
      0.08           -39.9%       0.05        perf-stat.i.ipc
      0.07 ± 43%    +152.2%       0.17 ± 19%  perf-stat.i.major-faults
      8762 ±  6%     -11.8%       7728 ±  4%  perf-stat.i.minor-faults
      8762 ±  6%     -11.8%       7728 ±  4%  perf-stat.i.page-faults
    103.32           +51.1%     156.15        perf-stat.overall.MPKI
      0.24            +0.1        0.31 ±  2%  perf-stat.overall.branch-miss-rate%
     88.33            +2.9       91.19        perf-stat.overall.cache-miss-rate%
     17.83           +64.8%      29.38        perf-stat.overall.cpi
    172.58            +9.0%     188.12        perf-stat.overall.cycles-between-cache-misses
      0.06           -39.3%       0.03        perf-stat.overall.ipc
  83258790            -6.9%   77496361        perf-stat.overall.path-length
 6.993e+09           -40.6%  4.156e+09        perf-stat.ps.branch-instructions
  16501261           -22.8%   12743483        perf-stat.ps.branch-misses
  3.25e+09            -7.0%  3.022e+09        perf-stat.ps.cache-misses
 3.679e+09            -9.9%  3.314e+09        perf-stat.ps.cache-references
      6755           -16.5%       5642 ±  3%  perf-stat.ps.context-switches
 5.608e+11            +1.4%  5.684e+11        perf-stat.ps.cpu-cycles
      1399           -21.5%       1099        perf-stat.ps.cpu-migrations
 3.145e+10           -38.5%  1.935e+10        perf-stat.ps.instructions
      0.06 ± 34%    +195.2%       0.16 ± 20%  perf-stat.ps.major-faults
  9.64e+12           -38.5%  5.928e+12        perf-stat.total.instructions


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided for
informational purposes only. Any difference in system hardware or software design
or configuration may affect actual performance.

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki