Hi all,
This series improves the khugepaged scan logic to reduce CPU consumption
and to prioritize scanning tasks that access memory frequently.
The following data was traced with bpftrace[1] on a desktop system. After
the system had been left idle for 10 minutes after booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a full
scan by khugepaged.
@scan_pmd_status[1]: 1 ## SCAN_SUCCEED
@scan_pmd_status[6]: 2 ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142 ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178 ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time : 419 seconds ## includes khugepaged_scan_sleep_millisecs
khugepaged exhibits the following behavior: the khugepaged list is
scanned in FIFO order, and as long as a task is not destroyed,
1. a task that no longer has any memory that can be collapsed into
hugepages is still rescanned on every pass.
2. tasks at the front of the khugepaged scan list are scanned first
even when their memory is cold.
3. each pass sleeps for khugepaged_scan_sleep_millisecs (default 10s)
between scans. If the two cases above are always scanned first, the
useful scans have to wait a long time.
For the first case, when the memory scan returns SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, just skip it.
For the second case, if the user has explicitly told us via
MADV_COLD/MADV_FREE that a vma is cold or about to be freed,
just skip that vma.
Below are some performance test results.
kernbench results (testing on x86_64 machine):
baseline w/o patches test w/ patches
Amean user-32 18586.99 ( 0.00%) 18562.36 * 0.13%*
Amean syst-32 1133.61 ( 0.00%) 1126.02 * 0.67%*
Amean elsp-32 668.05 ( 0.00%) 667.13 * 0.14%*
BAmean-95 user-32 18585.23 ( 0.00%) 18559.71 ( 0.14%)
BAmean-95 syst-32 1133.22 ( 0.00%) 1125.49 ( 0.68%)
BAmean-95 elsp-32 667.94 ( 0.00%) 667.08 ( 0.13%)
BAmean-99 user-32 18585.23 ( 0.00%) 18559.71 ( 0.14%)
BAmean-99 syst-32 1133.22 ( 0.00%) 1125.49 ( 0.68%)
BAmean-99 elsp-32 667.94 ( 0.00%) 667.08 ( 0.13%)
Create three tasks[2]: hot1 -> cold -> hot2. After all three tasks are
created, each allocates 128 MB of memory. The hot1/hot2 tasks
continuously access their 128 MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_COLD). Here are
the performance test results:
(Throughput: bigger is better; all other metrics: smaller is better)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
This series is based on Linux v6.19-rc2.
Thank you very much for your comments and discussions :)
[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
[2] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/app.c
V1 -> V2:
- Rename full to full_scan_finished, pick up Acked-by tags.
- Just skip SCAN_PMD_MAPPED/SCAN_NO_PTE_TABLE memory instead of removing the mm.
- Set the VM_NOHUGEPAGE flag on MADV_COLD/MADV_FREE to just skip the vma, instead of moving the mm.
- Re-tested performance on v6.19-rc2.
Vernon Yang (4):
mm: khugepaged: add trace_mm_khugepaged_scan event
mm: khugepaged: just skip when the memory has been collapsed
mm: khugepaged: set VM_NOHUGEPAGE flag when MADV_COLD/MADV_FREE
mm: khugepaged: set to next mm direct when mm has
MMF_DISABLE_THP_COMPLETELY
include/trace/events/huge_memory.h | 24 +++++++++++++++++++++
mm/khugepaged.c | 34 ++++++++++++++++++++++++------
mm/madvise.c | 17 ++++++++++-----
3 files changed, 63 insertions(+), 12 deletions(-)
--
2.51.0
syzbot ci has tested the following series

[v2] Improve khugepaged scan logic
https://lore.kernel.org/all/20251229055151.54887-1-yanglincheng@kylinos.cn
* [PATCH v2 1/4] mm: khugepaged: add trace_mm_khugepaged_scan event
* [PATCH v2 2/4] mm: khugepaged: just skip when the memory has been collapsed
* [PATCH v2 3/4] mm: khugepaged: set VM_NOHUGEPAGE flag when MADV_COLD/MADV_FREE
* [PATCH v2 4/4] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY

and found the following issue:
WARNING in madvise_dontneed_free

Full report is available here:
https://ci.syzbot.org/series/f936dff1-2423-4f46-a59a-ea041c1d741a

***

WARNING in madvise_dontneed_free

tree:      mm-new
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base:      33b485bade996a9d0154cf0888b7a5c23723121e
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/81f62216-5094-4281-a942-238b7448a3be/config
C repro:   https://ci.syzbot.org/findings/e308c3a0-c806-45c4-bc1c-24536a3c3ca3/c_repro
syz repro: https://ci.syzbot.org/findings/e308c3a0-c806-45c4-bc1c-24536a3c3ca3/syz_repro

------------[ cut here ]------------
WARNING: mm/madvise.c:795 at get_walk_lock mm/madvise.c:795 [inline], CPU#0: syz.0.17/5977
WARNING: mm/madvise.c:795 at madvise_free_single_vma mm/madvise.c:830 [inline], CPU#0: syz.0.17/5977
WARNING: mm/madvise.c:795 at madvise_dontneed_free+0xb52/0xe10 mm/madvise.c:960, CPU#0: syz.0.17/5977
Modules linked in:
CPU: 0 UID: 0 PID: 5977 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:get_walk_lock mm/madvise.c:795 [inline]
RIP: 0010:madvise_free_single_vma mm/madvise.c:830 [inline]
RIP: 0010:madvise_dontneed_free+0xb52/0xe10 mm/madvise.c:960
Code: c7 c6 b0 6e 25 8e e8 7d 4c a3 ff 48 83 fb 01 74 0c 83 fb 03 75 0e e8 ed 46 a3 ff eb 12 e8 e6 46 a3 ff eb 09 e8 df 46 a3 ff 90 <0f> 0b 90 31 db 89 9c 24 08 01 00 00 48 8b 74 24 68 48 8b 54 24 70
RSP: 0018:ffffc90004a17400 EFLAGS: 00010293
RAX: ffffffff821e7411 RBX: 0000000000000002 RCX: ffff888169b7d7c0
RDX: 0000000000000000 RSI: ffffffff8e256eb0 RDI: 0000000000000002
RBP: ffffc90004a175b0 R08: ffff888169b7d7c0 R09: 0000000000000002
R10: 0000000000000003 R11: 0000000000000000 R12: 0000000000000000
R13: dffffc0000000000 R14: 0000000000000100 R15: 1ffff92000942e88
FS:  0000555555761500(0000) GS:ffff88818e62f000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe49d72b600 CR3: 00000001b85b6000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 madvise_vma_behavior+0xd57/0x3680 mm/madvise.c:1385
 madvise_walk_vmas+0x575/0xaf0 mm/madvise.c:1730
 madvise_do_behavior+0x38e/0x550 mm/madvise.c:1944
 do_madvise+0x1bc/0x270 mm/madvise.c:2037
 __do_sys_madvise mm/madvise.c:2046 [inline]
 __se_sys_madvise mm/madvise.c:2044 [inline]
 __x64_sys_madvise+0xa7/0xc0 mm/madvise.c:2044
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe49d78f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff39cea178 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007fe49d9e5fa0 RCX: 00007fe49d78f7c9
RDX: 0000000000000008 RSI: 0000000000600002 RDI: 0000200000000000
RBP: 00007fe49d7f297f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe49d9e5fa0 R14: 00007fe49d9e5fa0 R15: 0000000000000003
 </TASK>

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---

This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.