From nobody Sun Feb 8 22:49:14 2026
From: Vernon Yang
To: akpm@linux-foundation.org, david@kernel.org
Cc: lorenzo.stoakes@oracle.com, ziy@nvidia.com, dev.jain@arm.com,
	baohua@kernel.org, lance.yang@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Vernon Yang
Subject: [PATCH mm-new v6 2/5] mm: khugepaged: refine scan progress number
Date: Sun, 1 Feb 2026 20:25:51 +0800
Message-ID: <20260201122554.1470071-3-vernon2gm@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260201122554.1470071-1-vernon2gm@gmail.com>
References: <20260201122554.1470071-1-vernon2gm@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Vernon Yang

Currently, each scan always increases "progress" by HPAGE_PMD_NR, even
if only a single PTE/PMD entry is scanned.

- When only a single PTE entry is scanned, consider the following
  example:

static int hpage_collapse_scan_pmd()
{
	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
	     _pte++, addr += PAGE_SIZE) {
		pte_t pteval = ptep_get(_pte);
		...
		if (pte_uffd_wp(pteval)) {	<-- first scan hit
			result = SCAN_PTE_UFFD_WP;
			goto out_unmap;
		}
	}
}

  During the first iteration, if pte_uffd_wp(pteval) is true, the loop
  exits directly, so in practice only one PTE is scanned before
  termination. Here "progress += 1" reflects the actual number of PTEs
  scanned, whereas previously "progress += HPAGE_PMD_NR" was always
  added.
- When the memory has already been collapsed to a PMD, consider the
  following example:

  The following data was traced by bpftrace on a desktop system. After
  the system had been left idle for 10 minutes after booting, a lot of
  SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a
  full scan by khugepaged. From trace_mm_khugepaged_scan_pmd and
  trace_mm_khugepaged_scan_file, the following statuses were observed,
  with their frequency next to them:

  SCAN_SUCCEED          : 1
  SCAN_EXCEED_SHARED_PTE: 2
  SCAN_PMD_MAPPED       : 142
  SCAN_NO_PTE_TABLE     : 178
  total progress size   : 674 MB
  Total time            : 419 seconds, including khugepaged_scan_sleep_millisecs

The khugepaged_scan list saves all tasks that support collapsing into
hugepages; as long as a task is not destroyed, khugepaged will not
remove it from the khugepaged_scan list. As a result, a task that has
already collapsed all of its memory regions into hugepages is still
scanned by khugepaged, which wastes CPU time to no effect. In
addition, because of khugepaged_scan_sleep_millisecs (default 10s),
scanning a large number of such invalid tasks causes a long wait, so
genuinely valid tasks are scanned later.

After applying this patch, when the memory is either SCAN_PMD_MAPPED
or SCAN_NO_PTE_TABLE, it is simply skipped, as follows:

  SCAN_EXCEED_SHARED_PTE: 2
  SCAN_PMD_MAPPED       : 147
  SCAN_NO_PTE_TABLE     : 173
  total progress size   : 45 MB
  Total time            : 20 seconds

Signed-off-by: Vernon Yang
---
 mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d94b34e10bdf..df22b2274d92 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -68,7 +68,10 @@ enum scan_result {
 static struct task_struct *khugepaged_thread __read_mostly;
 static DEFINE_MUTEX(khugepaged_mutex);
 
-/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
+/*
+ * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
+ * every 10 second.
+ */
 static unsigned int khugepaged_pages_to_scan __read_mostly;
 static unsigned int khugepaged_pages_collapsed;
 static unsigned int khugepaged_full_scans;
@@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 }
 
 static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
-		struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
+		struct vm_area_struct *vma, unsigned long start_addr,
+		bool *mmap_locked, unsigned int *cur_progress,
 		struct collapse_control *cc)
 {
 	pmd_t *pmd;
@@ -1256,13 +1260,18 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
 	VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
 
 	result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
-	if (result != SCAN_SUCCEED)
+	if (result != SCAN_SUCCEED) {
+		if (cur_progress)
+			*cur_progress = 1;
 		goto out;
+	}
 
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
 	pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
 	if (!pte) {
+		if (cur_progress)
+			*cur_progress = 1;
 		result = SCAN_NO_PTE_TABLE;
 		goto out;
 	}
@@ -1396,6 +1405,12 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
 		result = SCAN_SUCCEED;
 	}
 out_unmap:
+	if (cur_progress) {
+		if (_pte >= pte + HPAGE_PMD_NR)
+			*cur_progress = HPAGE_PMD_NR;
+		else
+			*cur_progress = _pte - pte + 1;
+	}
 	pte_unmap_unlock(pte, ptl);
 	if (result == SCAN_SUCCEED) {
 		result = collapse_huge_page(mm, start_addr, referenced,
@@ -2286,8 +2301,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 
-static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
-		struct file *file, pgoff_t start, struct collapse_control *cc)
+static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
+		unsigned long addr, struct file *file, pgoff_t start,
+		unsigned int *cur_progress, struct collapse_control *cc)
 {
 	struct folio *folio = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2376,6 +2392,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
 			cond_resched_rcu();
 		}
 	}
+	if (cur_progress)
+		*cur_progress = max(xas.xa_index - start, 1UL);
 	rcu_read_unlock();
 
 	if (result == SCAN_SUCCEED) {
@@ -2455,6 +2473,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 
 	while (khugepaged_scan.address < hend) {
 		bool mmap_locked = true;
+		unsigned int cur_progress = 0;
 
 		cond_resched();
 		if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -2471,7 +2490,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 				mmap_read_unlock(mm);
 				mmap_locked = false;
 				*result = hpage_collapse_scan_file(mm,
-					khugepaged_scan.address, file, pgoff, cc);
+					khugepaged_scan.address, file, pgoff,
+					&cur_progress, cc);
 				fput(file);
 				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
 					mmap_read_lock(mm);
@@ -2485,7 +2505,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 				}
 			} else {
 				*result = hpage_collapse_scan_pmd(mm, vma,
-					khugepaged_scan.address, &mmap_locked, cc);
+					khugepaged_scan.address, &mmap_locked,
+					&cur_progress, cc);
 			}
 
 			if (*result == SCAN_SUCCEED)
@@ -2493,7 +2514,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
-			progress += HPAGE_PMD_NR;
+			progress += cur_progress;
 			if (!mmap_locked)
 				/*
				 * We released mmap_lock so break loop.  Note
@@ -2816,7 +2837,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			mmap_locked = false;
 			*lock_dropped = true;
 			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
-						cc);
+						NULL, cc);
 
 			if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !triggered_wb &&
 			    mapping_can_writeback(file->f_mapping)) {
@@ -2831,7 +2852,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			fput(file);
 		} else {
 			result = hpage_collapse_scan_pmd(mm, vma, addr,
-					&mmap_locked, cc);
+					&mmap_locked, NULL, cc);
 		}
 		if (!mmap_locked)
 			*lock_dropped = true;
-- 
2.51.0