From nobody Sun Feb 8 17:43:04 2026
From: Vernon Yang
To: akpm@linux-foundation.org, david@kernel.org
Cc: lorenzo.stoakes@oracle.com, ziy@nvidia.com, dev.jain@arm.com,
    baohua@kernel.org, lance.yang@linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Vernon Yang
Subject: [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
Date: Sat, 7 Feb 2026 16:16:10 +0800
Message-ID: <20260207081613.588598-3-vernon2gm@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260207081613.588598-1-vernon2gm@gmail.com>
References: <20260207081613.588598-1-vernon2gm@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Vernon Yang

Currently, each scan always increases "progress" by HPAGE_PMD_NR, even
if only a single PTE/PMD entry is actually scanned.

- When only a single PTE entry is scanned, here is a detailed example:

static int hpage_collapse_scan_pmd()
{
	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
	     _pte++, addr += PAGE_SIZE) {
		pte_t pteval = ptep_get(_pte);
		...
		if (pte_uffd_wp(pteval)) {          <-- first scan hit
			result = SCAN_PTE_UFFD_WP;
			goto out_unmap;
		}
	}
}

During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
directly. In practice, only one PTE is scanned before termination.
Here, "progress += 1" reflects the actual number of PTEs scanned,
whereas previously progress was always increased by HPAGE_PMD_NR.

- When the memory has already been collapsed to PMD level, here is a
  detailed example:

The following data was traced with bpftrace on a desktop system. After
the system had been left idle for 10 minutes after booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a
full scan by khugepaged. From trace_mm_khugepaged_scan_pmd and
trace_mm_khugepaged_scan_file, the following statuses were observed,
with their frequency next to them:

SCAN_SUCCEED          : 1
SCAN_EXCEED_SHARED_PTE: 2
SCAN_PMD_MAPPED       : 142
SCAN_NO_PTE_TABLE     : 178
total progress size   : 674 MB
Total time            : 419 seconds, including khugepaged_scan_sleep_millisecs

The khugepaged_scan list holds all tasks that are eligible for collapse
into hugepages; as long as a task is not destroyed, khugepaged never
removes it from the khugepaged_scan list. As a result, a task may have
already collapsed all of its memory regions into hugepages, yet
khugepaged keeps scanning it, which wastes CPU time for no benefit.
Moreover, because of khugepaged_scan_sleep_millisecs (default 10s),
scanning a large number of such tasks causes a long wait, so tasks that
can actually be collapsed are only scanned much later.
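For reference, per-status counts like those above can be gathered with
a short bpftrace script along the following lines. This is only a
sketch, not necessarily the exact script used for the numbers above;
in particular the tracepoint argument names (args->status for the pmd
tracepoint, args->result for the file tracepoint) are assumptions and
may differ between kernel versions:

// Aggregate khugepaged scan outcomes per status code; bpftrace prints
// the @pmd and @file maps automatically when the script exits (Ctrl-C).
tracepoint:huge_memory:mm_khugepaged_scan_pmd
{
	@pmd[args->status] = count();
}

tracepoint:huge_memory:mm_khugepaged_scan_file
{
	@file[args->result] = count();
}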
After applying this patch, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, it is simply skipped, with the following result:

SCAN_EXCEED_SHARED_PTE: 2
SCAN_PMD_MAPPED       : 147
SCAN_NO_PTE_TABLE     : 173
total progress size   : 45 MB
Total time            : 20 seconds

Signed-off-by: Vernon Yang
Reviewed-by: Dev Jain
---
 mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4049234e1c8b..8b68ae3bc2c5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -68,7 +68,10 @@ enum scan_result {
 static struct task_struct *khugepaged_thread __read_mostly;
 static DEFINE_MUTEX(khugepaged_mutex);
 
-/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
+/*
+ * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
+ * every 10 second.
+ */
 static unsigned int khugepaged_pages_to_scan __read_mostly;
 static unsigned int khugepaged_pages_collapsed;
 static unsigned int khugepaged_full_scans;
@@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 }
 
 static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
-		struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
+		struct vm_area_struct *vma, unsigned long start_addr,
+		bool *mmap_locked, unsigned int *cur_progress,
 		struct collapse_control *cc)
 {
 	pmd_t *pmd;
@@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
 	VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
 
 	result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
-	if (result != SCAN_SUCCEED)
+	if (result != SCAN_SUCCEED) {
+		if (cur_progress)
+			*cur_progress = 1;
 		goto out;
+	}
 
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
 	pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
 	if (!pte) {
+		if (cur_progress)
+			*cur_progress = 1;
 		result = SCAN_NO_PTE_TABLE;
 		goto out;
 	}
 
 	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, addr += PAGE_SIZE) {
+		if (cur_progress)
+			*cur_progress += 1;
+
 		pte_t pteval = ptep_get(_pte);
 		if (pte_none_or_zero(pteval)) {
 			++none_or_zero;
@@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 
-static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
-		struct file *file, pgoff_t start, struct collapse_control *cc)
+static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
+		unsigned long addr, struct file *file, pgoff_t start,
+		unsigned int *cur_progress, struct collapse_control *cc)
 {
 	struct folio *folio = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
 			cond_resched_rcu();
 		}
 	}
+	if (cur_progress)
+		*cur_progress = HPAGE_PMD_NR;
 	rcu_read_unlock();
 
 	if (result == SCAN_SUCCEED) {
@@ -2457,6 +2472,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 
 		while (khugepaged_scan.address < hend) {
 			bool mmap_locked = true;
+			unsigned int cur_progress = 0;
 
 			cond_resched();
 			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -2473,7 +2489,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 				mmap_read_unlock(mm);
 				mmap_locked = false;
 				*result = hpage_collapse_scan_file(mm,
-						khugepaged_scan.address, file, pgoff, cc);
+						khugepaged_scan.address, file, pgoff,
+						&cur_progress, cc);
 				fput(file);
 				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
 					mmap_read_lock(mm);
@@ -2487,7 +2504,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 				}
 			} else {
 				*result = hpage_collapse_scan_pmd(mm, vma,
-					khugepaged_scan.address, &mmap_locked, cc);
+					khugepaged_scan.address, &mmap_locked,
+					&cur_progress, cc);
 			}
 
 			if (*result == SCAN_SUCCEED)
@@ -2495,7 +2513,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
 
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
-			progress += HPAGE_PMD_NR;
+			progress += cur_progress;
 			if (!mmap_locked)
 				/*
 				 * We released mmap_lock so break loop. Note
@@ -2818,7 +2836,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			mmap_locked = false;
 			*lock_dropped = true;
 			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
-					cc);
+					NULL, cc);
 
 			if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !triggered_wb &&
 			    mapping_can_writeback(file->f_mapping)) {
@@ -2833,7 +2851,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 		fput(file);
 	} else {
 		result = hpage_collapse_scan_pmd(mm, vma, addr,
-						 &mmap_locked, cc);
+						 &mmap_locked, NULL, cc);
 	}
 	if (!mmap_locked)
 		*lock_dropped = true;
-- 
2.51.0