From: Nico Pache
To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
 ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com
Subject: [PATCH v9 01/14] khugepaged: rename hpage_collapse_* to collapse_*
Date: Sun, 13 Jul 2025 18:31:54 -0600
Message-ID: <20250714003207.113275-2-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

The hpage_collapse functions are used by both madvise_collapse and
khugepaged. Remove the unnecessary hpage prefix to shorten the function
names.

Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Signed-off-by: Nico Pache
Acked-by: David Hildenbrand
Reviewed-by: Liam R. Howlett
---
 mm/khugepaged.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a55fb1dcd224..eb0babb51868 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
 	kmem_cache_destroy(mm_slot_cache);
 }

-static inline int hpage_collapse_test_exit(struct mm_struct *mm)
+static inline int collapse_test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }

-static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
+static inline int collapse_test_exit_or_disable(struct mm_struct *mm)
 {
-	return hpage_collapse_test_exit(mm) ||
+	return collapse_test_exit(mm) ||
 	       test_bit(MMF_DISABLE_THP, &mm->flags);
 }

@@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 	int wakeup;

 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
+	VM_BUG_ON_MM(collapse_test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
 		return;

@@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	} else if (mm_slot) {
 		/*
 		 * This is required to serialize against
-		 * hpage_collapse_test_exit() (which is guaranteed to run
+		 * collapse_test_exit() (which is guaranteed to run
 		 * under mmap sem read mode). Stop here (after we return all
 		 * pagetables will be destroyed) until khugepaged has finished
 		 * working on the pagetables under the mmap_lock.
@@ -838,7 +838,7 @@ struct collapse_control khugepaged_collapse_control = {
 	.is_khugepaged = true,
 };

-static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
+static bool collapse_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;

@@ -873,7 +873,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }

 #ifdef CONFIG_NUMA
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int collapse_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;

@@ -892,7 +892,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 #else
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int collapse_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -912,7 +912,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	struct vm_area_struct *vma;
 	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;

-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(collapse_test_exit_or_disable(mm)))
 		return SCAN_ANY_PROCESS;

 	*vmap = vma = find_vma(mm, address);
@@ -985,7 +985,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,

 /*
  * Bring missing pages in from swap, to complete THP collapse.
- * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
+ * Only done if khugepaged_scan_pmd believes it is worthwhile.
  *
  * Called and returns without pte mapped or spinlocks held.
  * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
@@ -1071,7 +1071,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
 {
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE);
-	int node = hpage_collapse_find_target_node(cc);
+	int node = collapse_find_target_node(cc);
 	struct folio *folio;

 	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
@@ -1257,7 +1257,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	return result;
 }

-static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+static int collapse_scan_pmd(struct mm_struct *mm,
 				   struct vm_area_struct *vma,
 				   unsigned long address, bool *mmap_locked,
 				   struct collapse_control *cc)
@@ -1371,7 +1371,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * hit record.
 		 */
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
@@ -1440,7 +1440,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)

 	lockdep_assert_held(&khugepaged_mm_lock);

-	if (hpage_collapse_test_exit(mm)) {
+	if (collapse_test_exit(mm)) {
 		/* free mm_slot */
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
@@ -1733,7 +1733,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
 			continue;

-		if (hpage_collapse_test_exit(mm))
+		if (collapse_test_exit(mm))
 			continue;
 		/*
 		 * When a vma is registered with uffd-wp, we cannot recycle
@@ -2255,7 +2255,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }

-static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
+static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			      struct file *file, pgoff_t start,
 			      struct collapse_control *cc)
 {
@@ -2312,7 +2312,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 		}

 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			folio_put(folio);
 			break;
@@ -2362,7 +2362,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }

-static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
+static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2400,7 +2400,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		goto breakouterloop_mmap_lock;

 	progress++;
-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(collapse_test_exit_or_disable(mm)))
 		goto breakouterloop;

 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
@@ -2408,7 +2408,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		unsigned long hstart, hend;

 		cond_resched();
-		if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
+		if (unlikely(collapse_test_exit_or_disable(mm))) {
 			progress++;
 			break;
 		}
@@ -2430,7 +2430,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			bool mmap_locked = true;

 			cond_resched();
-			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+			if (unlikely(collapse_test_exit_or_disable(mm)))
 				goto breakouterloop;

 			VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2490,7 +2490,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		 * Release the current mm_slot if this mm is about to die, or
 		 * if we scanned all vmas of this mm.
 		 */
-		if (hpage_collapse_test_exit(mm) || !vma) {
+		if (collapse_test_exit(mm) || !vma) {
 			/*
 			 * Make sure that if mm_users is reaching zero while
 			 * khugepaged runs here, khugepaged_exit will find
@@ -2544,7 +2544,7 @@ static void khugepaged_do_scan(struct collapse_control *cc)
 			pass_through_head++;
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
-			progress += khugepaged_scan_mm_slot(pages - progress,
+			progress += collapse_scan_mm_slot(pages - progress,
 							    &result, cc);
 		else
 			progress = pages;
--
2.50.0
From: Nico Pache
Subject: [PATCH v9 02/14] introduce collapse_single_pmd to unify khugepaged and madvise_collapse
Date: Sun, 13 Jul 2025 18:31:55 -0600
Message-ID: <20250714003207.113275-3-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

The khugepaged daemon and madvise_collapse have two different
implementations that do almost the same thing.

Create collapse_single_pmd to increase code reuse and provide a single
entry point for these two users.

Refactor madvise_collapse and collapse_scan_mm_slot to use the new
collapse_single_pmd function. This introduces a minor behavioral change
that addresses what is most likely an undiscovered bug: khugepaged tests
collapse_test_exit_or_disable before calling collapse_pte_mapped_thp, but
the madvise_collapse path did not. By unifying the two callers,
madvise_collapse now also performs this check.

Reviewed-by: Baolin Wang
Signed-off-by: Nico Pache
Acked-by: David Hildenbrand
---
 mm/khugepaged.c | 95 +++++++++++++++++++++++++------------------------
 1 file changed, 49 insertions(+), 46 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index eb0babb51868..47a80638af97 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2362,6 +2362,50 @@ static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }

+/*
+ * Try to collapse a single PMD starting at a PMD aligned addr, and return
+ * the results.
+ */
+static int collapse_single_pmd(unsigned long addr,
+		struct vm_area_struct *vma, bool *mmap_locked,
+		struct collapse_control *cc)
+{
+	int result = SCAN_FAIL;
+	struct mm_struct *mm = vma->vm_mm;
+
+	if (!vma_is_anonymous(vma)) {
+		struct file *file = get_file(vma->vm_file);
+		pgoff_t pgoff = linear_page_index(vma, addr);
+
+		mmap_read_unlock(mm);
+		*mmap_locked = false;
+		result = collapse_scan_file(mm, addr, file, pgoff, cc);
+		fput(file);
+		if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
+			mmap_read_lock(mm);
+			*mmap_locked = true;
+			if (collapse_test_exit_or_disable(mm)) {
+				mmap_read_unlock(mm);
+				*mmap_locked = false;
+				result = SCAN_ANY_PROCESS;
+				goto end;
+			}
+			result = collapse_pte_mapped_thp(mm, addr,
+							 !cc->is_khugepaged);
+			if (result == SCAN_PMD_MAPPED)
+				result = SCAN_SUCCEED;
+			mmap_read_unlock(mm);
+			*mmap_locked = false;
+		}
+	} else {
+		result = collapse_scan_pmd(mm, vma, addr, mmap_locked, cc);
+	}
+	if (cc->is_khugepaged && result == SCAN_SUCCEED)
+		++khugepaged_pages_collapsed;
+end:
+	return result;
+}
+
 static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
@@ -2436,34 +2480,9 @@ static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
 				  khugepaged_scan.address + HPAGE_PMD_SIZE > hend);
-			if (!vma_is_anonymous(vma)) {
-				struct file *file = get_file(vma->vm_file);
-				pgoff_t pgoff = linear_page_index(vma,
-						khugepaged_scan.address);
-
-				mmap_read_unlock(mm);
-				mmap_locked = false;
-				*result = hpage_collapse_scan_file(mm,
-					khugepaged_scan.address, file, pgoff, cc);
-				fput(file);
-				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
-					mmap_read_lock(mm);
-					if (hpage_collapse_test_exit_or_disable(mm))
-						goto breakouterloop;
-					*result = collapse_pte_mapped_thp(mm,
-						khugepaged_scan.address, false);
-					if (*result == SCAN_PMD_MAPPED)
-						*result = SCAN_SUCCEED;
-					mmap_read_unlock(mm);
-				}
-			} else {
-				*result = hpage_collapse_scan_pmd(mm, vma,
-					khugepaged_scan.address, &mmap_locked, cc);
-			}
-
-			if (*result == SCAN_SUCCEED)
-				++khugepaged_pages_collapsed;

+			*result = collapse_single_pmd(khugepaged_scan.address,
+						      vma, &mmap_locked, cc);
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
@@ -2780,35 +2799,19 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 		mmap_assert_locked(mm);
 		memset(cc->node_load, 0, sizeof(cc->node_load));
 		nodes_clear(cc->alloc_nmask);
-		if (!vma_is_anonymous(vma)) {
-			struct file *file = get_file(vma->vm_file);
-			pgoff_t pgoff = linear_page_index(vma, addr);
-
-			mmap_read_unlock(mm);
-			mmap_locked = false;
-			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
-							  cc);
-			fput(file);
-		} else {
-			result = hpage_collapse_scan_pmd(mm, vma, addr,
-							 &mmap_locked, cc);
-		}
+		result = collapse_single_pmd(addr, vma, &mmap_locked, cc);
+
 		if (!mmap_locked)
 			*lock_dropped = true;

-handle_result:
 		switch (result) {
 		case SCAN_SUCCEED:
 		case SCAN_PMD_MAPPED:
 			++thps;
 			break;
-		case SCAN_PTE_MAPPED_HUGEPAGE:
-			BUG_ON(mmap_locked);
-			mmap_read_lock(mm);
-			result = collapse_pte_mapped_thp(mm, addr, true);
-			mmap_read_unlock(mm);
-			goto handle_result;
 		/* Whitelisted set of results where continuing OK */
+		case SCAN_PTE_MAPPED_HUGEPAGE:
 		case SCAN_PMD_NULL:
 		case SCAN_PTE_NON_PRESENT:
 		case SCAN_PTE_UFFD_WP:
--
2.50.0
From: Nico Pache
Subject: [PATCH v9 03/14] khugepaged: generalize hugepage_vma_revalidate for mTHP support
Date: Sun, 13 Jul 2025 18:31:56 -0600
Message-ID: <20250714003207.113275-4-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

For khugepaged to support different mTHP orders, we must generalize
hugepage_vma_revalidate to check that the PMD is not shared by another VMA
and that the requested order is enabled.

To let madvise_collapse work on mTHP orders even when the PMD order is not
enabled, convert hugepage_vma_revalidate to take a bitmap of orders.

No functional change in this patch.

Reviewed-by: Baolin Wang
Co-developed-by: Dev Jain
Signed-off-by: Dev Jain
Signed-off-by: Nico Pache
Acked-by: David Hildenbrand
---
 mm/khugepaged.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 47a80638af97..fa0642e66790 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -907,7 +907,7 @@ static int collapse_find_target_node(struct collapse_control *cc)
 static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 				   bool expect_anon,
 				   struct vm_area_struct **vmap,
-				   struct collapse_control *cc)
+				   struct collapse_control *cc, unsigned long orders)
 {
 	struct vm_area_struct *vma;
 	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
@@ -919,9 +919,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	if (!vma)
 		return SCAN_VMA_NULL;

+	/* Always check the PMD order to insure its not shared by another VMA */
 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
+	if (!thp_vma_allowable_orders(vma, vma->vm_flags, tva_flags, orders))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
@@ -1123,7 +1124,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		goto out_nolock;

 	mmap_read_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
+					 BIT(HPAGE_PMD_ORDER));
 	if (result != SCAN_SUCCEED) {
 		mmap_read_unlock(mm);
 		goto out_nolock;
@@ -1157,7 +1159,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * mmap_lock.
 	 */
 	mmap_write_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
+					 BIT(HPAGE_PMD_ORDER));
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 	/* check if the pmd is still valid */
@@ -2788,7 +2791,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			mmap_read_lock(mm);
 			mmap_locked = true;
 			result = hugepage_vma_revalidate(mm, addr, false, &vma,
-							 cc);
+							 cc, BIT(HPAGE_PMD_ORDER));
 			if (result != SCAN_SUCCEED) {
 				last_fail = result;
 				goto out_nolock;
--
2.50.0
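
To make the shape of the new "orders" argument concrete, here is a small
standalone sketch. It is illustrative only, not kernel code: BIT() and
HPAGE_PMD_ORDER are local stand-ins assuming x86-64 with 4 KiB pages. Bit N
of the mask set means collapse order N is a candidate; PMD-only callers such
as the ones in the hunks above simply pass BIT(HPAGE_PMD_ORDER).

/*
 * Sketch of an "orders" bitmap as consumed by the reworked revalidate
 * helper. All constants below are illustrative assumptions.
 */
#include <stdio.h>

#define BIT(n)          (1UL << (n))
#define HPAGE_PMD_ORDER 9

int main(void)
{
	unsigned long pmd_only = BIT(HPAGE_PMD_ORDER);
	unsigned long mthp = BIT(HPAGE_PMD_ORDER) | BIT(6) | BIT(4) | BIT(2);
	int order;

	for (order = 2; order <= HPAGE_PMD_ORDER; order++)
		printf("order %d: pmd_only=%d mthp=%d\n", order,
		       !!(pmd_only & BIT(order)), !!(mthp & BIT(order)));
	return 0;
}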
From: Nico Pache
Subject: [PATCH v9 04/14] khugepaged: generalize alloc_charge_folio()
Date: Sun, 13 Jul 2025 18:31:57 -0600
Message-ID: <20250714003207.113275-5-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

From: Dev Jain

Pass order to alloc_charge_folio() and update mTHP statistics.

Reviewed-by: Baolin Wang
Co-developed-by: Nico Pache
Signed-off-by: Nico Pache
Signed-off-by: Dev Jain
Acked-by: David Hildenbrand
---
 Documentation/admin-guide/mm/transhuge.rst |  8 ++++++++
 include/linux/huge_mm.h                    |  2 ++
 mm/huge_memory.c                           |  4 ++++
 mm/khugepaged.c                            | 17 +++++++++++------
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index dff8d5985f0f..2c523dce6bc7 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -583,6 +583,14 @@ anon_fault_fallback_charge
 	instead falls back to using huge pages with lower orders or
 	small pages even though the allocation was successful.

+collapse_alloc
+	is incremented every time a huge page is successfully allocated for a
+	khugepaged collapse.
+
+collapse_alloc_failed
+	is incremented every time a huge page allocation fails during a
+	khugepaged collapse.
+
 zswpout
 	is incremented every time a huge page is swapped out to zswap in one
 	piece without splitting.
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7748489fde1b..4042078e8cc9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -125,6 +125,8 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_COLLAPSE_ALLOC,
+	MTHP_STAT_COLLAPSE_ALLOC_FAILED,
 	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPIN,
 	MTHP_STAT_SWPIN_FALLBACK,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bd7a623d7ef8..e2ed9493df77 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -614,6 +614,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
 DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
@@ -679,6 +681,8 @@ static struct attribute *any_stats_attrs[] = {
 #endif
 	&split_attr.attr,
 	&split_failed_attr.attr,
+	&collapse_alloc_attr.attr,
+	&collapse_alloc_failed_attr.attr,
 	NULL,
 };

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fa0642e66790..cc9a35185604 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1068,21 +1068,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 }

 static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
-			      struct collapse_control *cc)
+			      struct collapse_control *cc, u8 order)
 {
 	gfp_t gfp = (cc->is_khugepaged ?
		     alloc_hugepage_khugepaged_gfpmask() : GFP_TRANSHUGE);
 	int node = collapse_find_target_node(cc);
 	struct folio *folio;

-	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
+	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
 	if (!folio) {
 		*foliop = NULL;
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		if (order == HPAGE_PMD_ORDER)
+			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	}

-	count_vm_event(THP_COLLAPSE_ALLOC);
+	if (order == HPAGE_PMD_ORDER)
+		count_vm_event(THP_COLLAPSE_ALLOC);
+	count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);
+
 	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
 		folio_put(folio);
 		*foliop = NULL;
@@ -1119,7 +1124,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	mmap_read_unlock(mm);

-	result = alloc_charge_folio(&folio, mm, cc);
+	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;

@@ -1843,7 +1848,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));

-	result = alloc_charge_folio(&new_folio, mm, cc);
+	result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out;

--
2.50.0
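
As a quick illustration of how the two counters documented above surface to
userspace, the sketch below reads them for the PMD-sized order. It assumes
the existing per-size mTHP stats layout under
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats and a 2048 kB
PMD size (x86-64); both are assumptions about the running system, not
something defined by this patch.

/*
 * Minimal sketch: print the collapse_alloc and collapse_alloc_failed
 * counters for 2 MiB THPs. Path and size are assumptions (see above).
 */
#include <stdio.h>

static void print_counter(const char *name)
{
	char path[256], buf[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/%s",
		 name);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", name, buf);
	fclose(f);
}

int main(void)
{
	print_counter("collapse_alloc");
	print_counter("collapse_alloc_failed");
	return 0;
}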
From: Nico Pache
Subject: [PATCH v9 05/14] khugepaged: generalize __collapse_huge_page_* for mTHP support
Date: Sun, 13 Jul 2025 18:31:58 -0600
Message-ID: <20250714003207.113275-6-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

Generalize the order of the __collapse_huge_page_* functions to support
future mTHP collapse.

mTHP collapse can suffer from inconsistent behavior and memory waste
"creep", so disable swap-in and shared-page support for mTHP collapse.

No functional changes in this patch.
Reviewed-by: Baolin Wang
Co-developed-by: Dev Jain
Signed-off-by: Dev Jain
Signed-off-by: Nico Pache
Acked-by: David Hildenbrand
---
 mm/khugepaged.c | 49 +++++++++++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 18 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc9a35185604..ee54e3c1db4e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -552,15 +552,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
 					struct collapse_control *cc,
-					struct list_head *compound_pagelist)
+					struct list_head *compound_pagelist,
+					u8 order)
 {
 	struct page *page = NULL;
 	struct folio *folio = NULL;
 	pte_t *_pte;
 	int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
 	bool writable = false;
+	int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);

-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + (1 << order);
 	     _pte++, address += PAGE_SIZE) {
 		pte_t pteval = ptep_get(_pte);
 		if (pte_none(pteval) || (pte_present(pteval) &&
@@ -568,7 +570,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			++none_or_zero;
 			if (!userfaultfd_armed(vma) &&
 			    (!cc->is_khugepaged ||
-			     none_or_zero <= khugepaged_max_ptes_none)) {
+			     none_or_zero <= scaled_none)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -596,8 +598,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		/* See hpage_collapse_scan_pmd(). */
 		if (folio_maybe_mapped_shared(folio)) {
 			++shared;
-			if (cc->is_khugepaged &&
-			    shared > khugepaged_max_ptes_shared) {
+			if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
+			    shared > khugepaged_max_ptes_shared)) {
 				result = SCAN_EXCEED_SHARED_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 				goto out;
@@ -698,13 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 						struct vm_area_struct *vma,
 						unsigned long address,
 						spinlock_t *ptl,
-						struct list_head *compound_pagelist)
+						struct list_head *compound_pagelist,
+						u8 order)
 {
 	struct folio *src, *tmp;
 	pte_t *_pte;
 	pte_t pteval;

-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + (1 << order);
 	     _pte++, address += PAGE_SIZE) {
 		pteval = ptep_get(_pte);
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
@@ -751,7 +754,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 					     pmd_t *pmd,
 					     pmd_t orig_pmd,
 					     struct vm_area_struct *vma,
-					     struct list_head *compound_pagelist)
+					     struct list_head *compound_pagelist,
+					     u8 order)
 {
 	spinlock_t *pmd_ptl;

@@ -768,7 +772,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 	 * Release both raw and compound pages isolated
 	 * in __collapse_huge_page_isolate.
 	 */
-	release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
+	release_pte_pages(pte, pte + (1 << order), compound_pagelist);
 }

 /*
@@ -789,7 +793,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
 		unsigned long address, spinlock_t *ptl,
-		struct list_head *compound_pagelist)
+		struct list_head *compound_pagelist, u8 order)
 {
 	unsigned int i;
 	int result = SCAN_SUCCEED;

 	/*
 	 * Copying pages' contents is subject to memory poison at any iteration.
 	 */
-	for (i = 0; i < HPAGE_PMD_NR; i++) {
+	for (i = 0; i < (1 << order); i++) {
 		pte_t pteval = ptep_get(pte + i);
 		struct page *page = folio_page(folio, i);
 		unsigned long src_addr = address + i * PAGE_SIZE;
@@ -816,10 +820,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,

 	if (likely(result == SCAN_SUCCEED))
 		__collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
-						    compound_pagelist);
+						    compound_pagelist, order);
 	else
 		__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
-						 compound_pagelist);
+						 compound_pagelist, order);

 	return result;
 }
@@ -994,11 +998,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
 static int __collapse_huge_page_swapin(struct mm_struct *mm,
 				       struct vm_area_struct *vma,
 				       unsigned long haddr, pmd_t *pmd,
-				       int referenced)
+				       int referenced, u8 order)
 {
 	int swapped_in = 0;
 	vm_fault_t ret = 0;
-	unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
+	unsigned long address, end = haddr + (PAGE_SIZE << order);
 	int result;
 	pte_t *pte = NULL;
 	spinlock_t *ptl;
@@ -1029,6 +1033,15 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 		if (!is_swap_pte(vmf.orig_pte))
 			continue;

+		/* Dont swapin for mTHP collapse */
+		if (order != HPAGE_PMD_ORDER) {
+			count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
+			pte_unmap(pte);
+			mmap_read_unlock(mm);
+			result = SCAN_EXCEED_SWAP_PTE;
+			goto out;
+		}
+
 		vmf.pte = pte;
 		vmf.ptl = ptl;
 		ret = do_swap_page(&vmf);
@@ -1149,7 +1162,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		 * that case.  Continuing to collapse causes inconsistency.
 		 */
 		result = __collapse_huge_page_swapin(mm, vma, address, pmd,
-						     referenced);
+						     referenced, HPAGE_PMD_ORDER);
 		if (result != SCAN_SUCCEED)
 			goto out_nolock;
 	}
@@ -1197,7 +1210,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
 	if (pte) {
 		result = __collapse_huge_page_isolate(vma, address, pte, cc,
-						      &compound_pagelist);
+						      &compound_pagelist, HPAGE_PMD_ORDER);
 		spin_unlock(pte_ptl);
 	} else {
 		result = SCAN_PMD_NULL;
@@ -1227,7 +1240,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,

 	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
 					   vma, address, pte_ptl,
-					   &compound_pagelist);
+					   &compound_pagelist, HPAGE_PMD_ORDER);
 	pte_unmap(pte);
 	if (unlikely(result != SCAN_SUCCEED))
 		goto out_up_write;
--
2.50.0
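
To make the scaled_none computation introduced above concrete, here is a
standalone worked example. It assumes the common x86-64 configuration
(4 KiB base pages, HPAGE_PMD_ORDER = 9) and the default
khugepaged_max_ptes_none of 511; those constants are assumptions for
illustration, not values taken from this patch.

/*
 * Model of the per-order max_ptes_none scaling used in
 * __collapse_huge_page_isolate(): the max_ptes_none budget shrinks
 * proportionally to the attempted collapse order.
 */
#include <stdio.h>

#define HPAGE_PMD_ORDER 9

int main(void)
{
	int khugepaged_max_ptes_none = 511;	/* assumed default */
	int order;

	for (order = 2; order <= HPAGE_PMD_ORDER; order++) {
		int scaled_none = khugepaged_max_ptes_none >>
				  (HPAGE_PMD_ORDER - order);

		/* e.g. order 9 -> 511, order 4 -> 15, order 2 -> 3 */
		printf("order %d: up to %d of %d PTEs may be none/zero\n",
		       order, scaled_none, 1 << order);
	}
	return 0;
}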
From: Nico Pache
Subject: [PATCH v9 06/14] khugepaged: introduce collapse_scan_bitmap for mTHP support
Date: Sun, 13 Jul 2025 18:31:59 -0600
Message-ID: <20250714003207.113275-7-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

khugepaged scans anon PMD ranges for potential collapse to a hugepage. To
add mTHP support, this scan is instead used to record chunks of utilized
sections of the PMD.

collapse_scan_bitmap uses a stack struct to recursively scan a bitmap that
represents chunks of utilized regions. We can then determine which mTHP
size fits best; in the following patch, this bitmap is set while scanning
the anon PMD.

A minimum collapse order of 2 is used, as this is the lowest order
supported by anon memory.

max_ptes_none is used as a scale to determine how "full" an order must be
before it is considered for collapse. When attempting to collapse an order
whose setting is "always", always collapse to that order in a greedy
manner, without considering the number of bits set.

Signed-off-by: Nico Pache
---
 include/linux/khugepaged.h |  4 ++
 mm/khugepaged.c            | 94 ++++++++++++++++++++++++++++++++++----
 2 files changed, 89 insertions(+), 9 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index ff6120463745..0f957711a117 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -1,6 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef _LINUX_KHUGEPAGED_H
 #define _LINUX_KHUGEPAGED_H
+#define KHUGEPAGED_MIN_MTHP_ORDER 2
+#define KHUGEPAGED_MIN_MTHP_NR (1 << KHUGEPAGED_MIN_MTHP_ORDER)

+	cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+		{ HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
+
+	while (top >= 0) {
+		state = cc->mthp_bitmap_stack[top--];
+		order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
+		offset = state.offset;
+		num_chunks = 1 << (state.order);
+		// Skip mTHP orders that are not enabled
+		if (!test_bit(order, &enabled_orders))
+			goto next;
+
+		// copy the relavant section to a new bitmap
+		bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
+				   MTHP_BITMAP_SIZE);
+
+		bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
+		threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
+				 >> (HPAGE_PMD_ORDER - state.order);
+
+		//Check if the region is "almost full" based on the threshold
+		if (bits_set > threshold_bits || is_pmd_only
+		    || test_bit(order, &huge_anon_orders_always)) {
+			ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
+						 mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
+			if (ret == SCAN_SUCCEED) {
+				collapsed += (1 << order);
+				continue;
+			}
+		}
+
+next:
+		if (state.order > 0) {
+			next_order = state.order - 1;
+			mid_offset = offset + (num_chunks / 2);
+			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+				{ next_order, mid_offset };
+			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+				{ next_order, offset };
+		}
+	}
+	return collapsed;
+}
+
 static int collapse_scan_pmd(struct mm_struct *mm,
 			     struct vm_area_struct *vma,
 			     unsigned long address, bool *mmap_locked,
 			     struct collapse_control *cc)
@@ -1444,9 +1522,7 @@ static int collapse_scan_pmd(struct mm_struct *mm,
 	pte_unmap_unlock(pte, ptl);
 	if (result == SCAN_SUCCEED) {
 		result = collapse_huge_page(mm, address, referenced,
-					    unmapped, cc);
-		/* collapse_huge_page will return with the mmap_lock released */
-		*mmap_locked = false;
+					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
 	}
 out:
trace_mm_khugepaged_scan_pmd(mm, folio, writable, referenced, --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 584921531F9 for ; Mon, 14 Jul 2025 00:35:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453329; cv=none; b=IEPisGnlgbwbBhULa2jmpF0zCVWMlFzczAbN0n9CWwNdyF+HugLLyBMFPhSSPSWpLPhXcScBYsWVEgwULr8RkDmTzFEc4F3gXqGYTP2TzEzE6pp2mBjD2OemYkHf/6GCNSbvqaNILfFRk4LHUzurbgcPNVLamEGps0/BBk7kr3A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453329; c=relaxed/simple; bh=4DsHiOqzJySg+4vnDyyGlPd+0yZIH9KcqWAwAVBBZt4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BPHnrdyCpHKKMH12Gr6oRMzL8E61JDfBNu1ztt89pkxVij/wRlZcTS5f96JAD3B+myDpDD5sfGLlk/ybN5qTTma0lq9ksrysMDPvrf53tlcwJfPoytedi2JD/Jpcuyt/7Z4q6vuExqPmdfqgh94UHFg6Io+3e+5JHC1S8QvZDVA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=eGzp2u8U; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="eGzp2u8U" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453326; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SEfbbY+aiDcXulJ2cTAQfJ1S1rJ6pXXkvjMzpnxKETQ=; b=eGzp2u8UJu7yo+EBJuVSJMMms6wPwxkijHzEeyEXpFA90QPw2DX34tjNFRUa+d675L0cQ3 zvz7cmiWDgEofFiSivfDIqAQhm+kQY8gEdhgKxO2kJpKCUfGEFaV/t7OVObdaqq6sju3S5 aKGhAe4Fn68DSw40RaHO/q83+1VcxWY= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-533-dFUqOlGCMFaHN2_wXK1KwQ-1; Sun, 13 Jul 2025 20:34:25 -0400 X-MC-Unique: dFUqOlGCMFaHN2_wXK1KwQ-1 X-Mimecast-MFC-AGG-ID: dFUqOlGCMFaHN2_wXK1KwQ_1752453261 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 047E518011DF; Mon, 14 Jul 2025 00:34:21 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3F07830001B5; Mon, 14 Jul 2025 00:34:06 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, 
baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 07/14] khugepaged: add mTHP support Date: Sun, 13 Jul 2025 18:32:00 -0600 Message-ID: <20250714003207.113275-8-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" Introduce the ability for khugepaged to collapse to different mTHP sizes. While scanning PMD ranges for potential collapse candidates, keep track of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If mTHPs are enabled we remove the restriction of max_ptes_none during the scan phase so we dont bailout early and miss potential mTHP candidates. After the scan is complete we will perform binary recursion on the bitmap to determine which mTHP size would be most efficient to collapse to. max_ptes_none will be scaled by the attempted collapse order to determine how full a THP must be to be eligible. If a mTHP collapse is attempted, but contains swapped out, or shared pages, we dont perform the collapse. For non-PMD collapse we must leave the anon VMA write locked until after we collapse the mTHP-- in the PMD case all the pages are isolated, but in the non-PMD case this is not true, and we must keep the lock to prevent changes to the VMA from occurring. Currently madv_collapse is not supported and will only attempt PMD collapse. 
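For reference, the eligibility check can be pictured with a small stand-alone, user-space sketch (illustrative only, not part of this patch). It mirrors the scaling described above: the PMD-wide max_ptes_none cutoff is shifted down to the attempted order, and a candidate region must have more populated KHUGEPAGED_MIN_MTHP_ORDER-sized chunks than that scaled threshold. The constants below (4K base pages, PMD order 9) and all names are assumptions made for the example:

#include <stdio.h>

#define MIN_MTHP_ORDER  2                   /* one bitmap bit covers 4 PTEs */
#define PMD_ORDER       9                   /* assumption: x86-64-style PMD */
#define PMD_NR          (1 << PMD_ORDER)    /* 512 PTEs per PMD range       */

/* Minimum populated chunks before a collapse at this order is attempted. */
static int chunks_needed(int order, int max_ptes_none)
{
        int nr_chunks = 1 << (order - MIN_MTHP_ORDER);
        int threshold = (PMD_NR - max_ptes_none - 1) >>
                        (PMD_ORDER - (order - MIN_MTHP_ORDER));

        return threshold + 1 > nr_chunks ? nr_chunks : threshold + 1;
}

int main(void)
{
        static const int orders[] = { 9, 6, 4 };    /* 2M, 256K, 64K with 4K pages */
        static const int none[]   = { 0, 255, 511 };

        for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                        printf("order=%d max_ptes_none=%3d -> need %3d of %3d chunks\n",
                               orders[i], none[j],
                               chunks_needed(orders[i], none[j]),
                               1 << (orders[i] - MIN_MTHP_ORDER));
        return 0;
}

With the default max_ptes_none of 511 a single populated chunk is enough at any order, while max_ptes_none=0 requires every chunk of the candidate region to be populated, which matches the intent of scaling the cutoff by order.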
Signed-off-by: Nico Pache --- mm/khugepaged.c | 142 +++++++++++++++++++++++++++++++++--------------- 1 file changed, 99 insertions(+), 43 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 59b2431ca616..5d7c5be9097e 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1133,13 +1133,14 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, { LIST_HEAD(compound_pagelist); pmd_t *pmd, _pmd; - pte_t *pte; + pte_t *pte =3D NULL, mthp_pte; pgtable_t pgtable; struct folio *folio; spinlock_t *pmd_ptl, *pte_ptl; int result =3D SCAN_FAIL; struct vm_area_struct *vma; struct mmu_notifier_range range; + unsigned long _address =3D address + offset * PAGE_SIZE; =20 VM_BUG_ON(address & ~HPAGE_PMD_MASK); =20 @@ -1155,13 +1156,13 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, *mmap_locked =3D false; } =20 - result =3D alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER); + result =3D alloc_charge_folio(&folio, mm, cc, order); if (result !=3D SCAN_SUCCEED) goto out_nolock; =20 mmap_read_lock(mm); - result =3D hugepage_vma_revalidate(mm, address, true, &vma, cc, - BIT(HPAGE_PMD_ORDER)); + *mmap_locked =3D true; + result =3D hugepage_vma_revalidate(mm, address, true, &vma, cc, BIT(order= )); if (result !=3D SCAN_SUCCEED) { mmap_read_unlock(mm); goto out_nolock; @@ -1179,13 +1180,14 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, * released when it fails. So we jump out_nolock directly in * that case. Continuing to collapse causes inconsistency. */ - result =3D __collapse_huge_page_swapin(mm, vma, address, pmd, - referenced, HPAGE_PMD_ORDER); + result =3D __collapse_huge_page_swapin(mm, vma, _address, pmd, + referenced, order); if (result !=3D SCAN_SUCCEED) goto out_nolock; } =20 mmap_read_unlock(mm); + *mmap_locked =3D false; /* * Prevent all access to pagetables with the exception of * gup_fast later handled by the ptep_clear_flush and the VM @@ -1195,8 +1197,7 @@ static int collapse_huge_page(struct mm_struct *mm, u= nsigned long address, * mmap_lock. 
*/ mmap_write_lock(mm); - result =3D hugepage_vma_revalidate(mm, address, true, &vma, cc, - BIT(HPAGE_PMD_ORDER)); + result =3D hugepage_vma_revalidate(mm, address, true, &vma, cc, BIT(order= )); if (result !=3D SCAN_SUCCEED) goto out_up_write; /* check if the pmd is still valid */ @@ -1207,11 +1208,12 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, vma_start_write(vma); anon_vma_lock_write(vma->anon_vma); =20 - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address, - address + HPAGE_PMD_SIZE); + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address, + _address + (PAGE_SIZE << order)); mmu_notifier_invalidate_range_start(&range); =20 pmd_ptl =3D pmd_lock(mm, pmd); /* probably unnecessary */ + /* * This removes any huge TLB entry from the CPU so we won't allow * huge and small TLB entries for the same virtual address to @@ -1225,18 +1227,16 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, mmu_notifier_invalidate_range_end(&range); tlb_remove_table_sync_one(); =20 - pte =3D pte_offset_map_lock(mm, &_pmd, address, &pte_ptl); + pte =3D pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl); if (pte) { - result =3D __collapse_huge_page_isolate(vma, address, pte, cc, - &compound_pagelist, HPAGE_PMD_ORDER); + result =3D __collapse_huge_page_isolate(vma, _address, pte, cc, + &compound_pagelist, order); spin_unlock(pte_ptl); } else { result =3D SCAN_PMD_NULL; } =20 if (unlikely(result !=3D SCAN_SUCCEED)) { - if (pte) - pte_unmap(pte); spin_lock(pmd_ptl); BUG_ON(!pmd_none(*pmd)); /* @@ -1251,17 +1251,17 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, } =20 /* - * All pages are isolated and locked so anon_vma rmap - * can't run anymore. + * For PMD collapse all pages are isolated and locked so anon_vma + * rmap can't run anymore */ - anon_vma_unlock_write(vma->anon_vma); + if (order =3D=3D HPAGE_PMD_ORDER) + anon_vma_unlock_write(vma->anon_vma); =20 result =3D __collapse_huge_page_copy(pte, folio, pmd, _pmd, - vma, address, pte_ptl, - &compound_pagelist, HPAGE_PMD_ORDER); - pte_unmap(pte); + vma, _address, pte_ptl, + &compound_pagelist, order); if (unlikely(result !=3D SCAN_SUCCEED)) - goto out_up_write; + goto out_unlock_anon_vma; =20 /* * The smp_wmb() inside __folio_mark_uptodate() ensures the @@ -1269,25 +1269,46 @@ static int collapse_huge_page(struct mm_struct *mm,= unsigned long address, * write. 
*/ __folio_mark_uptodate(folio); - pgtable =3D pmd_pgtable(_pmd); - - _pmd =3D folio_mk_pmd(folio, vma->vm_page_prot); - _pmd =3D maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma); - - spin_lock(pmd_ptl); - BUG_ON(!pmd_none(*pmd)); - folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE); - folio_add_lru_vma(folio, vma); - pgtable_trans_huge_deposit(mm, pmd, pgtable); - set_pmd_at(mm, address, pmd, _pmd); - update_mmu_cache_pmd(vma, address, pmd); - deferred_split_folio(folio, false); - spin_unlock(pmd_ptl); + if (order =3D=3D HPAGE_PMD_ORDER) { + pgtable =3D pmd_pgtable(_pmd); + _pmd =3D folio_mk_pmd(folio, vma->vm_page_prot); + _pmd =3D maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma); + + spin_lock(pmd_ptl); + BUG_ON(!pmd_none(*pmd)); + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE); + folio_add_lru_vma(folio, vma); + pgtable_trans_huge_deposit(mm, pmd, pgtable); + set_pmd_at(mm, address, pmd, _pmd); + update_mmu_cache_pmd(vma, address, pmd); + deferred_split_folio(folio, false); + spin_unlock(pmd_ptl); + } else { /* mTHP collapse */ + mthp_pte =3D mk_pte(&folio->page, vma->vm_page_prot); + mthp_pte =3D maybe_mkwrite(pte_mkdirty(mthp_pte), vma); + + spin_lock(pmd_ptl); + BUG_ON(!pmd_none(*pmd)); + folio_ref_add(folio, (1 << order) - 1); + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE); + folio_add_lru_vma(folio, vma); + set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order)); + update_mmu_cache_range(NULL, vma, _address, pte, (1 << order)); + + smp_wmb(); /* make pte visible before pmd */ + pmd_populate(mm, pmd, pmd_pgtable(_pmd)); + spin_unlock(pmd_ptl); + } =20 folio =3D NULL; =20 result =3D SCAN_SUCCEED; +out_unlock_anon_vma: + if (order !=3D HPAGE_PMD_ORDER) + anon_vma_unlock_write(vma->anon_vma); out_up_write: + if (pte) + pte_unmap(pte); mmap_write_unlock(mm); out_nolock: *mmap_locked =3D false; @@ -1363,31 +1384,60 @@ static int collapse_scan_pmd(struct mm_struct *mm, { pmd_t *pmd; pte_t *pte, *_pte; + int i; int result =3D SCAN_FAIL, referenced =3D 0; int none_or_zero =3D 0, shared =3D 0; struct page *page =3D NULL; struct folio *folio =3D NULL; unsigned long _address; + unsigned long enabled_orders; spinlock_t *ptl; int node =3D NUMA_NO_NODE, unmapped =3D 0; + bool is_pmd_only; bool writable =3D false; - + int chunk_none_count =3D 0; + int scaled_none =3D khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEP= AGED_MIN_MTHP_ORDER); + unsigned long tva_flags =3D cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0; VM_BUG_ON(address & ~HPAGE_PMD_MASK); =20 result =3D find_pmd_or_thp_or_none(mm, address, &pmd); if (result !=3D SCAN_SUCCEED) goto out; =20 + bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE); + bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE); memset(cc->node_load, 0, sizeof(cc->node_load)); nodes_clear(cc->alloc_nmask); + + if (cc->is_khugepaged) + enabled_orders =3D thp_vma_allowable_orders(vma, vma->vm_flags, + tva_flags, THP_ORDERS_ALL_ANON); + else + enabled_orders =3D BIT(HPAGE_PMD_ORDER); + + is_pmd_only =3D (enabled_orders =3D=3D (1 << HPAGE_PMD_ORDER)); + pte =3D pte_offset_map_lock(mm, pmd, address, &ptl); if (!pte) { result =3D SCAN_PMD_NULL; goto out; } =20 - for (_address =3D address, _pte =3D pte; _pte < pte + HPAGE_PMD_NR; - _pte++, _address +=3D PAGE_SIZE) { + for (i =3D 0; i < HPAGE_PMD_NR; i++) { + /* + * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if + * there are pages in this chunk keep track of it in the bitmap + * for mTHP collapsing. 
+ */ + if (i % KHUGEPAGED_MIN_MTHP_NR =3D=3D 0) { + if (chunk_none_count <=3D scaled_none) + bitmap_set(cc->mthp_bitmap, + i / KHUGEPAGED_MIN_MTHP_NR, 1); + chunk_none_count =3D 0; + } + + _pte =3D pte + i; + _address =3D address + i * PAGE_SIZE; pte_t pteval =3D ptep_get(_pte); if (is_swap_pte(pteval)) { ++unmapped; @@ -1410,10 +1460,11 @@ static int collapse_scan_pmd(struct mm_struct *mm, } } if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) { + ++chunk_none_count; ++none_or_zero; if (!userfaultfd_armed(vma) && - (!cc->is_khugepaged || - none_or_zero <=3D khugepaged_max_ptes_none)) { + (!cc->is_khugepaged || !is_pmd_only || + none_or_zero <=3D khugepaged_max_ptes_none)) { continue; } else { result =3D SCAN_EXCEED_NONE_PTE; @@ -1509,6 +1560,7 @@ static int collapse_scan_pmd(struct mm_struct *mm, address))) referenced++; } + if (!writable) { result =3D SCAN_PAGE_RO; } else if (cc->is_khugepaged && @@ -1521,8 +1573,12 @@ static int collapse_scan_pmd(struct mm_struct *mm, out_unmap: pte_unmap_unlock(pte, ptl); if (result =3D=3D SCAN_SUCCEED) { - result =3D collapse_huge_page(mm, address, referenced, - unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0); + result =3D collapse_scan_bitmap(mm, address, referenced, unmapped, cc, + mmap_locked, enabled_orders); + if (result > 0) + result =3D SCAN_SUCCEED; + else + result =3D SCAN_FAIL; } out: trace_mm_khugepaged_scan_pmd(mm, folio, writable, referenced, --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C4D5216A956 for ; Mon, 14 Jul 2025 00:35:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453348; cv=none; b=A1LPyIlWrdtrAwDdTUPicAC51E3z9h18ZEUfnt9sb87sFGPDV2V4HRvQJulf0EPDH3XvmFZXMdyeudlWRJ3cxbA0Ec5qiUreeyS1WrrEd6lrnvqdd0ARyXVORzAyyIlbavhRlcM1+gN8m1f3YxNHcDPG673+Hkils7nu0DEdC6Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453348; c=relaxed/simple; bh=8SOMyjqFpjKmJpR1bDjOCtnMkVNRow4wZiCt9qx8JOw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lrI0aVXcexLPeNxPDhn11CA8fyg50nN/fDXvc28pb3gnWYhiFPWktFEc1IdJ5FHjzvoZ+ffmsssG8hEcFDgS/ucVUfcN6mGMJtbbKn2Lo7zKYj2hLIR0r+WA5AeCgD/znJqvCvcDe7cDkYC4fG1YKDdudKpJlvNreeTBym27PoE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=iSkJojCr; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="iSkJojCr" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453345; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=azrHuy9q5M9OTEtgJQ9kQ7Y/+wyqIwmQi02KrtIoB0I=; 
b=iSkJojCrDRDgjS3AK76JOU2+ADkIxDxVRInqHuMQg3Xm4MLHsYFMS5807FIGu9hzZvGeCL LiFlxXB7DfowuESGggDsa/yO5oajo+Vp1HPc7suzKHuJ5iF90HfwlGBQLpvBp7l+BtHxMI rXUtnTD9nRbvJ0DZNud+iI/HJMZGV3U= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-262-ep6DKYGjMPCqis-bl_0a4A-1; Sun, 13 Jul 2025 20:34:38 -0400 X-MC-Unique: ep6DKYGjMPCqis-bl_0a4A-1 X-Mimecast-MFC-AGG-ID: ep6DKYGjMPCqis-bl_0a4A_1752453274 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 46C32180028B; Mon, 14 Jul 2025 00:34:34 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 85EAB30001A1; Mon, 14 Jul 2025 00:34:21 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 08/14] khugepaged: skip collapsing mTHP to smaller orders Date: Sun, 13 Jul 2025 18:32:01 -0600 Message-ID: <20250714003207.113275-9-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in some pages being unmapped. Skip these cases until we have a way to check if its ok to collapse to a smaller mTHP size (like in the case of a partially mapped folio). This patch is inspired by Dev Jain's work on khugepaged mTHP support [1]. 
[1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/ Reviewed-by: Baolin Wang Co-developed-by: Dev Jain Signed-off-by: Dev Jain Signed-off-by: Nico Pache Acked-by: David Hildenbrand --- mm/khugepaged.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 5d7c5be9097e..a701d9f0f158 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -612,7 +612,12 @@ static int __collapse_huge_page_isolate(struct vm_area= _struct *vma, folio =3D page_folio(page); VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio); =20 - /* See hpage_collapse_scan_pmd(). */ + if (order !=3D HPAGE_PMD_ORDER && folio_order(folio) >=3D order) { + result =3D SCAN_PTE_MAPPED_HUGEPAGE; + goto out; + } + + /* See khugepaged_scan_pmd(). */ if (folio_maybe_mapped_shared(folio)) { ++shared; if (order !=3D HPAGE_PMD_ORDER || (cc->is_khugepaged && --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 89FDD224D6 for ; Mon, 14 Jul 2025 00:34:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453300; cv=none; b=m+P9o0xIPdEkoRQC0pDsrDXSZS2Ic9omHIUEauRHAD4xPNa8eErtZ7dvPH8FZvQyKDebnRfCJZ0007j0NbfSI5NGYnAxbs7o/zjZVSFe9ciTXI8jWsmyp6QZ2sstcZhytOXj9dZ0jmS3vOgLMeiehwSCHTCLlzjE0aYyKmMOpJI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453300; c=relaxed/simple; bh=APLURJLlYBirxrIp/mY7RmU7pUPhLVo8iELDJJ4+8oo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=odEH3SBaiMKLcEPmc930kpLzP4XMtkpJi6K8pSX1x2Ya77erhmEmYTv1fzMvAvO9qOwRoLK+VAJUtOgDAF4zIRTrskTp0+qCLZBrSAh7ALUXS/++2pjAbxrsbOYnmfB/9EPkJCwpflXlF60tbW+FLZNnlx01ZiYyf6LnQBaUM+U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Y6cwlm+M; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Y6cwlm+M" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453297; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=BfZNAwhk1NqT5tIuPPyjI7C606wcPsYsghkwk2GurVA=; b=Y6cwlm+MLWLOh/IbmQVTqXUexKx2vcLHsJK9lm+F1CSO8KEPMmsWYse/RIb6W8qEjEjZ9X p5ZyvQTOs/0dSETgE6Pbw4Vp21/rjSFSnyaLMxXGzroSzQBqstzz+VmFWJLYn2EfBynagm isRoFU1+6/gHCr+cohBNpD8x4WKnvKk= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-653-FlwQt_DJNyqxL5eDISseBg-1; Sun, 13 Jul 2025 20:34:52 -0400 X-MC-Unique: FlwQt_DJNyqxL5eDISseBg-1 X-Mimecast-MFC-AGG-ID: 
FlwQt_DJNyqxL5eDISseBg_1752453288 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id EF258195605A; Mon, 14 Jul 2025 00:34:47 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id C570930001A1; Mon, 14 Jul 2025 00:34:34 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 09/14] khugepaged: avoid unnecessary mTHP collapse attempts Date: Sun, 13 Jul 2025 18:32:02 -0600 Message-ID: <20250714003207.113275-10-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" There are cases where, if an attempted collapse fails, all subsequent orders are guaranteed to also fail. Avoid these collapse attempts by bailing out early. 
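The decision can be sketched stand-alone (illustrative only; the status names below are a reduced, hypothetical set, not the kernel's SCAN_* enum): a failure tied to the size of the attempt is still worth retrying on the two half-sized sub-regions, while a failure that applies to the whole range ends the bitmap walk.

#include <stdbool.h>
#include <stdio.h>

/* Reduced, hypothetical status set -- not the kernel's full SCAN_* enum. */
enum demo_status {
        DEMO_SUCCEED,
        DEMO_EXCEED_NONE_PTE,   /* region too sparse at this order  */
        DEMO_ALLOC_FAIL,        /* no folio of this order available */
        DEMO_VMA_CHECK_FAIL,    /* the VMA is not eligible at all   */
};

/*
 * Order-dependent failures (too sparse, allocation of that size failed,
 * ...) may still succeed for a half-sized sub-region, so the bitmap walk
 * continues.  Range-wide failures make every smaller attempt pointless.
 */
static bool worth_trying_lower_orders(enum demo_status s)
{
        switch (s) {
        case DEMO_EXCEED_NONE_PTE:
        case DEMO_ALLOC_FAIL:
                return true;
        default:
                return false;
        }
}

int main(void)
{
        printf("too sparse     -> %s\n",
               worth_trying_lower_orders(DEMO_EXCEED_NONE_PTE) ? "recurse" : "stop");
        printf("vma ineligible -> %s\n",
               worth_trying_lower_orders(DEMO_VMA_CHECK_FAIL) ? "recurse" : "stop");
        return 0;
}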
Signed-off-by: Nico Pache --- mm/khugepaged.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index a701d9f0f158..7a9c4edf0e23 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1367,6 +1367,23 @@ static int collapse_scan_bitmap(struct mm_struct *mm= , unsigned long address, collapsed +=3D (1 << order); continue; } + /* + * Some ret values indicate all lower order will also + * fail, dont trying to collapse smaller orders + */ + if (ret =3D=3D SCAN_EXCEED_NONE_PTE || + ret =3D=3D SCAN_EXCEED_SWAP_PTE || + ret =3D=3D SCAN_EXCEED_SHARED_PTE || + ret =3D=3D SCAN_PTE_NON_PRESENT || + ret =3D=3D SCAN_PTE_UFFD_WP || + ret =3D=3D SCAN_ALLOC_HUGE_PAGE_FAIL || + ret =3D=3D SCAN_CGROUP_CHARGE_FAIL || + ret =3D=3D SCAN_COPY_MC || + ret =3D=3D SCAN_PAGE_LOCK || + ret =3D=3D SCAN_PAGE_COUNT) + goto next; + else + break; } =20 next: --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E62BC1487F4 for ; Mon, 14 Jul 2025 00:35:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453312; cv=none; b=qxTs6zWjJn+OK6LcVGn241nbcAWACElg+dNqBo606wSCImk/nirqzw/Ed1IF1plu11Gm9j+fHal1XvuQg33aFZZWUPi2SVggdsu8Iki0Pd4EKQm515xCMEK8v5N7ONyZnaRMadDf1uiQXBZ3ZD1MkD47SFXfGaXZpeP3H+Spivg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453312; c=relaxed/simple; bh=uP345dm49Iik13gh5fWta0zX9s4/3nA6rJzwThamf10=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jgneu8nCAkVbkrOlg4DJ+h8pbZskR8Onkz0bistpQmqMmVkgVdS+n9tBYPK0SOY81Opb5eVRxmehyTGgMfnVfGV/a3BOZwthBLowKVw3strAIEEWmv1qJuJcqkncO45W+3ObrnE83o52tZWKZ3jjQ81EM5JPxpOHFlWcTS21LJs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=L7kF3ass; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="L7kF3ass" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453309; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=BY0uq/JwVuLAVo6hhpefEsrpjrJB6SfsaEu6VQnxYkM=; b=L7kF3assVXUh8JTQmpAb3Myk55DEyU5pPUugq5TLZ/V9vlq7GytZCOCNLJMARfgqcQEz6x UI6zBzDUHqW5qab+/bK0gJIVwxle+9dmEtYiOFjqwPEcH5zeUbESYIAfBzYAqty8d4Rr75 6geb2SyonhIHPzozjFkcFuxF98f2DMM= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-486-vOWgCFvjO7-Cger7x3lYig-1; Sun, 13 Jul 2025 20:35:05 -0400 X-MC-Unique: vOWgCFvjO7-Cger7x3lYig-1 X-Mimecast-MFC-AGG-ID: 
vOWgCFvjO7-Cger7x3lYig_1752453301 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3302A1800289; Mon, 14 Jul 2025 00:35:01 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 789A730001A1; Mon, 14 Jul 2025 00:34:48 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 10/14] khugepaged: allow khugepaged to check all anonymous mTHP orders Date: Sun, 13 Jul 2025 18:32:03 -0600 Message-ID: <20250714003207.113275-11-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" From: Baolin Wang We have now allowed mTHP collapse, but thp_vma_allowable_order() still only checks if the PMD-sized mTHP is allowed to collapse. This prevents scanning and collapsing of 64K mTHP when only 64K mTHP is enabled. Thus, we should modify the checks to allow all large orders of anonymous mTHP. Signed-off-by: Baolin Wang Signed-off-by: Nico Pache Acked-by: David Hildenbrand --- mm/khugepaged.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 7a9c4edf0e23..3772dc0d78ea 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -491,8 +491,11 @@ void khugepaged_enter_vma(struct vm_area_struct *vma, { if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) && hugepage_pmd_enabled()) { - if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS, - PMD_ORDER)) + unsigned long orders =3D vma_is_anonymous(vma) ? + THP_ORDERS_ALL_ANON : BIT(PMD_ORDER); + + if (thp_vma_allowable_orders(vma, vm_flags, TVA_ENFORCE_SYSFS, + orders)) __khugepaged_enter(vma->vm_mm); } } @@ -2624,6 +2627,8 @@ static unsigned int collapse_scan_mm_slot(unsigned in= t pages, int *result, =20 vma_iter_init(&vmi, mm, khugepaged_scan.address); for_each_vma(vmi, vma) { + unsigned long orders =3D vma_is_anonymous(vma) ? 
+ THP_ORDERS_ALL_ANON : BIT(PMD_ORDER); unsigned long hstart, hend; =20 cond_resched(); @@ -2631,8 +2636,8 @@ static unsigned int collapse_scan_mm_slot(unsigned in= t pages, int *result, progress++; break; } - if (!thp_vma_allowable_order(vma, vma->vm_flags, - TVA_ENFORCE_SYSFS, PMD_ORDER)) { + if (!thp_vma_allowable_orders(vma, vma->vm_flags, + TVA_ENFORCE_SYSFS, orders)) { skip: progress++; continue; --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5F44635979 for ; Mon, 14 Jul 2025 00:35:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453326; cv=none; b=a+y7FWcdEx6T/YWmuXjBYo9qMFG6E6msbxXT4j5j7UGP7D4YtOtqr0xLuyagyDFHJeFfF5b/bWkhSdMKAsQvqZcKp2tgo/0jYCi267/Uusn/4H2bs+i/evMNkcvlmOjw0AEKoC1lIvR9zfcqpCiaLSvEsPYC1RQUAzIuEWI+Ncg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453326; c=relaxed/simple; bh=GT4UYuwnk01J+UO3AsAk5FDpvcP+7Zj/HbL/qTRX3G4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Gukr3mm0eBYo61Fyo8IeOUgKC+Q9ViwQYMw3rVq9+nx++FhN0FTPlDWEJew87npjeO1iWWOQtEeqBsh0WbJPKjVY16pBdvFvhblIeRaRcTfzZ1NA0ODruRCL8JTgXLwW4Rzg8KdTmbwivSdINKpdnMOD3/31JtZomvrDeug/auw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=agmckdQu; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="agmckdQu" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453323; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uOFU3HEhV8SnfXYDhncuj6HdsbX8p30mN92yCMnfwIs=; b=agmckdQuIREAi5dFIPMOwg3syVzYBFcmUogj0OtcHz5PF5JKCOFgPI2yzM+bOAwH1KkVgM f8vKuOey+zf5wJYWQ+C1Kfev8s12nTB56muFUWEFzAh8kQdLcpzQVGEXpuYvPPJDemtser jYeUoUQNywn0Lk3IG90tWARv32U7DcQ= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-592-u3M4V6djMSuLRS3lTcxq4w-1; Sun, 13 Jul 2025 20:35:19 -0400 X-MC-Unique: u3M4V6djMSuLRS3lTcxq4w-1 X-Mimecast-MFC-AGG-ID: u3M4V6djMSuLRS3lTcxq4w_1752453315 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id CA6DE1801210; Mon, 14 Jul 2025 00:35:14 +0000 (UTC) Received: from 
h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id BB61030001A1; Mon, 14 Jul 2025 00:35:01 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 11/14] khugepaged: kick khugepaged for enabling none-PMD-sized mTHPs Date: Sun, 13 Jul 2025 18:32:04 -0600 Message-ID: <20250714003207.113275-12-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" From: Baolin Wang When only non-PMD-sized mTHP is enabled (such as only 64K mTHP enabled), we should also allow kicking khugepaged to attempt scanning and collapsing 64K mTHP. Modify hugepage_pmd_enabled() to support mTHP collapse, and while we are at it, rename it to make the function name more clear. Signed-off-by: Baolin Wang Signed-off-by: Nico Pache --- mm/khugepaged.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 3772dc0d78ea..65cb8c58bbf8 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -430,7 +430,7 @@ static inline int collapse_test_exit_or_disable(struct = mm_struct *mm) test_bit(MMF_DISABLE_THP, &mm->flags); } =20 -static bool hugepage_pmd_enabled(void) +static bool hugepage_enabled(void) { /* * We cover the anon, shmem and the file-backed case here; file-backed @@ -442,11 +442,11 @@ static bool hugepage_pmd_enabled(void) if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && hugepage_global_enabled()) return true; - if (test_bit(PMD_ORDER, &huge_anon_orders_always)) + if (READ_ONCE(huge_anon_orders_always)) return true; - if (test_bit(PMD_ORDER, &huge_anon_orders_madvise)) + if (READ_ONCE(huge_anon_orders_madvise)) return true; - if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) && + if (READ_ONCE(huge_anon_orders_inherit) && hugepage_global_enabled()) return true; if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled()) @@ -490,7 +490,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma, vm_flags_t vm_flags) { if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) && - hugepage_pmd_enabled()) { + hugepage_enabled()) { unsigned long orders =3D vma_is_anonymous(vma) ? 
THP_ORDERS_ALL_ANON : BIT(PMD_ORDER); =20 @@ -2714,7 +2714,7 @@ static unsigned int collapse_scan_mm_slot(unsigned in= t pages, int *result, =20 static int khugepaged_has_work(void) { - return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled(); + return !list_empty(&khugepaged_scan.mm_head) && hugepage_enabled(); } =20 static int khugepaged_wait_event(void) @@ -2787,7 +2787,7 @@ static void khugepaged_wait_work(void) return; } =20 - if (hugepage_pmd_enabled()) + if (hugepage_enabled()) wait_event_freezable(khugepaged_wait, khugepaged_wait_event()); } =20 @@ -2818,7 +2818,7 @@ static void set_recommended_min_free_kbytes(void) int nr_zones =3D 0; unsigned long recommended_min; =20 - if (!hugepage_pmd_enabled()) { + if (!hugepage_enabled()) { calculate_min_free_kbytes(); goto update_wmarks; } @@ -2868,7 +2868,7 @@ int start_stop_khugepaged(void) int err =3D 0; =20 mutex_lock(&khugepaged_mutex); - if (hugepage_pmd_enabled()) { + if (hugepage_enabled()) { if (!khugepaged_thread) khugepaged_thread =3D kthread_run(khugepaged, NULL, "khugepaged"); @@ -2894,7 +2894,7 @@ int start_stop_khugepaged(void) void khugepaged_min_free_kbytes_update(void) { mutex_lock(&khugepaged_mutex); - if (hugepage_pmd_enabled() && khugepaged_thread) + if (hugepage_enabled() && khugepaged_thread) set_recommended_min_free_kbytes(); mutex_unlock(&khugepaged_mutex); } --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46BD514AD2B for ; Mon, 14 Jul 2025 00:35:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453337; cv=none; b=dh99CIqmCuGYNRV/bCysnBNvQf+gymNr0sNSLvXr5tBVcCLJLaIvmd/zVtiQs+NZBGW94PkI1DM4Opkq0oMk/4Lq8HcFJguUhi6IRJUAqNUsr+Ik22nD4jHmUlPKSx6qTGecH+Lw0wpA5uiQNvYCXnU3EasYDpX8Qcla8sgFn7I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453337; c=relaxed/simple; bh=vTW8aqmFQyL/UJqpVfu+Awxj2lS2MOkYs4EO+0vr8Uk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NSW7yNdw/6CSCBbVZfJtU6xlMWFalhejychXxHE5nexdadD6VZPa9hrvxCDhP/JAH76iB1l1Oxe5HFfBc8M4/XrUT+9/c7k6/w6DJqL5T0Ty++BcEJRGaxsaT87OXRl7zFuvQ8ApF7X/JEm6vl+ABk+3o7wkv9tg4eIUwaMJFKM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=BYKW8xYE; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BYKW8xYE" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TDHsdfiITWi1pnUsMSjaLnG0WwHLXZ+KHZ38MYerHJU=; b=BYKW8xYEItN4Pn/c5pgsWCZWXhx9xqE9d+eMpcGBkb5vzYvyw6cKfZc7d22ppnQzUDdgSb 
Nojd2bHzZ6pk1yfgMrrSmHOLr1HzKqaRdOVez/74vCoJU9bmiy4/Ggubmb2eilgHrFHt4+ tQrA8IiaaBwaz8U7+W3g+yTGKVfKglw= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-297-52g8xMnVOduRZQaxW_M1UQ-1; Sun, 13 Jul 2025 20:35:33 -0400 X-MC-Unique: 52g8xMnVOduRZQaxW_M1UQ-1 X-Mimecast-MFC-AGG-ID: 52g8xMnVOduRZQaxW_M1UQ_1752453328 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 651701956086; Mon, 14 Jul 2025 00:35:28 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 526D730001A1; Mon, 14 Jul 2025 00:35:15 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 12/14] khugepaged: improve tracepoints for mTHP orders Date: Sun, 13 Jul 2025 18:32:05 -0600 Message-ID: <20250714003207.113275-13-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" Add the order to the tracepoints to give better insight into what order is being operated at for khugepaged. 
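When reading the new order=%d field in the trace output, note that it reports a page order, not a byte size. A tiny stand-alone helper for translating it (illustrative only; assumes 4K base pages):

#include <stdio.h>

#define PAGE_SHIFT 12   /* assumption for the example: 4K base pages */

int main(void)
{
        /* a few values the order= field may report with 4K pages */
        static const int orders[] = { 2, 4, 6, 9 };

        for (int i = 0; i < 4; i++) {
                unsigned long bytes = 1UL << (PAGE_SHIFT + orders[i]);

                printf("order=%d -> %3d pages, %4lu KiB\n",
                       orders[i], 1 << orders[i], bytes >> 10);
        }
        return 0;
}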
Reviewed-by: Baolin Wang Signed-off-by: Nico Pache Acked-by: David Hildenbrand --- include/trace/events/huge_memory.h | 34 +++++++++++++++++++----------- mm/khugepaged.c | 10 +++++---- 2 files changed, 28 insertions(+), 16 deletions(-) diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge= _memory.h index 2305df6cb485..70661bbf676f 100644 --- a/include/trace/events/huge_memory.h +++ b/include/trace/events/huge_memory.h @@ -92,34 +92,37 @@ TRACE_EVENT(mm_khugepaged_scan_pmd, =20 TRACE_EVENT(mm_collapse_huge_page, =20 - TP_PROTO(struct mm_struct *mm, int isolated, int status), + TP_PROTO(struct mm_struct *mm, int isolated, int status, int order), =20 - TP_ARGS(mm, isolated, status), + TP_ARGS(mm, isolated, status, order), =20 TP_STRUCT__entry( __field(struct mm_struct *, mm) __field(int, isolated) __field(int, status) + __field(int, order) ), =20 TP_fast_assign( __entry->mm =3D mm; __entry->isolated =3D isolated; __entry->status =3D status; + __entry->order =3D order; ), =20 - TP_printk("mm=3D%p, isolated=3D%d, status=3D%s", + TP_printk("mm=3D%p, isolated=3D%d, status=3D%s order=3D%d", __entry->mm, __entry->isolated, - __print_symbolic(__entry->status, SCAN_STATUS)) + __print_symbolic(__entry->status, SCAN_STATUS), + __entry->order) ); =20 TRACE_EVENT(mm_collapse_huge_page_isolate, =20 TP_PROTO(struct folio *folio, int none_or_zero, - int referenced, bool writable, int status), + int referenced, bool writable, int status, int order), =20 - TP_ARGS(folio, none_or_zero, referenced, writable, status), + TP_ARGS(folio, none_or_zero, referenced, writable, status, order), =20 TP_STRUCT__entry( __field(unsigned long, pfn) @@ -127,6 +130,7 @@ TRACE_EVENT(mm_collapse_huge_page_isolate, __field(int, referenced) __field(bool, writable) __field(int, status) + __field(int, order) ), =20 TP_fast_assign( @@ -135,27 +139,31 @@ TRACE_EVENT(mm_collapse_huge_page_isolate, __entry->referenced =3D referenced; __entry->writable =3D writable; __entry->status =3D status; + __entry->order =3D order; ), =20 - TP_printk("scan_pfn=3D0x%lx, none_or_zero=3D%d, referenced=3D%d, writable= =3D%d, status=3D%s", + TP_printk("scan_pfn=3D0x%lx, none_or_zero=3D%d, referenced=3D%d, writable= =3D%d, status=3D%s order=3D%d", __entry->pfn, __entry->none_or_zero, __entry->referenced, __entry->writable, - __print_symbolic(__entry->status, SCAN_STATUS)) + __print_symbolic(__entry->status, SCAN_STATUS), + __entry->order) ); =20 TRACE_EVENT(mm_collapse_huge_page_swapin, =20 - TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret), + TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret, + int order), =20 - TP_ARGS(mm, swapped_in, referenced, ret), + TP_ARGS(mm, swapped_in, referenced, ret, order), =20 TP_STRUCT__entry( __field(struct mm_struct *, mm) __field(int, swapped_in) __field(int, referenced) __field(int, ret) + __field(int, order) ), =20 TP_fast_assign( @@ -163,13 +171,15 @@ TRACE_EVENT(mm_collapse_huge_page_swapin, __entry->swapped_in =3D swapped_in; __entry->referenced =3D referenced; __entry->ret =3D ret; + __entry->order =3D order; ), =20 - TP_printk("mm=3D%p, swapped_in=3D%d, referenced=3D%d, ret=3D%d", + TP_printk("mm=3D%p, swapped_in=3D%d, referenced=3D%d, ret=3D%d, order=3D%= d", __entry->mm, __entry->swapped_in, __entry->referenced, - __entry->ret) + __entry->ret, + __entry->order) ); =20 TRACE_EVENT(mm_khugepaged_scan_file, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 65cb8c58bbf8..d0c99b86b304 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -711,13 
+711,14 @@ static int __collapse_huge_page_isolate(struct vm_are= a_struct *vma, } else { result =3D SCAN_SUCCEED; trace_mm_collapse_huge_page_isolate(folio, none_or_zero, - referenced, writable, result); + referenced, writable, result, + order); return result; } out: release_pte_pages(pte, _pte, compound_pagelist); trace_mm_collapse_huge_page_isolate(folio, none_or_zero, - referenced, writable, result); + referenced, writable, result, order); return result; } =20 @@ -1097,7 +1098,8 @@ static int __collapse_huge_page_swapin(struct mm_stru= ct *mm, =20 result =3D SCAN_SUCCEED; out: - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result); + trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result, + order); return result; } =20 @@ -1322,7 +1324,7 @@ static int collapse_huge_page(struct mm_struct *mm, u= nsigned long address, *mmap_locked =3D false; if (folio) folio_put(folio); - trace_mm_collapse_huge_page(mm, result =3D=3D SCAN_SUCCEED, result); + trace_mm_collapse_huge_page(mm, result =3D=3D SCAN_SUCCEED, result, order= ); return result; } =20 --=20 2.50.0 From nobody Tue Oct 7 05:40:50 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C4B1319C55E for ; Mon, 14 Jul 2025 00:35:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453353; cv=none; b=kY7UWSm9yPcgeuXohNmuF6akSgTf431Pb2wxBtU5pN/5wtNr61TYl9m71XZEGGz6E6Y5l8M91RXy1EXKJsd59PLXgObjbZpA5NX64fc+wMKj1tbyLtDqV2WDvzh1FhLcYEFONmE0WkkWaSQuaiVFtKoX3Cxz+mu5sYs7wrSx6xg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1752453353; c=relaxed/simple; bh=7100Le1L0uiN5LSU2FwJKvrxzF2TpMvfQvPfenrYE9c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bBvoxuSXBWqYTTVc9qEQTbSJuKIrxld9DKqhFuhyUvBr4ju5G9TbBV8nmGkgiOKtYHXek80vCfxj5eAtpV1hPZr8KuwsmpPZ/1JexJ3kKGTMmbogbf/wVRCJZY74aOhsY56SgeRQSR3q2jXPOMCEruaX3v/Sy+LfCCGlVvIbewI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=ZueKyLMg; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ZueKyLMg" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1752453351; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Rlnoq8W1rC6MvS1nOAXetGyiiwQDHXDvTGvmPdtfgpw=; b=ZueKyLMgFmyrQYIK4IO7xqPvui05L2jsmoQH8dFDy4LVUB6djkpEMHNT+/p1cnypKzqeWz CC+6KE7i/DV3SugEkavSEUI7+X8uKMee/4K3ZWV7bGVMoFJbf8yqGMkNhWI1xgBDq+2Ezw yCWXkusLtQKA0DtkGA2OHTaKO0tW0pQ= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS 
(version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-80-qITqtPePOPeCq0qhrtT2uA-1; Sun, 13 Jul 2025 20:35:46 -0400 X-MC-Unique: qITqtPePOPeCq0qhrtT2uA-1 X-Mimecast-MFC-AGG-ID: qITqtPePOPeCq0qhrtT2uA_1752453342 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E1D3A19560B2; Mon, 14 Jul 2025 00:35:41 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.9]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E430030001B5; Mon, 14 Jul 2025 00:35:28 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com Subject: [PATCH v9 13/14] khugepaged: add per-order mTHP khugepaged stats Date: Sun, 13 Jul 2025 18:32:06 -0600 Message-ID: <20250714003207.113275-14-npache@redhat.com> In-Reply-To: <20250714003207.113275-1-npache@redhat.com> References: <20250714003207.113275-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Content-Type: text/plain; charset="utf-8" With mTHP support inplace, let add the per-order mTHP stats for exceeding NONE, SWAP, and SHARED. Signed-off-by: Nico Pache --- Documentation/admin-guide/mm/transhuge.rst | 17 +++++++++++++++++ include/linux/huge_mm.h | 3 +++ mm/huge_memory.c | 7 +++++++ mm/khugepaged.c | 15 ++++++++++++--- 4 files changed, 39 insertions(+), 3 deletions(-) diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/adm= in-guide/mm/transhuge.rst index 2c523dce6bc7..28c8af61efba 100644 --- a/Documentation/admin-guide/mm/transhuge.rst +++ b/Documentation/admin-guide/mm/transhuge.rst @@ -658,6 +658,23 @@ nr_anon_partially_mapped an anonymous THP as "partially mapped" and count it here, even thou= gh it is not actually partially mapped anymore. =20 +collapse_exceed_swap_pte + The number of anonymous THP which contain at least one swap PTE. + Currently khugepaged does not support collapsing mTHP regions that + contain a swap PTE. + +collapse_exceed_none_pte + The number of anonymous THP which have exceeded the none PTE thresh= old. 
+	With mTHP collapse, a bitmap is used to gather the state of a PMD region
+	and is then recursively checked from largest to smallest order against
+	the scaled max_ptes_none count. This counter indicates that the next
+	enabled order will be checked.
+
+collapse_exceed_shared_pte
+	The number of anonymous THP which contain at least one shared PTE.
+	Currently khugepaged does not support collapsing mTHP regions that
+	contain a shared PTE.
+
 As the system ages, allocating huge pages may be expensive as the
 system uses memory compaction to copy data around memory to free a
 huge page for use. There are some counters in ``/proc/vmstat`` to help

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4042078e8cc9..e0a27f80f390 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -141,6 +141,9 @@ enum mthp_stat_item {
 	MTHP_STAT_SPLIT_DEFERRED,
 	MTHP_STAT_NR_ANON,
 	MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
+	MTHP_STAT_COLLAPSE_EXCEED_SWAP,
+	MTHP_STAT_COLLAPSE_EXCEED_NONE,
+	MTHP_STAT_COLLAPSE_EXCEED_SHARED,
 	__MTHP_STAT_COUNT
 };
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e2ed9493df77..57e5699cf638 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -632,6 +632,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
 DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
 DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
 DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
+
 
 static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
@@ -648,6 +652,9 @@ static struct attribute *anon_stats_attrs[] = {
 	&split_deferred_attr.attr,
 	&nr_anon_attr.attr,
 	&nr_anon_partially_mapped_attr.attr,
+	&collapse_exceed_swap_pte_attr.attr,
+	&collapse_exceed_none_pte_attr.attr,
+	&collapse_exceed_shared_pte_attr.attr,
 	NULL,
 };
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d0c99b86b304..8a5873d0a23a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -594,7 +594,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			continue;
 		} else {
 			result = SCAN_EXCEED_NONE_PTE;
-			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+			if (order == HPAGE_PMD_ORDER)
+				count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+			else
+				count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);
 			goto out;
 		}
 	}
@@ -623,8 +626,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		/* See khugepaged_scan_pmd(). */
 		if (folio_maybe_mapped_shared(folio)) {
 			++shared;
-			if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
-			    shared > khugepaged_max_ptes_shared)) {
+			if (order != HPAGE_PMD_ORDER) {
+				result = SCAN_EXCEED_SHARED_PTE;
+				count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
+				goto out;
+			}
+
+			if (cc->is_khugepaged &&
+			    shared > khugepaged_max_ptes_shared) {
 				result = SCAN_EXCEED_SHARED_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 				goto out;

-- 
2.50.0

From nobody Tue Oct 7 05:40:50 2025
From: Nico Pache
To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com, Bagas Sanjaya
Subject: [PATCH v9 14/14] Documentation: mm: update the admin guide for mTHP collapse
Date: Sun, 13 Jul 2025 18:32:07 -0600
Message-ID: <20250714003207.113275-15-npache@redhat.com>
In-Reply-To: <20250714003207.113275-1-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com>

Now that we can collapse to mTHPs, let's update the admin guide to
reflect these changes and provide proper guidance on how to utilize it.

Reviewed-by: Bagas Sanjaya
Signed-off-by: Nico Pache
---
 Documentation/admin-guide/mm/transhuge.rst | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 28c8af61efba..bd49b46398c9 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -63,7 +63,7 @@ often.
 THP can be enabled system wide or restricted to certain tasks or even
 memory ranges inside task's address space. Unless THP is completely
 disabled, there is ``khugepaged`` daemon that scans memory and
-collapses sequences of basic pages into PMD-sized huge pages.
+collapses sequences of basic pages into huge pages.
 
 The THP behaviour is controlled via :ref:`sysfs ` interface and
 using madvise(2) and prctl(2) system calls.
@@ -144,6 +144,18 @@ hugepage sizes have enabled="never". If enabling multiple
 hugepage sizes, the kernel will select the most appropriate enabled size
 for a given allocation.
 
+khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size
+to determine collapses. When using mTHPs it's recommended to set
+max_ptes_none low -- ideally less than HPAGE_PMD_NR / 2 (255 on 4k page
+size). This will prevent undesired "creep" behavior that leads to
+continuously collapsing to the largest mTHP size; when we collapse, we are
+bringing in new non-zero pages that will, on a subsequent scan, cause the
+max_ptes_none check of the +1 order to always be satisfied.
+By limiting this to less than half the current order, we make sure we
+don't cause this feedback loop. max_ptes_shared and max_ptes_swap have no
+effect when collapsing to a mTHP, and mTHP collapse will fail on shared
+or swapped out pages.
+
 It's also possible to limit defrag efforts in the VM to generate
 anonymous hugepages in case they're not immediately free to madvise
 regions or to never try to defrag memory and simply fallback to regular
@@ -221,11 +233,6 @@ top-level control are "never")
 Khugepaged controls
 -------------------
 
-.. note::
-   khugepaged currently only searches for opportunities to collapse to
-   PMD-sized THP and no attempt is made to collapse to other THP
-   sizes.
-
 khugepaged runs usually at low frequency so while one may not want to
 invoke defrag algorithms synchronously during the page faults, it
 should be worth invoking defrag at least in khugepaged. However it's

-- 
2.50.0
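
As a quick illustration of how the new per-order counters from patch 13/14
surface to userspace, here is a minimal sketch that reads them for one mTHP
size. It assumes the per-size stats layout already documented in
transhuge.rst (/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/);
the "64" size below is only an example and the collapse_exceed_* files are
absent on kernels without this series.

/*
 * Illustrative only, not part of the series: dump the per-order
 * collapse_exceed_* counters for one mTHP size.
 */
#include <stdio.h>

static long read_stat(const char *size_kb, const char *name)
{
    char path[256];
    long val = -1;
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/kernel/mm/transparent_hugepage/hugepages-%skB/stats/%s",
             size_kb, name);
    f = fopen(path, "r");
    if (!f)
        return -1;  /* size not enabled or counter not present */
    if (fscanf(f, "%ld", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}

int main(void)
{
    const char *size_kb = "64";  /* example mTHP size, adjust as needed */

    printf("collapse_exceed_none_pte:   %ld\n",
           read_stat(size_kb, "collapse_exceed_none_pte"));
    printf("collapse_exceed_swap_pte:   %ld\n",
           read_stat(size_kb, "collapse_exceed_swap_pte"));
    printf("collapse_exceed_shared_pte: %ld\n",
           read_stat(size_kb, "collapse_exceed_shared_pte"));
    return 0;
}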
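
The "less than HPAGE_PMD_NR / 2" guidance in patch 14/14 follows from the
scaling arithmetic. The sketch below works through it under the assumption
that the per-order limit is max_ptes_none >> (HPAGE_PMD_ORDER - order); that
formula is one reading of how the series scales the threshold, so treat it as
an assumption rather than a quote of the kernel code.

/*
 * Illustrative arithmetic only (not kernel code): why max_ptes_none values
 * at or above HPAGE_PMD_NR / 2 let every collapsed region also qualify for
 * the next order on a later scan ("creep"). Assumes 4K base pages and an
 * assumed per-order limit of max_ptes_none >> (HPAGE_PMD_ORDER - order).
 */
#include <stdio.h>

#define HPAGE_PMD_ORDER 9
#define HPAGE_PMD_NR    (1 << HPAGE_PMD_ORDER)  /* 512 PTEs per PMD */

static unsigned int scaled_limit(unsigned int max_ptes_none, int order)
{
    return max_ptes_none >> (HPAGE_PMD_ORDER - order);
}

static void show_creep(unsigned int max_ptes_none)
{
    printf("max_ptes_none = %u\n", max_ptes_none);
    for (int order = 3; order < HPAGE_PMD_ORDER; order++) {
        /*
         * After a successful order-N collapse, the enclosing order-(N+1)
         * region holds 2^N populated PTEs and up to 2^N none PTEs; it
         * collapses again iff that none count does not exceed the next
         * order's scaled limit.
         */
        unsigned int none = 1u << order;
        unsigned int limit = scaled_limit(max_ptes_none, order + 1);

        printf("  order %d -> %d: none=%u limit=%u  %s\n",
               order, order + 1, none, limit,
               none <= limit ? "collapses again (creep)" : "stops");
    }
}

int main(void)
{
    show_creep(255);               /* below HPAGE_PMD_NR / 2: creep stops */
    show_creep(HPAGE_PMD_NR / 2);  /* 256: creeps up to the PMD size */
    return 0;
}

With 255 every step stops immediately, while 256 lets each order collapse
into the next one, which is exactly the feedback loop the admin-guide text
warns about.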