From nobody Thu Apr 2 17:31:03 2026
From: Nico Pache <npache@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
    apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
    dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
    gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
    jackmanb@google.com, jack@suse.cz, jannh@google.com, jglisse@google.com,
    joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mathieu.desnoyers@efficios.com, matthew.brost@intel.com,
    mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
    peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
    raquini@redhat.com, rdunlap@infradead.org, richard.weiyang@gmail.com,
    rientjes@google.com, rostedt@goodmis.org, rppt@kernel.org,
    ryan.roberts@arm.com, shivankg@amd.com, sunnanyong@huawei.com,
    surenb@google.com, thomas.hellstrom@linux.intel.com, tiwai@suse.de,
    usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, zokeefe@google.com,
    "Lorenzo Stoakes (Oracle)" <lorenzo.stoakes@oracle.com>
Subject: [PATCH mm-unstable v4 1/5] mm: consolidate anonymous folio PTE mapping into helpers
Date: Wed, 25 Mar 2026 05:40:18 -0600
Message-ID: <20260325114022.444081-2-npache@redhat.com>
In-Reply-To: <20260325114022.444081-1-npache@redhat.com>
References: <20260325114022.444081-1-npache@redhat.com>

The anonymous page fault handler in do_anonymous_page() open-codes the
sequence to map a newly allocated anonymous folio at the PTE level:
  - construct the PTE entry
  - add rmap
  - add to LRU
  - set the PTEs
  - update the MMU cache

Introduce two helpers to consolidate this duplicated logic, mirroring the
existing map_anon_folio_pmd_nopf() pattern for PMD-level mappings:

map_anon_folio_pte_nopf(): constructs the PTE entry, takes folio
references, and adds the anon rmap and LRU entries. It also handles the
uffd-wp marker, which can occur in the pf variant. The future khugepaged
mTHP code will call this to map a newly collapsed mTHP folio.

map_anon_folio_pte_pf(): extends the nopf variant with the MM_ANONPAGES
counter update and the mTHP fault allocation statistics needed on the
page fault path.

The zero-page read path in do_anonymous_page() is also untangled from the
shared setpte label, since it does not allocate a folio and should not
share the same mapping sequence as the write path. We can now leave
nr_pages uninitialized at its declaration and use the single-page
update_mmu_cache() to handle the zero-page update.
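As an illustration, a future PTE-level caller (for example the khugepaged
mTHP collapse path mentioned above) could use the nopf helper roughly as
follows. This is an editorial sketch, not code from this series; the
wrapper name and locking context are assumed:

	/*
	 * Hypothetical caller sketch: map a freshly collapsed mTHP folio.
	 * Assumes the PTE lock is held and the VMA has been revalidated.
	 */
	static void map_collapsed_mthp(struct folio *folio, pte_t *pte,
				       struct vm_area_struct *vma,
				       unsigned long haddr, bool uffd_wp)
	{
		/* Entry construction, rmap, LRU and set_ptes() in one call. */
		map_anon_folio_pte_nopf(folio, pte, vma, haddr, uffd_wp);
		/* A collapse is not a fault, so only the counter is updated. */
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, folio_nr_pages(folio));
	}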
This refactoring will also help reduce code duplication between
mm/memory.c and mm/khugepaged.c, and provides a clean API for PTE-level
anonymous folio mapping that can be reused by future callers (like
khugepaged mTHP support).

Suggested-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
Reviewed-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nico Pache <npache@redhat.com>
---
 include/linux/mm.h |  4 +++
 mm/memory.c        | 61 +++++++++++++++++++++++++++++++---------------
 2 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index eebf940058da..7edebadb2cb2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -5226,4 +5226,8 @@ static inline bool snapshot_page_is_faithful(const struct page_snapshot *ps)
 
 void snapshot_page(struct page_snapshot *ps, const struct page *page);
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index bd93f34b6120..6396d32c348a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5292,6 +5292,37 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp)
+{
+	const unsigned int nr_pages = folio_nr_pages(folio);
+	pte_t entry = folio_mk_pte(folio, vma->vm_page_prot);
+
+	entry = pte_sw_mkyoung(entry);
+	if (vma->vm_flags & VM_WRITE)
+		entry = pte_mkwrite(pte_mkdirty(entry), vma);
+	if (uffd_wp)
+		entry = pte_mkuffd_wp(entry);
+
+	folio_ref_add(folio, nr_pages - 1);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	folio_add_lru_vma(folio, vma);
+	set_ptes(vma->vm_mm, addr, pte, entry, nr_pages);
+	update_mmu_cache_range(NULL, vma, addr, pte, nr_pages);
+}
+
+static void map_anon_folio_pte_pf(struct folio *folio, pte_t *pte,
+		struct vm_area_struct *vma, unsigned long addr, bool uffd_wp)
+{
+	const unsigned int order = folio_order(folio);
+
+	map_anon_folio_pte_nopf(folio, pte, vma, addr, uffd_wp);
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1L << order);
+	count_mthp_stat(order, MTHP_STAT_ANON_FAULT_ALLOC);
+}
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -5303,7 +5334,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	unsigned long addr = vmf->address;
 	struct folio *folio;
 	vm_fault_t ret = 0;
-	int nr_pages = 1;
+	int nr_pages;
 	pte_t entry;
 
 	/* File mapping without ->vm_ops ? */
@@ -5338,7 +5369,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return handle_userfault(vmf, VM_UFFD_MISSING);
 		}
-		goto setpte;
+		if (vmf_orig_pte_uffd_wp(vmf))
+			entry = pte_mkuffd_wp(entry);
+		set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+
+		/* No need to invalidate - it was non-present before */
+		update_mmu_cache(vma, addr, vmf->pte);
+		goto unlock;
 	}
 
 	/* Allocate our own private page.
 	 */
@@ -5362,11 +5399,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	 */
 	__folio_mark_uptodate(folio);
 
-	entry = folio_mk_pte(folio, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
-	if (vma->vm_flags & VM_WRITE)
-		entry = pte_mkwrite(pte_mkdirty(entry), vma);
-
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	if (!vmf->pte)
 		goto release;
@@ -5388,19 +5420,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		folio_put(folio);
 		return handle_userfault(vmf, VM_UFFD_MISSING);
 	}
-
-	folio_ref_add(folio, nr_pages - 1);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
-	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
-setpte:
-	if (vmf_orig_pte_uffd_wp(vmf))
-		entry = pte_mkuffd_wp(entry);
-	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
+	map_anon_folio_pte_pf(folio, vmf->pte, vma, addr,
+			      vmf_orig_pte_uffd_wp(vmf));
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-- 
2.53.0

From nobody Thu Apr 2 17:31:03 2026
From: Nico Pache <npache@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
    apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
    dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
    gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
    jackmanb@google.com, jack@suse.cz, jannh@google.com, jglisse@google.com,
    joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mathieu.desnoyers@efficios.com, matthew.brost@intel.com,
    mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
    peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
    raquini@redhat.com, rdunlap@infradead.org, richard.weiyang@gmail.com,
    rientjes@google.com, rostedt@goodmis.org, rppt@kernel.org,
    ryan.roberts@arm.com, shivankg@amd.com, sunnanyong@huawei.com,
    surenb@google.com, thomas.hellstrom@linux.intel.com, tiwai@suse.de,
    usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, zokeefe@google.com
Subject: [PATCH mm-unstable v4 2/5] mm: introduce is_pmd_order helper
Date: Wed, 25 Mar 2026 05:40:19 -0600
Message-ID: <20260325114022.444081-3-npache@redhat.com>
In-Reply-To: <20260325114022.444081-1-npache@redhat.com>
References: <20260325114022.444081-1-npache@redhat.com>

In order to add mTHP support to khugepaged, we will often be checking
whether a given order is (or is not) a PMD order. Some places in the
kernel already use this check, so let's create a simple helper function
to keep the code clean and readable.
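To make the readability win concrete, this is the shape of call site the
helper cleans up, taken in spirit from the do_set_pmd() hunk below (shown
here only as an illustration, not as additional patch content):

	/* Before: open-coded comparison against HPAGE_PMD_ORDER. */
	if (folio_order(folio) != HPAGE_PMD_ORDER)
		return ret;

	/* After: the intent is stated directly. */
	if (!is_pmd_order(folio_order(folio)))
		return ret;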
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
 include/linux/huge_mm.h | 5 +++++
 mm/huge_memory.c        | 2 +-
 mm/khugepaged.c         | 6 +++---
 mm/memory.c             | 2 +-
 mm/mempolicy.c          | 2 +-
 mm/page_alloc.c         | 4 ++--
 mm/shmem.c              | 3 +--
 7 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c8799dca3b60..1258fa37e85b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -769,6 +769,11 @@ static inline bool pmd_is_huge(pmd_t pmd)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static inline bool is_pmd_order(unsigned int order)
+{
+	return order == HPAGE_PMD_ORDER;
+}
+
 static inline int split_folio_to_list_to_order(struct folio *folio,
 		struct list_head *list, int new_order)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2833b06d7498..b2a6060b3c20 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4118,7 +4118,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	i_mmap_unlock_read(mapping);
 out:
 	xas_destroy(&xas);
-	if (old_order == HPAGE_PMD_ORDER)
+	if (is_pmd_order(old_order))
 		count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
 	count_mthp_stat(old_order, !ret ? MTHP_STAT_SPLIT : MTHP_STAT_SPLIT_FAILED);
 	return ret;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6bd7a7c0632a..1f4609761294 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1547,7 +1547,7 @@ static enum scan_result try_collapse_pte_mapped_thp(struct mm_struct *mm, unsign
 	if (IS_ERR(folio))
 		return SCAN_PAGE_NULL;
 
-	if (folio_order(folio) != HPAGE_PMD_ORDER) {
+	if (!is_pmd_order(folio_order(folio))) {
 		result = SCAN_PAGE_COMPOUND;
 		goto drop_folio;
 	}
@@ -2030,7 +2030,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 	 * we locked the first folio, then a THP might be there already.
 	 * This will be discovered on the first iteration.
 	 */
-	if (folio_order(folio) == HPAGE_PMD_ORDER) {
+	if (is_pmd_order(folio_order(folio))) {
 		result = SCAN_PTE_MAPPED_HUGEPAGE;
 		goto out_unlock;
 	}
@@ -2358,7 +2358,7 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
 			continue;
 		}
 
-		if (folio_order(folio) == HPAGE_PMD_ORDER) {
+		if (is_pmd_order(folio_order(folio))) {
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			/*
 			 * PMD-sized THP implies that we can only try
diff --git a/mm/memory.c b/mm/memory.c
index 6396d32c348a..e44469f9cf65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5573,7 +5573,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
 	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
 		return ret;
 
-	if (folio_order(folio) != HPAGE_PMD_ORDER)
+	if (!is_pmd_order(folio_order(folio)))
 		return ret;
 	page = &folio->page;
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ff52fb94ff27..fd08771e2057 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2449,7 +2449,7 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
 
 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    /* filter "hugepage" allocation, unless from alloc_pages() */
-	    order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
+	    is_pmd_order(order) && ilx != NO_INTERLEAVE_INDEX) {
 		/*
 		 * For hugepage allocation and non-interleave policy which
 		 * allows the current node (or other explicitly preferred
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 915b6aef55d0..ee81f5c67c18 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -652,7 +652,7 @@ static inline unsigned int order_to_pindex(int migratetype, int order)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	bool movable;
 	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		VM_BUG_ON(order != HPAGE_PMD_ORDER);
+		VM_BUG_ON(!is_pmd_order(order));
 
 		movable = migratetype == MIGRATE_MOVABLE;
 
@@ -684,7 +684,7 @@ static inline bool pcp_allowed_order(unsigned int order)
 	if (order <= PAGE_ALLOC_COSTLY_ORDER)
 		return true;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (order == HPAGE_PMD_ORDER)
+	if (is_pmd_order(order))
 		return true;
 #endif
 	return false;
diff --git a/mm/shmem.c b/mm/shmem.c
index d00044257401..4ecefe02881d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -5532,8 +5532,7 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
 		spin_unlock(&huge_shmem_orders_lock);
 	} else if (sysfs_streq(buf, "inherit")) {
 		/* Do not override huge allocation policy with non-PMD sized mTHP */
-		if (shmem_huge == SHMEM_HUGE_FORCE &&
-		    order != HPAGE_PMD_ORDER)
+		if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
 			return -EINVAL;
 
 		spin_lock(&huge_shmem_orders_lock);
-- 
2.53.0

From nobody Thu Apr 2 17:31:03 2026
From: Nico Pache <npache@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
    apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
    dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
    gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
    jackmanb@google.com, jack@suse.cz, jannh@google.com, jglisse@google.com,
    joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mathieu.desnoyers@efficios.com, matthew.brost@intel.com,
    mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
    peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
    raquini@redhat.com, rdunlap@infradead.org, richard.weiyang@gmail.com,
    rientjes@google.com, rostedt@goodmis.org, rppt@kernel.org,
    ryan.roberts@arm.com, shivankg@amd.com, sunnanyong@huawei.com,
    surenb@google.com, thomas.hellstrom@linux.intel.com, tiwai@suse.de,
    usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, zokeefe@google.com,
    "Lorenzo Stoakes (Oracle)" <lorenzo.stoakes@oracle.com>
Subject: [PATCH mm-unstable v4 3/5] mm/khugepaged: define KHUGEPAGED_MAX_PTES_LIMIT as HPAGE_PMD_NR - 1
Date: Wed, 25 Mar 2026 05:40:20 -0600
Message-ID: <20260325114022.444081-4-npache@redhat.com>
In-Reply-To: <20260325114022.444081-1-npache@redhat.com>
References: <20260325114022.444081-1-npache@redhat.com>

The value (HPAGE_PMD_NR - 1) is often used in the khugepaged code to
signify the limit of the max_ptes_* tunables. Add a define for it to
improve code readability and reuse.

Acked-by: Pedro Falcato <pfalcato@suse.de>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Suggested-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
Reviewed-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
 mm/khugepaged.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1f4609761294..c9c17bfccf0d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -89,6 +89,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  *
  * Note that these are only respected if collapse was initiated by khugepaged.
  */
+#define KHUGEPAGED_MAX_PTES_LIMIT (HPAGE_PMD_NR - 1)
 unsigned int khugepaged_max_ptes_none __read_mostly;
 static unsigned int khugepaged_max_ptes_swap __read_mostly;
 static unsigned int khugepaged_max_ptes_shared __read_mostly;
@@ -259,7 +260,7 @@ static ssize_t max_ptes_none_store(struct kobject *kobj,
 	unsigned long max_ptes_none;
 
 	err = kstrtoul(buf, 10, &max_ptes_none);
-	if (err || max_ptes_none > HPAGE_PMD_NR - 1)
+	if (err || max_ptes_none > KHUGEPAGED_MAX_PTES_LIMIT)
 		return -EINVAL;
 
 	khugepaged_max_ptes_none = max_ptes_none;
@@ -284,7 +285,7 @@ static ssize_t max_ptes_swap_store(struct kobject *kobj,
 	unsigned long max_ptes_swap;
 
 	err = kstrtoul(buf, 10, &max_ptes_swap);
-	if (err || max_ptes_swap > HPAGE_PMD_NR - 1)
+	if (err || max_ptes_swap > KHUGEPAGED_MAX_PTES_LIMIT)
 		return -EINVAL;
 
 	khugepaged_max_ptes_swap = max_ptes_swap;
@@ -310,7 +311,7 @@ static ssize_t max_ptes_shared_store(struct kobject *kobj,
 	unsigned long max_ptes_shared;
 
 	err = kstrtoul(buf, 10, &max_ptes_shared);
-	if (err || max_ptes_shared > HPAGE_PMD_NR - 1)
+	if (err || max_ptes_shared > KHUGEPAGED_MAX_PTES_LIMIT)
 		return -EINVAL;
 
 	khugepaged_max_ptes_shared = max_ptes_shared;
@@ -382,7 +383,7 @@ int __init khugepaged_init(void)
 		return -ENOMEM;
 
 	khugepaged_pages_to_scan = HPAGE_PMD_NR * 8;
-	khugepaged_max_ptes_none = HPAGE_PMD_NR - 1;
+	khugepaged_max_ptes_none = KHUGEPAGED_MAX_PTES_LIMIT;
 	khugepaged_max_ptes_swap = HPAGE_PMD_NR / 8;
 	khugepaged_max_ptes_shared = HPAGE_PMD_NR / 2;
 
-- 
2.53.0

From nobody Thu Apr 2 17:31:03 2026
From: Nico Pache <npache@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
    apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
    dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
    gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
    jackmanb@google.com, jack@suse.cz, jannh@google.com, jglisse@google.com,
    joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mathieu.desnoyers@efficios.com, matthew.brost@intel.com,
    mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
    peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
    raquini@redhat.com, rdunlap@infradead.org, richard.weiyang@gmail.com,
    rientjes@google.com, rostedt@goodmis.org, rppt@kernel.org,
    ryan.roberts@arm.com, shivankg@amd.com, sunnanyong@huawei.com,
    surenb@google.com, thomas.hellstrom@linux.intel.com, tiwai@suse.de,
    usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, zokeefe@google.com
Subject: [PATCH mm-unstable v4 4/5] mm/khugepaged: rename hpage_collapse_* to collapse_*
Date: Wed, 25 Mar 2026 05:40:21 -0600
Message-ID: <20260325114022.444081-5-npache@redhat.com>
In-Reply-To: <20260325114022.444081-1-npache@redhat.com>
References: <20260325114022.444081-1-npache@redhat.com>

The hpage_collapse_* functions are used by both madvise_collapse and
khugepaged. Remove the unnecessary hpage prefix to shorten the function
names.

Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nico Pache <npache@redhat.com>
Reviewed-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
---
 mm/khugepaged.c | 60 ++++++++++++++++++++++++-------------------------
 mm/mremap.c     |  2 +-
 2 files changed, 30 insertions(+), 32 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c9c17bfccf0d..3728a2cf133c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -395,14 +395,14 @@ void __init khugepaged_destroy(void)
 	kmem_cache_destroy(mm_slot_cache);
 }
 
-static inline int hpage_collapse_test_exit(struct mm_struct *mm)
+static inline int collapse_test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }
 
-static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
+static inline int collapse_test_exit_or_disable(struct mm_struct *mm)
 {
-	return hpage_collapse_test_exit(mm) ||
+	return collapse_test_exit(mm) ||
 	       mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
 }
 
@@ -436,7 +436,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 	int wakeup;
 
 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
+	VM_BUG_ON_MM(collapse_test_exit(mm), mm);
 	if (unlikely(mm_flags_test_and_set(MMF_VM_HUGEPAGE, mm)))
 		return;
 
@@ -490,7 +490,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	} else if (slot) {
 		/*
 		 * This is required to serialize against
-		 * hpage_collapse_test_exit() (which is guaranteed to run
+		 * collapse_test_exit() (which is guaranteed to run
 		 * under mmap sem read mode). Stop here (after we return all
 		 * pagetables will be destroyed) until khugepaged has finished
 		 * working on the pagetables under the mmap_lock.
@@ -589,7 +589,7 @@ static enum scan_result __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	/* See hpage_collapse_scan_pmd(). */
+	/* See collapse_scan_pmd(). */
 	if (folio_maybe_mapped_shared(folio)) {
 		++shared;
 		if (cc->is_khugepaged &&
@@ -840,7 +840,7 @@ static struct collapse_control khugepaged_collapse_control = {
 	.is_khugepaged = true,
 };
 
-static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
+static bool collapse_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -875,7 +875,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }
 
 #ifdef CONFIG_NUMA
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int collapse_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
 
@@ -894,7 +894,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 #else
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int collapse_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -913,7 +913,7 @@ static enum scan_result hugepage_vma_revalidate(struct mm_struct *mm, unsigned l
 	enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
 				TVA_FORCED_COLLAPSE;
 
-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(collapse_test_exit_or_disable(mm)))
 		return SCAN_ANY_PROCESS;
 
 	*vmap = vma = find_vma(mm, address);
@@ -984,7 +984,7 @@ static enum scan_result check_pmd_still_valid(struct mm_struct *mm,
 
 /*
  * Bring missing pages in from swap, to complete THP collapse.
- * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
+ * Only done if khugepaged_scan_pmd believes it is worthwhile.
 *
 * Called and returns without pte mapped or spinlocks held.
 * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
@@ -1070,7 +1070,7 @@ static enum scan_result alloc_charge_folio(struct folio **foliop, struct mm_stru
 {
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE);
-	int node = hpage_collapse_find_target_node(cc);
+	int node = collapse_find_target_node(cc);
 	struct folio *folio;
 
 	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
@@ -1255,7 +1255,7 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 	return result;
 }
 
-static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
+static enum scan_result collapse_scan_pmd(struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		bool *mmap_locked, struct collapse_control *cc)
 {
@@ -1380,7 +1380,7 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * hit record.
 		 */
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
@@ -1446,7 +1446,7 @@ static void collect_mm_slot(struct mm_slot *slot)
 
 	lockdep_assert_held(&khugepaged_mm_lock);
 
-	if (hpage_collapse_test_exit(mm)) {
+	if (collapse_test_exit(mm)) {
 		/* free mm_slot */
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
@@ -1801,7 +1801,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
 			continue;
 
-		if (hpage_collapse_test_exit(mm))
+		if (collapse_test_exit(mm))
 			continue;
 
 		if (!file_backed_vma_is_retractable(vma))
@@ -2317,7 +2317,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 
-static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
+static enum scan_result collapse_scan_file(struct mm_struct *mm,
 		unsigned long addr, struct file *file, pgoff_t start,
 		struct collapse_control *cc)
 {
@@ -2370,7 +2370,7 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
 		}
 
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			folio_put(folio);
 			break;
@@ -2424,7 +2424,7 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
 	return result;
 }
 
-static void khugepaged_scan_mm_slot(unsigned int progress_max,
+static void collapse_scan_mm_slot(unsigned int progress_max,
 				    enum scan_result *result, struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2458,7 +2458,7 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 		goto breakouterloop_mmap_lock;
 
 	cc->progress++;
-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(collapse_test_exit_or_disable(mm)))
 		goto breakouterloop;
 
 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
@@ -2466,7 +2466,7 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 		unsigned long hstart, hend;
 
 		cond_resched();
-		if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
+		if (unlikely(collapse_test_exit_or_disable(mm))) {
 			cc->progress++;
 			break;
 		}
@@ -2488,7 +2488,7 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 			bool mmap_locked = true;
 
 			cond_resched();
-			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+			if (unlikely(collapse_test_exit_or_disable(mm)))
 				goto breakouterloop;
 
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2501,12 +2501,12 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 
 				mmap_read_unlock(mm);
 				mmap_locked = false;
-				*result = hpage_collapse_scan_file(mm,
+				*result = collapse_scan_file(mm,
 						khugepaged_scan.address, file, pgoff, cc);
 				fput(file);
 				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
 					mmap_read_lock(mm);
-					if (hpage_collapse_test_exit_or_disable(mm))
+					if (collapse_test_exit_or_disable(mm))
 						goto breakouterloop;
 					*result = try_collapse_pte_mapped_thp(mm,
 							khugepaged_scan.address, false);
@@ -2515,7 +2515,7 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 					mmap_read_unlock(mm);
 				}
 			} else {
-				*result = hpage_collapse_scan_pmd(mm, vma,
+				*result = collapse_scan_pmd(mm, vma,
 						khugepaged_scan.address, &mmap_locked, cc);
 			}
 
@@ -2547,7 +2547,7 @@ static void khugepaged_scan_mm_slot(unsigned int progress_max,
 		 * Release the current mm_slot if this mm is about to die, or
 		 * if we scanned all vmas of this mm, or THP got disabled.
 		 */
-		if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
+		if (collapse_test_exit_or_disable(mm) || !vma) {
 			/*
 			 * Make sure that if mm_users is reaching zero while
 			 * khugepaged runs here, khugepaged_exit will find
@@ -2600,7 +2600,7 @@ static void khugepaged_do_scan(struct collapse_control *cc)
 
 		pass_through_head++;
 		if (khugepaged_has_work() && pass_through_head < 2)
-			khugepaged_scan_mm_slot(progress_max, &result, cc);
+			collapse_scan_mm_slot(progress_max, &result, cc);
 		else
 			cc->progress = progress_max;
 		spin_unlock(&khugepaged_mm_lock);
@@ -2845,8 +2845,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			mmap_read_unlock(mm);
 			mmap_locked = false;
 			*lock_dropped = true;
-			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
-							  cc);
+			result = collapse_scan_file(mm, addr, file, pgoff, cc);
 
 			if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !triggered_wb &&
 			    mapping_can_writeback(file->f_mapping)) {
@@ -2860,8 +2859,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			}
 			fput(file);
 		} else {
-			result = hpage_collapse_scan_pmd(mm, vma, addr,
-							 &mmap_locked, cc);
+			result = collapse_scan_pmd(mm, vma, addr, &mmap_locked, cc);
 		}
 		if (!mmap_locked)
 			*lock_dropped = true;
diff --git a/mm/mremap.c b/mm/mremap.c
index 8566e32d58d9..e9c8b1d05832 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -244,7 +244,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		goto out;
 	}
 	/*
-	 * Now new_pte is none, so hpage_collapse_scan_file() path can not find
+	 * Now new_pte is none, so collapse_scan_file() path can not find
 	 * this by traversing file->f_mapping, so there is no concurrency with
 	 * retract_page_tables(). In addition, we already hold the exclusive
 	 * mmap_lock, so this new_pte page is stable, so there is no need to get
-- 
2.53.0

From nobody Thu Apr 2 17:31:03 2026
From: Nico Pache <npache@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
    apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
    byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
    dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
    gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
    jackmanb@google.com, jack@suse.cz, jannh@google.com, jglisse@google.com,
    joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mathieu.desnoyers@efficios.com, matthew.brost@intel.com,
    mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
    peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
    raquini@redhat.com, rdunlap@infradead.org, richard.weiyang@gmail.com,
    rientjes@google.com, rostedt@goodmis.org, rppt@kernel.org,
    ryan.roberts@arm.com, shivankg@amd.com, sunnanyong@huawei.com,
    surenb@google.com, thomas.hellstrom@linux.intel.com, tiwai@suse.de,
    usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
    wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
    yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, zokeefe@google.com,
    "Lorenzo Stoakes (Oracle)" <lorenzo.stoakes@oracle.com>
Subject: [PATCH mm-unstable v4 5/5] mm/khugepaged: unify khugepaged and madv_collapse with collapse_single_pmd()
Date: Wed, 25 Mar 2026 05:40:22 -0600
Message-ID: <20260325114022.444081-6-npache@redhat.com>
In-Reply-To: <20260325114022.444081-1-npache@redhat.com>
References: <20260325114022.444081-1-npache@redhat.com>

The khugepaged daemon and madvise_collapse have two different
implementations that do almost the same thing. Create
collapse_single_pmd() to increase code reuse and provide a single entry
point for these two users.

Refactor madvise_collapse() and collapse_scan_mm_slot() to use the new
collapse_single_pmd() function. To reduce confusion around the
mmap_locked variable, rename it to lock_dropped in collapse_scan_pmd()
and collapse_scan_mm_slot(), and remove the redundant mmap_locked in
madvise_collapse(); this further improves readability.

The SCAN_PTE_MAPPED_HUGEPAGE enum is no longer reachable in
madvise_collapse(), so drop it from the list of "continuing" results.

This introduces a minor behavioral change that most likely fixes an
undiscovered bug: khugepaged tests collapse_test_exit_or_disable()
before calling try_collapse_pte_mapped_thp(), but the madvise_collapse
path was not doing this. By unifying the two callers, madvise_collapse
now performs the check as well. The return value is also changed to
SCAN_ANY_PROCESS, which properly indicates that the process is no longer
valid to operate on.

Moving the madvise_collapse writeback-retry logic into the helper also
avoids having to revalidate the VMA. The khugepaged_pages_collapsed
counter is now guarded so it is only incremented for khugepaged. As
requested, a VM_BUG_ON is also converted to a VM_WARN_ON_ONCE.

Reviewed-by: Lorenzo Stoakes (Oracle) <lorenzo.stoakes@oracle.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nico Pache <npache@redhat.com>
Reviewed-by: Nico Pache
---
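For clarity, the locking contract both callers now rely on can be
sketched like this; it is an editorial illustration of the diff below,
with a hypothetical wrapper name, not part of the patch itself:

	/*
	 * Hypothetical caller shape after the refactor: enter with
	 * mmap_lock held for read; if the helper dropped it,
	 * *lock_dropped is true and the caller must re-take the lock
	 * (and revalidate the VMA) before continuing.
	 */
	static enum scan_result scan_one_pmd(struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long addr,
			struct collapse_control *cc)
	{
		bool lock_dropped = false;
		enum scan_result result;

		mmap_assert_locked(mm);
		result = collapse_single_pmd(addr, vma, &lock_dropped, cc);
		if (lock_dropped)
			mmap_read_lock(mm);	/* caller expects the lock held */
		return result;
	}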
 mm/khugepaged.c | 142 ++++++++++++++++++++++++------------------------
 1 file changed, 72 insertions(+), 70 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 3728a2cf133c..d06d84219e1b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1257,7 +1257,7 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 
 static enum scan_result collapse_scan_pmd(struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long start_addr,
-		bool *mmap_locked, struct collapse_control *cc)
+		bool *lock_dropped, struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1432,7 +1432,7 @@ static enum scan_result collapse_scan_pmd(struct mm_struct *mm,
 		result = collapse_huge_page(mm, start_addr, referenced,
 					    unmapped, cc);
 		/* collapse_huge_page will return with the mmap_lock released */
-		*mmap_locked = false;
+		*lock_dropped = true;
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, folio, referenced,
@@ -2424,6 +2424,67 @@ static enum scan_result collapse_scan_file(struct mm_struct *mm,
 	return result;
 }
 
+/*
+ * Try to collapse a single PMD starting at a PMD aligned addr, and return
+ * the results.
+ */
+static enum scan_result collapse_single_pmd(unsigned long addr,
+		struct vm_area_struct *vma, bool *lock_dropped,
+		struct collapse_control *cc)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	bool triggered_wb = false;
+	enum scan_result result;
+	struct file *file;
+	pgoff_t pgoff;
+
+	mmap_assert_locked(mm);
+
+	if (vma_is_anonymous(vma)) {
+		result = collapse_scan_pmd(mm, vma, addr, lock_dropped, cc);
+		goto end;
+	}
+
+	file = get_file(vma->vm_file);
+	pgoff = linear_page_index(vma, addr);
+
+	mmap_read_unlock(mm);
+	*lock_dropped = true;
+retry:
+	result = collapse_scan_file(mm, addr, file, pgoff, cc);
+
+	/*
+	 * For MADV_COLLAPSE, when encountering dirty pages, try to writeback,
+	 * then retry the collapse one time.
+	 */
+	if (!cc->is_khugepaged && result == SCAN_PAGE_DIRTY_OR_WRITEBACK &&
+	    !triggered_wb && mapping_can_writeback(file->f_mapping)) {
+		const loff_t lstart = (loff_t)pgoff << PAGE_SHIFT;
+		const loff_t lend = lstart + HPAGE_PMD_SIZE - 1;
+
+		filemap_write_and_wait_range(file->f_mapping, lstart, lend);
+		triggered_wb = true;
+		goto retry;
+	}
+	fput(file);
+
+	if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
+		mmap_read_lock(mm);
+		if (collapse_test_exit_or_disable(mm))
+			result = SCAN_ANY_PROCESS;
+		else
+			result = try_collapse_pte_mapped_thp(mm, addr,
+							     !cc->is_khugepaged);
+		if (result == SCAN_PMD_MAPPED)
+			result = SCAN_SUCCEED;
+		mmap_read_unlock(mm);
+	}
+end:
+	if (cc->is_khugepaged && result == SCAN_SUCCEED)
+		++khugepaged_pages_collapsed;
+	return result;
+}
+
 static void collapse_scan_mm_slot(unsigned int progress_max,
 				  enum scan_result *result, struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2485,46 +2546,21 @@ static void collapse_scan_mm_slot(unsigned int progress_max,
 			VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
 
 			while (khugepaged_scan.address < hend) {
-				bool mmap_locked = true;
+				bool lock_dropped = false;
 
 				cond_resched();
 				if (unlikely(collapse_test_exit_or_disable(mm)))
 					goto breakouterloop;
 
-				VM_BUG_ON(khugepaged_scan.address < hstart ||
+				VM_WARN_ON_ONCE(khugepaged_scan.address < hstart ||
 					  khugepaged_scan.address + HPAGE_PMD_SIZE > hend);
-				if (!vma_is_anonymous(vma)) {
-					struct file *file = get_file(vma->vm_file);
-					pgoff_t pgoff = linear_page_index(vma,
-							khugepaged_scan.address);
-
-					mmap_read_unlock(mm);
-					mmap_locked = false;
-					*result = collapse_scan_file(mm,
-							khugepaged_scan.address, file, pgoff, cc);
-					fput(file);
-					if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
-						mmap_read_lock(mm);
-						if (collapse_test_exit_or_disable(mm))
-							goto breakouterloop;
-						*result = try_collapse_pte_mapped_thp(mm,
-								khugepaged_scan.address, false);
-						if (*result == SCAN_PMD_MAPPED)
-							*result = SCAN_SUCCEED;
-						mmap_read_unlock(mm);
-					}
-				} else {
-					*result = collapse_scan_pmd(mm, vma,
-							khugepaged_scan.address, &mmap_locked, cc);
-				}
-
-				if (*result == SCAN_SUCCEED)
-					++khugepaged_pages_collapsed;
 
+				*result = collapse_single_pmd(khugepaged_scan.address,
+						vma, &lock_dropped, cc);
 				/* move to next address */
 				khugepaged_scan.address += HPAGE_PMD_SIZE;
-				if (!mmap_locked)
+				if (lock_dropped)
 					/*
 					 * We released mmap_lock so break loop.  Note
 					 * that we drop mmap_lock before all hugepage
@@ -2799,7 +2835,6 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 	unsigned long hstart, hend, addr;
 	enum scan_result last_fail = SCAN_FAIL;
 	int thps = 0;
-	bool mmap_locked = true;
 
 	BUG_ON(vma->vm_start > start);
 	BUG_ON(vma->vm_end < end);
@@ -2821,13 +2856,11 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 
 	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
 		enum scan_result result = SCAN_FAIL;
-		bool triggered_wb = false;
 
-retry:
-		if (!mmap_locked) {
+		if (*lock_dropped) {
 			cond_resched();
 			mmap_read_lock(mm);
-			mmap_locked = true;
+			*lock_dropped = false;
 			result = hugepage_vma_revalidate(mm, addr, false, &vma,
 							 cc);
 			if (result != SCAN_SUCCEED) {
@@ -2837,45 +2870,14 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 
 			hend = min(hend, vma->vm_end & HPAGE_PMD_MASK);
 		}
-		mmap_assert_locked(mm);
-		if (!vma_is_anonymous(vma)) {
-			struct file *file = get_file(vma->vm_file);
-			pgoff_t pgoff = linear_page_index(vma, addr);
-
-			mmap_read_unlock(mm);
-			mmap_locked = false;
-			*lock_dropped = true;
-			result = collapse_scan_file(mm, addr, file, pgoff, cc);
-
-			if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !triggered_wb &&
-			    mapping_can_writeback(file->f_mapping)) {
-				loff_t lstart = (loff_t)pgoff << PAGE_SHIFT;
-				loff_t lend = lstart + HPAGE_PMD_SIZE - 1;
-
-				filemap_write_and_wait_range(file->f_mapping, lstart, lend);
-				triggered_wb = true;
-				fput(file);
-				goto retry;
-			}
-			fput(file);
-		} else {
-			result = collapse_scan_pmd(mm, vma, addr, &mmap_locked, cc);
-		}
-		if (!mmap_locked)
-			*lock_dropped = true;
+		result = collapse_single_pmd(addr, vma, lock_dropped, cc);
 
-handle_result:
 		switch (result) {
 		case SCAN_SUCCEED:
 		case SCAN_PMD_MAPPED:
 			++thps;
 			break;
-		case SCAN_PTE_MAPPED_HUGEPAGE:
-			BUG_ON(mmap_locked);
-			mmap_read_lock(mm);
-			result = try_collapse_pte_mapped_thp(mm, addr, true);
-			mmap_read_unlock(mm);
-			goto handle_result;
 		/* Whitelisted set of results where continuing OK */
 		case SCAN_NO_PTE_TABLE:
 		case SCAN_PTE_NON_PRESENT:
@@ -2898,7 +2900,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 
 out_maybelock:
 	/* Caller expects us to hold mmap_lock on return */
-	if (!mmap_locked)
+	if (*lock_dropped)
 		mmap_read_lock(mm);
 out_nolock:
 	mmap_assert_locked(mm);
-- 
2.53.0