From: Nico Pache
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
	apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
	byungchul@sk.com,
	catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net,
	dave.hansen@linux.intel.com, david@kernel.org, dev.jain@arm.com,
	gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
	jackmanb@google.com, jack@suse.cz, jannh@google.com,
	jglisse@google.com, joshua.hahnjy@gmail.com, kas@kernel.org,
	lance.yang@linux.dev, Liam.Howlett@oracle.com,
	lorenzo.stoakes@oracle.com, mathieu.desnoyers@efficios.com,
	matthew.brost@intel.com, mhiramat@kernel.org, mhocko@suse.com,
	npache@redhat.com, peterx@redhat.com, pfalcato@suse.de,
	rakie.kim@sk.com, raquini@redhat.com, rdunlap@infradead.org,
	richard.weiyang@gmail.com, rientjes@google.com, rostedt@goodmis.org,
	rppt@kernel.org, ryan.roberts@arm.com, shivankg@amd.com,
	sunnanyong@huawei.com, surenb@google.com,
	thomas.hellstrom@linux.intel.com, tiwai@suse.de,
	usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
	wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
	yang@os.amperecomputing.com, ying.huang@linux.alibaba.com,
	ziy@nvidia.com, zokeefe@google.com
Subject: [PATCH mm-unstable v2 1/5] mm: consolidate anonymous folio PTE mapping into helpers
Date: Wed, 25 Feb 2026 18:29:25 -0700
Message-ID: <20260226012929.169479-2-npache@redhat.com>
In-Reply-To: <20260226012929.169479-1-npache@redhat.com>
References: <20260226012929.169479-1-npache@redhat.com>

The anonymous page fault handler in do_anonymous_page() open-codes the
sequence to map a newly allocated anonymous folio at the PTE level:

 - construct the PTE entry
 - add rmap
 - add to LRU
 - set the PTEs
 - update the MMU cache

Introduce two helpers to consolidate this duplicated logic, mirroring
the existing map_anon_folio_pmd_nopf() pattern for PMD-level mappings:

map_anon_folio_pte_nopf(): constructs the PTE entry, takes the folio
references, and adds the anon rmap and LRU entries. It also takes a
uffd_wp flag to handle the userfaultfd write-protect case that can
occur in the page fault (pf) variant.

map_anon_folio_pte_pf(): extends the nopf variant with the MM_ANONPAGES
counter update and the mTHP fault-allocation statistics needed on the
page fault path.

The zero-page read path in do_anonymous_page() is also untangled from
the shared setpte label, since it does not allocate a folio and should
not share the same mapping sequence as the write path. Pass a hardcoded
nr_pages of 1 there rather than relying on the variable, to make it
clear that only the zero page is being mapped.

This refactoring will also help reduce code duplication between
mm/memory.c and mm/khugepaged.c, and provides a clean API for PTE-level
anonymous folio mapping that can be reused by future callers.
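To illustrate the intended reuse outside the fault path, here is a
minimal sketch of how a later caller (for example the khugepaged
collapse work mentioned above) could use the nopf helper. This is not
part of the patch: the function name and calling convention are made up
for illustration, and it assumes the caller already holds the PTE lock
and has a fully prepared (charged, uptodate) anonymous folio, just as
do_anonymous_page() does before mapping:

	/* Illustrative sketch only; assumes <linux/mm.h> with the new helper. */
	static void map_prepared_anon_folio_sketch(struct folio *folio,
						   struct vm_area_struct *vma,
						   unsigned long addr, pte_t *pte)
	{
		/* No userfaultfd write-protect state to carry over in this path. */
		map_anon_folio_pte_nopf(folio, pte, vma, addr, false);

		/* Accounting stays with the caller, as in the pf variant. */
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, folio_nr_pages(folio));
	}

Keeping the MM_ANONPAGES update and the mTHP fault statistics out of the
shared nopf helper is what lets such a caller do its own accounting.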
Signed-off-by: Nico Pache
Acked-by: David Hildenbrand (Arm)
Reviewed-by: Dev Jain
Reviewed-by: Lance Yang
---
 include/linux/mm.h |  4 ++++
 mm/memory.c        | 59 +++++++++++++++++++++++++++++++---------------
 2 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 13336340612e..3ebf143c7502 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4901,4 +4901,8 @@ static inline bool snapshot_page_is_faithful(const struct page_snapshot *ps)
 
 void snapshot_page(struct page_snapshot *ps, const struct page *page);
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 9385842c3503..a1a364e1fdcd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5189,6 +5189,36 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp)
+{
+	unsigned int nr_pages = folio_nr_pages(folio);
+	pte_t entry = folio_mk_pte(folio, vma->vm_page_prot);
+
+	entry = pte_sw_mkyoung(entry);
+
+	if (vma->vm_flags & VM_WRITE)
+		entry = pte_mkwrite(pte_mkdirty(entry), vma);
+	if (uffd_wp)
+		entry = pte_mkuffd_wp(entry);
+
+	folio_ref_add(folio, nr_pages - 1);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	folio_add_lru_vma(folio, vma);
+	set_ptes(vma->vm_mm, addr, pte, entry, nr_pages);
+	update_mmu_cache_range(NULL, vma, addr, pte, nr_pages);
+}
+
+static void map_anon_folio_pte_pf(struct folio *folio, pte_t *pte,
+				  struct vm_area_struct *vma, unsigned long addr,
+				  unsigned int nr_pages, bool uffd_wp)
+{
+	map_anon_folio_pte_nopf(folio, pte, vma, addr, uffd_wp);
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
+}
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -5235,7 +5265,14 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return handle_userfault(vmf, VM_UFFD_MISSING);
 		}
-		goto setpte;
+		if (vmf_orig_pte_uffd_wp(vmf))
+			entry = pte_mkuffd_wp(entry);
+		set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+
+		/* No need to invalidate - it was non-present before */
+		update_mmu_cache_range(vmf, vma, addr, vmf->pte,
+				       /*nr_pages=*/ 1);
+		goto unlock;
 	}
 
 	/* Allocate our own private page. */
@@ -5259,11 +5296,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	 */
 	__folio_mark_uptodate(folio);
 
-	entry = folio_mk_pte(folio, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
-	if (vma->vm_flags & VM_WRITE)
-		entry = pte_mkwrite(pte_mkdirty(entry), vma);
-
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	if (!vmf->pte)
 		goto release;
@@ -5285,19 +5317,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		folio_put(folio);
 		return handle_userfault(vmf, VM_UFFD_MISSING);
 	}
-
-	folio_ref_add(folio, nr_pages - 1);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
-	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
-setpte:
-	if (vmf_orig_pte_uffd_wp(vmf))
-		entry = pte_mkuffd_wp(entry);
-	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
+	map_anon_folio_pte_pf(folio, vmf->pte, vma, addr, nr_pages,
+			      vmf_orig_pte_uffd_wp(vmf));
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-- 
2.53.0