From: Nico Pache
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: aarcange@redhat.com, akpm@linux-foundation.org, anshuman.khandual@arm.com,
 apopple@nvidia.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, byungchul@sk.com,
 catalin.marinas@arm.com, cl@gentwo.org, corbet@lwn.net, dave.hansen@linux.intel.com,
 david@kernel.org, dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org,
 hughd@google.com, jackmanb@google.com, jack@suse.cz, jannh@google.com,
 jglisse@google.com, joshua.hahnjy@gmail.com, kas@kernel.org, lance.yang@linux.dev,
 Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, mathieu.desnoyers@efficios.com,
 matthew.brost@intel.com, mhiramat@kernel.org, mhocko@suse.com, npache@redhat.com,
 peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com, raquini@redhat.com,
 rdunlap@infradead.org, richard.weiyang@gmail.com, rientjes@google.com,
 rostedt@goodmis.org, rppt@kernel.org, ryan.roberts@arm.com, shivankg@amd.com,
 sunnanyong@huawei.com, surenb@google.com, thomas.hellstrom@linux.intel.com,
 tiwai@suse.de, usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
 wangkefeng.wang@huawei.com, will@kernel.org, willy@infradead.org,
 yang@os.amperecomputing.com, ying.huang@linux.alibaba.com, ziy@nvidia.com,
 zokeefe@google.com
Subject: [PATCH mm-unstable v3 1/5] mm: consolidate anonymous folio PTE mapping into helpers
Date: Wed, 11 Mar 2026 15:13:11 -0600
Message-ID: <20260311211315.450947-2-npache@redhat.com>
In-Reply-To: <20260311211315.450947-1-npache@redhat.com>
References: <20260311211315.450947-1-npache@redhat.com>

The anonymous page fault handler in do_anonymous_page() open-codes the
sequence to map a newly allocated anonymous folio at the PTE level:
 - construct the PTE entry
 - add rmap
 - add to LRU
 - set the PTEs
 - update the MMU cache

Introduce two helpers to consolidate this duplicated logic, mirroring the
existing map_anon_folio_pmd_nopf() pattern for PMD-level mappings:

map_anon_folio_pte_nopf(): constructs the PTE entry, takes folio
references, and adds the anon rmap and LRU entries. It also takes a
uffd_wp flag so it can handle the uffd-wp marking needed by the pf
variant.

map_anon_folio_pte_pf(): extends the nopf variant to also update the
MM_ANONPAGES counter and the mTHP fault allocation statistics for the
page fault path.

The zero-page read path in do_anonymous_page() is also untangled from the
shared setpte label, since it does not allocate a folio and should not
share the same mapping sequence as the write path. Pass nr_pages = 1
explicitly rather than relying on the variable; this makes it clearer
that we are operating on the zero page only.

This refactoring will also help reduce code duplication between
mm/memory.c and mm/khugepaged.c, and provides a clean API for PTE-level
anonymous folio mapping that can be reused by future callers.
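
For illustration only, a minimal sketch of how such a future caller might
reuse the nopf helper. The function name below is made up, and the PTE
lock, PTE lookup and error handling are assumed to be handled by the
caller:

  /*
   * Hypothetical example, not part of this patch: a non-fault path that
   * already holds the PTE lock and has a fully initialised anonymous
   * folio could map it with the new helper like this.
   */
  static void example_map_new_anon_folio(struct folio *folio, pte_t *pte,
  		struct vm_area_struct *vma, unsigned long addr)
  {
  	/* Make the folio contents visible before the PTEs are set. */
  	__folio_mark_uptodate(folio);

  	/* Build the PTEs, take references, add anon rmap and LRU. */
  	map_anon_folio_pte_nopf(folio, pte, vma, addr, /* uffd_wp */ false);

  	/* Non-fault callers account MM_ANONPAGES themselves. */
  	add_mm_counter(vma->vm_mm, MM_ANONPAGES, folio_nr_pages(folio));
  }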
Reviewed-by: Dev Jain
Reviewed-by: Lance Yang
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Nico Pache
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h |  4 ++++
 mm/memory.c        | 60 +++++++++++++++++++++++++++++++---------------
 2 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4c4fd55fc823..9fea354bd17f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4903,4 +4903,8 @@ static inline bool snapshot_page_is_faithful(const struct page_snapshot *ps)
 
 void snapshot_page(struct page_snapshot *ps, const struct page *page);
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 6aa0ea4af1fc..5c8bf1eb55f5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5197,6 +5197,37 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
 }
 
+void map_anon_folio_pte_nopf(struct folio *folio, pte_t *pte,
+			     struct vm_area_struct *vma, unsigned long addr,
+			     bool uffd_wp)
+{
+	unsigned int nr_pages = folio_nr_pages(folio);
+	pte_t entry = folio_mk_pte(folio, vma->vm_page_prot);
+
+	entry = pte_sw_mkyoung(entry);
+
+	if (vma->vm_flags & VM_WRITE)
+		entry = pte_mkwrite(pte_mkdirty(entry), vma);
+	if (uffd_wp)
+		entry = pte_mkuffd_wp(entry);
+
+	folio_ref_add(folio, nr_pages - 1);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	folio_add_lru_vma(folio, vma);
+	set_ptes(vma->vm_mm, addr, pte, entry, nr_pages);
+	update_mmu_cache_range(NULL, vma, addr, pte, nr_pages);
+}
+
+static void map_anon_folio_pte_pf(struct folio *folio, pte_t *pte,
+		struct vm_area_struct *vma, unsigned long addr, bool uffd_wp)
+{
+	unsigned int order = folio_order(folio);
+
+	map_anon_folio_pte_nopf(folio, pte, vma, addr, uffd_wp);
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1 << order);
+	count_mthp_stat(order, MTHP_STAT_ANON_FAULT_ALLOC);
+}
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -5243,7 +5274,14 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return handle_userfault(vmf, VM_UFFD_MISSING);
 		}
-		goto setpte;
+		if (vmf_orig_pte_uffd_wp(vmf))
+			entry = pte_mkuffd_wp(entry);
+		set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+
+		/* No need to invalidate - it was non-present before */
+		update_mmu_cache_range(vmf, vma, addr, vmf->pte,
+				       /*nr_pages=*/ 1);
+		goto unlock;
 	}
 
 	/* Allocate our own private page. */
@@ -5267,11 +5305,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	 */
 	__folio_mark_uptodate(folio);
 
-	entry = folio_mk_pte(folio, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
-	if (vma->vm_flags & VM_WRITE)
-		entry = pte_mkwrite(pte_mkdirty(entry), vma);
-
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	if (!vmf->pte)
 		goto release;
@@ -5293,19 +5326,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		folio_put(folio);
 		return handle_userfault(vmf, VM_UFFD_MISSING);
 	}
-
-	folio_ref_add(folio, nr_pages - 1);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
-	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
-setpte:
-	if (vmf_orig_pte_uffd_wp(vmf))
-		entry = pte_mkuffd_wp(entry);
-	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
+	map_anon_folio_pte_pf(folio, vmf->pte, vma, addr,
+			      vmf_orig_pte_uffd_wp(vmf));
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-- 
2.53.0