From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song
Subject: [PATCH v1 1/5] mm/rmap: rename hugepage_add* to hugetlb_add*
Date: Tue, 28 Nov 2023 15:52:01 +0100
Message-ID: <20231128145205.215026-2-david@redhat.com>
In-Reply-To: <20231128145205.215026-1-david@redhat.com>
References: <20231128145205.215026-1-david@redhat.com>

Let's just call it "hugetlb_". Yes, it's all already inconsistent and
confusing because we have a lot of "hugepage_" functions for legacy
reasons. But "hugetlb" cannot possibly be confused with transparent huge
pages, and it matches "hugetlb.c" and "folio_test_hugetlb()". So let's
minimize confusion in rmap code.

Signed-off-by: David Hildenbrand
Reviewed-by: Muchun Song
---
 include/linux/rmap.h | 4 ++--
 mm/hugetlb.c         | 8 ++++----
 mm/migrate.c         | 4 ++--
 mm/rmap.c            | 8 ++++----
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b26fe858fd44..4c5bfeb05463 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -203,9 +203,9 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
-void hugepage_add_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
-void hugepage_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 
 static inline void __page_dup_rmap(struct page *page, bool compound)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1169ef2f2176..4cfa0679661e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5278,7 +5278,7 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
 	pte_t newpte = make_huge_pte(vma, &new_folio->page, 1);
 
 	__folio_mark_uptodate(new_folio);
-	hugepage_add_new_anon_rmap(new_folio, vma, addr);
+	hugetlb_add_new_anon_rmap(new_folio, vma, addr);
 	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
 		newpte = huge_pte_mkuffd_wp(newpte);
 	set_huge_pte_at(vma->vm_mm, addr, ptep, newpte, sz);
@@ -5981,7 +5981,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		/* Break COW or unshare */
 		huge_ptep_clear_flush(vma, haddr, ptep);
 		page_remove_rmap(&old_folio->page, vma, true);
-		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
+		hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
 		set_huge_pte_at(mm, haddr, ptep, newpte, huge_page_size(h));
@@ -6270,7 +6270,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		goto backout;
 
 	if (anon_rmap)
-		hugepage_add_new_anon_rmap(folio, vma, haddr);
+		hugetlb_add_new_anon_rmap(folio, vma, haddr);
 	else
 		page_dup_file_rmap(&folio->page, true);
 	new_pte = make_huge_pte(vma, &folio->page, ((vma->vm_flags & VM_WRITE)
@@ -6725,7 +6725,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	if (folio_in_pagecache)
 		page_dup_file_rmap(&folio->page, true);
 	else
-		hugepage_add_new_anon_rmap(folio, dst_vma, dst_addr);
+		hugetlb_add_new_anon_rmap(folio, dst_vma, dst_addr);
 
 	/*
 	 * For either: (1) CONTINUE on a non-shared VMA, or (2) UFFDIO_COPY
diff --git a/mm/migrate.c b/mm/migrate.c
index 35a88334bb3c..4cb849fa0dd2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -249,8 +249,8 @@ static bool remove_migration_pte(struct folio *folio,
 
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
 			if (folio_test_anon(folio))
-				hugepage_add_anon_rmap(folio, vma, pvmw.address,
-						       rmap_flags);
+				hugetlb_add_anon_rmap(folio, vma, pvmw.address,
+						      rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
 			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte,
diff --git a/mm/rmap.c b/mm/rmap.c
index 7a27a2b41802..112467c30b2c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2583,8 +2583,8 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
  *
  * RMAP_COMPOUND is ignored.
  */
-void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
-		unsigned long address, rmap_t flags)
+void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long address, rmap_t flags)
 {
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
@@ -2595,8 +2595,8 @@ void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 			 PageAnonExclusive(&folio->page), folio);
 }
 
-void hugepage_add_new_anon_rmap(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long address)
+void hugetlb_add_new_anon_rmap(struct folio *folio,
+		struct vm_area_struct *vma, unsigned long address)
 {
 	BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 	/* increment count (starts at -1) */
--
2.41.0

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song
Subject: [PATCH v1 2/5] mm/rmap: introduce and use hugetlb_remove_rmap()
Date: Tue, 28 Nov 2023 15:52:02 +0100
Message-ID: <20231128145205.215026-3-david@redhat.com>
In-Reply-To: <20231128145205.215026-1-david@redhat.com>
References: <20231128145205.215026-1-david@redhat.com>

hugetlb rmap handling differs quite a lot from "ordinary" rmap code. We
don't want this hugetlb special-casing in the rmap functions, as we're
special-casing the callers already. Let's simply use a separate function
for hugetlb.

Let's introduce and use hugetlb_remove_rmap() and remove the hugetlb
code from page_remove_rmap(). This effectively removes one check on the
small-folio path as well.

While this is a cleanup, this will also make it easier to change rmap
handling for partially-mappable folios.

Note: the only call sites that need care are the page_remove_rmap()
calls that pass compound=true.

Signed-off-by: David Hildenbrand

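For illustration: the hugetlb "entire" mapcount starts at -1 and is only
ever adjusted as a whole, so removal is a plain atomic decrement. The
interesting part is the caller-side dispatch that replaces the old
special-casing inside page_remove_rmap(); the unmap paths end up with
roughly the following pattern (as in the try_to_unmap_one() and
try_to_migrate_one() hunks below):

	/* hugetlb folios are always mapped in full, never per subpage */
	if (unlikely(folio_test_hugetlb(folio)))
		hugetlb_remove_rmap(folio);
	else
		page_remove_rmap(subpage, vma, false);
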
---
 include/linux/rmap.h |  5 +++++
 mm/hugetlb.c         |  4 ++--
 mm/rmap.c            | 17 ++++++++---------
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 4c5bfeb05463..e8d1dc1d5361 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -208,6 +208,11 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 
+static inline void hugetlb_remove_rmap(struct folio *folio)
+{
+	atomic_dec(&folio->_entire_mapcount);
+}
+
 static inline void __page_dup_rmap(struct page *page, bool compound)
 {
 	if (compound) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4cfa0679661e..d17bb53b19ff 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5669,7 +5669,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 					make_pte_marker(PTE_MARKER_UFFD_WP),
 					sz);
 		hugetlb_count_sub(pages_per_huge_page(h), mm);
-		page_remove_rmap(page, vma, true);
+		hugetlb_remove_rmap(page_folio(page));
 
 		spin_unlock(ptl);
 		tlb_remove_page_size(tlb, page, huge_page_size(h));
@@ -5980,7 +5980,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 
 		/* Break COW or unshare */
 		huge_ptep_clear_flush(vma, haddr, ptep);
-		page_remove_rmap(&old_folio->page, vma, true);
+		hugetlb_remove_rmap(old_folio);
 		hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
diff --git a/mm/rmap.c b/mm/rmap.c
index 112467c30b2c..5037581b79ec 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1440,13 +1440,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 
 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
 
-	/* Hugetlb pages are not counted in NR_*MAPPED */
-	if (unlikely(folio_test_hugetlb(folio))) {
-		/* hugetlb pages are always mapped with pmds */
-		atomic_dec(&folio->_entire_mapcount);
-		return;
-	}
-
 	/* Is page being unmapped by PTE? Is this its last map to be removed? */
 	if (likely(!compound)) {
 		last = atomic_add_negative(-1, &page->_mapcount);
@@ -1804,7 +1797,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			dec_mm_counter(mm, mm_counter_file(&folio->page));
 		}
 discard:
-		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
+		if (unlikely(folio_test_hugetlb(folio)))
+			hugetlb_remove_rmap(folio);
+		else
+			page_remove_rmap(subpage, vma, false);
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
@@ -2157,7 +2153,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 		}
 
-		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
+		if (unlikely(folio_test_hugetlb(folio)))
+			hugetlb_remove_rmap(folio);
+		else
+			page_remove_rmap(subpage, vma, false);
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
--
2.41.0

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song
Subject: [PATCH v1 3/5] mm/rmap: introduce and use hugetlb_add_file_rmap()
Date: Tue, 28 Nov 2023 15:52:03 +0100
Message-ID: <20231128145205.215026-4-david@redhat.com>
In-Reply-To: <20231128145205.215026-1-david@redhat.com>
References: <20231128145205.215026-1-david@redhat.com>

hugetlb rmap handling differs quite a lot from "ordinary" rmap code, and
we already have dedicated functions for adding anon hugetlb folios and
removing hugetlb folios.

Right now we're using page_dup_file_rmap() in some cases where
"ordinary" rmap code would have used page_add_file_rmap(). So let's
introduce and use hugetlb_add_file_rmap() instead. We won't be adding a
"hugetlb_dup_file_rmap()" function for the fork() case, as it would be
doing the same: "dup" is just an optimization for "add".

While this is a cleanup, this will also make it easier to change rmap
handling for partially-mappable folios.

What remains is a single page_dup_file_rmap() call in fork() code.

Signed-off-by: David Hildenbrand

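The new helper mirrors hugetlb_remove_rmap() from the previous patch: a
sanity check plus an increment of the entire mapcount. In callers such
as hugetlb_no_page(), the anon/file split then reads symmetrically,
roughly:

	if (anon_rmap)
		hugetlb_add_new_anon_rmap(folio, vma, haddr);
	else
		/* was: page_dup_file_rmap(&folio->page, true) */
		hugetlb_add_file_rmap(folio);
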
---
 include/linux/rmap.h | 7 +++++++
 mm/hugetlb.c         | 6 +++---
 mm/migrate.c         | 2 +-
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e8d1dc1d5361..0a81e8420a96 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -208,6 +208,13 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 
+static inline void hugetlb_add_file_rmap(struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
+
+	atomic_inc(&folio->_entire_mapcount);
+}
+
 static inline void hugetlb_remove_rmap(struct folio *folio)
 {
 	atomic_dec(&folio->_entire_mapcount);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d17bb53b19ff..541a8f38cfdc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5401,7 +5401,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			 * sleep during the process.
 			 */
 			if (!folio_test_anon(pte_folio)) {
-				page_dup_file_rmap(&pte_folio->page, true);
+				hugetlb_add_file_rmap(pte_folio);
 			} else if (page_try_dup_anon_rmap(&pte_folio->page,
 							  true, src_vma)) {
 				pte_t src_pte_old = entry;
@@ -6272,7 +6272,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	if (anon_rmap)
 		hugetlb_add_new_anon_rmap(folio, vma, haddr);
 	else
-		page_dup_file_rmap(&folio->page, true);
+		hugetlb_add_file_rmap(folio);
 	new_pte = make_huge_pte(vma, &folio->page, ((vma->vm_flags & VM_WRITE)
 				&& (vma->vm_flags & VM_SHARED)));
 	/*
@@ -6723,7 +6723,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		goto out_release_unlock;
 
 	if (folio_in_pagecache)
-		page_dup_file_rmap(&folio->page, true);
+		hugetlb_add_file_rmap(folio);
 	else
 		hugetlb_add_new_anon_rmap(folio, dst_vma, dst_addr);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 4cb849fa0dd2..de9d94b99ab7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -252,7 +252,7 @@ static bool remove_migration_pte(struct folio *folio,
 				hugetlb_add_anon_rmap(folio, vma, pvmw.address,
 						      rmap_flags);
 			else
-				page_dup_file_rmap(new, true);
+				hugetlb_add_file_rmap(folio);
 			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte,
 					psize);
 		} else
--
2.41.0

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song
Subject: [PATCH v1 4/5] mm/rmap: introduce and use hugetlb_try_dup_anon_rmap()
Date: Tue, 28 Nov 2023 15:52:04 +0100
Message-ID: <20231128145205.215026-5-david@redhat.com>
In-Reply-To: <20231128145205.215026-1-david@redhat.com>
References: <20231128145205.215026-1-david@redhat.com>

hugetlb rmap handling differs quite a lot from "ordinary" rmap code, and
we already have dedicated functions for adding anon hugetlb folios and
removing hugetlb folios.

So let's introduce and use hugetlb_try_dup_anon_rmap() to make all
hugetlb handling use dedicated hugetlb_* rmap functions.

While this is a cleanup, this will also make it easier to change rmap
handling for partially-mappable folios.

Note that is_device_private_page() does not apply to hugetlb.

Signed-off-by: David Hildenbrand

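For context, the new helper keeps the page_try_dup_anon_rmap() contract:
the dup fails with -EBUSY when the folio is PageAnonExclusive and the
source mm might hold DMA pins (folio_needs_cow_for_dma()), forcing
fork() to copy the folio instead of sharing it. The caller in
copy_hugetlb_page_range() thus becomes, roughly:

	if (!folio_test_anon(pte_folio)) {
		hugetlb_add_file_rmap(pte_folio);
	} else if (hugetlb_try_dup_anon_rmap(pte_folio, src_vma)) {
		/*
		 * -EBUSY: the anon folio might be DMA-pinned; allocate a
		 * new folio and copy the contents for the child instead.
		 */
	}
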
---
 include/linux/mm.h   | 12 +++++++++---
 include/linux/rmap.h | 15 +++++++++++++++
 mm/hugetlb.c         |  3 +--
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 418d26608ece..24c1c7c5a99c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1953,15 +1953,21 @@ static inline bool page_maybe_dma_pinned(struct page *page)
  *
  * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
  */
-static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
-		struct page *page)
+static inline bool folio_needs_cow_for_dma(struct vm_area_struct *vma,
+		struct folio *folio)
 {
 	VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
 
 	if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
 		return false;
 
-	return page_maybe_dma_pinned(page);
+	return folio_maybe_dma_pinned(folio);
+}
+
+static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
+		struct page *page)
+{
+	return folio_needs_cow_for_dma(vma, page_folio(page));
 }
 
 /**
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0a81e8420a96..8068c332e2ce 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -208,6 +208,21 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 
+/* See page_try_dup_anon_rmap() */
+static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
+		struct vm_area_struct *vma)
+{
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+	if (PageAnonExclusive(&folio->page)) {
+		if (unlikely(folio_needs_cow_for_dma(vma, folio)))
+			return -EBUSY;
+		ClearPageAnonExclusive(&folio->page);
+	}
+	atomic_inc(&folio->_entire_mapcount);
+	return 0;
+}
+
 static inline void hugetlb_add_file_rmap(struct folio *folio)
 {
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 541a8f38cfdc..d927f8b2893c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5402,8 +5402,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			 */
 			if (!folio_test_anon(pte_folio)) {
 				hugetlb_add_file_rmap(pte_folio);
-			} else if (page_try_dup_anon_rmap(&pte_folio->page,
-							  true, src_vma)) {
+			} else if (hugetlb_try_dup_anon_rmap(pte_folio, src_vma)) {
 				pte_t src_pte_old = entry;
 				struct folio *new_folio;
 
--
2.41.0

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song
Subject: [PATCH v1 5/5] mm/rmap: add hugetlb sanity checks
Date: Tue, 28 Nov 2023 15:52:05 +0100
Message-ID: <20231128145205.215026-6-david@redhat.com>
In-Reply-To: <20231128145205.215026-1-david@redhat.com>
References: <20231128145205.215026-1-david@redhat.com>

Let's make sure we end up with the right folios in the right functions.

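Concretely, each hugetlb_* helper now warns when handed a non-hugetlb
folio, and the generic rmap functions warn when handed a hugetlb folio,
e.g.:

	/* in the hugetlb-only helpers */
	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);

	/* in generic rmap functions that must never see hugetlb folios */
	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
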
Signed-off-by: David Hildenbrand
---
 include/linux/rmap.h | 6 ++++++
 mm/rmap.c            | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8068c332e2ce..9625b6551d01 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -212,6 +212,7 @@ void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
 		struct vm_area_struct *vma)
 {
+	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 	if (PageAnonExclusive(&folio->page)) {
@@ -225,6 +226,7 @@ static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
 
 static inline void hugetlb_add_file_rmap(struct folio *folio)
 {
+	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
@@ -232,11 +234,15 @@ static inline void hugetlb_add_file_rmap(struct folio *folio)
 
 static inline void hugetlb_remove_rmap(struct folio *folio)
 {
+	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
+
 	atomic_dec(&folio->_entire_mapcount);
 }
 
 static inline void __page_dup_rmap(struct page *page, bool compound)
 {
+	VM_WARN_ON(folio_test_hugetlb(page_folio(page)));
+
 	if (compound) {
 		struct folio *folio = (struct folio *)page;
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 5037581b79ec..466f1ea5d0a6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1313,6 +1313,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 {
 	int nr;
 
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
 	__folio_set_swapbacked(folio);
 
@@ -1353,6 +1354,7 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 	unsigned int nr_pmdmapped = 0, first;
 	int nr = 0;
 
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
@@ -1438,6 +1440,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	bool last;
 	enum node_stat_item idx;
 
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
 
 	/* Is page being unmapped by PTE? Is this its last map to be removed? */
@@ -2585,6 +2588,7 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
 void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
+	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
@@ -2597,6 +2601,8 @@ void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 void hugetlb_add_new_anon_rmap(struct folio *folio,
 		struct vm_area_struct *vma, unsigned long address)
 {
+	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
+
 	BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 	/* increment count (starts at -1) */
 	atomic_set(&folio->_entire_mapcount, 0);
--
2.41.0