From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Subject: [PATCH RFC 04/39] mm/rmap: introduce and use hugetlb_try_dup_anon_rmap()
Date: Mon, 4 Dec 2023 15:21:11 +0100
Message-ID: <20231204142146.91437-5-david@redhat.com>
In-Reply-To: <20231204142146.91437-1-david@redhat.com>
References: <20231204142146.91437-1-david@redhat.com>

hugetlb rmap handling differs quite a lot from "ordinary" rmap code. For
example, hugetlb currently only supports entire mappings, and treats any
mapping as mapped using a single "logical PTE". Let's move it out of the
way so we can overhaul our "ordinary" rmap implementation/interface.

So let's introduce and use hugetlb_try_dup_anon_rmap() to make all hugetlb
handling use dedicated hugetlb_* rmap functions.

Note that is_device_private_page() does not apply to hugetlb.
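To make the intended caller contract a bit more explicit, here is a minimal
illustrative sketch (not part of this patch; the copy fallback is only hinted
at, and all identifiers are taken from the hunks below) of how a fork-time
caller such as copy_hugetlb_page_range() uses the new helper on an anon
hugetlb folio:

        /*
         * Sketch only: try to duplicate the anon rmap of pte_folio into
         * the child. A return of -EBUSY means the folio may be DMA-pinned
         * and must not become shared read-only; the caller then falls back
         * to allocating a new folio and copying the contents.
         */
        if (hugetlb_try_dup_anon_rmap(pte_folio, src_vma)) {
                /* fallback: allocate new_folio and copy pte_folio into it */
        }

On success the helper clears PageAnonExclusive (the folio is now shared with
the child) and increments _entire_mapcount, matching the fact that hugetlb
only supports entire mappings.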
Signed-off-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 include/linux/mm.h   | 12 +++++++++---
 include/linux/rmap.h | 15 +++++++++++++++
 mm/hugetlb.c         |  3 +--
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 418d26608ece7..24c1c7c5a99c0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1953,15 +1953,21 @@ static inline bool page_maybe_dma_pinned(struct page *page)
  *
  * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
  */
-static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
-                struct page *page)
+static inline bool folio_needs_cow_for_dma(struct vm_area_struct *vma,
+                struct folio *folio)
 {
         VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
 
         if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
                 return false;
 
-        return page_maybe_dma_pinned(page);
+        return folio_maybe_dma_pinned(folio);
+}
+
+static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
+                struct page *page)
+{
+        return folio_needs_cow_for_dma(vma, page_folio(page));
 }
 
 /**
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0a81e8420a961..8068c332e2ce5 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -208,6 +208,21 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
                 unsigned long address);
 
+/* See page_try_dup_anon_rmap() */
+static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
+                struct vm_area_struct *vma)
+{
+        VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+        if (PageAnonExclusive(&folio->page)) {
+                if (unlikely(folio_needs_cow_for_dma(vma, folio)))
+                        return -EBUSY;
+                ClearPageAnonExclusive(&folio->page);
+        }
+        atomic_inc(&folio->_entire_mapcount);
+        return 0;
+}
+
 static inline void hugetlb_add_file_rmap(struct folio *folio)
 {
         VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 541a8f38cfdc7..d927f8b2893c0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5402,8 +5402,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
                          */
                        if (!folio_test_anon(pte_folio)) {
                                hugetlb_add_file_rmap(pte_folio);
-                        } else if (page_try_dup_anon_rmap(&pte_folio->page,
-                                                          true, src_vma)) {
+                        } else if (hugetlb_try_dup_anon_rmap(pte_folio, src_vma)) {
                                pte_t src_pte_old = entry;
                                struct folio *new_folio;
 
-- 
2.41.0