From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
	Yin Fengwei, David Hildenbrand, Yu Zhao, Catalin Marinas,
	Will Deacon, Geert Uytterhoeven, Christian Borntraeger,
	Sven Schnelle, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, "H. Peter Anvin"
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-s390@vger.kernel.org
Subject: [PATCH v1 04/10] mm: Implement folio_add_new_anon_rmap_range()
Date: Mon, 26 Jun 2023 18:14:24 +0100
Message-Id: <20230626171430.3167004-5-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626171430.3167004-1-ryan.roberts@arm.com>
References: <20230626171430.3167004-1-ryan.roberts@arm.com>

Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
belonging to a folio, for efficiency. All pages are accounted as
small pages.
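For illustration, a minimal sketch of how a fault-handling caller
might use the new API to map a freshly allocated, naturally aligned
large folio. The helper name map_new_anon_folio(), the assumption that
the pte lock is already held, and the omitted error/arch handling are
illustrative, not part of this patch:

static void map_new_anon_folio(struct vm_area_struct *vma,
		struct folio *folio, unsigned long addr, pte_t *ptep)
{
	int nr = folio_nr_pages(folio);
	int i;

	/* One call covers the rmap accounting for all nr pages. */
	folio_add_new_anon_rmap_range(folio, &folio->page, nr, vma, addr);
	folio_add_lru_vma(folio, vma);

	/* Install one pte per page, in do_anonymous_page() style. */
	for (i = 0; i < nr; i++) {
		pte_t entry = mk_pte(folio_page(folio, i), vma->vm_page_prot);

		if (vma->vm_flags & VM_WRITE)
			entry = pte_mkwrite(pte_mkdirty(entry));
		set_pte_at(vma->vm_mm, addr, ptep + i, entry);
		addr += PAGE_SIZE;
	}
}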
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a3825ce81102..15433a3d0cbf 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address);
 void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
+void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma, unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
diff --git a/mm/rmap.c b/mm/rmap.c
index 1d8369549424..4050bcea7ae7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
 }
 
+/**
+ * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
+ * anonymous potentially large folio.
+ * @folio: The folio containing the pages to be mapped
+ * @page: First page in the folio to be mapped
+ * @nr: Number of pages to be mapped
+ * @vma: the vm area in which the mapping is added
+ * @address: the user virtual address of the first page to be mapped
+ *
+ * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a folio
+ * using non-THP accounting. Like folio_add_new_anon_rmap(), the inc-and-test is
+ * bypassed and the folio does not have to be locked. All pages in the folio are
+ * individually accounted.
+ *
+ * As the folio is new, it's assumed to be mapped exclusively by a single
+ * process.
+ */
+void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma, unsigned long address)
+{
+	int i;
+
+	VM_BUG_ON_VMA(address < vma->vm_start ||
+		      address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
+	__folio_set_swapbacked(folio);
+
+	if (folio_test_large(folio)) {
+		/* increment count (starts at 0) */
+		atomic_set(&folio->_nr_pages_mapped, nr);
+	}
+
+	for (i = 0; i < nr; i++) {
+		/* increment count (starts at -1) */
+		atomic_set(&page->_mapcount, 0);
+		__page_set_anon_rmap(folio, page, vma, address, 1);
+		page++;
+		address += PAGE_SIZE;
+	}
+
+	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
+
+}
+
 /**
  * folio_add_file_rmap_range - add pte mapping to page range of a folio
  * @folio: The folio to add the mapping to
-- 
2.25.1
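For contrast with the per-page accounting above: for a PMD-mappable
folio, folio_add_new_anon_rmap() instead takes a compound path that
looks roughly like this (paraphrased from the mm/rmap.c this series is
based on; treat it as an approximation, not a quote):

	/* compound branch of folio_add_new_anon_rmap() */
	atomic_set(&folio->_entire_mapcount, 0);
	atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
	nr = folio_nr_pages(folio);
	__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);

The range variant deliberately avoids _entire_mapcount and
NR_ANON_THPS, setting each page's _mapcount and counting nr small
pages instead; this is what "all pages are accounted as small pages"
means in practice.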