From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 33/34] mm: Convert do_set_pte() to set_pte_range()
Date: Tue, 28 Feb 2023 21:37:36 +0000
Message-Id: <20230228213738.272178-34-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

set_pte_range() allows the caller to set up page table entries for a
specific range of pages.  It takes advantage of the batched rmap update
for large folios, and now takes care of calling
update_mmu_cache_range() itself.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 27 +++++++++++++++------------
 4 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block.  If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f79667824eb..568ebe7058d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1168,7 +1168,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index db86e459dde6..07ebd90967a3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3507,8 +3507,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index 69e844d5f75c..efd17ff09315 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4255,7 +4255,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = pte_marker_uffd_wp(vmf->orig_pte);
@@ -4263,7 +4264,7 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	bool prefault = vmf->address != addr;
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4277,14 +4278,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4357,11 +4362,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
 
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
-- 
2.39.1
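
[Editor's note: the sketch below is illustrative only and not part of the
patch. It shows how a fault path might use the new helper to map several
pages of one large folio with a single call. It assumes the page table
lock is held and vmf->pte points at the PTE slot for @addr, as in
filemap_map_folio_range() above; the function map_folio_range_example()
and its parameters are hypothetical.]

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch only, not part of this patch.  Assumes the
 * caller holds the page table lock and that vmf->pte points at the
 * PTE slot for @addr.
 */
static void map_folio_range_example(struct vm_fault *vmf,
		struct folio *folio, unsigned long addr, unsigned int nr)
{
	/* First subpage to map; here we start at the head of the folio. */
	struct page *page = folio_page(folio, 0);

	/*
	 * One call covers all @nr pages: it flushes the icache for the
	 * whole range, sizes the mm counter and rmap updates to @nr,
	 * writes the PTEs with set_ptes(), and finishes with
	 * update_mmu_cache_range(), so no separate per-page
	 * update_mmu_cache() call is needed.
	 */
	set_pte_range(vmf, folio, page, nr, addr);
}

With nr == 1 this reduces to the old do_set_pte() plus update_mmu_cache()
pair, which is exactly how the two converted call sites in this patch use
it. Batching with nr > 1 applies to the file-backed path; the anonymous
COW path still requires nr == 1, as the VM_BUG_ON_FOLIO() above enforces.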