From: Kevin Brodsky
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
	Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
	Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", David Woodhouse, "H. Peter Anvin", Ingo Molnar,
	Jann Horn, Juergen Gross, "Liam R. Howlett",
	Lorenzo Stoakes, Madhavan Srinivasan, Michael Ellerman, Michal Hocko,
	Mike Rapoport, Nicholas Piggin, Peter Zijlstra, "Ritesh Harjani (IBM)",
	Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner, Venkat Rao Bagalkote,
	Vlastimil Babka, Will Deacon, Yeoreum Yun,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org
Subject: [PATCH v5 06/12] mm: introduce generic lazy_mmu helpers
Date: Mon, 24 Nov 2025 13:22:22 +0000
Message-ID: <20251124132228.622678-7-kevin.brodsky@arm.com>
In-Reply-To: <20251124132228.622678-1-kevin.brodsky@arm.com>
References: <20251124132228.622678-1-kevin.brodsky@arm.com>

The implementation of the lazy MMU mode is currently entirely
arch-specific; core code directly calls arch helpers:
arch_{enter,leave}_lazy_mmu_mode().

We are about to introduce support for nested lazy MMU sections. As
things stand, we'd have to duplicate that logic in every arch
implementing lazy_mmu - adding to a fair amount of logic already
duplicated across lazy_mmu implementations.

This patch therefore introduces a new generic layer that calls the
existing arch_* helpers. Two pairs of calls are introduced:

* lazy_mmu_mode_enable() ... lazy_mmu_mode_disable()
  This is the standard case where the mode is enabled for a given
  block of code by surrounding it with enable() and disable() calls.

* lazy_mmu_mode_pause() ... lazy_mmu_mode_resume()
  This is for situations where the mode is temporarily disabled by
  first calling pause() and then resume() (e.g. to prevent any
  batching from occurring in a critical section).

The documentation in <linux/pgtable.h> will be updated in a
subsequent patch.

No functional change should be introduced at this stage. The
implementation of enable()/resume() and disable()/pause() is
currently identical, but nesting support will change that.

Most of the call sites have been updated using the following
Coccinelle script:

@@
@@
{
	...
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
	...
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
	...
}

@@
@@
{
	...
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_pause();
	...
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_resume();
	...
}

A couple of notes regarding x86:

* Xen is currently the only case where explicit handling is required
  for lazy MMU when context-switching. This is purely an
  implementation detail, and using the generic lazy_mmu_mode_*
  functions would cause trouble when nesting support is introduced,
  because the generic functions must be called from the current task.
  For that reason we still use arch_leave() and arch_enter() there.

* x86 calls arch_flush_lazy_mmu_mode() unconditionally in a few
  places, but only defines it if PARAVIRT_XXL is selected, and we are
  removing the fallback in <linux/pgtable.h>. Add a new fallback
  definition to <asm/pgtable.h> to keep things building.
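As an illustration of the intended calling convention (this sketch is
not part of the patch; every identifier other than the lazy_mmu_mode_*
helpers and set_pte_at() is hypothetical), a batched PTE walk might
look like this:

	static void example_update_range(struct mm_struct *mm, pte_t *pte,
					 unsigned long addr, unsigned long end,
					 pte_t newpte)
	{
		/* PTE updates below this point may be batched by the arch */
		lazy_mmu_mode_enable();
		for (; addr != end; pte++, addr += PAGE_SIZE) {
			if (example_must_not_batch(pte)) {
				/* Temporarily step out of the mode... */
				lazy_mmu_mode_pause();
				example_unbatched_work(pte);
				/* ...and re-enter it before carrying on */
				lazy_mmu_mode_resume();
				continue;
			}
			set_pte_at(mm, addr, pte, newpte);
		}
		/* Any pending batched updates are flushed here */
		lazy_mmu_mode_disable();
	}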
Acked-by: David Hildenbrand
Signed-off-by: Kevin Brodsky
---
 arch/arm64/mm/mmu.c                     |  4 ++--
 arch/arm64/mm/pageattr.c                |  4 ++--
 arch/powerpc/mm/book3s64/hash_tlb.c     |  8 +++----
 arch/powerpc/mm/book3s64/subpage_prot.c |  4 ++--
 arch/x86/include/asm/pgtable.h          |  1 +
 fs/proc/task_mmu.c                      |  4 ++--
 include/linux/pgtable.h                 | 29 +++++++++++++++++++++----
 mm/kasan/shadow.c                       |  8 +++----
 mm/madvise.c                            | 18 +++++++--------
 mm/memory.c                             | 16 +++++++-------
 mm/migrate_device.c                     |  8 +++----
 mm/mprotect.c                           |  4 ++--
 mm/mremap.c                             |  4 ++--
 mm/userfaultfd.c                        |  4 ++--
 mm/vmalloc.c                            | 12 +++++-----
 mm/vmscan.c                             | 12 +++++-----
 16 files changed, 81 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 94e29e3574ff..ce66ae77abaa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -729,7 +729,7 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 		return -EINVAL;
 
 	mutex_lock(&pgtable_split_lock);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	/*
 	 * The split_kernel_leaf_mapping_locked() may sleep, it is not a
@@ -751,7 +751,7 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
 		ret = split_kernel_leaf_mapping_locked(end);
 	}
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	mutex_unlock(&pgtable_split_lock);
 	return ret;
 }
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..e4059f13c4ed 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -110,7 +110,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
 	if (WARN_ON_ONCE(ret))
 		return ret;
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	/*
 	 * The caller must ensure that the range we are operating on does not
@@ -119,7 +119,7 @@ static int update_range_prot(unsigned long start, unsigned long size,
 	 */
 	ret = walk_kernel_page_table_range_lockless(start, start + size,
 						    &pageattr_ops, NULL, &data);
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 
 	return ret;
 }
diff --git a/arch/powerpc/mm/book3s64/hash_tlb.c b/arch/powerpc/mm/book3s64/hash_tlb.c
index 21fcad97ae80..787f7a0e27f0 100644
--- a/arch/powerpc/mm/book3s64/hash_tlb.c
+++ b/arch/powerpc/mm/book3s64/hash_tlb.c
@@ -205,7 +205,7 @@ void __flush_hash_table_range(unsigned long start, unsigned long end)
 	 * way to do things but is fine for our needs here.
 	 */
 	local_irq_save(flags);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	for (; start < end; start += PAGE_SIZE) {
 		pte_t *ptep = find_init_mm_pte(start, &hugepage_shift);
 		unsigned long pte;
@@ -217,7 +217,7 @@ void __flush_hash_table_range(unsigned long start, unsigned long end)
 			continue;
 		hpte_need_flush(&init_mm, start, ptep, pte, hugepage_shift);
 	}
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	local_irq_restore(flags);
 }
 
@@ -237,7 +237,7 @@ void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long
 	 * way to do things but is fine for our needs here.
 	 */
 	local_irq_save(flags);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	start_pte = pte_offset_map(pmd, addr);
 	if (!start_pte)
 		goto out;
@@ -249,6 +249,6 @@ void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long
 	}
 	pte_unmap(start_pte);
 out:
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	local_irq_restore(flags);
 }
diff --git a/arch/powerpc/mm/book3s64/subpage_prot.c b/arch/powerpc/mm/book3s64/subpage_prot.c
index ec98e526167e..07c47673bba2 100644
--- a/arch/powerpc/mm/book3s64/subpage_prot.c
+++ b/arch/powerpc/mm/book3s64/subpage_prot.c
@@ -73,13 +73,13 @@ static void hpte_flush_range(struct mm_struct *mm, unsigned long addr,
 	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return;
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	for (; npages > 0; --npages) {
 		pte_update(mm, addr, pte, 0, 0, 0);
 		addr += PAGE_SIZE;
 		++pte;
 	}
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(pte - 1, ptl);
 }
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e33df3da6980..2842fa1f7a2c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -118,6 +118,7 @@ extern pmdval_t early_pmd_flags;
 #define __pte(x)	native_make_pte(x)
 
 #define arch_end_context_switch(prev)	do {} while(0)
+static inline void arch_flush_lazy_mmu_mode(void) {}
 #endif	/* CONFIG_PARAVIRT_XXL */
 
 static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index d00ac179d973..ee1778adcc20 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2737,7 +2737,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 		return 0;
 	}
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !p->vec_out) {
 		/* Fast path for performing exclusive WP */
@@ -2807,7 +2807,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 	if (flush_end)
 		flush_tlb_range(vma, start, addr);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(start_pte, ptl);
 
 	cond_resched();
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index de7d2c7e63eb..c121358dba15 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -231,10 +231,31 @@ static inline int pmd_dirty(pmd_t pmd)
  * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
  * and the mode cannot be used in interrupt context.
  */
-#ifndef CONFIG_ARCH_HAS_LAZY_MMU_MODE
-static inline void arch_enter_lazy_mmu_mode(void) {}
-static inline void arch_leave_lazy_mmu_mode(void) {}
-static inline void arch_flush_lazy_mmu_mode(void) {}
+#ifdef CONFIG_ARCH_HAS_LAZY_MMU_MODE
+static inline void lazy_mmu_mode_enable(void)
+{
+	arch_enter_lazy_mmu_mode();
+}
+
+static inline void lazy_mmu_mode_disable(void)
+{
+	arch_leave_lazy_mmu_mode();
+}
+
+static inline void lazy_mmu_mode_pause(void)
+{
+	arch_leave_lazy_mmu_mode();
+}
+
+static inline void lazy_mmu_mode_resume(void)
+{
+	arch_enter_lazy_mmu_mode();
+}
+#else
+static inline void lazy_mmu_mode_enable(void) {}
+static inline void lazy_mmu_mode_disable(void) {}
+static inline void lazy_mmu_mode_pause(void) {}
+static inline void lazy_mmu_mode_resume(void) {}
 #endif
 
 #ifndef pte_batch_hint
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 29a751a8a08d..c1433d5cc5db 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -305,7 +305,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	pte_t pte;
 	int index;
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_pause();
 
 	index = PFN_DOWN(addr - data->start);
 	page = data->pages[index];
@@ -319,7 +319,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	}
 	spin_unlock(&init_mm.page_table_lock);
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_resume();
 
 	return 0;
 }
@@ -471,7 +471,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	pte_t pte;
 	int none;
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_pause();
 
 	spin_lock(&init_mm.page_table_lock);
 	pte = ptep_get(ptep);
@@ -483,7 +483,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	if (likely(!none))
 		__free_page(pfn_to_page(pte_pfn(pte)));
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_resume();
 
 	return 0;
 }
diff --git a/mm/madvise.c b/mm/madvise.c
index b617b1be0f53..6bf7009fa5ce 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -453,7 +453,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	if (!start_pte)
 		return 0;
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
 		nr = 1;
 		ptent = ptep_get(pte);
@@ -461,7 +461,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (++batch_count == SWAP_CLUSTER_MAX) {
 			batch_count = 0;
 			if (need_resched()) {
-				arch_leave_lazy_mmu_mode();
+				lazy_mmu_mode_disable();
 				pte_unmap_unlock(start_pte, ptl);
 				cond_resched();
 				goto restart;
@@ -497,7 +497,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			if (!folio_trylock(folio))
 				continue;
 			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
+			lazy_mmu_mode_disable();
 			pte_unmap_unlock(start_pte, ptl);
 			start_pte = NULL;
 			err = split_folio(folio);
@@ -508,7 +508,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			if (!start_pte)
 				break;
 			flush_tlb_batched_pending(mm);
-			arch_enter_lazy_mmu_mode();
+			lazy_mmu_mode_enable();
 			if (!err)
 				nr = 0;
 			continue;
@@ -556,7 +556,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	}
 
 	if (start_pte) {
-		arch_leave_lazy_mmu_mode();
+		lazy_mmu_mode_disable();
 		pte_unmap_unlock(start_pte, ptl);
 	}
 	if (pageout)
@@ -675,7 +675,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	if (!start_pte)
 		return 0;
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
 		nr = 1;
 		ptent = ptep_get(pte);
@@ -724,7 +724,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			if (!folio_trylock(folio))
 				continue;
 			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
+			lazy_mmu_mode_disable();
 			pte_unmap_unlock(start_pte, ptl);
 			start_pte = NULL;
 			err = split_folio(folio);
@@ -735,7 +735,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			if (!start_pte)
 				break;
 			flush_tlb_batched_pending(mm);
-			arch_enter_lazy_mmu_mode();
+			lazy_mmu_mode_enable();
 			if (!err)
 				nr = 0;
 			continue;
@@ -775,7 +775,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	if (nr_swap)
 		add_mm_counter(mm, MM_SWAPENTS, nr_swap);
 	if (start_pte) {
-		arch_leave_lazy_mmu_mode();
+		lazy_mmu_mode_disable();
 		pte_unmap_unlock(start_pte, ptl);
 	}
 	cond_resched();
diff --git a/mm/memory.c b/mm/memory.c
index 6675e87eb7dd..c0c29a3b0bcc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1256,7 +1256,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 	orig_src_pte = src_pte;
 	orig_dst_pte = dst_pte;
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	do {
 		nr = 1;
@@ -1325,7 +1325,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	} while (dst_pte += nr, src_pte += nr, addr += PAGE_SIZE * nr,
 		 addr != end);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(orig_src_pte, src_ptl);
 	add_mm_rss_vec(dst_mm, rss);
 	pte_unmap_unlock(orig_dst_pte, dst_ptl);
@@ -1842,7 +1842,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		return addr;
 
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	do {
 		bool any_skipped = false;
 
@@ -1874,7 +1874,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);
 
 	add_mm_rss_vec(mm, rss);
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 
 	/* Do the actual TLB flush before dropping ptl */
 	if (force_flush) {
@@ -2813,7 +2813,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	mapped_pte = pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	do {
 		BUG_ON(!pte_none(ptep_get(pte)));
 		if (!pfn_modify_allowed(pfn, prot)) {
@@ -2823,7 +2823,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(mapped_pte, ptl);
 	return err;
 }
@@ -3174,7 +3174,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			return -EINVAL;
 	}
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	if (fn) {
 		do {
@@ -3187,7 +3187,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	}
 	*mask |= PGTBL_PTE_MODIFIED;
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 
 	if (mm != &init_mm)
 		pte_unmap_unlock(mapped_pte, ptl);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 23379663b1e1..0346c2d7819f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -271,7 +271,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
 	if (!ptep)
 		goto again;
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	ptep += (addr - start) / PAGE_SIZE;
 
 	for (; addr < end; addr += PAGE_SIZE, ptep++) {
@@ -313,7 +313,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		if (folio_test_large(folio)) {
 			int ret;
 
-			arch_leave_lazy_mmu_mode();
+			lazy_mmu_mode_disable();
 			pte_unmap_unlock(ptep, ptl);
 			ret = migrate_vma_split_folio(folio,
 						      migrate->fault_page);
@@ -356,7 +356,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		if (folio && folio_test_large(folio)) {
 			int ret;
 
-			arch_leave_lazy_mmu_mode();
+			lazy_mmu_mode_disable();
 			pte_unmap_unlock(ptep, ptl);
 			ret = migrate_vma_split_folio(folio,
 						      migrate->fault_page);
@@ -485,7 +485,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	if (unmapped)
 		flush_tlb_range(walk->vma, start, end);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(ptep - 1, ptl);
 
 	return 0;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 283889e4f1ce..c0571445bef7 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -233,7 +233,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 		is_private_single_threaded = vma_is_single_threaded_private(vma);
 
 	flush_tlb_batched_pending(vma->vm_mm);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 	do {
 		nr_ptes = 1;
 		oldpte = ptep_get(pte);
@@ -379,7 +379,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			}
 		}
 	} while (pte += nr_ptes, addr += nr_ptes * PAGE_SIZE, addr != end);
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(pte - 1, ptl);
 
 	return pages;
diff --git a/mm/mremap.c b/mm/mremap.c
index 672264807db6..8275b9772ec1 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -260,7 +260,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	if (new_ptl != old_ptl)
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
 	flush_tlb_batched_pending(vma->vm_mm);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	for (; old_addr < old_end; old_ptep += nr_ptes, old_addr += nr_ptes * PAGE_SIZE,
 	     new_ptep += nr_ptes, new_addr += nr_ptes * PAGE_SIZE) {
@@ -305,7 +305,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		}
 	}
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	if (force_flush)
 		flush_tlb_range(vma, old_end - len, old_end);
 	if (new_ptl != old_ptl)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e6dfd5f28acd..b11f81095fa5 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1103,7 +1103,7 @@ static long move_present_ptes(struct mm_struct *mm,
 	/* It's safe to drop the reference now as the page-table is holding one. */
 	folio_put(*first_src_folio);
 	*first_src_folio = NULL;
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	while (true) {
 		orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
@@ -1140,7 +1140,7 @@ static long move_present_ptes(struct mm_struct *mm,
 			break;
 	}
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	if (src_addr > src_start)
 		flush_tlb_range(src_vma, src_start, src_addr);
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ecbac900c35f..1dea299fbb5a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -108,7 +108,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	if (!pte)
 		return -ENOMEM;
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	do {
 		if (unlikely(!pte_none(ptep_get(pte)))) {
@@ -134,7 +134,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		pfn++;
 	} while (pte += PFN_DOWN(size), addr += size, addr != end);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	*mask |= PGTBL_PTE_MODIFIED;
 	return 0;
 }
@@ -366,7 +366,7 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	do {
 #ifdef CONFIG_HUGETLB_PAGE
@@ -385,7 +385,7 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
 	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	*mask |= PGTBL_PTE_MODIFIED;
 }
 
@@ -533,7 +533,7 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (!pte)
 		return -ENOMEM;
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	do {
 		struct page *page = pages[*nr];
@@ -555,7 +555,7 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		(*nr)++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	*mask |= PGTBL_PTE_MODIFIED;
 
 	return err;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 92980b072121..564c97a9362f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3515,7 +3515,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		return false;
 	}
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 restart:
 	for (i = pte_index(start), addr = start; addr != end; i++, addr += PAGE_SIZE) {
 		unsigned long pfn;
@@ -3556,7 +3556,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	if (i < PTRS_PER_PTE && get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end))
 		goto restart;
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	pte_unmap_unlock(pte, ptl);
 
 	return suitable_to_scan(total, young);
@@ -3597,7 +3597,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	if (!spin_trylock(ptl))
 		goto done;
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	do {
 		unsigned long pfn;
@@ -3644,7 +3644,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 
 	walk_update_folio(walk, last, gen, dirty);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 	spin_unlock(ptl);
 done:
 	*first = -1;
@@ -4243,7 +4243,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		}
 	}
 
-	arch_enter_lazy_mmu_mode();
+	lazy_mmu_mode_enable();
 
 	pte -= (addr - start) / PAGE_SIZE;
 
@@ -4277,7 +4277,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 
 	walk_update_folio(walk, last, gen, dirty);
 
-	arch_leave_lazy_mmu_mode();
+	lazy_mmu_mode_disable();
 
 	/* feedback from rmap walkers to page table walkers */
 	if (mm_state && suitable_to_scan(i, young))
-- 
2.51.2