From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Lorenzo Stoakes, "Liam R. Howlett",
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Jann Horn, Pedro Falcato, David Rientjes, Shakeel Butt,
	"Matthew Wilcox (Oracle)", Alice Ryhl, Madhavan Srinivasan,
	Michael Ellerman, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, Alexander Gordeev, Gerald Schaefer, Heiko Carstens,
	Vasily Gorbik, Jarkko Sakkinen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Christian Brauner, Carlos Llamas, Ian Abbott, H Hartley Sweeten,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Simona Vetter, Jason Gunthorpe, Leon Romanovsky,
	Dimitri Sivanich, Arnd Bergmann, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Peter Zijlstra, Arnaldo Carvalho de Melo,
	Namhyung Kim, Andy Lutomirski, Vincenzo Frascino, Eric Dumazet,
	Neal Cardwell, "David S. Miller", David Ahern, Jakub Kicinski,
	Paolo Abeni, Miguel Ojeda, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-sgx@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-rdma@vger.kernel.org,
	bpf@vger.kernel.org, linux-perf-users@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, netdev@vger.kernel.org,
	rust-for-linux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v1 01/16] mm/madvise: drop range checks in madvise_free_single_vma()
Date: Fri, 27 Feb 2026 21:08:32 +0100
Message-ID: <20260227200848.114019-2-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

madvise_vma_behavior()->madvise_dontneed_free()->madvise_free_single_vma()
is only called from madvise_walk_vmas():

(a) after try_vma_read_lock() confirmed that the whole range falls into
    a single VMA (see is_vma_lock_sufficient());
(b) after adjusting the range to the VMA in the loop afterwards.

madvise_dontneed_free() might drop the MM lock when handling userfaultfd,
but it properly looks up the VMA again to adjust the range.

So in madvise_free_single_vma(), the given range should always fall into
a single VMA and should also span at least one page. Let's drop the
error checks.

The code now matches what we do in madvise_dontneed_single_vma(), where
we call zap_vma_range_batched(), which documents: "The range must fit
into one VMA." Although that function still adjusts the range, we'll
change that soon.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/madvise.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index c0370d9b4e23..efc04334a000 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -799,9 +799,10 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior)
 {
 	struct mm_struct *mm = madv_behavior->mm;
 	struct vm_area_struct *vma = madv_behavior->vma;
-	unsigned long start_addr = madv_behavior->range.start;
-	unsigned long end_addr = madv_behavior->range.end;
-	struct mmu_notifier_range range;
+	struct mmu_notifier_range range = {
+		.start = madv_behavior->range.start,
+		.end = madv_behavior->range.end,
+	};
 	struct mmu_gather *tlb = madv_behavior->tlb;
 	struct mm_walk_ops walk_ops = {
 		.pmd_entry = madvise_free_pte_range,
@@ -811,12 +812,6 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior)
 	if (!vma_is_anonymous(vma))
 		return -EINVAL;
 
-	range.start = max(vma->vm_start, start_addr);
-	if (range.start >= vma->vm_end)
-		return -EINVAL;
-	range.end = min(vma->vm_end, end_addr);
-	if (range.end <= vma->vm_start)
-		return -EINVAL;
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
 				range.start, range.end);
 
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 02/16] mm/memory: remove "zap_details" parameter from zap_page_range_single()
Date: Fri, 27 Feb 2026 21:08:33 +0100
Message-ID: <20260227200848.114019-3-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Nobody except memory.c should really set that parameter to non-NULL.
So let's just drop it and make unmap_mapping_range_vma() use
zap_page_range_single_batched() instead.

Signed-off-by: David Hildenbrand (Arm)
Acked-by: Alice Ryhl # Rust and Binder
Acked-by: Puranjay Mohan
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 arch/s390/mm/gmap_helpers.c    |  2 +-
 drivers/android/binder_alloc.c |  2 +-
 include/linux/mm.h             |  5 ++---
 kernel/bpf/arena.c             |  3 +--
 kernel/events/core.c           |  2 +-
 mm/madvise.c                   |  3 +--
 mm/memory.c                    | 16 ++++++++++------
 net/ipv4/tcp.c                 |  5 ++---
 rust/kernel/mm/virt.rs         |  2 +-
 9 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index dea83e3103e5..ae2d59a19313 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -89,7 +89,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
 		if (!vma)
 			return;
 		if (!is_vm_hugetlb_page(vma))
-			zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr, NULL);
+			zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
 		vmaddr = vma->vm_end;
 	}
 }
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 241f16a9b63d..dd2046bd5cde 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1185,7 +1185,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	if (vma) {
 		trace_binder_unmap_user_start(alloc, index);
 
-		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
+		zap_page_range_single(vma, page_addr, PAGE_SIZE);
 
 		trace_binder_unmap_user_end(alloc, index);
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ecff8268089b..a8138ff7d1fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2835,11 +2835,10 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
 void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 		  unsigned long size);
 void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
-		unsigned long size, struct zap_details *details);
+		unsigned long size);
 static inline void zap_vma_pages(struct vm_area_struct *vma)
 {
-	zap_page_range_single(vma, vma->vm_start,
-			      vma->vm_end - vma->vm_start, NULL);
+	zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
 struct mmu_notifier_range;
 
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 144f30e740e8..c34510d83b1f 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -656,8 +656,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
 	guard(mutex)(&arena->lock);
 	/* iterate link list under lock */
 	list_for_each_entry(vml, &arena->vma_list, head)
-		zap_page_range_single(vml->vma, uaddr,
-				      PAGE_SIZE * page_cnt, NULL);
+		zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
 }
 
 static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt, bool sleepable)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index ac70d68217b6..c94c56c94104 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7215,7 +7215,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
 #ifdef CONFIG_MMU
 	/* Clear any partial mappings on error. */
 	if (err)
-		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
+		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
 #endif
 
 	return err;
diff --git a/mm/madvise.c b/mm/madvise.c
index efc04334a000..557a360f7919 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1193,8 +1193,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
 		 * OK some of the range have non-guard pages mapped, zap
 		 * them. This leaves existing guard pages in place.
 		 */
-		zap_page_range_single(vma, range->start,
-				      range->end - range->start, NULL);
+		zap_page_range_single(vma, range->start, range->end - range->start);
 	}
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index 9385842c3503..19f5f9a60995 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2203,17 +2203,16 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
  * @vma: vm_area_struct holding the applicable pages
  * @address: starting address of pages to zap
  * @size: number of bytes to zap
- * @details: details of shared cache invalidation
  *
  * The range must fit into one VMA.
  */
 void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
-		unsigned long size, struct zap_details *details)
+		unsigned long size)
 {
 	struct mmu_gather tlb;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
-	zap_page_range_single_batched(&tlb, vma, address, size, details);
+	zap_page_range_single_batched(&tlb, vma, address, size, NULL);
 	tlb_finish_mmu(&tlb);
 }
 
@@ -2235,7 +2234,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 	    !(vma->vm_flags & VM_PFNMAP))
 		return;
 
-	zap_page_range_single(vma, address, size, NULL);
+	zap_page_range_single(vma, address, size);
 }
 EXPORT_SYMBOL_GPL(zap_vma_ptes);
 
@@ -3003,7 +3002,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
 	 * maintain page reference counts, and callers may free
 	 * pages due to the error. So zap it early.
 	 */
-	zap_page_range_single(vma, addr, size, NULL);
+	zap_page_range_single(vma, addr, size);
 	return error;
 }
 
@@ -4226,7 +4225,12 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
 		unsigned long start_addr, unsigned long end_addr,
 		struct zap_details *details)
 {
-	zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	zap_page_range_single_batched(&tlb, vma, start_addr,
+			end_addr - start_addr, details);
+	tlb_finish_mmu(&tlb);
 }
 
 static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index f84d9a45cc9d..befcde27dee7 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2104,7 +2104,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
 		maybe_zap_len = total_bytes_to_map -	/* All bytes to map */
 				*length +		/* Mapped or pending */
 				(pages_remaining * PAGE_SIZE); /* Failed map. */
-		zap_page_range_single(vma, *address, maybe_zap_len, NULL);
+		zap_page_range_single(vma, *address, maybe_zap_len);
 		err = 0;
 	}
 
@@ -2269,8 +2269,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
 	total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
 	if (total_bytes_to_map) {
 		if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
-			zap_page_range_single(vma, address, total_bytes_to_map,
-					      NULL);
+			zap_page_range_single(vma, address, total_bytes_to_map);
 		zc->length = total_bytes_to_map;
 		zc->recv_skip_hint = 0;
 	} else {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index da21d65ccd20..b8e59e4420f3 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
         // sufficient for this method call. This method has no requirements on the vma flags. The
         // address range is checked to be within the vma.
         unsafe {
-            bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
+            bindings::zap_page_range_single(self.as_ptr(), address, size)
         };
     }
 
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 03/16] mm/memory: inline unmap_mapping_range_vma() into unmap_mapping_range_tree()
Date: Fri, 27 Feb 2026 21:08:34 +0100
Message-ID: <20260227200848.114019-4-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Let's reduce the number of unmap-related functions that cause confusion
by inlining unmap_mapping_range_vma() into its single caller. The end
result looks pretty readable.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/memory.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 19f5f9a60995..5c47309331f5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4221,18 +4221,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	return wp_page_copy(vmf);
 }
 
-static void unmap_mapping_range_vma(struct vm_area_struct *vma,
-		unsigned long start_addr, unsigned long end_addr,
-		struct zap_details *details)
-{
-	struct mmu_gather tlb;
-
-	tlb_gather_mmu(&tlb, vma->vm_mm);
-	zap_page_range_single_batched(&tlb, vma, start_addr,
-			end_addr - start_addr, details);
-	tlb_finish_mmu(&tlb);
-}
-
 static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 		pgoff_t first_index,
 		pgoff_t last_index,
@@ -4240,17 +4228,20 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 {
 	struct vm_area_struct *vma;
 	pgoff_t vba, vea, zba, zea;
+	unsigned long start, size;
+	struct mmu_gather tlb;
 
 	vma_interval_tree_foreach(vma, root, first_index, last_index) {
 		vba = vma->vm_pgoff;
 		vea = vba + vma_pages(vma) - 1;
 		zba = max(first_index, vba);
 		zea = min(last_index, vea);
+		start = ((zba - vba) << PAGE_SHIFT) + vma->vm_start;
+		size = (zea - zba + 1) << PAGE_SHIFT;
 
-		unmap_mapping_range_vma(vma,
-			((zba - vba) << PAGE_SHIFT) + vma->vm_start,
-			((zea - vba + 1) << PAGE_SHIFT) + vma->vm_start,
-			details);
+		tlb_gather_mmu(&tlb, vma->vm_mm);
+		zap_page_range_single_batched(&tlb, vma, start, size, details);
+		tlb_finish_mmu(&tlb);
 	}
 }
 
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 04/16] mm/memory: simplify calculation in unmap_mapping_range_tree()
Date: Fri, 27 Feb 2026 21:08:35 +0100
Message-ID: <20260227200848.114019-5-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Let's simplify the calculation a bit further to make it easier to
follow, reusing vma_last_pgoff(), which we move
from interval_tree.c to mm.h.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h |  5 +++++
 mm/interval_tree.c |  5 -----
 mm/memory.c        | 12 +++++-------
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a8138ff7d1fa..d3ef586ee1c0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4000,6 +4000,11 @@ static inline unsigned long vma_pages(const struct vm_area_struct *vma)
 	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 }
 
+static inline unsigned long vma_last_pgoff(struct vm_area_struct *vma)
+{
+	return vma->vm_pgoff + vma_pages(vma) - 1;
+}
+
 static inline unsigned long vma_desc_size(const struct vm_area_desc *desc)
 {
 	return desc->end - desc->start;
diff --git a/mm/interval_tree.c b/mm/interval_tree.c
index 32e390c42c53..32bcfbfcf15f 100644
--- a/mm/interval_tree.c
+++ b/mm/interval_tree.c
@@ -15,11 +15,6 @@ static inline unsigned long vma_start_pgoff(struct vm_area_struct *v)
 	return v->vm_pgoff;
 }
 
-static inline unsigned long vma_last_pgoff(struct vm_area_struct *v)
-{
-	return v->vm_pgoff + vma_pages(v) - 1;
-}
-
 INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb,
 		     unsigned long, shared.rb_subtree_last,
 		     vma_start_pgoff, vma_last_pgoff, /* empty */,
 		     vma_interval_tree)
diff --git a/mm/memory.c b/mm/memory.c
index 5c47309331f5..e4154f03feac 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4227,17 +4227,15 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 		struct zap_details *details)
 {
 	struct vm_area_struct *vma;
-	pgoff_t vba, vea, zba, zea;
 	unsigned long start, size;
 	struct mmu_gather tlb;
 
 	vma_interval_tree_foreach(vma, root, first_index, last_index) {
-		vba = vma->vm_pgoff;
-		vea = vba + vma_pages(vma) - 1;
-		zba = max(first_index, vba);
-		zea = min(last_index, vea);
-		start = ((zba - vba) << PAGE_SHIFT) + vma->vm_start;
-		size = (zea - zba + 1) << PAGE_SHIFT;
+		const pgoff_t start_idx = max(first_index, vma->vm_pgoff);
+		const pgoff_t end_idx = min(last_index, vma_last_pgoff(vma)) + 1;
+
+		start = vma->vm_start + ((start_idx - vma->vm_pgoff) << PAGE_SHIFT);
+		size = (end_idx - start_idx) << PAGE_SHIFT;
 
 		tlb_gather_mmu(&tlb, vma->vm_mm);
 		zap_page_range_single_batched(&tlb, vma, start, size, details);
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 05/16] mm/oom_kill: use MMU_NOTIFY_CLEAR in __oom_reap_task_mm()
Date: Fri, 27 Feb 2026 21:08:36 +0100
Message-ID: <20260227200848.114019-6-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

In commit 7269f999934b ("mm/mmu_notifier: use correct mmu_notifier
events for each invalidation") we converted all MMU_NOTIFY_UNMAP to
MMU_NOTIFY_CLEAR, except the ones that actually perform munmap() or
mremap(), as documented.

__oom_reap_task_mm() behaves much more like MADV_DONTNEED, so use
MMU_NOTIFY_CLEAR as well.

This is a preparation for further changes.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/oom_kill.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 5c6c95c169ee..0ba56fcd10d5 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -551,7 +551,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 			struct mmu_notifier_range range;
 			struct mmu_gather tlb;
 
-			mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
+			mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0,
						mm, vma->vm_start,
						vma->vm_end);
 			tlb_gather_mmu(&tlb, mm);
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 06/16] mm/oom_kill: factor out zapping of VMA into zap_vma_for_reaping()
Date: Fri, 27 Feb 2026 21:08:37 +0100
Message-ID: <20260227200848.114019-7-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Let's factor it out so that we can turn unmap_page_range() into a static
function, and so that OOM reaping has a clean interface to call.

Note that hugetlb is not supported, because it would require a bunch of
hugetlb-specific further actions (see zap_page_range_single_batched()).
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/internal.h |  5 +----
 mm/memory.c   | 36 ++++++++++++++++++++++++++++++++----
 mm/oom_kill.c | 15 +--------------
 3 files changed, 34 insertions(+), 22 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 39ab37bb0e1d..df9190f7db0e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -536,13 +536,10 @@ static inline void sync_with_folio_pmd_zap(struct mm_struct *mm, pmd_t *pmdp)
 }
 
 struct zap_details;
-void unmap_page_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end,
-		struct zap_details *details);
 void zap_page_range_single_batched(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long addr,
 		unsigned long size, struct zap_details *details);
+int zap_vma_for_reaping(struct vm_area_struct *vma);
 int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
 		gfp_t gfp);
 
diff --git a/mm/memory.c b/mm/memory.c
index e4154f03feac..621f38ae1425 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2054,10 +2054,9 @@ static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
 	return addr;
 }
 
-void unmap_page_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end,
-		struct zap_details *details)
+static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		unsigned long addr, unsigned long end,
+		struct zap_details *details)
 {
 	pgd_t *pgd;
 	unsigned long next;
@@ -2115,6 +2114,35 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 	}
 }
 
+/**
+ * zap_vma_for_reaping - zap all page table entries in the vma without blocking
+ * @vma: The vma to zap.
+ *
+ * Zap all page table entries in the vma without blocking for use by the oom
+ * killer. Hugetlb vmas are not supported.
+ *
+ * Returns: 0 on success, -EBUSY if we would have to block.
+ */
+int zap_vma_for_reaping(struct vm_area_struct *vma)
+{
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+
+	VM_WARN_ON_ONCE(is_vm_hugetlb_page(vma));
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
+				vma->vm_start, vma->vm_end);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
+		tlb_finish_mmu(&tlb);
+		return -EBUSY;
+	}
+	unmap_page_range(&tlb, vma, range.start, range.end, NULL);
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
+	return 0;
+}
+
 /**
  * unmap_vmas - unmap a range of memory covered by a list of vma's
  * @tlb: address of the caller's struct mmu_gather
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 0ba56fcd10d5..54b7a8fe5136 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -548,21 +548,8 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 		 * count elevated without a good reason.
 		 */
 		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
-			struct mmu_notifier_range range;
-			struct mmu_gather tlb;
-
-			mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0,
-						mm, vma->vm_start,
-						vma->vm_end);
-			tlb_gather_mmu(&tlb, mm);
-			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
-				tlb_finish_mmu(&tlb);
+			if (zap_vma_for_reaping(vma))
 				ret = false;
-				continue;
-			}
-			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
-			mmu_notifier_invalidate_range_end(&range);
-			tlb_finish_mmu(&tlb);
 		}
 	}
 
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 07/16] mm/memory: rename unmap_single_vma() to __zap_vma_range()
Date: Fri, 27 Feb 2026 21:08:38 +0100
Message-ID: <20260227200848.114019-8-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Let's rename it to better fit our new naming scheme.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 621f38ae1425..f0aaec57a66b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2074,7 +2074,7 @@ static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 }
 
 
-static void unmap_single_vma(struct mmu_gather *tlb,
+static void __zap_vma_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		unsigned long end_addr, struct zap_details *details)
 {
@@ -2177,7 +2177,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
 		unsigned long start = unmap->vma_start;
 		unsigned long end = unmap->vma_end;
 		hugetlb_zap_begin(vma, &start, &end);
-		unmap_single_vma(tlb, vma, start, end, &details);
+		__zap_vma_range(tlb, vma, start, end, &details);
 		hugetlb_zap_end(vma, &details);
 		vma = mas_find(unmap->mas, unmap->tree_end - 1);
 	} while (vma);
@@ -2213,7 +2213,7 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
 	 * unmap 'address-end' not 'range.start-range.end' as range
 	 * could have been expanded for hugetlb pmd sharing.
 	 */
-	unmap_single_vma(tlb, vma, address, end, details);
+	__zap_vma_range(tlb, vma, address, end, details);
 	mmu_notifier_invalidate_range_end(&range);
 	if (is_vm_hugetlb_page(vma)) {
 		/*
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 08/16] mm/memory: move adjusting of address range to unmap_vmas()
Date: Fri, 27 Feb 2026 21:08:39 +0100
Message-ID: <20260227200848.114019-9-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

__zap_vma_range() has two callers, and zap_page_range_single_batched()
documents that the range must fit into the VMA range. So move the
adjustment of the range to unmap_vmas(), where it is actually required,
and add a safety check in __zap_vma_range() instead.

In unmap_vmas(), we never expect empty ranges (otherwise, why would the
vma be in there in the first place?). __zap_vma_range() will no longer
be called with start == end, so clean up the function a bit. While at
it, simplify the overly long comment down to its core message.

We will no longer call uprobe_munmap() for start == end, which actually
seems to be the right thing to do.

Note that hugetlb_zap_begin()->...->adjust_range_if_pmd_sharing_possible()
cannot result in the range exceeding the vma range.
Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/memory.c | 58 +++++++++++++++++++++-------------------------------
 1 file changed, 23 insertions(+), 35 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f0aaec57a66b..fdcd2abf29c2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2073,44 +2073,28 @@ static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	tlb_end_vma(tlb, vma);
 }
 
-
-static void __zap_vma_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, unsigned long start_addr,
-		unsigned long end_addr, struct zap_details *details)
+static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end,
+		struct zap_details *details)
 {
-	unsigned long start = max(vma->vm_start, start_addr);
-	unsigned long end;
-
-	if (start >= vma->vm_end)
-		return;
-	end = min(vma->vm_end, end_addr);
-	if (end <= vma->vm_start)
-		return;
+	VM_WARN_ON_ONCE(start >= end || !range_in_vma(vma, start, end));
 
 	if (vma->vm_file)
 		uprobe_munmap(vma, start, end);
 
-	if (start != end) {
-		if (unlikely(is_vm_hugetlb_page(vma))) {
-			/*
-			 * It is undesirable to test vma->vm_file as it
-			 * should be non-null for valid hugetlb area.
-			 * However, vm_file will be NULL in the error
-			 * cleanup path of mmap_region. When
-			 * hugetlbfs ->mmap method fails,
-			 * mmap_region() nullifies vma->vm_file
-			 * before calling this function to clean up.
-			 * Since no pte has actually been setup, it is
-			 * safe to do nothing in this case.
-			 */
-			if (vma->vm_file) {
-				zap_flags_t zap_flags = details ?
-					details->zap_flags : 0;
-				__unmap_hugepage_range(tlb, vma, start, end,
-						       NULL, zap_flags);
-			}
-		} else
-			unmap_page_range(tlb, vma, start, end, details);
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		zap_flags_t zap_flags = details ? details->zap_flags : 0;
+
+		/*
+		 * vm_file will be NULL when we fail early while instantiating
+		 * a new mapping. In this case, no pages were mapped yet and
+		 * there is nothing to do.
+		 */
+		if (!vma->vm_file)
+			return;
+		__unmap_hugepage_range(tlb, vma, start, end, NULL, zap_flags);
+	} else {
+		unmap_page_range(tlb, vma, start, end, details);
 	}
 }
 
@@ -2174,8 +2158,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
 		unmap->vma_start, unmap->vma_end);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unsigned long start = unmap->vma_start;
-		unsigned long end = unmap->vma_end;
+		unsigned long start = max(vma->vm_start, unmap->vma_start);
+		unsigned long end = min(vma->vm_end, unmap->vma_end);
+
 		hugetlb_zap_begin(vma, &start, &end);
 		__zap_vma_range(tlb, vma, start, end, &details);
 		hugetlb_zap_end(vma, &details);
@@ -2204,6 +2189,9 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
 
 	VM_WARN_ON_ONCE(!tlb || tlb->mm != vma->vm_mm);
 
+	if (unlikely(!size))
+		return;
+
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
 	hugetlb_zap_begin(vma, &range.start, &range.end);
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 09/16] mm/memory: convert details->even_cows into details->skip_cows
Date: Fri, 27 Feb 2026 21:08:40 +0100
Message-ID: <20260227200848.114019-10-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

The current semantics are confusing: merely passing an empty zap_details
struct suddenly makes should_zap_cows() behave differently. The default
should be to also zap CoW'ed anonymous pages; really only
unmap_mapping_pages() and friends want to skip zapping these anon folios.

So let's invert the meaning, and turn the confusing "reclaim_pt" check
that overrides other properties in should_zap_cows() into a safety
check. Note that the only caller that sets reclaim_pt=true is
madvise_dontneed_single_vma(), which wants to zap any pages.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h |  2 +-
 mm/madvise.c       |  1 -
 mm/memory.c        | 12 ++++++------
 3 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d3ef586ee1c0..21b67c203e62 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2798,7 +2798,7 @@ extern void pagefault_out_of_memory(void);
  */
 struct zap_details {
 	struct folio *single_folio;	/* Locked folio to be unmapped */
-	bool even_cows;			/* Zap COWed private pages too? */
+	bool skip_cows;			/* Do not zap COWed private pages */
 	bool reclaim_pt;		/* Need reclaim page tables? */
 	zap_flags_t zap_flags;		/* Extra flags for zapping */
 };
diff --git a/mm/madvise.c b/mm/madvise.c
index 557a360f7919..b51f216934f3 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -853,7 +853,6 @@ static long madvise_dontneed_single_vma(struct madvise_behavior *madv_behavior)
 	struct madvise_behavior_range *range = &madv_behavior->range;
 	struct zap_details details = {
 		.reclaim_pt = true,
-		.even_cows = true,
 	};
 
 	zap_page_range_single_batched(
diff --git a/mm/memory.c b/mm/memory.c
index fdcd2abf29c2..7d7c24c6917c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1554,11 +1554,13 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 static inline bool should_zap_cows(struct zap_details *details)
 {
 	/* By default, zap all pages */
-	if (!details || details->reclaim_pt)
+	if (!details)
 		return true;
 
+	VM_WARN_ON_ONCE(details->skip_cows && details->reclaim_pt);
+
 	/* Or, we zap COWed pages only if the caller wants to */
-	return details->even_cows;
+	return !details->skip_cows;
 }
 
 /* Decides whether we should zap this folio with the folio pointer specified */
@@ -2149,8 +2151,6 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
 	struct mmu_notifier_range range;
 	struct zap_details details = {
 		.zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
-		/* Careful - we need to zap private pages too! */
-		.even_cows = true,
 	};
 
 	vma = unmap->first;
@@ -4282,7 +4282,7 @@ void unmap_mapping_folio(struct folio *folio)
 	first_index = folio->index;
 	last_index = folio_next_index(folio) - 1;
 
-	details.even_cows = false;
+	details.skip_cows = true;
 	details.single_folio = folio;
 	details.zap_flags = ZAP_FLAG_DROP_MARKER;
 
@@ -4312,7 +4312,7 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start, pgoff_t
 	first_index = start;
 	last_index = start + nr - 1;
 
-	details.even_cows = even_cows;
+	details.skip_cows = !even_cows;
 	if (last_index < first_index)
 		last_index = ULONG_MAX;
 
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 10/16] mm/memory: use __zap_vma_range() in zap_vma_for_reaping()
Date: Fri, 27 Feb 2026 21:08:41 +0100
Message-ID: <20260227200848.114019-11-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

Let's call __zap_vma_range() instead of unmap_page_range() to prepare
for further cleanups.

To keep the existing behavior, whereby we do not call uprobe_munmap()
(which could block), add a new "reaping" member to zap_details and use
it. Likely we should handle the possible blocking in uprobe_munmap()
differently, but for now keep it unchanged.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 13 +++++++++----
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 21b67c203e62..4710f7c7495a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2800,6 +2800,7 @@ struct zap_details {
 	struct folio *single_folio;	/* Locked folio to be unmapped */
 	bool skip_cows;			/* Do not zap COWed private pages */
 	bool reclaim_pt;		/* Need reclaim page tables? */
+	bool reaping;			/* Reaping, do not block. */
 	zap_flags_t zap_flags;		/* Extra flags for zapping */
 };
 
diff --git a/mm/memory.c b/mm/memory.c
index 7d7c24c6917c..394b2e931974 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2079,14 +2079,18 @@ static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		struct zap_details *details)
 {
+	const bool reaping = details && details->reaping;
+
 	VM_WARN_ON_ONCE(start >= end || !range_in_vma(vma, start, end));
 
-	if (vma->vm_file)
+	/* uprobe_munmap() might sleep, so skip it when reaping. */
+	if (vma->vm_file && !reaping)
 		uprobe_munmap(vma, start, end);
 
 	if (unlikely(is_vm_hugetlb_page(vma))) {
 		zap_flags_t zap_flags = details ? details->zap_flags : 0;
 
+		VM_WARN_ON_ONCE(reaping);
 		/*
 		 * vm_file will be NULL when we fail early while instantiating
 		 * a new mapping. In this case, no pages were mapped yet and
@@ -2111,11 +2115,12 @@ static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
  */
 int zap_vma_for_reaping(struct vm_area_struct *vma)
 {
+	struct zap_details details = {
+		.reaping = true,
+	};
 	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
 
-	VM_WARN_ON_ONCE(is_vm_hugetlb_page(vma));
-
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				vma->vm_start, vma->vm_end);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
@@ -2123,7 +2128,7 @@ int zap_vma_for_reaping(struct vm_area_struct *vma)
 		tlb_finish_mmu(&tlb);
 		return -EBUSY;
 	}
-	unmap_page_range(&tlb, vma, range.start, range.end, NULL);
+	__zap_vma_range(&tlb, vma, range.start, range.end, &details);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 	return 0;
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 11/16] mm/memory: inline unmap_page_range() into __zap_vma_range()
Date: Fri, 27 Feb 2026 21:08:42 +0100
Message-ID: <20260227200848.114019-12-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>

Let's inline it into the single caller to reduce the number of confusing
unmap/zap helpers.

Get rid of the unnecessary BUG_ON().

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/memory.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 394b2e931974..1c0bcdfc73b7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2056,25 +2056,6 @@ static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
 	return addr;
 }
 
-static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end,
-		struct zap_details *details)
-{
-	pgd_t *pgd;
-	unsigned long next;
-
-	BUG_ON(addr >= end);
-	tlb_start_vma(tlb, vma);
-	pgd = pgd_offset(vma->vm_mm, addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (pgd_none_or_clear_bad(pgd))
-			continue;
-		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
-	} while (pgd++, addr = next, addr != end);
-	tlb_end_vma(tlb, vma);
-}
-
 static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		struct zap_details *details)
@@ -2100,7 +2081,18 @@ static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			return;
 		__unmap_hugepage_range(tlb, vma, start, end, NULL, zap_flags);
 	} else {
-		unmap_page_range(tlb, vma, start, end, details);
+		unsigned long next, cur = start;
+		pgd_t *pgd;
+
+		tlb_start_vma(tlb, vma);
+		pgd = pgd_offset(vma->vm_mm, cur);
+		do {
+			next = pgd_addr_end(cur, end);
+			if (pgd_none_or_clear_bad(pgd))
+				continue;
+			next = zap_p4d_range(tlb, vma, pgd, cur, next, details);
+		} while (pgd++, cur = next, cur != end);
+		tlb_end_vma(tlb, vma);
 	}
 }
 
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 12/16] mm: rename zap_vma_pages() to zap_vma()
Date: Fri, 27 Feb 2026 21:08:43 +0100
Message-ID: <20260227200848.114019-13-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>

Let's rename it to an even simpler name. While at it, add some simple
kernel-doc.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 arch/powerpc/platforms/book3s/vas-api.c | 2 +-
 arch/powerpc/platforms/pseries/vas.c    | 2 +-
 include/linux/mm.h                      | 6 +++++-
 lib/vdso/datastore.c                    | 2 +-
 mm/page-writeback.c                     | 2 +-
 5 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
index ea4ffa63f043..e96d79db69fe 100644
--- a/arch/powerpc/platforms/book3s/vas-api.c
+++ b/arch/powerpc/platforms/book3s/vas-api.c
@@ -414,7 +414,7 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
 	/*
 	 * When the LPAR lost credits due to core removal or during
 	 * migration, invalidate the existing mapping for the current
-	 * paste addresses and set windows in-active (zap_vma_pages in
+	 * paste addresses and set windows in-active (zap_vma() in
 	 * reconfig_close_windows()).
 	 * New mapping will be done later after migration or new credits
 	 * available. So continue to receive faults if the user space
diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
index ceb0a8788c0a..fa05f04364fe 100644
--- a/arch/powerpc/platforms/pseries/vas.c
+++ b/arch/powerpc/platforms/pseries/vas.c
@@ -807,7 +807,7 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds,
 	 * is done before the original mmap() and after the ioctl.
 	 */
 	if (vma)
-		zap_vma_pages(vma);
+		zap_vma(vma);
 
 	mutex_unlock(&task_ref->mmap_mutex);
 	mmap_write_unlock(task_ref->mm);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4710f7c7495a..4bd1500b9630 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2837,7 +2837,11 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
 void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
-static inline void zap_vma_pages(struct vm_area_struct *vma)
+/**
+ * zap_vma - zap all page table entries in a vma
+ * @vma: The vma to zap.
+ */
+static inline void zap_vma(struct vm_area_struct *vma)
 {
 	zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
diff --git a/lib/vdso/datastore.c b/lib/vdso/datastore.c
index a565c30c71a0..222c143aebf7 100644
--- a/lib/vdso/datastore.c
+++ b/lib/vdso/datastore.c
@@ -121,7 +121,7 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
 	mmap_read_lock(mm);
 	for_each_vma(vmi, vma) {
 		if (vma_is_special_mapping(vma, &vdso_vvar_mapping))
-			zap_vma_pages(vma);
+			zap_vma(vma);
 	}
 	mmap_read_unlock(mm);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 601a5e048d12..29f7567e5a71 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2645,7 +2645,7 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
 * while this function is in progress, although it may have been truncated
 * before this function is called. Most callers have the folio locked.
 * A few have the folio blocked from truncation through other means (e.g.
- * zap_vma_pages() has it mapped and is holding the page table lock).
+ * zap_vma() has it mapped and is holding the page table lock).
 * When called from mark_buffer_dirty(), the filesystem should hold a
 * reference to the buffer_head that is being marked dirty, which causes
 * try_to_free_buffers() to fail.
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 13/16] mm: rename zap_page_range_single_batched() to zap_vma_range_batched()
Date: Fri, 27 Feb 2026 21:08:44 +0100
Message-ID: <20260227200848.114019-14-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>

Let's make the naming more consistent with our new naming scheme. While
at it, polish the kerneldoc a bit.

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/internal.h |  2 +-
 mm/madvise.c  |  5 ++---
 mm/memory.c   | 23 +++++++++++++----------
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index df9190f7db0e..15a1b3f0a6d1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -536,7 +536,7 @@ static inline void sync_with_folio_pmd_zap(struct mm_struct *mm, pmd_t *pmdp)
 }
 
 struct zap_details;
-void zap_page_range_single_batched(struct mmu_gather *tlb,
+void zap_vma_range_batched(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long addr,
 		unsigned long size, struct zap_details *details);
 int zap_vma_for_reaping(struct vm_area_struct *vma);
diff --git a/mm/madvise.c b/mm/madvise.c
index b51f216934f3..fb5fcdff2b66 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -855,9 +855,8 @@ static long madvise_dontneed_single_vma(struct madvise_behavior *madv_behavior)
 		.reclaim_pt = true,
 	};
 
-	zap_page_range_single_batched(
-			madv_behavior->tlb, madv_behavior->vma, range->start,
-			range->end - range->start, &details);
+	zap_vma_range_batched(madv_behavior->tlb, madv_behavior->vma,
+			range->start, range->end - range->start, &details);
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 1c0bcdfc73b7..e611e9af4e85 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2167,17 +2167,20 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
 }
 
 /**
- * zap_page_range_single_batched - remove user pages in a given range
+ * zap_vma_range_batched - zap page table entries in a vma range
  * @tlb: pointer to the caller's struct mmu_gather
- * @vma: vm_area_struct holding the applicable pages
- * @address: starting address of pages to remove
- * @size: number of bytes to remove
- * @details: details of shared cache invalidation
+ * @vma: the vma covering the range to zap
+ * @address: starting address of the range to zap
+ * @size: number of bytes to zap
+ * @details: details specifying zapping behavior
+ *
+ * @tlb must not be NULL. The provided address range must be fully
+ * contained within @vma. If @vma is for hugetlb, @tlb is flushed and
+ * re-initialized by this function.
  *
- * @tlb shouldn't be NULL.  The range must fit into one VMA.  If @vma is for
- * hugetlb, @tlb is flushed and re-initialized by this function.
+ * If @details is NULL, this function will zap all page table entries.
  */
-void zap_page_range_single_batched(struct mmu_gather *tlb,
+void zap_vma_range_batched(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long address,
 		unsigned long size, struct zap_details *details)
 {
@@ -2225,7 +2228,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	struct mmu_gather tlb;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
-	zap_page_range_single_batched(&tlb, vma, address, size, NULL);
+	zap_vma_range_batched(&tlb, vma, address, size, NULL);
 	tlb_finish_mmu(&tlb);
 }
 
@@ -4251,7 +4254,7 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 		size = (end_idx - start_idx) << PAGE_SHIFT;
 
 		tlb_gather_mmu(&tlb, vma->vm_mm);
-		zap_page_range_single_batched(&tlb, vma, start, size, details);
+		zap_vma_range_batched(&tlb, vma, start, size, details);
 		tlb_finish_mmu(&tlb);
 	}
 }
-- 
2.43.0
From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Subject: [PATCH v1 14/16] mm: rename zap_page_range_single() to zap_vma_range()
Date: Fri, 27 Feb 2026 21:08:45 +0100
Message-ID: <20260227200848.114019-15-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>

Let's rename it to better match our new naming scheme. While at it,
polish the kerneldoc.

Signed-off-by: David Hildenbrand (Arm)
Acked-by: Alice Ryhl # Rust and Binder
Acked-by: Puranjay Mohan
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 arch/s390/mm/gmap_helpers.c          |  2 +-
 drivers/android/binder/page_range.rs |  4 ++--
 drivers/android/binder_alloc.c       |  2 +-
 include/linux/mm.h                   |  4 ++--
 kernel/bpf/arena.c                   |  2 +-
 kernel/events/core.c                 |  2 +-
 mm/madvise.c                         |  4 ++--
 mm/memory.c                          | 14 +++++++-------
 net/ipv4/tcp.c                       |  6 +++---
 rust/kernel/mm/virt.rs               |  4 ++--
 10 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
index ae2d59a19313..f8789ffcc05c 100644
--- a/arch/s390/mm/gmap_helpers.c
+++ b/arch/s390/mm/gmap_helpers.c
@@ -89,7 +89,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
 		if (!vma)
 			return;
 		if (!is_vm_hugetlb_page(vma))
-			zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
+			zap_vma_range(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
 		vmaddr = vma->vm_end;
 	}
 }
diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
index fdd97112ef5c..2fddd4ed8d4c 100644
--- a/drivers/android/binder/page_range.rs
+++ b/drivers/android/binder/page_range.rs
@@ -130,7 +130,7 @@ pub(crate) struct ShrinkablePageRange {
     pid: Pid,
     /// The mm for the relevant process.
     mm: ARef,
-    /// Used to synchronize calls to `vm_insert_page` and `zap_page_range_single`.
+    /// Used to synchronize calls to `vm_insert_page` and `zap_vma_range`.
     #[pin]
     mm_lock: Mutex<()>,
     /// Spinlock protecting changes to pages.
@@ -719,7 +719,7 @@ fn drop(self: Pin<&mut Self>) {
 
         if let Some(vma) = mmap_read.vma_lookup(vma_addr) {
             let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
-            vma.zap_page_range_single(user_page_addr, PAGE_SIZE);
+            vma.zap_vma_range(user_page_addr, PAGE_SIZE);
         }
 
         drop(mmap_read);
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index dd2046bd5cde..e4488ad86a65 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1185,7 +1185,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	if (vma) {
 		trace_binder_unmap_user_start(alloc, index);
 
-		zap_page_range_single(vma, page_addr, PAGE_SIZE);
+		zap_vma_range(vma, page_addr, PAGE_SIZE);
 
 		trace_binder_unmap_user_end(alloc, index);
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4bd1500b9630..833bedd3f739 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2835,7 +2835,7 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
 
 void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
-void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
 /**
  * zap_vma - zap all page table entries in a vma
@@ -2843,7 +2843,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
  */
 static inline void zap_vma(struct vm_area_struct *vma)
 {
-	zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+	zap_vma_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
 struct mmu_notifier_range;
 
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index c34510d83b1f..37843c6a4764 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -656,7 +656,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
 	guard(mutex)(&arena->lock);
 	/* iterate link list under lock */
 	list_for_each_entry(vml, &arena->vma_list, head)
-		zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
+		zap_vma_range(vml->vma, uaddr, PAGE_SIZE * page_cnt);
 }
 
 static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt, bool sleepable)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c94c56c94104..5ee02817c3bc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7215,7 +7215,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
 #ifdef CONFIG_MMU
 	/* Clear any partial mappings on error. */
 	if (err)
-		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE);
+		zap_vma_range(vma, vma->vm_start, nr_pages * PAGE_SIZE);
 #endif
 
 	return err;
diff --git a/mm/madvise.c b/mm/madvise.c
index fb5fcdff2b66..6e66f56ff1a6 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -832,7 +832,7 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior)
  * Application no longer needs these pages. If the pages are dirty,
  * it's OK to just throw them away. The app will be more careful about
  * data it wants to keep. Be sure to free swap resources too. The
- * zap_page_range_single call sets things up for shrink_active_list to actually
+ * zap_vma_range call sets things up for shrink_active_list to actually
 * free these pages later if no one else has touched them in the meantime,
 * although we could add these pages to a global reuse list for
 * shrink_active_list to pick up before reclaiming other pages.
@@ -1191,7 +1191,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
 		 * OK some of the range have non-guard pages mapped, zap
 		 * them. This leaves existing guard pages in place.
 		 */
-		zap_page_range_single(vma, range->start, range->end - range->start);
+		zap_vma_range(vma, range->start, range->end - range->start);
 	}
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index e611e9af4e85..dd737b6d28c0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2215,14 +2215,14 @@ void zap_vma_range_batched(struct mmu_gather *tlb,
 }
 
 /**
- * zap_page_range_single - remove user pages in a given range
- * @vma: vm_area_struct holding the applicable pages
- * @address: starting address of pages to zap
+ * zap_vma_range - zap all page table entries in a vma range
+ * @vma: the vma covering the range to zap
+ * @address: starting address of the range to zap
  * @size: number of bytes to zap
  *
- * The range must fit into one VMA.
+ * The provided address range must be fully contained within @vma.
  */
-void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size)
 {
 	struct mmu_gather tlb;
@@ -2250,7 +2250,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 			!(vma->vm_flags & VM_PFNMAP))
 		return;
 
-	zap_page_range_single(vma, address, size);
+	zap_vma_range(vma, address, size);
 }
 EXPORT_SYMBOL_GPL(zap_vma_ptes);
 
@@ -3018,7 +3018,7 @@ static int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long add
 	 * maintain page reference counts, and callers may free
 	 * pages due to the error. So zap it early.
 	 */
-	zap_page_range_single(vma, addr, size);
+	zap_vma_range(vma, addr, size);
 	return error;
 }
 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index befcde27dee7..cb4477ef1529 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2104,7 +2104,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
 	maybe_zap_len = total_bytes_to_map -	/* All bytes to map */
 			*length +		/* Mapped or pending */
 			(pages_remaining * PAGE_SIZE);	/* Failed map.
*/ - zap_page_range_single(vma, *address, maybe_zap_len); + zap_vma_range(vma, *address, maybe_zap_len); err =3D 0; } =20 @@ -2112,7 +2112,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct = vm_area_struct *vma, unsigned long leftover_pages =3D pages_remaining; int bytes_mapped; =20 - /* We called zap_page_range_single, try to reinsert. */ + /* We called zap_vma_range, try to reinsert. */ err =3D vm_insert_pages(vma, *address, pending_pages, &pages_remaining); @@ -2269,7 +2269,7 @@ static int tcp_zerocopy_receive(struct sock *sk, total_bytes_to_map =3D avail_len & ~(PAGE_SIZE - 1); if (total_bytes_to_map) { if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT)) - zap_page_range_single(vma, address, total_bytes_to_map); + zap_vma_range(vma, address, total_bytes_to_map); zc->length =3D total_bytes_to_map; zc->recv_skip_hint =3D 0; } else { diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs index b8e59e4420f3..04b3cc925d67 100644 --- a/rust/kernel/mm/virt.rs +++ b/rust/kernel/mm/virt.rs @@ -113,7 +113,7 @@ pub fn end(&self) -> usize { /// kernel goes further in freeing unused page tables, but for the pur= poses of this operation /// we must only assume that the leaf level is cleared. #[inline] - pub fn zap_page_range_single(&self, address: usize, size: usize) { + pub fn zap_vma_range(&self, address: usize, size: usize) { let (end, did_overflow) =3D address.overflowing_add(size); if did_overflow || address < self.start() || self.end() < end { // TODO: call WARN_ONCE once Rust version of it is added @@ -124,7 +124,7 @@ pub fn zap_page_range_single(&self, address: usize, siz= e: usize) { // sufficient for this method call. This method has no requirement= s on the vma flags. The // address range is checked to be within the vma. 
         unsafe {
-            bindings::zap_page_range_single(self.as_ptr(), address, size)
+            bindings::zap_vma_range(self.as_ptr(), address, size)
         };
 
-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 15/16] mm: rename zap_vma_ptes() to zap_special_vma_range()
Date: Fri, 27 Feb 2026 21:08:46 +0100
Message-ID: <20260227200848.114019-16-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

zap_vma_ptes() is the only zapping function we export to modules. It's
essentially a wrapper around zap_vma_range(), however, with some safety
checks:

* That the passed range fits fully into the VMA
* That it's only used for VM_PFNMAP

We might want to support VM_MIXEDMAP soon as well, so use the
more-generic term "special vma", although "special" is a bit overloaded.
Maybe we'll later just support any VM_SPECIAL flag.

While at it, improve the kerneldoc.
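The two safety checks described above can be illustrated with a small userspace sketch. This is not the kernel implementation: `vma_stub`, `would_zap()`, and the simplified `range_in_vma()` are hypothetical stand-ins (only the `VM_PFNMAP` flag value mirrors the kernel's definition).

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the kernel's VM_PFNMAP flag value. */
#define VM_PFNMAP 0x00000400UL

/* Simplified stand-in for struct vm_area_struct. */
struct vma_stub {
	unsigned long vm_start;
	unsigned long vm_end;
	unsigned long vm_flags;
};

/* Simplified range_in_vma(): [start, end) must lie inside the vma. */
static bool range_in_vma(const struct vma_stub *vma,
			 unsigned long start, unsigned long end)
{
	return vma->vm_start <= start && start <= end && end <= vma->vm_end;
}

/*
 * Sketch of the gating: silently do nothing unless the range is fully
 * contained in the vma and the vma is VM_PFNMAP. Returns whether the
 * zap would proceed (the kernel would call zap_vma_range() here).
 */
static bool would_zap(const struct vma_stub *vma,
		      unsigned long address, unsigned long size)
{
	if (!range_in_vma(vma, address, address + size) ||
	    !(vma->vm_flags & VM_PFNMAP))
		return false;
	return true;
}
```

The point of the wrapper is that a buggy module caller gets a silent no-op rather than a zap of unrelated mappings.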
Signed-off-by: David Hildenbrand (Arm)
Acked-by: Leon Romanovsky # drivers/infiniband
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 arch/x86/kernel/cpu/sgx/encl.c        |  2 +-
 drivers/comedi/comedi_fops.c          |  2 +-
 drivers/gpu/drm/i915/i915_mm.c        |  4 ++--
 drivers/infiniband/core/uverbs_main.c |  6 +++---
 drivers/misc/sgi-gru/grumain.c        |  2 +-
 include/linux/mm.h                    |  2 +-
 mm/memory.c                           | 16 +++++++---------
 7 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index ac60ebde5d9b..3f0222d10f6e 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -1220,7 +1220,7 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
 
 		ret = sgx_encl_find(encl_mm->mm, addr, &vma);
 		if (!ret && encl == vma->vm_private_data)
-			zap_vma_ptes(vma, addr, PAGE_SIZE);
+			zap_special_vma_range(vma, addr, PAGE_SIZE);
 
 		mmap_read_unlock(encl_mm->mm);
 
diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
index 48a8a607a84c..b91e0b5ac394 100644
--- a/drivers/comedi/comedi_fops.c
+++ b/drivers/comedi/comedi_fops.c
@@ -2588,7 +2588,7 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
 	 * remap_pfn_range() because we call remap_pfn_range() in a loop.
 	 */
 	if (retval)
-		zap_vma_ptes(vma, vma->vm_start, size);
+		zap_special_vma_range(vma, vma->vm_start, size);
 #endif
 
 	if (retval == 0) {
diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index c33bd3d83069..fd89e7c7d8d6 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -108,7 +108,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
 
 	err = apply_to_page_range(r.mm, addr, size, remap_pfn, &r);
 	if (unlikely(err)) {
-		zap_vma_ptes(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);
+		zap_special_vma_range(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);
 		return err;
 	}
 
@@ -156,7 +156,7 @@ int remap_io_sg(struct vm_area_struct *vma,
 
 	err = apply_to_page_range(r.mm, addr, size, remap_sg, &r);
 	if (unlikely(err)) {
-		zap_vma_ptes(vma, addr, r.pfn << PAGE_SHIFT);
+		zap_special_vma_range(vma, addr, r.pfn << PAGE_SHIFT);
 		return err;
 	}
 
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 7b68967a6301..f5837da47299 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -756,7 +756,7 @@ static void rdma_umap_open(struct vm_area_struct *vma)
 	 * point, so zap it.
 	 */
 	vma->vm_private_data = NULL;
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+	zap_special_vma_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
 
 static void rdma_umap_close(struct vm_area_struct *vma)
@@ -782,7 +782,7 @@ static void rdma_umap_close(struct vm_area_struct *vma)
 }
 
 /*
- * Once the zap_vma_ptes has been called touches to the VMA will come here and
+ * Once the zap_special_vma_range has been called touches to the VMA will come here and
  * we return a dummy writable zero page for all the pfns.
  */
 static vm_fault_t rdma_umap_fault(struct vm_fault *vmf)
@@ -878,7 +878,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
 			continue;
 		list_del_init(&priv->list);
 
-		zap_vma_ptes(vma, vma->vm_start,
+		zap_special_vma_range(vma, vma->vm_start,
 			     vma->vm_end - vma->vm_start);
 
 		if (priv->entry) {
diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
index 8d749f345246..278b76cbd281 100644
--- a/drivers/misc/sgi-gru/grumain.c
+++ b/drivers/misc/sgi-gru/grumain.c
@@ -542,7 +542,7 @@ void gru_unload_context(struct gru_thread_state *gts, int savestate)
 	int ctxnum = gts->ts_ctxnum;
 
 	if (!is_kernel_context(gts))
-		zap_vma_ptes(gts->ts_vma, UGRUADDR(gts), GRU_GSEG_PAGESIZE);
+		zap_special_vma_range(gts->ts_vma, UGRUADDR(gts), GRU_GSEG_PAGESIZE);
 	cch = get_cch(gru->gs_gru_base_vaddr, ctxnum);
 
 	gru_dbg(grudev, "gts %p, cbrmap 0x%lx, dsrmap 0x%lx\n",
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 833bedd3f739..07f6819db02d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2833,7 +2833,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr, pud_t pud);
 
-void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
+void zap_special_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
 void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
diff --git a/mm/memory.c b/mm/memory.c
index dd737b6d28c0..f3b7b7e16138 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2233,17 +2233,15 @@ void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
 }
 
 /**
- * zap_vma_ptes - remove ptes mapping the vma
- * @vma: vm_area_struct holding ptes to be zapped
- * @address: starting address of pages to zap
+ * zap_special_vma_range - zap all page table entries in a special vma range
+ * @vma: the vma covering the range to zap
+ * @address: starting address of the range to zap
  * @size: number of bytes to zap
  *
- * This function only unmaps ptes assigned to VM_PFNMAP vmas.
- *
- * The entire address range must be fully contained within the vma.
- *
+ * This function does nothing when the provided address range is not fully
+ * contained in @vma, or when the @vma is not VM_PFNMAP.
  */
-void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
+void zap_special_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size)
 {
 	if (!range_in_vma(vma, address, address + size) ||
@@ -2252,7 +2250,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 
 	zap_vma_range(vma, address, size);
 }
-EXPORT_SYMBOL_GPL(zap_vma_ptes);
+EXPORT_SYMBOL_GPL(zap_special_vma_range);
 
 static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr)
 {

-- 
2.43.0

From nobody Thu Apr 16 12:24:57 2026
From: "David Hildenbrand (Arm)"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v1 16/16] mm/memory: support VM_MIXEDMAP in zap_special_vma_range()
Date: Fri, 27 Feb 2026 21:08:47 +0100
Message-ID: <20260227200848.114019-17-david@kernel.org>
In-Reply-To: <20260227200848.114019-1-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org>

There is demand for also zapping page table entries by drivers in
VM_MIXEDMAP VMAs[1].
Nothing really speaks against supporting VM_MIXEDMAP for driver use. We
just don't want arbitrary drivers to zap in ordinary (non-special) VMAs.

[1] https://lore.kernel.org/r/aYSKyr7StGpGKNqW@google.com

Signed-off-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f3b7b7e16138..3fe30dc2f179 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2239,13 +2239,13 @@ void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
  * @size: number of bytes to zap
  *
  * This function does nothing when the provided address range is not fully
- * contained in @vma, or when the @vma is not VM_PFNMAP.
+ * contained in @vma, or when the @vma is not VM_PFNMAP or VM_MIXEDMAP.
  */
 void zap_special_vma_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size)
 {
 	if (!range_in_vma(vma, address, address + size) ||
-	    !(vma->vm_flags & VM_PFNMAP))
+	    !(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)))
 		return;
 
 	zap_vma_range(vma, address, size);

-- 
2.43.0
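The effect of this one-character-class change can be sketched in plain userspace C. The `is_zappable_*` helpers below are hypothetical illustrations, not kernel code; the flag values are taken from the kernel's `VM_PFNMAP`/`VM_MIXEDMAP` definitions, but the surrounding context is simplified.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's special-mapping flag values. */
#define VM_PFNMAP   0x00000400UL
#define VM_MIXEDMAP 0x10000000UL

/* Before this patch: only VM_PFNMAP vmas pass the gate. */
static bool is_zappable_old(unsigned long vm_flags)
{
	return (vm_flags & VM_PFNMAP) != 0;
}

/* After this patch: either special-mapping flag passes. */
static bool is_zappable_new(unsigned long vm_flags)
{
	return (vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) != 0;
}
```

Ordinary VMAs (neither flag set) are still rejected, which preserves the intent of keeping module callers out of non-special mappings.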