From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F3461364E93; Mon, 16 Mar 2026 21:13:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695619; cv=none; b=n94IuKlcwrb32Y8ucq4PQIxbWC23+nBZYwY1NP7NInsx4z3T2JaDCNrN0/T9F0y4usYtuz+Bh94qPlXKGUPwDPiUKMms+5OizCAOsIugf1Nrsc/IcBmtkUUA8HOZ/u9aQJLrlClF9ACseTfiHgGywnG9g0VJ0VIHINRdgCL+YX4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695619; c=relaxed/simple; bh=M9YPgPgzQgiaaTvzgsN1EFl4Gjd+CTrI/r/+UwMOL+E=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gT/e3PtxViD3prCY64Akvv8fNj8XTJrzr4hx48SH0q/C3fynO4w5Vsm2CwnMceOA/gg0ogq95oIFnh3TXEp1FiVwuBVdkxIK2aqQ6QgVMhtzv+fTUU0OrQinJY4Pird+TBJwGu5wTQha8gdeNJ2lX368Hp+O7+HmfAdsR6dxnSQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=E5RSz38K; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="E5RSz38K" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DF407C19421; Mon, 16 Mar 2026 21:13:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773695618; bh=M9YPgPgzQgiaaTvzgsN1EFl4Gjd+CTrI/r/+UwMOL+E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=E5RSz38KL9XNm+3bO5ISatwvJY95G7ZyUEx3WfEkMi1yAKRGZ9MpGsMVQlU7p8RUz gp4539TwI17ToFGdGNhBtjM2Y9GOj/Vp5OqMMfp45j3mPM9NlsI6bmavmzZw6rG4zt IWTFsJ55Zr6eeqmqyTxdNFo9p4QE/hrlSxBgyUUFDdutnHYyEFsc35NQ6LcI3w6JNh oMkh5zMO4SmrL+3+ugCjizWpvase53kG+wKU0Moz4HbPvvxeZFVcgybaU3Jd4AkcGv 
TnkunB+lhTT8P39srAKo/Kqsk+xLRHzuAPywpo8SGWn1KkTxxp3a89Ui6Z4C6aG0T4 QgCbUUR1ddsSQ== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts Subject: [PATCH v2 01/16] mm: various small mmap_prepare cleanups Date: Mon, 16 Mar 2026 21:11:57 +0000 Message-ID: X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Rather than passing arbitrary fields, pass a vm_area_desc pointer to mmap prepare functions to mmap prepare, and an action and vma pointer to mmap complete in order to put all the action-specific logic in the function actually doing the work. Additionally, allow mmap prepare functions to return an error so we can error out as soon as possible if there is something logically incorrect in the input. Update remap_pfn_range_prepare() to properly check the input range for the CoW case. 
While we're here, make remap_pfn_range_prepare_vma() a little neater, and pass mmap_action directly to call_action_complete(). Then, update compat_vma_mmap() to perform its logic directly, as __compat_vma_map() is not used by anything so we don't need to export it. Also update compat_vma_mmap() to use vfs_mmap_prepare() rather than calling the mmap_prepare op directly. Finally, update the VMA userland tests to reflect the changes. Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/fs.h | 2 - include/linux/mm.h | 7 +- mm/internal.h | 27 ++++--- mm/memory.c | 45 +++++++---- mm/util.c | 119 +++++++++++++----------------- mm/vma.c | 24 +++--- tools/testing/vma/include/dup.h | 7 +- tools/testing/vma/include/stubs.h | 8 +- 8 files changed, 123 insertions(+), 116 deletions(-) diff --git a/include/linux/fs.h b/include/linux/fs.h index 8b3dd145b25e..a2628a12bd2b 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2058,8 +2058,6 @@ static inline bool can_mmap_file(struct file *file) return true; } =20 -int __compat_vma_mmap(const struct file_operations *f_op, - struct file *file, struct vm_area_struct *vma); int compat_vma_mmap(struct file *file, struct vm_area_struct *vma); =20 static inline int vfs_mmap(struct file *file, struct vm_area_struct *vma) diff --git a/include/linux/mm.h b/include/linux/mm.h index 42cc40aa63d9..1e63b3a44a47 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -4320,10 +4320,9 @@ static inline void mmap_action_ioremap_full(struct v= m_area_desc *desc, mmap_action_ioremap(desc, desc->start, start_pfn, vma_desc_size(desc)); } =20 -void mmap_action_prepare(struct mmap_action *action, - struct vm_area_desc *desc); -int mmap_action_complete(struct mmap_action *action, - struct vm_area_struct *vma); +int mmap_action_prepare(struct vm_area_desc *desc); +int mmap_action_complete(struct vm_area_struct *vma, + struct mmap_action *action); =20 /* Look up the first VMA which exactly match the interval vm_start ... 
vm_= end */ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm, diff --git a/mm/internal.h b/mm/internal.h index 708d240b4198..9e42a57e8a12 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1793,26 +1793,31 @@ int walk_page_range_debug(struct mm_struct *mm, uns= igned long start, void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm); int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm); =20 -void remap_pfn_range_prepare(struct vm_area_desc *desc, unsigned long pfn); -int remap_pfn_range_complete(struct vm_area_struct *vma, unsigned long add= r, - unsigned long pfn, unsigned long size, pgprot_t pgprot); +int remap_pfn_range_prepare(struct vm_area_desc *desc); +int remap_pfn_range_complete(struct vm_area_struct *vma, + struct mmap_action *action); =20 -static inline void io_remap_pfn_range_prepare(struct vm_area_desc *desc, - unsigned long orig_pfn, unsigned long size) +static inline int io_remap_pfn_range_prepare(struct vm_area_desc *desc) { + struct mmap_action *action =3D &desc->action; + const unsigned long orig_pfn =3D action->remap.start_pfn; + const unsigned long size =3D action->remap.size; const unsigned long pfn =3D io_remap_pfn_range_pfn(orig_pfn, size); =20 - return remap_pfn_range_prepare(desc, pfn); + action->remap.start_pfn =3D pfn; + return remap_pfn_range_prepare(desc); } =20 static inline int io_remap_pfn_range_complete(struct vm_area_struct *vma, - unsigned long addr, unsigned long orig_pfn, unsigned long size, - pgprot_t orig_prot) + struct mmap_action *action) { - const unsigned long pfn =3D io_remap_pfn_range_pfn(orig_pfn, size); - const pgprot_t prot =3D pgprot_decrypted(orig_prot); + const unsigned long size =3D action->remap.size; + const unsigned long orig_pfn =3D action->remap.start_pfn; + const pgprot_t orig_prot =3D vma->vm_page_prot; =20 - return remap_pfn_range_complete(vma, addr, pfn, size, prot); + action->remap.pgprot =3D pgprot_decrypted(orig_prot); + action->remap.start_pfn =3D 
io_remap_pfn_range_pfn(orig_pfn, size); + return remap_pfn_range_complete(vma, action); } =20 #ifdef CONFIG_MMU_NOTIFIER diff --git a/mm/memory.c b/mm/memory.c index 219b9bf6cae0..9dec67a18116 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3099,26 +3099,34 @@ static int do_remap_pfn_range(struct vm_area_struct= *vma, unsigned long addr, } #endif =20 -void remap_pfn_range_prepare(struct vm_area_desc *desc, unsigned long pfn) +int remap_pfn_range_prepare(struct vm_area_desc *desc) { - /* - * We set addr=3DVMA start, end=3DVMA end here, so this won't fail, but we - * check it again on complete and will fail there if specified addr is - * invalid. - */ - get_remap_pgoff(vma_desc_is_cow_mapping(desc), desc->start, desc->end, - desc->start, desc->end, pfn, &desc->pgoff); + const struct mmap_action *action =3D &desc->action; + const unsigned long start =3D action->remap.start; + const unsigned long end =3D start + action->remap.size; + const unsigned long pfn =3D action->remap.start_pfn; + const bool is_cow =3D vma_desc_is_cow_mapping(desc); + int err; + + err =3D get_remap_pgoff(is_cow, start, end, desc->start, desc->end, pfn, + &desc->pgoff); + if (err) + return err; + vma_desc_set_flags_mask(desc, VMA_REMAP_FLAGS); + return 0; } =20 -static int remap_pfn_range_prepare_vma(struct vm_area_struct *vma, unsigne= d long addr, - unsigned long pfn, unsigned long size) +static int remap_pfn_range_prepare_vma(struct vm_area_struct *vma, + unsigned long addr, unsigned long pfn, + unsigned long size) { - unsigned long end =3D addr + PAGE_ALIGN(size); + const unsigned long end =3D addr + PAGE_ALIGN(size); + const bool is_cow =3D is_cow_mapping(vma->vm_flags); int err; =20 - err =3D get_remap_pgoff(is_cow_mapping(vma->vm_flags), addr, end, - vma->vm_start, vma->vm_end, pfn, &vma->vm_pgoff); + err =3D get_remap_pgoff(is_cow, addr, end, vma->vm_start, vma->vm_end, + pfn, &vma->vm_pgoff); if (err) return err; =20 @@ -3151,10 +3159,15 @@ int remap_pfn_range(struct vm_area_struct *vma, 
uns= igned long addr, } EXPORT_SYMBOL(remap_pfn_range); =20 -int remap_pfn_range_complete(struct vm_area_struct *vma, unsigned long add= r, - unsigned long pfn, unsigned long size, pgprot_t prot) +int remap_pfn_range_complete(struct vm_area_struct *vma, + struct mmap_action *action) { - return do_remap_pfn_range(vma, addr, pfn, size, prot); + const unsigned long start =3D action->remap.start; + const unsigned long pfn =3D action->remap.start_pfn; + const unsigned long size =3D action->remap.size; + const pgprot_t prot =3D action->remap.pgprot; + + return do_remap_pfn_range(vma, start, pfn, size, prot); } =20 /** diff --git a/mm/util.c b/mm/util.c index ce7ae80047cf..ac9dd6490523 100644 --- a/mm/util.c +++ b/mm/util.c @@ -1163,43 +1163,6 @@ void flush_dcache_folio(struct folio *folio) EXPORT_SYMBOL(flush_dcache_folio); #endif =20 -/** - * __compat_vma_mmap() - See description for compat_vma_mmap() - * for details. This is the same operation, only with a specific file oper= ations - * struct which may or may not be the same as vma->vm_file->f_op. - * @f_op: The file operations whose .mmap_prepare() hook is specified. - * @file: The file which backs or will back the mapping. - * @vma: The VMA to apply the .mmap_prepare() hook to. - * Returns: 0 on success or error. 
- */ -int __compat_vma_mmap(const struct file_operations *f_op, - struct file *file, struct vm_area_struct *vma) -{ - struct vm_area_desc desc =3D { - .mm =3D vma->vm_mm, - .file =3D file, - .start =3D vma->vm_start, - .end =3D vma->vm_end, - - .pgoff =3D vma->vm_pgoff, - .vm_file =3D vma->vm_file, - .vma_flags =3D vma->flags, - .page_prot =3D vma->vm_page_prot, - - .action.type =3D MMAP_NOTHING, /* Default */ - }; - int err; - - err =3D f_op->mmap_prepare(&desc); - if (err) - return err; - - mmap_action_prepare(&desc.action, &desc); - set_vma_from_desc(vma, &desc); - return mmap_action_complete(&desc.action, vma); -} -EXPORT_SYMBOL(__compat_vma_mmap); - /** * compat_vma_mmap() - Apply the file's .mmap_prepare() hook to an * existing VMA and execute any requested actions. @@ -1228,7 +1191,31 @@ EXPORT_SYMBOL(__compat_vma_mmap); */ int compat_vma_mmap(struct file *file, struct vm_area_struct *vma) { - return __compat_vma_mmap(file->f_op, file, vma); + struct vm_area_desc desc =3D { + .mm =3D vma->vm_mm, + .file =3D file, + .start =3D vma->vm_start, + .end =3D vma->vm_end, + + .pgoff =3D vma->vm_pgoff, + .vm_file =3D vma->vm_file, + .vma_flags =3D vma->flags, + .page_prot =3D vma->vm_page_prot, + + .action.type =3D MMAP_NOTHING, /* Default */ + }; + int err; + + err =3D vfs_mmap_prepare(file, &desc); + if (err) + return err; + + err =3D mmap_action_prepare(&desc); + if (err) + return err; + + set_vma_from_desc(vma, &desc); + return mmap_action_complete(vma, &desc.action); } EXPORT_SYMBOL(compat_vma_mmap); =20 @@ -1320,8 +1307,8 @@ void snapshot_page(struct page_snapshot *ps, const st= ruct page *page) } } =20 -static int mmap_action_finish(struct mmap_action *action, - const struct vm_area_struct *vma, int err) +static int mmap_action_finish(struct vm_area_struct *vma, + struct mmap_action *action, int err) { /* * If an error occurs, unmap the VMA altogether and return an error. 
We @@ -1353,37 +1340,38 @@ static int mmap_action_finish(struct mmap_action *a= ction, /** * mmap_action_prepare - Perform preparatory setup for an VMA descriptor * action which need to be performed. - * @desc: The VMA descriptor to prepare for @action. - * @action: The action to perform. + * @desc: The VMA descriptor to prepare for its @desc->action. + * + * Returns: %0 on success, otherwise error. */ -void mmap_action_prepare(struct mmap_action *action, - struct vm_area_desc *desc) +int mmap_action_prepare(struct vm_area_desc *desc) { - switch (action->type) { + switch (desc->action.type) { case MMAP_NOTHING: - break; + return 0; case MMAP_REMAP_PFN: - remap_pfn_range_prepare(desc, action->remap.start_pfn); - break; + return remap_pfn_range_prepare(desc); case MMAP_IO_REMAP_PFN: - io_remap_pfn_range_prepare(desc, action->remap.start_pfn, - action->remap.size); - break; + return io_remap_pfn_range_prepare(desc); } + + WARN_ON_ONCE(1); + return -EINVAL; } EXPORT_SYMBOL(mmap_action_prepare); =20 /** * mmap_action_complete - Execute VMA descriptor action. - * @action: The action to perform. * @vma: The VMA to perform the action upon. + * @action: The action to perform. * * Similar to mmap_action_prepare(). * * Return: 0 on success, or error, at which point the VMA will be unmapped. 
*/ -int mmap_action_complete(struct mmap_action *action, - struct vm_area_struct *vma) +int mmap_action_complete(struct vm_area_struct *vma, + struct mmap_action *action) + { int err =3D 0; =20 @@ -1391,25 +1379,20 @@ int mmap_action_complete(struct mmap_action *action, case MMAP_NOTHING: break; case MMAP_REMAP_PFN: - err =3D remap_pfn_range_complete(vma, action->remap.start, - action->remap.start_pfn, action->remap.size, - action->remap.pgprot); + err =3D remap_pfn_range_complete(vma, action); break; case MMAP_IO_REMAP_PFN: - err =3D io_remap_pfn_range_complete(vma, action->remap.start, - action->remap.start_pfn, action->remap.size, - action->remap.pgprot); + err =3D io_remap_pfn_range_complete(vma, action); break; } =20 - return mmap_action_finish(action, vma, err); + return mmap_action_finish(vma, action, err); } EXPORT_SYMBOL(mmap_action_complete); #else -void mmap_action_prepare(struct mmap_action *action, - struct vm_area_desc *desc) +int mmap_action_prepare(struct vm_area_desc *desc) { - switch (action->type) { + switch (desc->action.type) { case MMAP_NOTHING: break; case MMAP_REMAP_PFN: @@ -1417,11 +1400,13 @@ void mmap_action_prepare(struct mmap_action *action, WARN_ON_ONCE(1); /* nommu cannot handle these. 
*/ break; } + + return 0; } EXPORT_SYMBOL(mmap_action_prepare); =20 -int mmap_action_complete(struct mmap_action *action, - struct vm_area_struct *vma) +int mmap_action_complete(struct vm_area_struct *vma, + struct mmap_action *action) { int err =3D 0; =20 @@ -1436,7 +1421,7 @@ int mmap_action_complete(struct mmap_action *action, break; } =20 - return mmap_action_finish(action, vma, err); + return mmap_action_finish(vma, action, err); } EXPORT_SYMBOL(mmap_action_complete); #endif diff --git a/mm/vma.c b/mm/vma.c index c1f183235756..2a86c7575000 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -2640,15 +2640,18 @@ static void __mmap_complete(struct mmap_state *map,= struct vm_area_struct *vma) vma_set_page_prot(vma); } =20 -static void call_action_prepare(struct mmap_state *map, - struct vm_area_desc *desc) +static int call_action_prepare(struct mmap_state *map, + struct vm_area_desc *desc) { - struct mmap_action *action =3D &desc->action; + int err; =20 - mmap_action_prepare(action, desc); + err =3D mmap_action_prepare(desc); + if (err) + return err; =20 - if (action->hide_from_rmap_until_complete) + if (desc->action.hide_from_rmap_until_complete) map->hold_file_rmap_lock =3D true; + return 0; } =20 /* @@ -2672,7 +2675,9 @@ static int call_mmap_prepare(struct mmap_state *map, if (err) return err; =20 - call_action_prepare(map, desc); + err =3D call_action_prepare(map, desc); + if (err) + return err; =20 /* Update fields permitted to be changed. */ map->pgoff =3D desc->pgoff; @@ -2727,13 +2732,12 @@ static bool can_set_ksm_flags_early(struct mmap_sta= te *map) } =20 static int call_action_complete(struct mmap_state *map, - struct vm_area_desc *desc, + struct mmap_action *action, struct vm_area_struct *vma) { - struct mmap_action *action =3D &desc->action; int ret; =20 - ret =3D mmap_action_complete(action, vma); + ret =3D mmap_action_complete(vma, action); =20 /* If we held the file rmap we need to release it. 
*/ if (map->hold_file_rmap_lock) { @@ -2795,7 +2799,7 @@ static unsigned long __mmap_region(struct file *file,= unsigned long addr, __mmap_complete(&map, vma); =20 if (have_mmap_prepare && allocated_new) { - error =3D call_action_complete(&map, &desc, vma); + error =3D call_action_complete(&map, &desc.action, vma); =20 if (error) return error; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 999357e18eb0..9eada1e0949c 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -1271,9 +1271,12 @@ static inline int __compat_vma_mmap(const struct fil= e_operations *f_op, if (err) return err; =20 - mmap_action_prepare(&desc.action, &desc); + err =3D mmap_action_prepare(&desc); + if (err) + return err; + set_vma_from_desc(vma, &desc); - return mmap_action_complete(&desc.action, vma); + return mmap_action_complete(vma, &desc.action); } =20 static inline int compat_vma_mmap(struct file *file, diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/= stubs.h index 5afb0afe2d48..a30b8bc84955 100644 --- a/tools/testing/vma/include/stubs.h +++ b/tools/testing/vma/include/stubs.h @@ -81,13 +81,13 @@ static inline void free_anon_vma_name(struct vm_area_st= ruct *vma) { } =20 -static inline void mmap_action_prepare(struct mmap_action *action, - struct vm_area_desc *desc) +static inline int mmap_action_prepare(struct vm_area_desc *desc) { + return 0; } =20 -static inline int mmap_action_complete(struct mmap_action *action, - struct vm_area_struct *vma) +static inline int mmap_action_complete(struct vm_area_struct *vma, + struct mmap_action *action) { return 0; } --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 44EFB364E89; Mon, 16 Mar 2026 21:13:42 
+0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695622; cv=none; b=kMYYR/D+vfy1kPIuqq3TtmVrPGSSL1URPdBs3rZXVmeKRKp4W4qvA/wH1ye7HIgRYe+oQxC9g9iZAKcGEXuxueJlLmcuaoW561vMv7zCgLhxd1Vp+wIJypmHLcBeCXrCD7U4Lv3MUFoLmS/ovzWFnGkUo5uF2XI3mDwqZUSjsIc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695622; c=relaxed/simple; bh=c79hrhz049PrCZP30n1FB+VqjpxYYCRXbHq0tZQIBj0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=p/rkTg99KDCXexPrzrg89xgaLIUmJYDLaAxf/+1yCDPxuSc3C3l706qlip3JVfS3xwgYpubQrpGeMSHmDJaHdViX7k/IUUw7yWr3Y5eHt6rLfn+eZZwoJhRnLT5ZduDaWxeFV82pT4zoV27URYF4W2eTs+WNOO5SlZB8xFmjM0I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=pSk+Y10z; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pSk+Y10z" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C629CC19421; Mon, 16 Mar 2026 21:13:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773695621; bh=c79hrhz049PrCZP30n1FB+VqjpxYYCRXbHq0tZQIBj0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pSk+Y10zSu7m6MzGHW+Zdj7ihMuzjF24eeuo9sMEd1SIi1ZgnsO5uEAARqCbxy3ex oE6fvSH131kajwxHWZ8tt6Q6XBoxdN7smQJv2Kvv8y6mZ3cJqrjaLmcXgD0Tk02C2Z mCKXapKlxaHc+x094h8DxMdZw/zMC2wEgRnXuGlYHEErx3GRYe9+gzP5eeES3W7egb Ogf++b0M3MhKZdfQY5UckPy2CkE9x9ta4AkV7ew2fXdeSZ+hFb+L0MArn9fVQC4pQJ /itZGiEpSxgbUm30nj1XqdaRgJbn40jjYmoPnRo1JvX7NyVWlvmu9IPR/m32IQ/C3i eN2OFmCnnJ9Kw== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . 
Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts Subject: [PATCH v2 02/16] mm: add documentation for the mmap_prepare file operation callback Date: Mon, 16 Mar 2026 21:11:58 +0000 Message-ID: X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This documentation makes it easier for a driver/file system implementer to correctly use this callback. It covers the fundamentals, whilst intentionally leaving the less lovely possible actions one might take undocumented (for instance - the success_hook, error_hook fields in mmap_action). The document also covers the new VMA flags implementation which is the only one which will work correctly with mmap_prepare. 
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 Documentation/filesystems/index.rst        |   1 +
 Documentation/filesystems/mmap_prepare.rst | 142 +++++++++++++++
 2 files changed, 143 insertions(+)
 create mode 100644 Documentation/filesystems/mmap_prepare.rst

diff --git a/Documentation/filesystems/index.rst b/Documentation/filesystems/index.rst
index f4873197587d..6cbc3e0292ae 100644
--- a/Documentation/filesystems/index.rst
+++ b/Documentation/filesystems/index.rst
@@ -29,6 +29,7 @@ algorithms work.
    fiemap
    files
    locks
+   mmap_prepare
    multigrain-ts
    mount_api
    quota
diff --git a/Documentation/filesystems/mmap_prepare.rst b/Documentation/filesystems/mmap_prepare.rst
new file mode 100644
index 000000000000..65a1f094e469
--- /dev/null
+++ b/Documentation/filesystems/mmap_prepare.rst
@@ -0,0 +1,142 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+mmap_prepare callback HOWTO
+===========================
+
+Introduction
+============
+
+The ``struct file->f_op->mmap()`` callback has been deprecated as it is both a
+stability and security risk, and doesn't always permit the merging of adjacent
+mappings, resulting in unnecessary memory fragmentation.
+
+It has been replaced with the ``file->f_op->mmap_prepare()`` callback, which
+solves these problems.
+
+This hook is called right at the beginning of setting up the mapping, and
+importantly it is invoked *before* any merging of adjacent mappings has taken
+place.
+
+If an error arises upon mapping, it might arise after this callback has been
+invoked; the callback should therefore be treated as effectively stateless.
+
+That is - no resources should be allocated nor state updated to reflect that a
+mapping has been established, as the mapping may either be merged, or fail to be
+mapped after the callback is complete.
+
+How To Use
+==========
+
+In your driver's struct file_operations struct, specify an ``mmap_prepare``
+callback rather than an ``mmap`` one, e.g. for ext4:
+
+.. code-block:: C
+
+	const struct file_operations ext4_file_operations = {
+		...
+		.mmap_prepare = ext4_file_mmap_prepare,
+	};
+
+This has a signature of ``int (*mmap_prepare)(struct vm_area_desc *)``.
+
+Examining the struct vm_area_desc type:
+
+.. code-block:: C
+
+	struct vm_area_desc {
+		/* Immutable state. */
+		const struct mm_struct *const mm;
+		struct file *const file; /* May vary from vm_file in stacked callers. */
+		unsigned long start;
+		unsigned long end;
+
+		/* Mutable fields. Populated with initial state. */
+		pgoff_t pgoff;
+		struct file *vm_file;
+		vma_flags_t vma_flags;
+		pgprot_t page_prot;
+
+		/* Write-only fields. */
+		const struct vm_operations_struct *vm_ops;
+		void *private_data;
+
+		/* Take further action? */
+		struct mmap_action action;
+	};
+
+This is straightforward - you have all the fields you need to set up the
+mapping, and you can update the mutable and writable fields, for instance:
+
+.. code-block:: C
+
+	static int ext4_file_mmap_prepare(struct vm_area_desc *desc)
+	{
+		int ret;
+		struct file *file = desc->file;
+		struct inode *inode = file->f_mapping->host;
+
+		...
+
+		file_accessed(file);
+		if (IS_DAX(file_inode(file))) {
+			desc->vm_ops = &ext4_dax_vm_ops;
+			vma_desc_set_flags(desc, VMA_HUGEPAGE_BIT);
+		} else {
+			desc->vm_ops = &ext4_file_vm_ops;
+		}
+		return 0;
+	}
+
+Importantly, you no longer have to dance around with reference counts or locks
+when updating these fields - **you can simply go ahead and change them**.
+
+Everything is taken care of by the mapping code.
+
+VMA Flags
+---------
+
+Along with ``mmap_prepare``, VMA flags have undergone an overhaul.
Where before
+you would invoke one of vm_flags_init(), vm_flags_reset(), vm_flags_set(),
+vm_flags_clear(), and vm_flags_mod() to modify flags (and to have the
+locking done correctly for you), this is no longer necessary.
+
+Also, the legacy approach of specifying VMA flags via ``VM_READ``, ``VM_WRITE``,
+etc. - i.e. using a ``VM_xxx`` macro - has changed too.
+
+When implementing mmap_prepare(), reference flags by their bit number, defined
+as a ``VMA_xxx_BIT`` macro, e.g. ``VMA_READ_BIT``, ``VMA_WRITE_BIT`` etc.,
+and use one of (where ``desc`` is a pointer to struct vm_area_desc):
+
+* ``vma_desc_test_flags(desc, ...)`` - Specify a comma-separated list of flags
+  you wish to test for (whether *any* are set), e.g. ``vma_desc_test_flags(
+  desc, VMA_WRITE_BIT, VMA_MAYWRITE_BIT)`` - returns ``true`` if either is set,
+  otherwise ``false``.
+* ``vma_desc_set_flags(desc, ...)`` - Update the VMA descriptor flags to set
+  additional flags specified by a comma-separated list,
+  e.g. ``vma_desc_set_flags(desc, VMA_PFNMAP_BIT, VMA_IO_BIT)``.
+* ``vma_desc_clear_flags(desc, ...)`` - Update the VMA descriptor flags to clear
+  flags specified by a comma-separated list, e.g. ``vma_desc_clear_flags(
+  desc, VMA_WRITE_BIT, VMA_MAYWRITE_BIT)``.
+
+Actions
+=======
+
+You can have actions performed on a mapping once it has been set up simply by
+invoking helper functions on the struct vm_area_desc pointer. These are:
+
+* mmap_action_remap() - Remaps a range consisting only of PFNs, starting at a
+  given virtual address and PFN and spanning a given size.
+
+* mmap_action_remap_full() - Same as mmap_action_remap(), only remaps the
+  entire mapping from ``start_pfn`` onward.
+
+* mmap_action_ioremap() - Same as mmap_action_remap(), only performs an I/O
+  remap.
+
+* mmap_action_ioremap_full() - Same as mmap_action_ioremap(), only remaps
+  the entire mapping from ``start_pfn`` onward.
+ +**NOTE:** The ``action`` field should never normally be manipulated direct= ly, +rather you ought to use one of these helpers. --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 81925364920; Mon, 16 Mar 2026 21:13:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695626; cv=none; b=Bv0LIzHrD4qqegOfmazeJ040a/8V2jqmxEMU/aeQACw1hEkXcNtKNKFxGg9ptakPSopCGovB2HcyTWIGtwMViumJQ5FpUTxbqJwypboX08O3c49O6oRn49psS7xryICCt2iH8RdIRX81G7xLRePXV6V/ugWM/gCpr+D9vHoubJk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695626; c=relaxed/simple; bh=ZYlwd7PUB52+Aa0tSM/x3ehotacgsAAPggxfy3ZY/yw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ltyxVmEB32WyrL4Vn9Bxfa14reAPpUKYjRQXTyYJekDK25axcrgMiCLX6CCqWLooxo2s0LBnbcGuFwdDgDcw4ADn1nd1E7KZx5xp6fz6aDAQS4GtOraCreCA7Y3fIB5ZXY8Zoc9VhW2Inu1HCSNheU9cEDD21YrLY6fIfdGChbk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lWJusVF2; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lWJusVF2" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45D81C2BCB0; Mon, 16 Mar 2026 21:13:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773695625; bh=ZYlwd7PUB52+Aa0tSM/x3ehotacgsAAPggxfy3ZY/yw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lWJusVF2FYj6O20AjF69vHaZdlhTuW/FCX7sLY2J2xLq8x+plZOOruT+CF4eJYvzv 
XjSYmPaEbveXWlez2yANSk5agjSZYes0IACOfpPEvuka5JegNzPZR0+OMZhsKdk2ot 0gTOgqPvBh17kkIac26c62pBaxMWDHJSOQ13DBcMm9h7Rcu1PBf31yVwHlyeTE01qO qo1+4JBJvqmYyOvv1Djssoj3ko/LzbF5GbbJBGE7xGsguAdY/Y8V0TWdDHhMzwYRMj my3q8mKn7Euc6QuFg7LNDQwpx7TcZ+q1HLSrruWE9RD2Z0nOIKQG4cYZDA7Ee1dXZH ql6we8wIwi+qQ== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts Subject: [PATCH v2 03/16] mm: document vm_operations_struct->open the same as close() Date: Mon, 16 Mar 2026 21:11:59 +0000 Message-ID: <3cec125f9eaf9dc44e638a56c76d12c58684af87.1773695307.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Describe when the operation is invoked and the context in which it is invoked, matching the description already added for vm_op->close(). While we're here, update all outdated references to an 'area' field for VMAs to the more consistent 'vma'. 
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 15 ++++++++++-----
 tools/testing/vma/include/dup.h |  5 +++++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1e63b3a44a47..da94edb287cd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -766,15 +766,20 @@ struct vm_uffd_ops;
  * to the functions called when a no-page or a wp-page exception occurs.
  */
 struct vm_operations_struct {
-	void (*open)(struct vm_area_struct * area);
+	/**
+	 * @open: Called when a VMA is remapped, split or forked. Not called
+	 * upon first mapping a VMA.
+	 * Context: User context. May sleep. Caller holds mmap_lock.
+	 */
+	void (*open)(struct vm_area_struct *vma);
 	/**
 	 * @close: Called when the VMA is being removed from the MM.
 	 * Context: User context. May sleep. Caller holds mmap_lock.
 	 */
-	void (*close)(struct vm_area_struct * area);
+	void (*close)(struct vm_area_struct *vma);
 	/* Called any time before splitting to check if it's allowed */
-	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
-	int (*mremap)(struct vm_area_struct *area);
+	int (*may_split)(struct vm_area_struct *vma, unsigned long addr);
+	int (*mremap)(struct vm_area_struct *vma);
 	/*
 	 * Called by mprotect() to make driver-specific permission
 	 * checks before mprotect() is finalised.
The VMA must not
@@ -786,7 +791,7 @@ struct vm_operations_struct {
 	vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
 	vm_fault_t (*map_pages)(struct vm_fault *vmf,
 			pgoff_t start_pgoff, pgoff_t end_pgoff);
-	unsigned long (*pagesize)(struct vm_area_struct * area);
+	unsigned long (*pagesize)(struct vm_area_struct *vma);

 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 9eada1e0949c..ccf1f061c65a 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -632,6 +632,11 @@ struct vm_area_struct {
 } __randomize_layout;

 struct vm_operations_struct {
+	/**
+	 * @open: Called when a VMA is remapped, split or forked. Not called
+	 * upon first mapping a VMA.
+	 * Context: User context. May sleep. Caller holds mmap_lock.
+	 */
 	void (*open)(struct vm_area_struct * area);
 	/**
 	 * @close: Called when the VMA is being removed from the MM.
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 04/16] mm: add vm_ops->mapped hook
Date: Mon, 16 Mar 2026 21:12:00 +0000
Message-ID: <700b3a31185c1b4255c8410c7724ffd123488467.1773695307.git.ljs@kernel.org>

Previously, when a driver needed to do something like establish a
reference count, it could do so in the mmap hook in the knowledge that
the mapping would succeed. With the introduction of f_op->mmap_prepare
this is no longer the case, as it is invoked prior to actually
establishing the mapping.
mmap_prepare is not appropriate for this kind of thing, as it is called
before any merge might take place, and an error might still occur after
it is invoked, meaning resources could be leaked.

To take this into account, introduce a new vm_ops->mapped callback which
is invoked when the VMA is first mapped (though notably not when it is
merged - which is correct and mirrors existing mmap/open/close
behaviour).

We do better than vm_ops->open() here, as this callback can return an
error, at which point the VMA will be unmapped.

Note that vm_ops->mapped() is invoked after any mmap action is complete
(such as I/O remapping). We intentionally do not expose the VMA at this
point, exposing only the fields that could be used, and an output
parameter in case the operation needs to update the vma->vm_private_data
field.

In order to deal with stacked filesystems which invoke an inner
filesystem's mmap() hook, add __compat_vma_mapped() and invoke it in
vfs_mmap() (via compat_vma_mmap()) to ensure that the mapped callback is
handled when an mmap() caller invokes a nested filesystem's
mmap_prepare() callback.

We can now also remove call_action_complete() and invoke
mmap_action_complete() directly, as we separate out the rmap lock logic
to be called in __mmap_region() instead via maybe_drop_file_rmap_lock().

We also abstract unmapping of a VMA on mmap action completion into its
own helper function, unmap_vma_locked().

Update the mmap_prepare documentation to describe the mapped hook and
make it clear what its intended use is.

Additionally, update VMA userland test headers to reflect the change.
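As an illustration of the intended use, a driver converted to mmap_prepare
might take its per-mapping reference in the new hook rather than in mmap. The
driver name, state structure and error value below are hypothetical sketches
based on the hook signature introduced here, not code from this series:

```c
/* Sketch only - mydrv_state, mydrv_vm_close and -ESTALE are illustrative. */
static int mydrv_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
			const struct file *file, void **vm_private_data)
{
	/* Safe: invoked only once the mapping is established, not on merge. */
	struct mydrv_state *state = file->private_data;

	if (!refcount_inc_not_zero(&state->refs))
		return -ESTALE;	/* An error here causes the VMA to be unmapped. */
	*vm_private_data = state;	/* Propagated to vma->vm_private_data. */
	return 0;
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.mapped	= mydrv_mapped,	/* Only valid when set from f_op->mmap_prepare. */
	.close	= mydrv_vm_close,
};
```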
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 Documentation/filesystems/mmap_prepare.rst | 15 ++++
 include/linux/fs.h                         |  9 ++-
 include/linux/mm.h                         | 17 +++++
 mm/internal.h                              |  8 ++
 mm/util.c                                  | 85 ++++++++++++++------
 mm/vma.c                                   | 41 ++++++++---
 tools/testing/vma/include/dup.h            | 27 ++++++-
 7 files changed, 164 insertions(+), 38 deletions(-)

diff --git a/Documentation/filesystems/mmap_prepare.rst b/Documentation/filesystems/mmap_prepare.rst
index 65a1f094e469..20db474915da 100644
--- a/Documentation/filesystems/mmap_prepare.rst
+++ b/Documentation/filesystems/mmap_prepare.rst
@@ -25,6 +25,21 @@ That is - no resources should be allocated nor state updated to reflect that a
 mapping has been established, as the mapping may either be merged, or fail to
 be mapped after the callback is complete.

+Mapped callback
+---------------
+
+If resources need to be allocated per-mapping, or state such as a reference
+count needs to be manipulated, this should be done using the ``vm_ops->mapped``
+hook, which itself should be set by the ->mmap_prepare hook.
+
+This callback is only invoked if a new mapping has been established and was not
+merged with any other, and is invoked at a point where no error may occur before
+the mapping is established.
+
+You may return an error from the callback itself, which will cause the mapping to
+become unmapped and an error returned to the mmap() caller. This is useful if
+resources need to be allocated, and that allocation might fail.
+
 How To Use
 ==========

diff --git a/include/linux/fs.h b/include/linux/fs.h
index a2628a12bd2b..c390f5c667e3 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2059,13 +2059,20 @@ static inline bool can_mmap_file(struct file *file)
 }

 int compat_vma_mmap(struct file *file, struct vm_area_struct *vma);
+int __vma_check_mmap_hook(struct vm_area_struct *vma);

 static inline int vfs_mmap(struct file *file, struct vm_area_struct *vma)
 {
+	int err;
+
 	if (file->f_op->mmap_prepare)
 		return compat_vma_mmap(file, vma);

-	return file->f_op->mmap(file, vma);
+	err = file->f_op->mmap(file, vma);
+	if (err)
+		return err;
+
+	return __vma_check_mmap_hook(vma);
 }

 static inline int vfs_mmap_prepare(struct file *file, struct vm_area_desc *desc)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index da94edb287cd..ad1b8c3c0cfd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -777,6 +777,23 @@ struct vm_operations_struct {
 	 * Context: User context. May sleep. Caller holds mmap_lock.
 	 */
 	void (*close)(struct vm_area_struct *vma);
+	/**
+	 * @mapped: Called when the VMA is first mapped in the MM. Not called if
+	 * the new VMA is merged with an adjacent VMA.
+	 *
+	 * The @vm_private_data field is an output field allowing the user to
+	 * modify vma->vm_private_data as necessary.
+	 *
+	 * ONLY valid if set from f_op->mmap_prepare. Will result in an error if
+	 * set from f_op->mmap.
+	 *
+	 * Returns %0 on success, or an error otherwise. On error, the VMA will
+	 * be unmapped.
+	 *
+	 * Context: User context. May sleep. Caller holds mmap_lock.
+	 */
+	int (*mapped)(unsigned long start, unsigned long end, pgoff_t pgoff,
+		      const struct file *file, void **vm_private_data);
 	/* Called any time before splitting to check if it's allowed */
 	int (*may_split)(struct vm_area_struct *vma, unsigned long addr);
 	int (*mremap)(struct vm_area_struct *vma);
diff --git a/mm/internal.h b/mm/internal.h
index 9e42a57e8a12..f5774892071e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -202,6 +202,14 @@ static inline void vma_close(struct vm_area_struct *vma)
 /* unmap_vmas is in mm/memory.c */
 void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap);

+static inline void unmap_vma_locked(struct vm_area_struct *vma)
+{
+	const size_t len = vma_pages(vma) << PAGE_SHIFT;
+
+	mmap_assert_write_locked(vma->vm_mm);
+	do_munmap(vma->vm_mm, vma->vm_start, len, NULL);
+}
+
 #ifdef CONFIG_MMU

 static inline void get_anon_vma(struct anon_vma *anon_vma)
diff --git a/mm/util.c b/mm/util.c
index ac9dd6490523..cdfba09e50d7 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1163,6 +1163,54 @@ void flush_dcache_folio(struct folio *folio)
 EXPORT_SYMBOL(flush_dcache_folio);
 #endif

+static int __compat_vma_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct vm_area_desc desc = {
+		.mm = vma->vm_mm,
+		.file = file,
+		.start = vma->vm_start,
+		.end = vma->vm_end,
+
+		.pgoff = vma->vm_pgoff,
+		.vm_file = vma->vm_file,
+		.vma_flags = vma->flags,
+		.page_prot = vma->vm_page_prot,
+
+		.action.type = MMAP_NOTHING, /* Default */
+	};
+	int err;
+
+	err = vfs_mmap_prepare(file, &desc);
+	if (err)
+		return err;
+
+	err = mmap_action_prepare(&desc);
+	if (err)
+		return err;
+
+	set_vma_from_desc(vma, &desc);
+	return mmap_action_complete(vma, &desc.action);
+}
+
+static int __compat_vma_mapped(struct file *file, struct vm_area_struct *vma)
+{
+	const struct vm_operations_struct *vm_ops = vma->vm_ops;
+	void *vm_private_data = vma->vm_private_data;
+	int err;
+
+	if (!vm_ops || !vm_ops->mapped)
+		return 0;
+
+	err = vm_ops->mapped(vma->vm_start, vma->vm_end, vma->vm_pgoff, file,
+			     &vm_private_data);
+	if (err)
+		unmap_vma_locked(vma);
+	else if (vm_private_data != vma->vm_private_data)
+		vma->vm_private_data = vm_private_data;
+
+	return err;
+}
+
 /**
  * compat_vma_mmap() - Apply the file's .mmap_prepare() hook to an
  * existing VMA and execute any requested actions.
@@ -1191,34 +1239,26 @@ EXPORT_SYMBOL(flush_dcache_folio);
  */
 int compat_vma_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	struct vm_area_desc desc = {
-		.mm = vma->vm_mm,
-		.file = file,
-		.start = vma->vm_start,
-		.end = vma->vm_end,
-
-		.pgoff = vma->vm_pgoff,
-		.vm_file = vma->vm_file,
-		.vma_flags = vma->flags,
-		.page_prot = vma->vm_page_prot,
-
-		.action.type = MMAP_NOTHING, /* Default */
-	};
 	int err;

-	err = vfs_mmap_prepare(file, &desc);
-	if (err)
-		return err;
-
-	err = mmap_action_prepare(&desc);
+	err = __compat_vma_mmap(file, vma);
 	if (err)
 		return err;

-	set_vma_from_desc(vma, &desc);
-	return mmap_action_complete(vma, &desc.action);
+	return __compat_vma_mapped(file, vma);
 }
 EXPORT_SYMBOL(compat_vma_mmap);

+int __vma_check_mmap_hook(struct vm_area_struct *vma)
+{
+	/* vm_ops->mapped is not valid if mmap() is specified. */
+	if (vma->vm_ops && WARN_ON_ONCE(vma->vm_ops->mapped))
+		return -EINVAL;
+
+	return 0;
+}
+EXPORT_SYMBOL(__vma_check_mmap_hook);
+
 static void set_ps_flags(struct page_snapshot *ps, const struct folio *folio,
			 const struct page *page)
 {
@@ -1316,10 +1356,7 @@ static int mmap_action_finish(struct vm_area_struct *vma,
	 * invoked if we do NOT merge, so we only clean up the VMA we created.
	 */
	if (err) {
-		const size_t len = vma_pages(vma) << PAGE_SHIFT;
-
-		do_munmap(current->mm, vma->vm_start, len, NULL);
-
+		unmap_vma_locked(vma);
		if (action->error_hook) {
			/* We may want to filter the error.
			 */
			err = action->error_hook(err);
diff --git a/mm/vma.c b/mm/vma.c
index 2a86c7575000..3a0fb2caa1c6 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2731,21 +2731,35 @@ static bool can_set_ksm_flags_early(struct mmap_state *map)
 	return false;
 }

-static int call_action_complete(struct mmap_state *map,
-				struct mmap_action *action,
-				struct vm_area_struct *vma)
+static int call_mapped_hook(struct vm_area_struct *vma)
 {
-	int ret;
+	const struct vm_operations_struct *vm_ops = vma->vm_ops;
+	void *vm_private_data = vma->vm_private_data;
+	int err;

-	ret = mmap_action_complete(vma, action);
+	if (!vm_ops || !vm_ops->mapped)
+		return 0;
+	err = vm_ops->mapped(vma->vm_start, vma->vm_end, vma->vm_pgoff,
+			     vma->vm_file, &vm_private_data);
+	if (err) {
+		unmap_vma_locked(vma);
+		return err;
+	}
+	/* Update private data if changed. */
+	if (vm_private_data != vma->vm_private_data)
+		vma->vm_private_data = vm_private_data;
+	return 0;
+}

-	/* If we held the file rmap we need to release it. */
-	if (map->hold_file_rmap_lock) {
-		struct file *file = vma->vm_file;
+static void maybe_drop_file_rmap_lock(struct mmap_state *map,
+				      struct vm_area_struct *vma)
+{
+	struct file *file;

-		i_mmap_unlock_write(file->f_mapping);
-	}
-	return ret;
+	if (!map->hold_file_rmap_lock)
+		return;
+	file = vma->vm_file;
+	i_mmap_unlock_write(file->f_mapping);
 }

 static unsigned long __mmap_region(struct file *file, unsigned long addr,
@@ -2799,8 +2813,11 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 	__mmap_complete(&map, vma);

 	if (have_mmap_prepare && allocated_new) {
-		error = call_action_complete(&map, &desc.action, vma);
+		error = mmap_action_complete(vma, &desc.action);
+		if (!error)
+			error = call_mapped_hook(vma);

+		maybe_drop_file_rmap_lock(&map, vma);
 		if (error)
 			return error;
 	}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index ccf1f061c65a..4570ec77f153 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -642,7 +642,24 @@ struct vm_operations_struct {
 	 * @close: Called when the VMA is being removed from the MM.
 	 * Context: User context. May sleep. Caller holds mmap_lock.
 	 */
-	void (*close)(struct vm_area_struct * area);
+	void (*close)(struct vm_area_struct *vma);
+	/**
+	 * @mapped: Called when the VMA is first mapped in the MM. Not called if
+	 * the new VMA is merged with an adjacent VMA.
+	 *
+	 * The @vm_private_data field is an output field allowing the user to
+	 * modify vma->vm_private_data as necessary.
+	 *
+	 * ONLY valid if set from f_op->mmap_prepare. Will result in an error if
+	 * set from f_op->mmap.
+	 *
+	 * Returns %0 on success, or an error otherwise. On error, the VMA will
+	 * be unmapped.
+	 *
+	 * Context: User context. May sleep. Caller holds mmap_lock.
+	 */
+	int (*mapped)(unsigned long start, unsigned long end, pgoff_t pgoff,
+		      const struct file *file, void **vm_private_data);
 	/* Called any time before splitting to check if it's allowed */
 	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
 	int (*mremap)(struct vm_area_struct *area);
@@ -1500,3 +1517,11 @@ static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)

 	return vm_get_page_prot(vm_flags);
 }
+
+static inline void unmap_vma_locked(struct vm_area_struct *vma)
+{
+	const size_t len = vma_pages(vma) << PAGE_SHIFT;
+
+	mmap_assert_write_locked(vma->vm_mm);
+	do_munmap(vma->vm_mm, vma->vm_start, len, NULL);
+}
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 05/16] fs: afs: correctly drop reference count on mapping failure
Date: Mon, 16 Mar 2026 21:12:01 +0000
Message-ID: <7adcb09e6817fadf9572a96893e95eacfe6fc4cc.1773695307.git.ljs@kernel.org>

Commit 9d5403b1036c ("fs: convert most other generic_file_*mmap() users
to .mmap_prepare()") updated AFS to use the mmap_prepare callback in
favour of the deprecated mmap callback.

However, it did not account for the fact that mmap_prepare is called
pre-merge, after which the VMA may be merged, nor that the mapping can
still fail afterwards, for instance due to an out of memory error. Both
of those are cases in which we should not be incrementing a reference
count.

With the newly added vm_ops->mapped callback available, we can simply
defer this operation to that callback, which is only invoked once the
mapping is successfully in place (but not yet visible to userspace, as
the mmap and VMA write locks are held).

Therefore add afs_mapped() to implement this callback for AFS, and
remove the reference counting from afs_file_mmap_prepare(). Also update
afs_vm_open(), afs_vm_close() and afs_vm_map_pages() to be consistent in
how the vnode is accessed.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 fs/afs/file.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index f609366fd2ac..85696ac984cc 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -28,6 +28,8 @@ static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
 static void afs_vm_open(struct vm_area_struct *area);
 static void afs_vm_close(struct vm_area_struct *area);
 static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff);
+static int afs_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+		      const struct file *file, void **vm_private_data);

 const struct file_operations afs_file_operations = {
	.open		= afs_open,
@@ -61,6 +63,7 @@ const struct address_space_operations afs_file_aops = {
 };

 static const struct vm_operations_struct afs_vm_ops = {
+	.mapped		= afs_mapped,
	.open		= afs_vm_open,
	.close		= afs_vm_close,
	.fault		= filemap_fault,
@@ -494,32 +497,45 @@ static void afs_drop_open_mmap(struct afs_vnode *vnode)
  */
 static int afs_file_mmap_prepare(struct vm_area_desc *desc)
 {
-	struct afs_vnode *vnode = AFS_FS_I(file_inode(desc->file));
 	int ret;

-	afs_add_open_mmap(vnode);
 	ret = generic_file_mmap_prepare(desc);
-	if (ret == 0)
-		desc->vm_ops = &afs_vm_ops;
-	else
-		afs_drop_open_mmap(vnode);
+	if (ret)
+		return ret;
+
+	desc->vm_ops = &afs_vm_ops;
 	return ret;
 }

+static int afs_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+		      const struct file *file, void **vm_private_data)
+{
+	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+
+	afs_add_open_mmap(vnode);
+	return 0;
+}
+
 static void afs_vm_open(struct vm_area_struct *vma)
 {
-	afs_add_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
+	struct file *file = vma->vm_file;
+	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+
+	afs_add_open_mmap(vnode);
 }

 static void afs_vm_close(struct vm_area_struct *vma)
 {
-	afs_drop_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
+	struct file *file = vma->vm_file;
+	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+
+	afs_drop_open_mmap(vnode);
 }

 static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
-	struct afs_vnode *vnode = AFS_FS_I(file_inode(vmf->vma->vm_file));
+	struct file *file = vmf->vma->vm_file;
+	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));

 	if (afs_check_validity(vnode))
 		return filemap_map_pages(vmf, start_pgoff, end_pgoff);
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 06/16] mm: add mmap_action_simple_ioremap()
Date: Mon, 16 Mar 2026 21:12:02 +0000
Message-ID: <1e58aaf3cdb61cc317d890c12c9a558dfc206913.1773695307.git.ljs@kernel.org>

Currently drivers use vm_iomap_memory() as a simple helper function for
I/O remapping memory over a range starting at a specified physical
address and spanning a specified length.

In order to utilise this from mmap_prepare, separate out the core logic
into __simple_ioremap_prep(), update vm_iomap_memory() to use it, and
add simple_ioremap_prepare() to do the same with a VMA descriptor
object.

We also add MMAP_SIMPLE_IO_REMAP and relevant fields to the struct
mmap_action type to permit this operation. We use mmap_action_ioremap()
to set up the actual I/O remap operation once we have checked and
figured out the parameters, which makes simple_ioremap_prepare() easy to
implement.

We then add mmap_action_simple_ioremap() to allow drivers to make use of
this mode, and update the mmap_prepare documentation to describe it.

Finally, we update the VMA tests to reflect this change.
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 Documentation/filesystems/mmap_prepare.rst |  3 +
 include/linux/mm.h                         | 24 +++++-
 include/linux/mm_types.h                   |  6 +-
 mm/internal.h                              |  2 +
 mm/memory.c                                | 87 +++++++++++++-------
 mm/util.c                                  | 12 +++
 tools/testing/vma/include/dup.h            |  6 +-
 7 files changed, 112 insertions(+), 28 deletions(-)

diff --git a/Documentation/filesystems/mmap_prepare.rst b/Documentation/filesystems/mmap_prepare.rst
index 20db474915da..be76ae475b9c 100644
--- a/Documentation/filesystems/mmap_prepare.rst
+++ b/Documentation/filesystems/mmap_prepare.rst
@@ -153,5 +153,8 @@ pointer. These are:
 * mmap_action_ioremap_full() - Same as mmap_action_ioremap(), only remaps
   the entire mapping from ``start_pfn`` onward.

+* mmap_action_simple_ioremap() - Sets up an I/O remap from a specified
+  physical address and over a specified length.
+
 **NOTE:** The ``action`` field should never normally be manipulated directly,
 rather you ought to use one of these helpers.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad1b8c3c0cfd..df8fa6e6402b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4337,11 +4337,33 @@ static inline void mmap_action_ioremap(struct vm_area_desc *desc,
  * @start_pfn: The first PFN in the range to remap.
  */
 static inline void mmap_action_ioremap_full(struct vm_area_desc *desc,
-					unsigned long start_pfn)
+					    unsigned long start_pfn)
 {
 	mmap_action_ioremap(desc, desc->start, start_pfn, vma_desc_size(desc));
 }

+/**
+ * mmap_action_simple_ioremap - helper for mmap_prepare hook to specify that the
+ * physical range in [start_phys_addr, start_phys_addr + size) should be I/O
+ * remapped.
+ * @desc: The VMA descriptor for the VMA requiring remap.
+ * @start_phys_addr: Start of the physical memory to be mapped.
+ * @size: Size of the area to map.
+ *
+ * NOTE: Some drivers might want to tweak desc->page_prot for purposes of
+ * write-combine or similar.
+ */
+static inline void mmap_action_simple_ioremap(struct vm_area_desc *desc,
+					      phys_addr_t start_phys_addr,
+					      unsigned long size)
+{
+	struct mmap_action *action = &desc->action;
+
+	action->simple_ioremap.start_phys_addr = start_phys_addr;
+	action->simple_ioremap.size = size;
+	action->type = MMAP_SIMPLE_IO_REMAP;
+}
+
 int mmap_action_prepare(struct vm_area_desc *desc);
 int mmap_action_complete(struct vm_area_struct *vma,
			 struct mmap_action *action);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4a229cc0a06b..50685cf29792 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -814,6 +814,7 @@ enum mmap_action_type {
 	MMAP_NOTHING,		/* Mapping is complete, no further action. */
 	MMAP_REMAP_PFN,		/* Remap PFN range. */
 	MMAP_IO_REMAP_PFN,	/* I/O remap PFN range. */
+	MMAP_SIMPLE_IO_REMAP,	/* I/O remap with guardrails. */
 };

 /*
@@ -822,13 +823,16 @@ enum mmap_action_type {
  */
 struct mmap_action {
	union {
-		/* Remap range. */
		struct {
			unsigned long start;
			unsigned long start_pfn;
			unsigned long size;
			pgprot_t pgprot;
		} remap;
+		struct {
+			phys_addr_t start_phys_addr;
+			unsigned long size;
+		} simple_ioremap;
	};
	enum mmap_action_type type;

diff --git a/mm/internal.h b/mm/internal.h
index f5774892071e..0eaca2f0eb6a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1804,6 +1804,8 @@ int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm);
 int remap_pfn_range_prepare(struct vm_area_desc *desc);
 int remap_pfn_range_complete(struct vm_area_struct *vma,
			     struct mmap_action *action);
+int simple_ioremap_prepare(struct vm_area_desc *desc);
+/* No simple_ioremap_complete, is ultimately handled by remap complete.
*/ =20 static inline int io_remap_pfn_range_prepare(struct vm_area_desc *desc) { diff --git a/mm/memory.c b/mm/memory.c index 9dec67a18116..f3f4046aee97 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3170,6 +3170,59 @@ int remap_pfn_range_complete(struct vm_area_struct *= vma, return do_remap_pfn_range(vma, start, pfn, size, prot); } =20 +static int __simple_ioremap_prep(unsigned long vm_start, unsigned long vm_= end, + pgoff_t vm_pgoff, phys_addr_t start_phys, + unsigned long size, unsigned long *pfnp) +{ + const unsigned long vm_len =3D vm_end - vm_start; + unsigned long pfn, pages; + + /* Check that the physical memory area passed in looks valid */ + if (start_phys + size < start_phys) + return -EINVAL; + /* + * You *really* shouldn't map things that aren't page-aligned, + * but we've historically allowed it because IO memory might + * just have smaller alignment. + */ + size +=3D start_phys & ~PAGE_MASK; + pfn =3D start_phys >> PAGE_SHIFT; + pages =3D (size + ~PAGE_MASK) >> PAGE_SHIFT; + if (pfn + pages < pfn) + return -EINVAL; + + /* We start the mapping 'vm_pgoff' pages into the area */ + if (vm_pgoff > pages) + return -EINVAL; + pfn +=3D vm_pgoff; + pages -=3D vm_pgoff; + + /* Can we fit all of the mapping? */ + if ((vm_len >> PAGE_SHIFT) > pages) + return -EINVAL; + + *pfnp =3D pfn; + return 0; +} + +int simple_ioremap_prepare(struct vm_area_desc *desc) +{ + struct mmap_action *action =3D &desc->action; + const phys_addr_t start =3D action->simple_ioremap.start_phys_addr; + const unsigned long size =3D action->simple_ioremap.size; + unsigned long pfn; + int err; + + err =3D __simple_ioremap_prep(desc->start, desc->end, desc->pgoff, + start, size, &pfn); + if (err) + return err; + + /* The I/O remap logic does the heavy lifting. 
*/ + mmap_action_ioremap(desc, desc->start, pfn, vma_desc_size(desc)); + return mmap_action_prepare(desc); +} + /** * vm_iomap_memory - remap memory to userspace * @vma: user vma to map to @@ -3187,32 +3240,16 @@ int remap_pfn_range_complete(struct vm_area_struct = *vma, */ int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigne= d long len) { - unsigned long vm_len, pfn, pages; - - /* Check that the physical memory area passed in looks valid */ - if (start + len < start) - return -EINVAL; - /* - * You *really* shouldn't map things that aren't page-aligned, - * but we've historically allowed it because IO memory might - * just have smaller alignment. - */ - len +=3D start & ~PAGE_MASK; - pfn =3D start >> PAGE_SHIFT; - pages =3D (len + ~PAGE_MASK) >> PAGE_SHIFT; - if (pfn + pages < pfn) - return -EINVAL; - - /* We start the mapping 'vm_pgoff' pages into the area */ - if (vma->vm_pgoff > pages) - return -EINVAL; - pfn +=3D vma->vm_pgoff; - pages -=3D vma->vm_pgoff; + const unsigned long vm_start =3D vma->vm_start; + const unsigned long vm_end =3D vma->vm_end; + const unsigned long vm_len =3D vm_end - vm_start; + unsigned long pfn; + int err; =20 - /* Can we fit all of the mapping? 
*/ - vm_len =3D vma->vm_end - vma->vm_start; - if (vm_len >> PAGE_SHIFT > pages) - return -EINVAL; + err =3D __simple_ioremap_prep(vm_start, vm_end, vma->vm_pgoff, start, + len, &pfn); + if (err) + return err; =20 /* Ok, let it rip */ return io_remap_pfn_range(vma, vma->vm_start, pfn, vm_len, vma->vm_page_p= rot); diff --git a/mm/util.c b/mm/util.c index cdfba09e50d7..aa92e471afe1 100644 --- a/mm/util.c +++ b/mm/util.c @@ -1390,6 +1390,8 @@ int mmap_action_prepare(struct vm_area_desc *desc) return remap_pfn_range_prepare(desc); case MMAP_IO_REMAP_PFN: return io_remap_pfn_range_prepare(desc); + case MMAP_SIMPLE_IO_REMAP: + return simple_ioremap_prepare(desc); } =20 WARN_ON_ONCE(1); @@ -1421,6 +1423,14 @@ int mmap_action_complete(struct vm_area_struct *vma, case MMAP_IO_REMAP_PFN: err =3D io_remap_pfn_range_complete(vma, action); break; + case MMAP_SIMPLE_IO_REMAP: + /* + * The simple I/O remap should have been delegated to an I/O + * remap. + */ + WARN_ON_ONCE(1); + err =3D -EINVAL; + break; } =20 return mmap_action_finish(vma, action, err); @@ -1434,6 +1444,7 @@ int mmap_action_prepare(struct vm_area_desc *desc) break; case MMAP_REMAP_PFN: case MMAP_IO_REMAP_PFN: + case MMAP_SIMPLE_IO_REMAP: WARN_ON_ONCE(1); /* nommu cannot handle these. */ break; } @@ -1452,6 +1463,7 @@ int mmap_action_complete(struct vm_area_struct *vma, break; case MMAP_REMAP_PFN: case MMAP_IO_REMAP_PFN: + case MMAP_SIMPLE_IO_REMAP: WARN_ON_ONCE(1); /* nommu cannot handle this. */ =20 err =3D -EINVAL; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 4570ec77f153..114daaef4f73 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -453,6 +453,7 @@ enum mmap_action_type { MMAP_NOTHING, /* Mapping is complete, no further action. */ MMAP_REMAP_PFN, /* Remap PFN range. */ MMAP_IO_REMAP_PFN, /* I/O remap PFN range. */ + MMAP_SIMPLE_IO_REMAP, /* I/O remap with guardrails. 
*/ }; =20 /* @@ -461,13 +462,16 @@ enum mmap_action_type { */ struct mmap_action { union { - /* Remap range. */ struct { unsigned long start; unsigned long start_pfn; unsigned long size; pgprot_t pgprot; } remap; + struct { + phys_addr_t start_phys_addr; + unsigned long size; + } simple_ioremap; }; enum mmap_action_type type; =20 --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v2 07/16] misc: open-dice: replace deprecated mmap hook with mmap_prepare Date: Mon, 16 Mar 2026 21:12:03 +0000 Message-ID: <77fbdae93f250fa1551f3052fc9034739795ff20.1773695307.git.ljs@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The f_op->mmap interface is deprecated, so update the driver to use its successor, mmap_prepare.
The driver previously used vm_iomap_memory(), so this change replaces it with its mmap_prepare equivalent, mmap_action_simple_ioremap(). Signed-off-by: Lorenzo Stoakes (Oracle) Reviewed-by: Suren Baghdasaryan --- drivers/misc/open-dice.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c index 24c29e0f00ef..45060fb4ea27 100644 --- a/drivers/misc/open-dice.c +++ b/drivers/misc/open-dice.c @@ -86,29 +86,32 @@ static ssize_t open_dice_write(struct file *filp, const= char __user *ptr, /* * Creates a mapping of the reserved memory region in user address space. */ -static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma) +static int open_dice_mmap_prepare(struct vm_area_desc *desc) { + struct file *filp =3D desc->file; struct open_dice_drvdata *drvdata =3D to_open_dice_drvdata(filp); =20 - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_desc_test(desc, VMA_MAYSHARE_BIT)) { /* Do not allow userspace to modify the underlying data. */ - if (vma->vm_flags & VM_WRITE) + if (vma_desc_test(desc, VMA_WRITE_BIT)) return -EPERM; /* Ensure userspace cannot acquire VM_WRITE later. */ - vm_flags_clear(vma, VM_MAYWRITE); + vma_desc_clear_flags(desc, VMA_MAYWRITE_BIT); } =20 /* Create write-combine mapping so all clients observe a wipe. 
*/ - vma->vm_page_prot =3D pgprot_writecombine(vma->vm_page_prot); - vm_flags_set(vma, VM_DONTCOPY | VM_DONTDUMP); - return vm_iomap_memory(vma, drvdata->rmem->base, drvdata->rmem->size); + desc->page_prot =3D pgprot_writecombine(desc->page_prot); + vma_desc_set_flags(desc, VMA_DONTCOPY_BIT, VMA_DONTDUMP_BIT); + mmap_action_simple_ioremap(desc, drvdata->rmem->base, + drvdata->rmem->size); + return 0; } =20 static const struct file_operations open_dice_fops =3D { .owner =3D THIS_MODULE, .read =3D open_dice_read, .write =3D open_dice_write, - .mmap =3D open_dice_mmap, + .mmap_prepare =3D open_dice_mmap_prepare, }; =20 static int __init open_dice_probe(struct platform_device *pdev) --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v2 08/16] hpet: replace deprecated mmap hook with mmap_prepare Date: Mon, 16 Mar 2026 21:12:04 +0000 Message-ID: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The f_op->mmap interface is deprecated, so update the driver to use its successor, mmap_prepare. The driver previously used vm_iomap_memory(), so this change replaces it with its mmap_prepare equivalent, mmap_action_simple_ioremap().
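The range validation that backs mmap_action_simple_ioremap() — factored out of vm_iomap_memory() into __simple_ioremap_prep() in patch 01 of this series — can be sketched as a userspace model. This is an illustrative sketch only: the PAGE_* constants, the plain -1 error value, and the name simple_ioremap_prep_model are assumptions for the example, not kernel API.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the prepare-time range checks; 4 KiB pages assumed. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

static int simple_ioremap_prep_model(unsigned long vm_start, unsigned long vm_end,
				     unsigned long vm_pgoff, uint64_t start_phys,
				     unsigned long size, unsigned long *pfnp)
{
	const unsigned long vm_len = vm_end - vm_start;
	unsigned long pfn, pages;

	/* Physical range must not wrap around. */
	if (start_phys + size < start_phys)
		return -1;
	/* Tolerate sub-page-aligned I/O memory by widening the range. */
	size += start_phys & ~PAGE_MASK;
	pfn = start_phys >> PAGE_SHIFT;
	/* Round up to whole pages; reject wrap of the pfn range. */
	pages = (size + ~PAGE_MASK) >> PAGE_SHIFT;
	if (pfn + pages < pfn)
		return -1;
	/* The mapping starts vm_pgoff pages into the region. */
	if (vm_pgoff > pages)
		return -1;
	pfn += vm_pgoff;
	pages -= vm_pgoff;
	/* The VMA must fit inside what remains of the region. */
	if ((vm_len >> PAGE_SHIFT) > pages)
		return -1;
	*pfnp = pfn;
	return 0;
}
```

The point of running these checks at prepare time, rather than inside f_op->mmap, is that a bad request is rejected before any VMA state has been established.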
Signed-off-by: Lorenzo Stoakes (Oracle) Reviewed-by: Suren Baghdasaryan --- drivers/char/hpet.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c index 60dd09a56f50..8f128cc40147 100644 --- a/drivers/char/hpet.c +++ b/drivers/char/hpet.c @@ -354,8 +354,9 @@ static __init int hpet_mmap_enable(char *str) } __setup("hpet_mmap=3D", hpet_mmap_enable); =20 -static int hpet_mmap(struct file *file, struct vm_area_struct *vma) +static int hpet_mmap_prepare(struct vm_area_desc *desc) { + struct file *file =3D desc->file; struct hpet_dev *devp; unsigned long addr; =20 @@ -368,11 +369,12 @@ static int hpet_mmap(struct file *file, struct vm_are= a_struct *vma) if (addr & (PAGE_SIZE - 1)) return -ENOSYS; =20 - vma->vm_page_prot =3D pgprot_noncached(vma->vm_page_prot); - return vm_iomap_memory(vma, addr, PAGE_SIZE); + desc->page_prot =3D pgprot_noncached(desc->page_prot); + mmap_action_simple_ioremap(desc, addr, PAGE_SIZE); + return 0; } #else -static int hpet_mmap(struct file *file, struct vm_area_struct *vma) +static int hpet_mmap_prepare(struct vm_area_desc *desc) { return -ENOSYS; } @@ -710,7 +712,7 @@ static const struct file_operations hpet_fops =3D { .open =3D hpet_open, .release =3D hpet_release, .fasync =3D hpet_fasync, - .mmap =3D hpet_mmap, + .mmap_prepare =3D hpet_mmap_prepare, }; =20 static int hpet_is_known(struct hpet_data *hdp) --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v2 09/16] mtdchar: replace deprecated mmap hook with mmap_prepare, clean up Date: Mon, 16 Mar 2026 21:12:05 +0000 Message-ID: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Replace the deprecated mmap callback with mmap_prepare. Commit f5cf8f07423b ("mtd: Disable mtdchar mmap on MMU systems") commented out the CONFIG_MMU part of this function back in 2012, so after ~14 years it's probably reasonable to remove this altogether rather than updating dead code.
Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Richard Weinberger --- drivers/mtd/mtdchar.c | 21 +++------------------ 1 file changed, 3 insertions(+), 18 deletions(-) diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c index 55a43682c567..bf01e6ac7293 100644 --- a/drivers/mtd/mtdchar.c +++ b/drivers/mtd/mtdchar.c @@ -1376,27 +1376,12 @@ static unsigned mtdchar_mmap_capabilities(struct fi= le *file) /* * set up a mapping for shared memory segments */ -static int mtdchar_mmap(struct file *file, struct vm_area_struct *vma) +static int mtdchar_mmap_prepare(struct vm_area_desc *desc) { #ifdef CONFIG_MMU - struct mtd_file_info *mfi =3D file->private_data; - struct mtd_info *mtd =3D mfi->mtd; - struct map_info *map =3D mtd->priv; - - /* This is broken because it assumes the MTD device is map-based - and that mtd->priv is a valid struct map_info. It should be - replaced with something that uses the mtd_get_unmapped_area() - operation properly. */ - if (0 /*mtd->type =3D=3D MTD_RAM || mtd->type =3D=3D MTD_ROM*/) { -#ifdef pgprot_noncached - if (file->f_flags & O_DSYNC || map->phys >=3D __pa(high_memory)) - vma->vm_page_prot =3D pgprot_noncached(vma->vm_page_prot); -#endif - return vm_iomap_memory(vma, map->phys, map->size); - } return -ENODEV; #else - return vma->vm_flags & VM_SHARED ? 0 : -EACCES; + return vma_desc_test(desc, VMA_SHARED_BIT) ? 
0 : -EACCES; #endif } =20 @@ -1411,7 +1396,7 @@ static const struct file_operations mtd_fops =3D { #endif .open =3D mtdchar_open, .release =3D mtdchar_close, - .mmap =3D mtdchar_mmap, + .mmap_prepare =3D mtdchar_mmap_prepare, #ifndef CONFIG_MMU .get_unmapped_area =3D mtdchar_get_unmapped_area, .mmap_capabilities =3D mtdchar_mmap_capabilities, --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v2 10/16] stm: replace deprecated mmap hook with mmap_prepare Date: Mon, 16 Mar 2026 21:12:06 +0000 Message-ID: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The f_op->mmap interface is deprecated, so update the driver to use its successor, mmap_prepare.
The driver previously used vm_iomap_memory(), so this change replaces it with its mmap_prepare equivalent, mmap_action_simple_ioremap(). Also, in order to correctly maintain reference counting, add a vm_ops->mapped callback to increment the reference count when successfully mapped. Signed-off-by: Lorenzo Stoakes (Oracle) Reviewed-by: Suren Baghdasaryan --- drivers/hwtracing/stm/core.c | 31 +++++++++++++++++++++---------- 1 file changed, 21 insertions(+), 10 deletions(-) diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c index 37584e786bb5..f48c6a8a0654 100644 --- a/drivers/hwtracing/stm/core.c +++ b/drivers/hwtracing/stm/core.c @@ -666,6 +666,16 @@ static ssize_t stm_char_write(struct file *file, const= char __user *buf, return count; } =20 +static int stm_mmap_mapped(unsigned long start, unsigned long end, pgoff_t= pgoff, + const struct file *file, void **vm_private_data) +{ + struct stm_file *stmf =3D file->private_data; + struct stm_device *stm =3D stmf->stm; + + pm_runtime_get_sync(&stm->dev); + return 0; +} + static void stm_mmap_open(struct vm_area_struct *vma) { struct stm_file *stmf =3D vma->vm_file->private_data; @@ -684,12 +694,14 @@ static void stm_mmap_close(struct vm_area_struct *vma) } =20 static const struct vm_operations_struct stm_mmap_vmops =3D { + .mapped =3D stm_mmap_mapped, .open =3D stm_mmap_open, .close =3D stm_mmap_close, }; =20 -static int stm_char_mmap(struct file *file, struct vm_area_struct *vma) +static int stm_char_mmap_prepare(struct vm_area_desc *desc) { + struct file *file =3D desc->file; struct stm_file *stmf =3D file->private_data; struct stm_device *stm =3D stmf->stm; unsigned long size, phys; @@ -697,10 +709,10 @@ static int stm_char_mmap(struct file *file, struct vm= _area_struct *vma) if (!stm->data->mmio_addr) return -EOPNOTSUPP; =20 - if (vma->vm_pgoff) + if (desc->pgoff) return -EINVAL; =20 - size =3D vma->vm_end - vma->vm_start; + size =3D vma_desc_size(desc); =20 if (stmf->output.nr_chans * 
stm->data->sw_mmiosz !=3D size) return -EINVAL; @@ -712,13 +724,12 @@ static int stm_char_mmap(struct file *file, struct vm= _area_struct *vma) if (!phys) return -EINVAL; =20 - pm_runtime_get_sync(&stm->dev); - - vma->vm_page_prot =3D pgprot_noncached(vma->vm_page_prot); - vm_flags_set(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP); - vma->vm_ops =3D &stm_mmap_vmops; - vm_iomap_memory(vma, phys, size); + desc->page_prot =3D pgprot_noncached(desc->page_prot); + vma_desc_set_flags(desc, VMA_IO_BIT, VMA_DONTEXPAND_BIT, + VMA_DONTDUMP_BIT); + desc->vm_ops =3D &stm_mmap_vmops; =20 + mmap_action_simple_ioremap(desc, phys, size); return 0; } =20 @@ -836,7 +847,7 @@ static const struct file_operations stm_fops =3D { .open =3D stm_char_open, .release =3D stm_char_release, .write =3D stm_char_write, - .mmap =3D stm_char_mmap, + .mmap_prepare =3D stm_char_mmap_prepare, .unlocked_ioctl =3D stm_char_ioctl, .compat_ioctl =3D compat_ptr_ioctl, }; --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v2 11/16] staging: vme_user: replace deprecated mmap hook with mmap_prepare Date: Mon, 16 Mar 2026 21:12:07 +0000 Message-ID: <48c6d25e374b57dba6df4fdddd4830d3fc1105be.1773695307.git.ljs@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The f_op->mmap interface is deprecated, so update the driver to use its successor, mmap_prepare. The driver previously used vm_iomap_memory(), so this change replaces it with its mmap_prepare equivalent, mmap_action_simple_ioremap(). Functions that wrap mmap() are also converted to wrap mmap_prepare() instead. Also update the documentation accordingly.
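Both the stm and vme_user conversions move side effects (PM references, private-data allocation) out of the mmap path and into a vm_ops->mapped callback that the core invokes only once the mapping has actually been established. A minimal userspace model of that split follows; every name here is hypothetical for illustration, not kernel API.

```c
#include <assert.h>

/* Stand-in for a driver device with a runtime-PM-style reference count. */
struct fake_device { int runtime_refs; };

struct fake_desc {
	struct fake_device *dev;
	int (*mapped)(struct fake_device *dev); /* stands in for vm_ops->mapped */
};

/* Prepare: validate and register work, but take no references yet. */
static int model_mmap_prepare(struct fake_desc *desc, int valid_request)
{
	(void)desc;
	if (!valid_request)
		return -1; /* error out before any side effects happen */
	return 0;
}

/* Mapped: runs only after the mapping exists, so the reference cannot leak. */
static int model_mapped(struct fake_device *dev)
{
	dev->runtime_refs++; /* e.g. pm_runtime_get_sync() in the stm driver */
	return 0;
}

/* Core: invoke ->mapped only when prepare (and the mapping) succeeded. */
static int model_do_mmap(struct fake_desc *desc, int valid_request)
{
	if (model_mmap_prepare(desc, valid_request))
		return -1;
	return desc->mapped(desc->dev);
}
```

Because the prepare hook performs no side effects, an early failure needs no unwinding; the reference is taken only on the success path.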
Signed-off-by: Lorenzo Stoakes (Oracle) Reviewed-by: Suren Baghdasaryan --- Documentation/driver-api/vme.rst | 2 +- drivers/staging/vme_user/vme.c | 20 +++++------ drivers/staging/vme_user/vme.h | 2 +- drivers/staging/vme_user/vme_user.c | 51 +++++++++++++++++------------ 4 files changed, 42 insertions(+), 33 deletions(-) diff --git a/Documentation/driver-api/vme.rst b/Documentation/driver-api/vm= e.rst index c0b475369de0..7111999abc14 100644 --- a/Documentation/driver-api/vme.rst +++ b/Documentation/driver-api/vme.rst @@ -107,7 +107,7 @@ The function :c:func:`vme_master_read` can be used to r= ead from and =20 In addition to simple reads and writes, :c:func:`vme_master_rmw` is provid= ed to do a read-modify-write transaction. Parts of a VME window can also be mapp= ed -into user space memory using :c:func:`vme_master_mmap`. +into user space memory using :c:func:`vme_master_mmap_prepare`. =20 =20 Slave windows diff --git a/drivers/staging/vme_user/vme.c b/drivers/staging/vme_user/vme.c index f10a00c05f12..7220aba7b919 100644 --- a/drivers/staging/vme_user/vme.c +++ b/drivers/staging/vme_user/vme.c @@ -735,9 +735,9 @@ unsigned int vme_master_rmw(struct vme_resource *resour= ce, unsigned int mask, EXPORT_SYMBOL(vme_master_rmw); =20 /** - * vme_master_mmap - Mmap region of VME master window. + * vme_master_mmap_prepare - Mmap region of VME master window. * @resource: Pointer to VME master resource. - * @vma: Pointer to definition of user mapping. + * @desc: Pointer to descriptor of user mapping. * * Memory map a region of the VME master window into user space. * @@ -745,12 +745,13 @@ EXPORT_SYMBOL(vme_master_rmw); * resource or -EFAULT if map exceeds window size. Other generic m= map * errors may also be returned. 
*/ -int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *= vma) +int vme_master_mmap_prepare(struct vme_resource *resource, + struct vm_area_desc *desc) { + const unsigned long vma_size =3D vma_desc_size(desc); struct vme_bridge *bridge =3D find_bridge(resource); struct vme_master_resource *image; phys_addr_t phys_addr; - unsigned long vma_size; =20 if (resource->type !=3D VME_MASTER) { dev_err(bridge->parent, "Not a master resource\n"); @@ -758,19 +759,18 @@ int vme_master_mmap(struct vme_resource *resource, st= ruct vm_area_struct *vma) } =20 image =3D list_entry(resource->entry, struct vme_master_resource, list); - phys_addr =3D image->bus_resource.start + (vma->vm_pgoff << PAGE_SHIFT); - vma_size =3D vma->vm_end - vma->vm_start; + phys_addr =3D image->bus_resource.start + (desc->pgoff << PAGE_SHIFT); =20 if (phys_addr + vma_size > image->bus_resource.end + 1) { dev_err(bridge->parent, "Map size cannot exceed the window size\n"); return -EFAULT; } =20 - vma->vm_page_prot =3D pgprot_noncached(vma->vm_page_prot); - - return vm_iomap_memory(vma, phys_addr, vma->vm_end - vma->vm_start); + desc->page_prot =3D pgprot_noncached(desc->page_prot); + mmap_action_simple_ioremap(desc, phys_addr, vma_size); + return 0; } -EXPORT_SYMBOL(vme_master_mmap); +EXPORT_SYMBOL(vme_master_mmap_prepare); =20 /** * vme_master_free - Free VME master window diff --git a/drivers/staging/vme_user/vme.h b/drivers/staging/vme_user/vme.h index 797e9940fdd1..b6413605ea49 100644 --- a/drivers/staging/vme_user/vme.h +++ b/drivers/staging/vme_user/vme.h @@ -151,7 +151,7 @@ ssize_t vme_master_read(struct vme_resource *resource, = void *buf, size_t count, ssize_t vme_master_write(struct vme_resource *resource, void *buf, size_t = count, loff_t offset); unsigned int vme_master_rmw(struct vme_resource *resource, unsigned int ma= sk, unsigned int compare, unsigned int swap, loff_t offset); -int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *= vma); +int 
vme_master_mmap_prepare(struct vme_resource *resource, struct vm_area_= desc *desc); void vme_master_free(struct vme_resource *resource); =20 struct vme_resource *vme_dma_request(struct vme_dev *vdev, u32 route); diff --git a/drivers/staging/vme_user/vme_user.c b/drivers/staging/vme_user= /vme_user.c index d95dd7d9190a..11e25c2f6b0a 100644 --- a/drivers/staging/vme_user/vme_user.c +++ b/drivers/staging/vme_user/vme_user.c @@ -446,24 +446,14 @@ static void vme_user_vm_close(struct vm_area_struct *= vma) kfree(vma_priv); } =20 -static const struct vm_operations_struct vme_user_vm_ops =3D { - .open =3D vme_user_vm_open, - .close =3D vme_user_vm_close, -}; - -static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct = *vma) +static int vme_user_vm_mapped(unsigned long start, unsigned long end, pgof= f_t pgoff, + const struct file *file, void **vm_private_data) { - int err; + const unsigned int minor =3D iminor(file_inode(file)); struct vme_user_vma_priv *vma_priv; =20 mutex_lock(&image[minor].mutex); =20 - err =3D vme_master_mmap(image[minor].resource, vma); - if (err) { - mutex_unlock(&image[minor].mutex); - return err; - } - vma_priv =3D kmalloc_obj(*vma_priv); if (!vma_priv) { mutex_unlock(&image[minor].mutex); @@ -472,22 +462,41 @@ static int vme_user_master_mmap(unsigned int minor, s= truct vm_area_struct *vma) =20 vma_priv->minor =3D minor; refcount_set(&vma_priv->refcnt, 1); - vma->vm_ops =3D &vme_user_vm_ops; - vma->vm_private_data =3D vma_priv; - + *vm_private_data =3D vma_priv; image[minor].mmap_count++; =20 mutex_unlock(&image[minor].mutex); - return 0; } =20 -static int vme_user_mmap(struct file *file, struct vm_area_struct *vma) +static const struct vm_operations_struct vme_user_vm_ops =3D { + .mapped =3D vme_user_vm_mapped, + .open =3D vme_user_vm_open, + .close =3D vme_user_vm_close, +}; + +static int vme_user_master_mmap_prepare(unsigned int minor, + struct vm_area_desc *desc) +{ + int err; + + mutex_lock(&image[minor].mutex); + + err 
=3D vme_master_mmap_prepare(image[minor].resource, desc); + if (!err) + desc->vm_ops =3D &vme_user_vm_ops; + + mutex_unlock(&image[minor].mutex); + return err; +} + +static int vme_user_mmap_prepare(struct vm_area_desc *desc) { - unsigned int minor =3D iminor(file_inode(file)); + const struct file *file =3D desc->file; + const unsigned int minor =3D iminor(file_inode(file)); =20 if (type[minor] =3D=3D MASTER_MINOR) - return vme_user_master_mmap(minor, vma); + return vme_user_master_mmap_prepare(minor, desc); =20 return -ENODEV; } @@ -498,7 +507,7 @@ static const struct file_operations vme_user_fops =3D { .llseek =3D vme_user_llseek, .unlocked_ioctl =3D vme_user_unlocked_ioctl, .compat_ioctl =3D compat_ptr_ioctl, - .mmap =3D vme_user_mmap, + .mmap_prepare =3D vme_user_mmap_prepare, }; =20 static int vme_user_match(struct vme_dev *vdev) --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B4A793750B0; Mon, 16 Mar 2026 21:14:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695651; cv=none; b=ZPvSJVDHBjAMaU4DoWK9gsNhr7h9liGAhFoxbXviEKjzhT4oooGJEl7+/m4ddb3z5xoLhuv6XmlgzrZA8upEfmSIBwO23ZMeBB34NfRiypV0tGcB5ZZ5W1HZjMsEW+kNXm948e7dWtdIee4KK9/L1vax1bmNL4l3YTNtpwblkGk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695651; c=relaxed/simple; bh=UdalDDqw5K4wrfd0S8fCWChZZfK6fWZrqQ5O42EH8gE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=L3MRoEnDoP4gvHhtYlgerXQkVCUAiGMdK8kPhaMaDjDeGmTwgHUHQAxo/+uHPVKQj8A8bbUAm6jL39qbJirEJOIjTUNkUeewBp/3xhDvY5ZFjr+SjoQ/90mP22hY+3iztowuhp3iXZ4dIAuC1FG7Q6yQ03v/jcvbZwyGweoL0E4= ARC-Authentication-Results: i=1; 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=W2YeqsPq; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="W2YeqsPq" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7DE17C2BCB0; Mon, 16 Mar 2026 21:14:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773695651; bh=UdalDDqw5K4wrfd0S8fCWChZZfK6fWZrqQ5O42EH8gE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=W2YeqsPqzSBvcKTdL7ShzFrBVsk/KmfNTzAhHeWjOURKUHsiNK3CGETdy/xWDTiD4 EebWvFr2ruxCMcVgbMHoT2QGh4ZaUIcAAyGzWgwpo0lX/AixkXKYgE1KVVgiH8mKwh wrcxBPwEqGzdTBKP6XNVul3iuydo542iSYIHmgkBbWUiQZU4/sXBDD/PacAfqG2r6+ FK7iiuZD9cMb0i6z+s1rv9JkMz7JrMmw9AE14XxoUG6lnwX/EDuLCLmY5PixOZWcuj 4Rv9V2Ny1HwMZaAziLcEarXZvSWiKnnGSFt6+MxD9FQW7kO94X08RlqWfYCZD/bmR+ CSqzUWm1gEZzQ== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . 
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts Subject: [PATCH v2 12/16] mm: allow handling of stacked mmap_prepare hooks in more drivers Date: Mon, 16 Mar 2026 21:12:08 +0000 Message-ID: <72750af6906fd96fb6f18e83ac3e694cf357a2c1.1773695307.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" While the conversion of mmap hooks to mmap_prepare is underway, we wil encounter situations where mmap hooks need to invoke nested mmap_prepare hooks. The nesting of mmap hooks is termed 'stacking'. In order to flexibly facilitate the conversion of custom mmap hooks in drivers which stack, we must split up the existing compat_vma_mapped() function into two separate functions: * compat_set_desc_from_vma() - This allows the setting of a vm_area_desc object's fields to the relevant fields of a VMA. * __compat_vma_mmap() - Once an mmap_prepare hook has been executed upon a vm_area_desc object, this function performs any mmap actions specified by the mmap_prepare hook and then invokes its vm_ops->mapped() hook if any were specified. In ordinary cases, where a file's f_op->mmap_prepare() hook simply needs to be invoked in a stacked mmap() hook, compat_vma_mmap() can be used. However some drivers define their own nested hooks, which are invoked in turn by another hook. 
A concrete example is vmbus_channel->mmap_ring_buffer(), which is invoked in turn by bin_attribute->mmap(): vmbus_channel->mmap_ring_buffer() has a signature of: int (*mmap_ring_buffer)(struct vmbus_channel *channel, struct vm_area_struct *vma); And bin_attribute->mmap() has a signature of: int (*mmap)(struct file *, struct kobject *, const struct bin_attribute *attr, struct vm_area_struct *vma); And so compat_vma_mmap() cannot be used here for incremental conversion of hooks from mmap() to mmap_prepare(). There are many instances like this, where conversion to mmap_prepare would otherwise cascade to a huge change set due to nesting of this kind. The changes in this patch mean we could now instead convert vmbus_channel->mmap_ring_buffer() to vmbus_channel->mmap_prepare_ring_buffer(), and implement something like: struct vm_area_desc desc; int err; compat_set_desc_from_vma(&desc, file, vma); err =3D channel->mmap_prepare_ring_buffer(channel, &desc); if (err) return err; return __compat_vma_mmap(&desc, vma); This allows us to incrementally update this logic, and other logic like it. Unfortunately, as part of this change, we need to be able to flexibly assign to the VMA descriptor, so we have to remove some of the const declarations within the structure. Also update the VMA tests to reflect the changes.
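Put together, a driver's stacked mmap() hook converted this way would look something like the following (foo_mmap() and foo_nested_mmap_prepare() are illustrative placeholders, not real hooks):

	static int foo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct vm_area_desc desc;
		int err;

		/* Mirror the already-established VMA into a descriptor. */
		compat_set_desc_from_vma(&desc, file, vma);

		/* Run the nested mmap_prepare-style hook on the descriptor. */
		err = foo_nested_mmap_prepare(&desc);
		if (err)
			return err;

		/* Apply any requested actions to the VMA and invoke the
		 * vm_ops->mapped() hook, if one was specified. */
		return __compat_vma_mmap(&desc, vma);
	}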
Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/fs.h | 3 + include/linux/mm_types.h | 4 +- mm/util.c | 111 +++++++++++++++++++++++--------- mm/vma.h | 2 +- tools/testing/vma/include/dup.h | 111 ++++++++++++++++++++------------ 5 files changed, 157 insertions(+), 74 deletions(-) diff --git a/include/linux/fs.h b/include/linux/fs.h index c390f5c667e3..0bdccfa70b44 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2058,6 +2058,9 @@ static inline bool can_mmap_file(struct file *file) return true; } =20 +void compat_set_desc_from_vma(struct vm_area_desc *desc, const struct file= *file, + const struct vm_area_struct *vma); +int __compat_vma_mmap(struct vm_area_desc *desc, struct vm_area_struct *vm= a); int compat_vma_mmap(struct file *file, struct vm_area_struct *vma); int __vma_check_mmap_hook(struct vm_area_struct *vma); =20 diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 50685cf29792..7538d64f8848 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -891,8 +891,8 @@ static __always_inline bool vma_flags_empty(vma_flags_t= *flags) */ struct vm_area_desc { /* Immutable state. */ - const struct mm_struct *const mm; - struct file *const file; /* May vary from vm_file in stacked callers. */ + struct mm_struct *mm; + struct file *file; /* May vary from vm_file in stacked callers. */ unsigned long start; unsigned long end; =20 diff --git a/mm/util.c b/mm/util.c index aa92e471afe1..a166c48fe894 100644 --- a/mm/util.c +++ b/mm/util.c @@ -1163,34 +1163,38 @@ void flush_dcache_folio(struct folio *folio) EXPORT_SYMBOL(flush_dcache_folio); #endif =20 -static int __compat_vma_mmap(struct file *file, struct vm_area_struct *vma) +/** + * compat_set_desc_from_vma() - assigns VMA descriptor @desc fields from a= VMA. + * @desc: A VMA descriptor whose fields need to be set. + * @file: The file object describing the file being mmap()'d. + * @vma: The VMA whose fields we wish to assign to @desc. 
+ * + * This is a compatibility function to allow an mmap() hook to call + * mmap_prepare() hooks when drivers nest these. This function specifically + * allows the construction of a vm_area_desc value, @desc, from a VMA @vma= for + * the purposes of doing this. + * + * Once the conversion of drivers is complete this function will no longer= be + * required and will be removed. + */ +void compat_set_desc_from_vma(struct vm_area_desc *desc, + const struct file *file, + const struct vm_area_struct *vma) { - struct vm_area_desc desc =3D { - .mm =3D vma->vm_mm, - .file =3D file, - .start =3D vma->vm_start, - .end =3D vma->vm_end, - - .pgoff =3D vma->vm_pgoff, - .vm_file =3D vma->vm_file, - .vma_flags =3D vma->flags, - .page_prot =3D vma->vm_page_prot, - - .action.type =3D MMAP_NOTHING, /* Default */ - }; - int err; + desc->mm =3D vma->vm_mm; + desc->file =3D (struct file *)file; + desc->start =3D vma->vm_start; + desc->end =3D vma->vm_end; =20 - err =3D vfs_mmap_prepare(file, &desc); - if (err) - return err; + desc->pgoff =3D vma->vm_pgoff; + desc->vm_file =3D vma->vm_file; + desc->vma_flags =3D vma->flags; + desc->page_prot =3D vma->vm_page_prot; =20 - err =3D mmap_action_prepare(&desc); - if (err) - return err; - - set_vma_from_desc(vma, &desc); - return mmap_action_complete(vma, &desc.action); + /* Default. */ + desc->action.type =3D MMAP_NOTHING; } +EXPORT_SYMBOL(compat_set_desc_from_vma); =20 static int __compat_vma_mapped(struct file *file, struct vm_area_struct *v= ma) { @@ -1211,6 +1215,49 @@ static int __compat_vma_mapped(struct file *file, st= ruct vm_area_struct *vma) return err; } =20 +/** + * __compat_vma_mmap() - Similar to compat_vma_mmap(), only it allows + * flexibility as to how the mmap_prepare callback is invoked, which is us= eful + * for drivers which invoke nested mmap_prepare callbacks in an mmap() hoo= k. + * @desc: A VMA descriptor upon which an mmap_prepare() hook has already b= een + * executed. 
+ * @vma: The VMA to which @desc should be applied. + * + * The function assumes that you have obtained a VMA descriptor @desc from + * compat_set_desc_from_vma(), and already executed the mmap_prepare() hoo= k upon + * it. + * + * It then performs any specified mmap actions, and invokes the vm_ops->ma= pped() + * hook if one is present. + * + * See the description of compat_vma_mmap() for more details. + * + * Once the conversion of drivers is complete this function will no longer= be + * required and will be removed. + * + * Returns: 0 on success or error. + */ +int __compat_vma_mmap(struct vm_area_desc *desc, + struct vm_area_struct *vma) +{ + int err; + + /* Perform any preparatory tasks for mmap action. */ + err =3D mmap_action_prepare(desc); + if (err) + return err; + /* Update the VMA from the descriptor. */ + compat_set_vma_from_desc(vma, desc); + /* Complete any specified mmap actions. */ + err =3D mmap_action_complete(vma, &desc->action); + if (err) + return err; + + /* Invoke vm_ops->mapped callback. */ + return __compat_vma_mapped(desc->file, vma); +} +EXPORT_SYMBOL(__compat_vma_mmap); + /** * compat_vma_mmap() - Apply the file's .mmap_prepare() hook to an * existing VMA and execute any requested actions. @@ -1218,10 +1265,10 @@ static int __compat_vma_mapped(struct file *file, s= truct vm_area_struct *vma) * @vma: The VMA to apply the .mmap_prepare() hook to. * * Ordinarily, .mmap_prepare() is invoked directly upon mmap(). However, c= ertain - * stacked filesystems invoke a nested mmap hook of an underlying file. + * stacked drivers invoke a nested mmap hook of an underlying file. * - * Until all filesystems are converted to use .mmap_prepare(), we must be - * conservative and continue to invoke these stacked filesystems using the + * Until all drivers are converted to use .mmap_prepare(), we must be + * conservative and continue to invoke these stacked drivers using the * deprecated .mmap() hook.
* * However we have a problem if the underlying file system possesses an @@ -1232,20 +1279,22 @@ static int __compat_vma_mapped(struct file *file, s= truct vm_area_struct *vma) * establishes a struct vm_area_desc descriptor, passes to the underlying * .mmap_prepare() hook and applies any changes performed by it. * - * Once the conversion of filesystems is complete this function will no lo= nger - * be required and will be removed. + * Once the conversion of drivers is complete this function will no longer= be + * required and will be removed. * * Returns: 0 on success or error. */ int compat_vma_mmap(struct file *file, struct vm_area_struct *vma) { + struct vm_area_desc desc; int err; =20 - err =3D __compat_vma_mmap(file, vma); + compat_set_desc_from_vma(&desc, file, vma); + err =3D vfs_mmap_prepare(file, &desc); if (err) return err; =20 - return __compat_vma_mapped(file, vma); + return __compat_vma_mmap(&desc, vma); } EXPORT_SYMBOL(compat_vma_mmap); =20 diff --git a/mm/vma.h b/mm/vma.h index adc18f7dd9f1..a76046c39b14 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -300,7 +300,7 @@ static inline int vma_iter_store_gfp(struct vma_iterato= r *vmi, * f_op->mmap() but which might have an underlying file system which imple= ments * f_op->mmap_prepare(). */ -static inline void set_vma_from_desc(struct vm_area_struct *vma, +static inline void compat_set_vma_from_desc(struct vm_area_struct *vma, struct vm_area_desc *desc) { /* diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 114daaef4f73..6658df26698a 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -519,8 +519,8 @@ enum vma_operation { */ struct vm_area_desc { /* Immutable state. */ - const struct mm_struct *const mm; - struct file *const file; /* May vary from vm_file in stacked callers. */ + struct mm_struct *mm; + struct file *file; /* May vary from vm_file in stacked callers. 
*/ unsigned long start; unsigned long end; =20 @@ -1272,43 +1272,92 @@ static inline void vma_set_anonymous(struct vm_area= _struct *vma) } =20 /* Declared in vma.h. */ -static inline void set_vma_from_desc(struct vm_area_struct *vma, +static inline void compat_set_vma_from_desc(struct vm_area_struct *vma, struct vm_area_desc *desc); =20 -static inline int __compat_vma_mmap(const struct file_operations *f_op, - struct file *file, struct vm_area_struct *vma) +static inline void compat_set_desc_from_vma(struct vm_area_desc *desc, + const struct file *file, + const struct vm_area_struct *vma) { - struct vm_area_desc desc =3D { - .mm =3D vma->vm_mm, - .file =3D file, - .start =3D vma->vm_start, - .end =3D vma->vm_end, + desc->mm =3D vma->vm_mm; + desc->file =3D (struct file *)file; + desc->start =3D vma->vm_start; + desc->end =3D vma->vm_end; =20 - .pgoff =3D vma->vm_pgoff, - .vm_file =3D vma->vm_file, - .vma_flags =3D vma->flags, - .page_prot =3D vma->vm_page_prot, + desc->pgoff =3D vma->vm_pgoff; + desc->vm_file =3D vma->vm_file; + desc->vma_flags =3D vma->flags; + desc->page_prot =3D vma->vm_page_prot; =20 - .action.type =3D MMAP_NOTHING, /* Default */ - }; + /* Default. 
*/ + desc->action.type =3D MMAP_NOTHING; +} + +static inline unsigned long vma_pages(const struct vm_area_struct *vma) +{ + return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; +} + +static inline void unmap_vma_locked(struct vm_area_struct *vma) +{ + const size_t len =3D vma_pages(vma) << PAGE_SHIFT; + + mmap_assert_write_locked(vma->vm_mm); + do_munmap(vma->vm_mm, vma->vm_start, len, NULL); +} + +static inline int __compat_vma_mapped(struct file *file, struct vm_area_st= ruct *vma) +{ + const struct vm_operations_struct *vm_ops =3D vma->vm_ops; int err; =20 - err =3D f_op->mmap_prepare(&desc); + if (!vm_ops->mapped) + return 0; + + err =3D vm_ops->mapped(vma->vm_start, vma->vm_end, vma->vm_pgoff, file, + &vma->vm_private_data); if (err) - return err; + unmap_vma_locked(vma); + return err; +} =20 - err =3D mmap_action_prepare(&desc); +static inline int __compat_vma_mmap(struct vm_area_desc *desc, + struct vm_area_struct *vma) +{ + int err; + + /* Perform any preparatory tasks for mmap action. */ + err =3D mmap_action_prepare(desc); + if (err) + return err; + /* Update the VMA from the descriptor. */ + compat_set_vma_from_desc(vma, desc); + /* Complete any specified mmap actions. */ + err =3D mmap_action_complete(vma, &desc->action); if (err) return err; =20 - set_vma_from_desc(vma, &desc); - return mmap_action_complete(vma, &desc.action); + /* Invoke vm_ops->mapped callback. 
*/ + return __compat_vma_mapped(desc->file, vma); +} + +static inline int vfs_mmap_prepare(struct file *file, struct vm_area_desc = *desc) +{ + return file->f_op->mmap_prepare(desc); } =20 static inline int compat_vma_mmap(struct file *file, struct vm_area_struct *vma) { - return __compat_vma_mmap(file->f_op, file, vma); + struct vm_area_desc desc; + int err; + + compat_set_desc_from_vma(&desc, file, vma); + err =3D vfs_mmap_prepare(file, &desc); + if (err) + return err; + + return __compat_vma_mmap(&desc, vma); } =20 =20 @@ -1318,11 +1367,6 @@ static inline void vma_iter_init(struct vma_iterator= *vmi, mas_init(&vmi->mas, &mm->mm_mt, addr); } =20 -static inline unsigned long vma_pages(struct vm_area_struct *vma) -{ - return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; -} - static inline void mmap_assert_locked(struct mm_struct *); static inline struct vm_area_struct *find_vma_intersection(struct mm_struc= t *mm, unsigned long start_addr, @@ -1492,11 +1536,6 @@ static inline int vfs_mmap(struct file *file, struct= vm_area_struct *vma) return file->f_op->mmap(file, vma); } =20 -static inline int vfs_mmap_prepare(struct file *file, struct vm_area_desc = *desc) -{ - return file->f_op->mmap_prepare(desc); -} - static inline void vma_set_file(struct vm_area_struct *vma, struct file *f= ile) { /* Changing an anonymous vma with this is illegal */ @@ -1521,11 +1560,3 @@ static inline pgprot_t vma_get_page_prot(vma_flags_t= vma_flags) =20 return vm_get_page_prot(vm_flags); } - -static inline void unmap_vma_locked(struct vm_area_struct *vma) -{ - const size_t len =3D vma_pages(vma) << PAGE_SHIFT; - - mmap_assert_write_locked(vma->vm_mm); - do_munmap(vma->vm_mm, vma->vm_start, len, NULL); -} --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) 
with ESMTPS id 78625374E52; Mon, 16 Mar 2026 21:14:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695654; cv=none; b=BX6wGFMyuSdfaZSvf19dHKhNxbKviP6L80BUttwt1y8NquSd3xaF7RTKzSJTDOUnKhJZcAd6et4CNsf3NqAtqf2p0ybKLM1wTCWLrEgGGuYKlfZnnB1Dj2kq20xPrlRSe/cBSjPRiJDBqnJjyctLfgYjDmeZSy9M58HbcAXbUy8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695654; c=relaxed/simple; bh=BqZ3dGXi/uNWrhmwwDUsKyVr3VEa4KB4Sn9UhtSbJGc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IbB5j1J76n+uFc3HOKq631CXvKCTBAszGkYGQ45P69+u+GImQpxihXPfAAkFnkeeofL5faSk7OKsVYL/li0H1BckC5APgF1lkYYWEAo1onbyx3k8WL4Cg8TQj5RJfvaNWHIXg3Tec7VE7olcngga3yH3XCwGL372Ojcbblkk7Ts= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ddRg00F2; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ddRg00F2" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5CE46C2BC9E; Mon, 16 Mar 2026 21:14:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773695654; bh=BqZ3dGXi/uNWrhmwwDUsKyVr3VEa4KB4Sn9UhtSbJGc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ddRg00F2T/2uy5ssm0Qrl1iCxh7BiYFPZ+3xDhRUzcquIPd3ir2brMNsn2DeZO9Ik 4Or6QCwnUlB7hCUX2R5ljuls0LdC39uPeGcT47HKe4eC66YadayJAeNQ72y1Bm/Am/ XISyzTGiUG4jtxWzTXO2EztA9dIxx9DZvTNz+uiTyo4tS3dDguhT8W630F98FNBp5i bH3GB47keYIYSY8EnfMKeMEfk5mFomJJoUqJ94b43jw2fjBsRpqgkusxxIki4YERaA goihMbGCBxAFXIjI6pK/Vb7S1B++J5F97P25GT6KOflLAWtvBO1Ju6TrdWN03FJaWR I1g0WcCHY9ajg== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . 
Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts Subject: [PATCH v2 13/16] drivers: hv: vmbus: replace deprecated mmap hook with mmap_prepare Date: Mon, 16 Mar 2026 21:12:09 +0000 Message-ID: <816d3c06ca3ec201ac8439a83383b9cb5e407ee9.1773695307.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The f_op->mmap interface is deprecated, so update the vmbus driver to use its successor, mmap_prepare. This updates all callbacks which referenced the function pointer hv_mmap_ring_buffer to instead reference hv_mmap_prepare_ring_buffer, utilising the newly introduced compat_set_desc_from_vma() and __compat_vma_mmap() to be able to implement this change. The UIO HV generic driver is the only user of hv_create_ring_sysfs(), which is the only function which references vmbus_channel->mmap_prepare_ring_buffer which, in turn, is the only external interface to hv_mmap_prepare_ring_buffer. 
This patch therefore updates this caller to use mmap_prepare instead, which also previously used vm_iomap_memory(), so this change replaces it with its mmap_prepare equivalent, mmap_action_simple_ioremap(). Signed-off-by: Lorenzo Stoakes (Oracle) --- drivers/hv/hyperv_vmbus.h | 4 ++-- drivers/hv/vmbus_drv.c | 27 +++++++++++++++++---------- drivers/uio/uio_hv_generic.c | 11 ++++++----- include/linux/hyperv.h | 4 ++-- 4 files changed, 27 insertions(+), 19 deletions(-) diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h index 7bd8f8486e85..31f576464f18 100644 --- a/drivers/hv/hyperv_vmbus.h +++ b/drivers/hv/hyperv_vmbus.h @@ -545,8 +545,8 @@ static inline int hv_debug_add_dev_dir(struct hv_device= *dev) =20 /* Create and remove sysfs entry for memory mapped ring buffers for a chan= nel */ int hv_create_ring_sysfs(struct vmbus_channel *channel, - int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel, - struct vm_area_struct *vma)); + int (*hv_mmap_prepare_ring_buffer)(struct vmbus_channel *channel, + struct vm_area_desc *desc)); int hv_remove_ring_sysfs(struct vmbus_channel *channel); =20 #endif /* _HYPERV_VMBUS_H */ diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c index bc4fc1951ae1..a76fa3f0588c 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c @@ -1951,12 +1951,19 @@ static int hv_mmap_ring_buffer_wrapper(struct file = *filp, struct kobject *kobj, struct vm_area_struct *vma) { struct vmbus_channel *channel =3D container_of(kobj, struct vmbus_channel= , kobj); + struct vm_area_desc desc; + int err; =20 /* * hv_(create|remove)_ring_sysfs implementation ensures that mmap_ring_bu= ffer * is not NULL. 
*/ - return channel->mmap_ring_buffer(channel, vma); + compat_set_desc_from_vma(&desc, filp, vma); + err =3D channel->mmap_prepare_ring_buffer(channel, &desc); + if (err) + return err; + + return __compat_vma_mmap(&desc, vma); } =20 static struct bin_attribute chan_attr_ring_buffer =3D { @@ -2048,13 +2055,13 @@ static const struct kobj_type vmbus_chan_ktype =3D { /** * hv_create_ring_sysfs() - create "ring" sysfs entry corresponding to rin= g buffers for a channel. * @channel: Pointer to vmbus_channel structure - * @hv_mmap_ring_buffer: function pointer for initializing the function to= be called on mmap of + * @hv_mmap_prepare_ring_buffer: function pointer for initializing the fun= ction to be called on mmap of * channel's "ring" sysfs node, which is for the rin= g buffer of that channel. * Function pointer is of below type: - * int (*hv_mmap_ring_buffer)(struct vmbus_channel *= channel, - * struct vm_area_struct = *vma)) - * This has a pointer to the channel and a pointer t= o vm_area_struct, - * used for mmap, as arguments. + * int (*hv_mmap_prepare_ring_buffer)(struct vmbus_c= hannel *channel, + * struct vm_area= _desc *desc)) + * This has a pointer to the channel and a pointer t= o vm_area_desc, + * used for mmap_prepare, as arguments. * * Sysfs node for ring buffer of a channel is created along with other fie= lds, however its * visibility is disabled by default. Sysfs creation needs to be controlle= d when the use-case @@ -2071,12 +2078,12 @@ static const struct kobj_type vmbus_chan_ktype =3D { * Returns 0 on success or error code on failure.
*/ int hv_create_ring_sysfs(struct vmbus_channel *channel, - int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel, - struct vm_area_struct *vma)) + int (*hv_mmap_prepare_ring_buffer)(struct vmbus_channel *channel, + struct vm_area_desc *desc)) { struct kobject *kobj =3D &channel->kobj; =20 - channel->mmap_ring_buffer =3D hv_mmap_ring_buffer; + channel->mmap_prepare_ring_buffer =3D hv_mmap_prepare_ring_buffer; channel->ring_sysfs_visible =3D true; =20 return sysfs_update_group(kobj, &vmbus_chan_group); @@ -2098,7 +2105,7 @@ int hv_remove_ring_sysfs(struct vmbus_channel *channe= l) =20 channel->ring_sysfs_visible =3D false; ret =3D sysfs_update_group(kobj, &vmbus_chan_group); - channel->mmap_ring_buffer =3D NULL; + channel->mmap_prepare_ring_buffer =3D NULL; return ret; } EXPORT_SYMBOL_GPL(hv_remove_ring_sysfs); diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c index 3f8e2e27697f..29ec2d15ada8 100644 --- a/drivers/uio/uio_hv_generic.c +++ b/drivers/uio/uio_hv_generic.c @@ -154,15 +154,16 @@ static void hv_uio_rescind(struct vmbus_channel *chan= nel) * The ring buffer is allocated as contiguous memory by vmbus_open */ static int -hv_uio_ring_mmap(struct vmbus_channel *channel, struct vm_area_struct *vma) +hv_uio_ring_mmap_prepare(struct vmbus_channel *channel, struct vm_area_des= c *desc) { void *ring_buffer =3D page_address(channel->ringbuffer_page); =20 if (channel->state !=3D CHANNEL_OPENED_STATE) return -ENODEV; =20 - return vm_iomap_memory(vma, virt_to_phys(ring_buffer), - channel->ringbuffer_pagecount << PAGE_SHIFT); + mmap_action_simple_ioremap(desc, virt_to_phys(ring_buffer), + channel->ringbuffer_pagecount << PAGE_SHIFT); + return 0; } =20 /* Callback from VMBUS subsystem when new channel created. 
*/ @@ -183,7 +184,7 @@ hv_uio_new_channel(struct vmbus_channel *new_sc) } =20 set_channel_read_mode(new_sc, HV_CALL_ISR); - ret =3D hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap); + ret =3D hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap_prepare); if (ret) { dev_err(device, "sysfs create ring bin file failed; %d\n", ret); vmbus_close(new_sc); @@ -366,7 +367,7 @@ hv_uio_probe(struct hv_device *dev, * or decoupled from uio_hv_generic probe. Userspace programs can make us= e of inotify * APIs to make sure that ring is created. */ - hv_create_ring_sysfs(channel, hv_uio_ring_mmap); + hv_create_ring_sysfs(channel, hv_uio_ring_mmap_prepare); =20 hv_set_drvdata(dev, pdata); =20 diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index dfc516c1c719..3a721b1853a4 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -1015,8 +1015,8 @@ struct vmbus_channel { /* The max size of a packet on this channel */ u32 max_pkt_size; =20 - /* function to mmap ring buffer memory to the channel's sysfs ring attrib= ute */ - int (*mmap_ring_buffer)(struct vmbus_channel *channel, struct vm_area_str= uct *vma); + /* function to mmap_prepare ring buffer memory to the channel's sysfs rin= g attribute */ + int (*mmap_prepare_ring_buffer)(struct vmbus_channel *channel, struct vm_= area_desc *desc); =20 /* boolean to control visibility of sysfs for ring buffer */ bool ring_sysfs_visible; --=20 2.53.0 From nobody Tue Apr 7 02:33:54 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5DA8537649C; Mon, 16 Mar 2026 21:14:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773695657; cv=none; 
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K .
Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts
Subject: [PATCH v2 14/16] uio: replace deprecated mmap hook with mmap_prepare in uio_info
Date: Mon, 16 Mar 2026 21:12:10 +0000
Message-ID: <892a8b32e5ef64c69239ccc2d1bd364716fd7fdf.1773695307.git.ljs@kernel.org>

The f_op->mmap interface is deprecated, so update uio_info to use its
successor, mmap_prepare.

Therefore, replace the uio_info->mmap hook with a new
uio_info->mmap_prepare hook, and update its one user, target_core_user,
both to specify this new mmap_prepare hook and to use the new
vm_ops->mapped() hook so it continues to maintain a correct udev->kref
refcount.

Then update uio_mmap() to use the mmap_prepare compatibility layer to
invoke this callback from the uio mmap invocation.
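The dispatch pattern this patch introduces in uio_mmap() — build a descriptor, run the driver's prepare hook, then complete the mapping only if the hook succeeded — can be modeled in plain userspace C. This is an illustrative sketch only: the struct layouts, helper names (`do_mmap`, `example_mmap_prepare`) and the hard-coded error value are simplified stand-ins, not the kernel's `vm_area_desc`, `compat_set_desc_from_vma()` or `__compat_vma_mmap()` definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct vm_area_desc. */
struct vm_area_desc {
	unsigned long start, end; /* range the mapping will cover */
	void *private_data;       /* driver state, set by the hook */
};

/* Simplified stand-in for struct uio_info: only the new-style hook. */
struct uio_info {
	int (*mmap_prepare)(struct uio_info *info, struct vm_area_desc *desc);
};

/* Example hook: reject wrongly-sized mappings up front, the way
 * tcmu_mmap_prepare() returns -EINVAL before any VMA state exists. */
static int example_mmap_prepare(struct uio_info *info, struct vm_area_desc *desc)
{
	if (desc->end - desc->start != 4096)
		return -22; /* stands in for -EINVAL */
	desc->private_data = info;
	return 0;
}

/* Mimics the new uio_mmap() flow: fill a descriptor, run the hook, and
 * only then "complete" the mapping (stand-in for __compat_vma_mmap()). */
static int do_mmap(struct uio_info *info, unsigned long start, unsigned long end)
{
	struct vm_area_desc desc = { .start = start, .end = end };
	int ret = info->mmap_prepare(info, &desc);

	if (ret)
		return ret; /* fail before touching any mapping state */
	return desc.private_data != NULL ? 0 : -1;
}
```

The point of the ordering is that a failing prepare hook aborts the mmap before any VMA is modified, which is what makes prepare-time validation safe.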
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 drivers/target/target_core_user.c | 26 ++++++++++++++++++--------
 drivers/uio/uio.c                 | 10 ++++++++--
 include/linux/uio_driver.h        |  4 ++--
 3 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index af95531ddd35..9d211dad5e53 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1860,6 +1860,17 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	return NULL;
 }
 
+static int tcmu_vma_mapped(unsigned long start, unsigned long end, pgoff_t pgoff,
+			   const struct file *file, void **vm_private_data)
+{
+	struct tcmu_dev *udev = *vm_private_data;
+
+	pr_debug("vma_open\n");
+
+	kref_get(&udev->kref);
+	return 0;
+}
+
 static void tcmu_vma_open(struct vm_area_struct *vma)
 {
 	struct tcmu_dev *udev = vma->vm_private_data;
@@ -1919,26 +1930,25 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 }
 
 static const struct vm_operations_struct tcmu_vm_ops = {
+	.mapped = tcmu_vma_mapped,
 	.open = tcmu_vma_open,
 	.close = tcmu_vma_close,
 	.fault = tcmu_vma_fault,
 };
 
-static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
+static int tcmu_mmap_prepare(struct uio_info *info, struct vm_area_desc *desc)
 {
 	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
 
-	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &tcmu_vm_ops;
+	vma_desc_set_flags(desc, VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+	desc->vm_ops = &tcmu_vm_ops;
 
-	vma->vm_private_data = udev;
+	desc->private_data = udev;
 
 	/* Ensure the mmap is exactly the right size */
-	if (vma_pages(vma) != udev->mmap_pages)
+	if (vma_desc_pages(desc) != udev->mmap_pages)
 		return -EINVAL;
 
-	tcmu_vma_open(vma);
-
 	return 0;
 }
 
@@ -2253,7 +2263,7 @@ static int tcmu_configure_device(struct se_device *dev)
 	info->irqcontrol = tcmu_irqcontrol;
 	info->irq = UIO_IRQ_CUSTOM;
 
-	info->mmap = tcmu_mmap;
+	info->mmap_prepare = tcmu_mmap_prepare;
 	info->open = tcmu_open;
 	info->release = tcmu_release;
 
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 5a4998e2caf8..1e4ade78ed84 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -850,8 +850,14 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
 		goto out;
 	}
 
-	if (idev->info->mmap) {
-		ret = idev->info->mmap(idev->info, vma);
+	if (idev->info->mmap_prepare) {
+		struct vm_area_desc desc;
+
+		compat_set_desc_from_vma(&desc, filep, vma);
+		ret = idev->info->mmap_prepare(idev->info, &desc);
+		if (ret)
+			goto out;
+		ret = __compat_vma_mmap(&desc, vma);
 		goto out;
 	}
 
diff --git a/include/linux/uio_driver.h b/include/linux/uio_driver.h
index 334641e20fb1..53bdc557c423 100644
--- a/include/linux/uio_driver.h
+++ b/include/linux/uio_driver.h
@@ -97,7 +97,7 @@ struct uio_device {
  * @irq_flags: flags for request_irq()
  * @priv: optional private data
  * @handler: the device's irq handler
- * @mmap: mmap operation for this uio device
+ * @mmap_prepare: mmap_prepare operation for this uio device
  * @open: open operation for this uio device
  * @release: release operation for this uio device
  * @irqcontrol: disable/enable irqs when 0/1 is written to /dev/uioX
@@ -112,7 +112,7 @@ struct uio_info {
 	unsigned long irq_flags;
 	void *priv;
 	irqreturn_t (*handler)(int irq, struct uio_info *dev_info);
-	int (*mmap)(struct uio_info *info, struct vm_area_struct *vma);
+	int (*mmap_prepare)(struct uio_info *info, struct vm_area_desc *desc);
 	int (*open)(struct uio_info *info, struct inode *inode);
 	int (*release)(struct uio_info *info, struct inode *inode);
 	int (*irqcontrol)(struct uio_info *info, s32 irq_on);
-- 
2.53.0

From nobody Tue Apr 7 02:33:54 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y .
Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts
Subject: [PATCH v2 15/16] mm: add mmap_action_map_kernel_pages[_full]()
Date: Mon, 16 Mar 2026 21:12:11 +0000
Message-ID: <8e28e4b63bae67bfa1a59ccbac9dc6db1442d75d.1773695307.git.ljs@kernel.org>

A user can invoke mmap_action_map_kernel_pages() to specify that the
mapping should map kernel pages starting from desc->start, with the
number of pages and the pages themselves supplied in an array.

In order to implement this, adjust mmap_action_prepare() to be able to
return an error code, as it makes sense to assert that the specified
parameters are valid as quickly as possible, as well as updating the VMA
flags to include VMA_MIXEDMAP_BIT as necessary.

This provides an mmap_prepare equivalent of vm_insert_pages().

We additionally update the existing vm_insert_pages() code to use
range_in_vma() and add a new range_in_vma_desc() helper function for the
mmap_prepare case, sharing the code between the two in range_is_subset().
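The shared subset check described above operates on half-open [start, end) ranges. A rough userspace model of its semantics (an illustrative sketch, not the kernel's inline from include/linux/mm.h, though the comparison logic is the same as the patch below adds):

```c
#include <stdbool.h>

/* Is [inner_start, inner_end) a subset of [outer_start, outer_end)?
 * Both ranges are half-open: the end address is exclusive. */
static bool range_is_subset(unsigned long outer_start, unsigned long outer_end,
			    unsigned long inner_start, unsigned long inner_end)
{
	return outer_start <= inner_start && inner_end <= outer_end;
}
```

Because the end is exclusive, an inner range that ends exactly at the outer end is still a subset, which is what lets a mapping that spans the entire VMA pass the check.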
We add both mmap_action_map_kernel_pages() and
mmap_action_map_kernel_pages_full() to allow for both partial and full
VMA mappings.

We also add mmap_action_map_kernel_pages_discontig() to allow for
discontiguous mapping of kernel pages should the need arise.

We update the documentation to reflect the new features.

Finally, we update the VMA tests accordingly to reflect the changes.

Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 Documentation/filesystems/mmap_prepare.rst |  8 ++
 include/linux/mm.h                         | 95 +++++++++++++++++++++-
 include/linux/mm_types.h                   |  7 ++
 mm/memory.c                                | 42 +++++++++-
 mm/util.c                                  |  6 ++
 tools/testing/vma/include/dup.h            |  7 ++
 6 files changed, 159 insertions(+), 6 deletions(-)

diff --git a/Documentation/filesystems/mmap_prepare.rst b/Documentation/filesystems/mmap_prepare.rst
index be76ae475b9c..e810aa4134eb 100644
--- a/Documentation/filesystems/mmap_prepare.rst
+++ b/Documentation/filesystems/mmap_prepare.rst
@@ -156,5 +156,13 @@ pointer. These are:
 * mmap_action_simple_ioremap() - Sets up an I/O remap from a specified
   physical address and over a specified length.
 
+* mmap_action_map_kernel_pages() - Maps a specified array of `struct page`
+  pointers in the VMA from a specific offset.
+
+* mmap_action_map_kernel_pages_full() - Maps a specified array of `struct
+  page` pointers over the entire VMA. The caller must ensure there are
+  sufficient entries in the page array to cover the entire range of the
+  described VMA.
+
 **NOTE:** The ``action`` field should never normally be manipulated directly,
 rather you ought to use one of these helpers.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index df8fa6e6402b..6f0a3edb24e1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2912,7 +2912,7 @@ static inline bool folio_maybe_mapped_shared(struct folio *folio)
  * The caller must add any reference (e.g., from folio_try_get()) it might be
  * holding itself to the result.
  *
- * Returns the expected folio refcount.
+ * Returns: the expected folio refcount.
  */
 static inline int folio_expected_ref_count(const struct folio *folio)
 {
@@ -4364,6 +4364,45 @@ static inline void mmap_action_simple_ioremap(struct vm_area_desc *desc,
 	action->type = MMAP_SIMPLE_IO_REMAP;
 }
 
+/**
+ * mmap_action_map_kernel_pages - helper for mmap_prepare hook to specify that
+ * @nr_pages kernel pages contained in the @pages array should be mapped to
+ * userland starting at virtual address @start.
+ * @desc: The VMA descriptor for the VMA requiring kernel pages to be mapped.
+ * @start: The virtual address from which to map them.
+ * @pages: An array of struct page pointers describing the memory to map.
+ * @nr_pages: The number of entries in the @pages array.
+ */
+static inline void mmap_action_map_kernel_pages(struct vm_area_desc *desc,
+		unsigned long start, struct page **pages,
+		unsigned long nr_pages)
+{
+	struct mmap_action *action = &desc->action;
+
+	action->type = MMAP_MAP_KERNEL_PAGES;
+	action->map_kernel.start = start;
+	action->map_kernel.pages = pages;
+	action->map_kernel.nr_pages = nr_pages;
+	action->map_kernel.pgoff = desc->pgoff;
+}
+
+/**
+ * mmap_action_map_kernel_pages_full - helper for mmap_prepare hook to specify that
+ * kernel pages contained in the @pages array should be mapped to userland
+ * from @desc->start to @desc->end.
+ * @desc: The VMA descriptor for the VMA requiring kernel pages to be mapped.
+ * @pages: An array of struct page pointers describing the memory to map.
+ *
+ * The caller must ensure that @pages contains sufficient entries to cover the
+ * entire range described by @desc.
+ */
+static inline void mmap_action_map_kernel_pages_full(struct vm_area_desc *desc,
+		struct page **pages)
+{
+	mmap_action_map_kernel_pages(desc, desc->start, pages,
+			vma_desc_pages(desc));
+}
+
 int mmap_action_prepare(struct vm_area_desc *desc);
 int mmap_action_complete(struct vm_area_struct *vma,
 			 struct mmap_action *action);
@@ -4380,10 +4419,59 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
 	return vma;
 }
 
+/**
+ * range_is_subset - Is the specified inner range a subset of the outer range?
+ * @outer_start: The start of the outer range.
+ * @outer_end: The exclusive end of the outer range.
+ * @inner_start: The start of the inner range.
+ * @inner_end: The exclusive end of the inner range.
+ *
+ * Returns: %true if [inner_start, inner_end) is a subset of [outer_start,
+ * outer_end), otherwise %false.
+ */
+static inline bool range_is_subset(unsigned long outer_start,
+				   unsigned long outer_end,
+				   unsigned long inner_start,
+				   unsigned long inner_end)
+{
+	return outer_start <= inner_start && inner_end <= outer_end;
+}
+
+/**
+ * range_in_vma - is the specified [@start, @end) range a subset of the VMA?
+ * @vma: The VMA against which we want to check [@start, @end).
+ * @start: The start of the range we wish to check.
+ * @end: The exclusive end of the range we wish to check.
+ *
+ * Returns: %true if [@start, @end) is a subset of [@vma->vm_start,
+ * @vma->vm_end), %false otherwise.
+ */
 static inline bool range_in_vma(const struct vm_area_struct *vma,
 				unsigned long start, unsigned long end)
 {
-	return (vma && vma->vm_start <= start && end <= vma->vm_end);
+	if (!vma)
+		return false;
+
+	return range_is_subset(vma->vm_start, vma->vm_end, start, end);
+}
+
+/**
+ * range_in_vma_desc - is the specified [@start, @end) range a subset of the VMA
+ * described by @desc, a VMA descriptor?
+ * @desc: The VMA descriptor against which we want to check [@start, @end).
+ * @start: The start of the range we wish to check.
+ * @end: The exclusive end of the range we wish to check.
+ *
+ * Returns: %true if [@start, @end) is a subset of [@desc->start, @desc->end),
+ * %false otherwise.
+ */
+static inline bool range_in_vma_desc(const struct vm_area_desc *desc,
+				     unsigned long start, unsigned long end)
+{
+	if (!desc)
+		return false;
+
+	return range_is_subset(desc->start, desc->end, start, end);
 }
 
 #ifdef CONFIG_MMU
@@ -4427,6 +4515,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 			struct page **pages, unsigned long *num);
+int map_kernel_pages_prepare(struct vm_area_desc *desc);
+int map_kernel_pages_complete(struct vm_area_struct *vma,
+			      struct mmap_action *action);
 int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 				unsigned long num);
 int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7538d64f8848..c46224020a46 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -815,6 +815,7 @@ enum mmap_action_type {
 	MMAP_REMAP_PFN,		/* Remap PFN range. */
 	MMAP_IO_REMAP_PFN,	/* I/O remap PFN range. */
 	MMAP_SIMPLE_IO_REMAP,	/* I/O remap with guardrails. */
+	MMAP_MAP_KERNEL_PAGES,	/* Map kernel page range from array. */
 };
 
 /*
@@ -833,6 +834,12 @@ struct mmap_action {
 			phys_addr_t start_phys_addr;
 			unsigned long size;
 		} simple_ioremap;
+		struct {
+			unsigned long start;
+			struct page **pages;
+			unsigned long nr_pages;
+			pgoff_t pgoff;
+		} map_kernel;
 	};
 	enum mmap_action_type type;
 
diff --git a/mm/memory.c b/mm/memory.c
index f3f4046aee97..849d5d9eeb83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2484,13 +2484,14 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
 int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 			struct page **pages, unsigned long *num)
 {
-	const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
+	const unsigned long nr_pages = *num;
+	const unsigned long end = addr + PAGE_SIZE * nr_pages;
 
-	if (addr < vma->vm_start || end_addr >= vma->vm_end)
+	if (!range_in_vma(vma, addr, end))
 		return -EFAULT;
 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
-		BUG_ON(mmap_read_trylock(vma->vm_mm));
-		BUG_ON(vma->vm_flags & VM_PFNMAP);
+		VM_WARN_ON_ONCE(mmap_read_trylock(vma->vm_mm));
+		VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
 		vm_flags_set(vma, VM_MIXEDMAP);
 	}
 	/* Defer page refcount checking till we're about to map that page. */
@@ -2498,6 +2499,39 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_insert_pages);
 
+int map_kernel_pages_prepare(struct vm_area_desc *desc)
+{
+	const struct mmap_action *action = &desc->action;
+	const unsigned long addr = action->map_kernel.start;
+	unsigned long nr_pages, end;
+
+	if (!vma_desc_test(desc, VMA_MIXEDMAP_BIT)) {
+		VM_WARN_ON_ONCE(mmap_read_trylock(desc->mm));
+		VM_WARN_ON_ONCE(vma_desc_test(desc, VMA_PFNMAP_BIT));
+		vma_desc_set_flags(desc, VMA_MIXEDMAP_BIT);
+	}
+
+	nr_pages = action->map_kernel.nr_pages;
+	end = addr + PAGE_SIZE * nr_pages;
+	if (!range_in_vma_desc(desc, addr, end))
+		return -EFAULT;
+
+	return 0;
+}
+EXPORT_SYMBOL(map_kernel_pages_prepare);
+
+int map_kernel_pages_complete(struct vm_area_struct *vma,
+			      struct mmap_action *action)
+{
+	unsigned long nr_pages;
+
+	nr_pages = action->map_kernel.nr_pages;
+	return insert_pages(vma, action->map_kernel.start,
+			    action->map_kernel.pages,
+			    &nr_pages, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(map_kernel_pages_complete);
+
 /**
  * vm_insert_page - insert single page into user vma
  * @vma: user vma to map to
diff --git a/mm/util.c b/mm/util.c
index a166c48fe894..dea590e7a26c 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1441,6 +1441,8 @@ int mmap_action_prepare(struct vm_area_desc *desc)
 		return io_remap_pfn_range_prepare(desc);
 	case MMAP_SIMPLE_IO_REMAP:
 		return simple_ioremap_prepare(desc);
+	case MMAP_MAP_KERNEL_PAGES:
+		return map_kernel_pages_prepare(desc);
 	}
 
 	WARN_ON_ONCE(1);
@@ -1472,6 +1474,9 @@ int mmap_action_complete(struct vm_area_struct *vma,
 	case MMAP_IO_REMAP_PFN:
 		err = io_remap_pfn_range_complete(vma, action);
 		break;
+	case MMAP_MAP_KERNEL_PAGES:
+		err = map_kernel_pages_complete(vma, action);
+		break;
 	case MMAP_SIMPLE_IO_REMAP:
 		/*
 		 * The simple I/O remap should have been delegated to an I/O
@@ -1494,6 +1499,7 @@ int mmap_action_prepare(struct vm_area_desc *desc)
 	case MMAP_REMAP_PFN:
 	case MMAP_IO_REMAP_PFN:
 	case MMAP_SIMPLE_IO_REMAP:
+	case MMAP_MAP_KERNEL_PAGES:
 		WARN_ON_ONCE(1); /* nommu cannot handle these. */
 		break;
 	}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 6658df26698a..4407caf207ad 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -454,6 +454,7 @@ enum mmap_action_type {
 	MMAP_REMAP_PFN,		/* Remap PFN range. */
 	MMAP_IO_REMAP_PFN,	/* I/O remap PFN range. */
 	MMAP_SIMPLE_IO_REMAP,	/* I/O remap with guardrails. */
+	MMAP_MAP_KERNEL_PAGES,	/* Map kernel page range from an array. */
 };
 
 /*
@@ -472,6 +473,12 @@ struct mmap_action {
 			phys_addr_t start;
 			unsigned long len;
 		} simple_ioremap;
+		struct {
+			unsigned long start;
+			struct page **pages;
+			unsigned long num;
+			pgoff_t pgoff;
+		} map_kernel;
 	};
 	enum mmap_action_type type;
 
-- 
2.53.0

From nobody Tue Apr 7 02:33:54 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: Jonathan Corbet , Clemens Ladisch , Arnd Bergmann , Greg Kroah-Hartman , "K . Y . Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Long Li , Alexander Shishkin , Maxime Coquelin , Alexandre Torgue , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Bodo Stroesser , "Martin K . Petersen" , David Howells , Marc Dionne , Alexander Viro , Christian Brauner , Jan Kara , David Hildenbrand , "Liam R .
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Jann Horn , Pedro Falcato , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ryan Roberts
Subject: [PATCH v2 16/16] mm: on remap assert that input range within the proposed VMA
Date: Mon, 16 Mar 2026 21:12:12 +0000
Message-ID: <4e152e7b8e1a93baf0777628eef9409d031cf8f6.1773695307.git.ljs@kernel.org>

Now we have range_in_vma_desc(), update remap_pfn_range_prepare() to
check whether the input range is contained within the specified VMA, so
we can fail at prepare time if an invalid range is specified.

This also covers the I/O remap mmap actions, which ultimately call into
this function; other mmap action types either already span the full VMA
or already check this.

Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/memory.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 849d5d9eeb83..de0dd17759e2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3142,6 +3142,9 @@ int remap_pfn_range_prepare(struct vm_area_desc *desc)
 	const bool is_cow = vma_desc_is_cow_mapping(desc);
 	int err;
 
+	if (!range_in_vma_desc(desc, start, end))
+		return -EFAULT;
+
 	err = get_remap_pgoff(is_cow, start, end, desc->start, desc->end,
 			      pfn, &desc->pgoff);
 	if (err)
-- 
2.53.0
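The prepare-time guard added by this final patch can be modeled in userspace. This is a hedged sketch with simplified stand-in types: `struct vm_area_desc` here holds only the fields needed for the check, `remap_prepare_check()` is a hypothetical name, and the real `remap_pfn_range_prepare()` goes on to compute the pgoff via `get_remap_pgoff()` after this guard passes.

```c
#include <stdbool.h>

#define EFAULT 14

/* Simplified stand-in for the kernel's struct vm_area_desc: just the
 * half-open [start, end) range the proposed VMA will cover. */
struct vm_area_desc {
	unsigned long start, end;
};

/* Model of range_in_vma_desc(): is [start, end) within the descriptor? */
static bool range_in_vma_desc(const struct vm_area_desc *desc,
			      unsigned long start, unsigned long end)
{
	if (!desc)
		return false;
	return desc->start <= start && end <= desc->end;
}

/* Sketch of the guard remap_pfn_range_prepare() now applies up front:
 * an out-of-range input fails with -EFAULT before any mapping work. */
static int remap_prepare_check(const struct vm_area_desc *desc,
			       unsigned long start, unsigned long end)
{
	if (!range_in_vma_desc(desc, start, end))
		return -EFAULT;
	return 0; /* real code would go on to compute desc->pgoff etc. */
}
```

The design point is that the check runs at prepare time, before a VMA exists, so an invalid remap request never creates mapping state that has to be torn down.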