From: Mike Rapoport
To: Andrew Morton
Cc: Andrea Arcangeli, Axel Rasmussen, Baolin Wang, David Hildenbrand,
	Hugh Dickins, James Houghton, "Liam R. Howlett", Lorenzo Stoakes,
	"Matthew Wilcox (Oracle)", Michal Hocko, Mike Rapoport, Muchun Song,
	Nikita Kalyazin, Oscar Salvador, Paolo Bonzini, Peter Xu,
	Sean Christopherson, Shuah Khan, Suren Baghdasaryan, Vlastimil Babka,
	kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH v2 07/15] userfaultfd: introduce vm_uffd_ops
Date: Fri, 6 Mar 2026 19:18:07 +0200
Message-ID: <20260306171815.3160826-8-rppt@kernel.org>
In-Reply-To: <20260306171815.3160826-1-rppt@kernel.org>
References: <20260306171815.3160826-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

The current userfaultfd implementation works only with memory managed by
the core MM: anonymous, shmem and hugetlb.

First, there is no fundamental reason to limit userfaultfd support to the
core memory types: userfaults can be handled similarly to regular page
faults provided the VMA owner implements the appropriate callbacks.

Second, historically various code paths were conditioned on
vma_is_anonymous(), vma_is_shmem() and is_vm_hugetlb_page(), and some of
these conditions can be expressed as operations implemented by a
particular memory type.

Introduce a vm_uffd_ops extension to vm_operations_struct that delegates
memory-type-specific operations to the VMA owner. Operations for anonymous
memory are handled internally in userfaultfd using anon_uffd_ops, which is
implicitly assigned to anonymous VMAs.
Start with a single operation, ->can_userfault(), that verifies that a VMA
meets the requirements for userfaultfd support at registration time.
Implement this method for anonymous, shmem and hugetlb memory and move the
relevant parts of vma_can_userfault() into the new callbacks.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/mm.h            |  5 +++++
 include/linux/userfaultfd_k.h |  6 ++++++
 mm/hugetlb.c                  | 15 +++++++++++++++
 mm/shmem.c                    | 15 +++++++++++++++
 mm/userfaultfd.c              | 36 ++++++++++++++++++++++++++---------
 5 files changed, 68 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5be3d8a8f806..b63b28c65676 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -741,6 +741,8 @@ struct vm_fault {
	 */
 };
 
+struct vm_uffd_ops;
+
 /*
  * These are the virtual MM functions - opening of an area, closing and
  * unmapping it (needed to keep files on disk up-to-date etc), pointer
@@ -826,6 +828,9 @@ struct vm_operations_struct {
	struct page *(*find_normal_page)(struct vm_area_struct *vma,
					 unsigned long addr);
 #endif /* CONFIG_FIND_NORMAL_PAGE */
+#ifdef CONFIG_USERFAULTFD
+	const struct vm_uffd_ops *uffd_ops;
+#endif
 };
 
 #ifdef CONFIG_NUMA_BALANCING
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index a49cf750e803..56e85ab166c7 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -80,6 +80,12 @@ struct userfaultfd_ctx {
 
 extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 
+/* VMA userfaultfd operations */
+struct vm_uffd_ops {
+	/* Checks if a VMA can support userfaultfd */
+	bool (*can_userfault)(struct vm_area_struct *vma, vm_flags_t vm_flags);
+};
+
 /* A combined operation mode + behavior flags. */
 typedef unsigned int __bitwise uffd_flags_t;
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..077968a8a69a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4818,6 +4818,18 @@ static vm_fault_t hugetlb_vm_op_fault(struct vm_fault *vmf)
	return 0;
 }
 
+#ifdef CONFIG_USERFAULTFD
+static bool hugetlb_can_userfault(struct vm_area_struct *vma,
+				  vm_flags_t vm_flags)
+{
+	return true;
+}
+
+static const struct vm_uffd_ops hugetlb_uffd_ops = {
+	.can_userfault = hugetlb_can_userfault,
+};
+#endif
+
 /*
  * When a new function is introduced to vm_operations_struct and added
  * to hugetlb_vm_ops, please consider adding the function to shm_vm_ops.
@@ -4831,6 +4843,9 @@ const struct vm_operations_struct hugetlb_vm_ops = {
	.close = hugetlb_vm_op_close,
	.may_split = hugetlb_vm_op_split,
	.pagesize = hugetlb_vm_op_pagesize,
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &hugetlb_uffd_ops,
+#endif
 };
 
 static pte_t make_huge_pte(struct vm_area_struct *vma, struct folio *folio,
diff --git a/mm/shmem.c b/mm/shmem.c
index b40f3cd48961..f2a25805b9bf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3294,6 +3294,15 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
	shmem_inode_unacct_blocks(inode, 1);
	return ret;
 }
+
+static bool shmem_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags)
+{
+	return true;
+}
+
+static const struct vm_uffd_ops shmem_uffd_ops = {
+	.can_userfault = shmem_can_userfault,
+};
 #endif /* CONFIG_USERFAULTFD */
 
 #ifdef CONFIG_TMPFS
@@ -5313,6 +5322,9 @@ static const struct vm_operations_struct shmem_vm_ops = {
	.set_policy = shmem_set_policy,
	.get_policy = shmem_get_policy,
 #endif
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &shmem_uffd_ops,
+#endif
 };
 
 static const struct vm_operations_struct shmem_anon_vm_ops = {
@@ -5322,6 +5334,9 @@ static const struct vm_operations_struct shmem_anon_vm_ops = {
	.set_policy = shmem_set_policy,
	.get_policy = shmem_get_policy,
 #endif
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &shmem_uffd_ops,
+#endif
 };
 
 int shmem_init_fs_context(struct fs_context *fc)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index c5fd1e5c67b3..b55d4a8d88cc 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -34,6 +34,25 @@ struct mfill_state {
	pmd_t *pmd;
 };
 
+static bool anon_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags)
+{
+	/* anonymous memory does not support MINOR mode */
+	if (vm_flags & VM_UFFD_MINOR)
+		return false;
+	return true;
+}
+
+static const struct vm_uffd_ops anon_uffd_ops = {
+	.can_userfault = anon_can_userfault,
+};
+
+static const struct vm_uffd_ops *vma_uffd_ops(struct vm_area_struct *vma)
+{
+	if (vma_is_anonymous(vma))
+		return &anon_uffd_ops;
+	return vma->vm_ops ? vma->vm_ops->uffd_ops : NULL;
+}
+
 static __always_inline
 bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
 {
@@ -2023,13 +2042,15 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 bool vma_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags,
		       bool wp_async)
 {
-	vm_flags &= __VM_UFFD_FLAGS;
+	const struct vm_uffd_ops *ops = vma_uffd_ops(vma);
 
-	if (vma->vm_flags & VM_DROPPABLE)
+	/* only VMAs that implement vm_uffd_ops are supported */
+	if (!ops)
		return false;
 
-	if ((vm_flags & VM_UFFD_MINOR) &&
-	    (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
+	vm_flags &= __VM_UFFD_FLAGS;
+
+	if (vma->vm_flags & VM_DROPPABLE)
		return false;
 
	/*
@@ -2041,16 +2062,13 @@ bool vma_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags,
 
	/*
	 * If user requested uffd-wp but not enabled pte markers for
-	 * uffd-wp, then shmem & hugetlbfs are not supported but only
-	 * anonymous.
+	 * uffd-wp, then only anonymous memory is supported
	 */
	if (!uffd_supports_wp_marker() && (vm_flags & VM_UFFD_WP) &&
	    !vma_is_anonymous(vma))
		return false;
 
-	/* By default, allow any of anon|shmem|hugetlb */
-	return vma_is_anonymous(vma) || is_vm_hugetlb_page(vma) ||
-	       vma_is_shmem(vma);
+	return ops->can_userfault(vma, vm_flags);
 }
 
 static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
-- 
2.51.0