Subject: [PATCH v3 1/4] mm: Move FOLL_* defs to mm_types.h
From: David Howells <dhowells@redhat.com>
To: Al Viro
Cc: Matthew Wilcox, John Hubbard, Christoph Hellwig, Jeff Layton,
    Logan Gunthorpe, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Date: Fri, 02 Dec 2022 09:43:27 +0000
Message-ID: <166997420723.9475.3907844523056304049.stgit@warthog.procyon.org.uk>

Move the FOLL_* definitions to linux/mm_types.h to make them more
accessible without having to drag in all of linux/mm.h and everything that
it drags in too[1].
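For illustration only, a minimal, hypothetical consumer (the helper below
is made up and not part of this patch) can now get at the flags through
the lighter header:

	#include <linux/mm_types.h>

	/* FOLL_* is visible here without pulling in linux/mm.h. */
	static inline bool gup_flags_want_pin(unsigned int gup_flags)
	{
		return (gup_flags & FOLL_PIN) != 0;
	}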
Suggested-by: Matthew Wilcox
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
cc: Al Viro
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/linux-fsdevel/Y1%2FhSO+7kAJhGShG@casper.infradead.org/ [1]
Link: https://lore.kernel.org/r/166732025009.3186319.3402781784409891214.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869688542.3723671.10243929000823258622.stgit@warthog.procyon.org.uk/ # rfc
---
 include/linux/mm.h       | 74 -----------------------------------------
 include/linux/mm_types.h | 73 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 73 insertions(+), 74 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..7a7a287818ad 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2941,80 +2941,6 @@ static inline vm_fault_t vmf_error(int err)
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
			 unsigned int foll_flags);
 
-#define FOLL_WRITE	0x01	/* check pte is writable */
-#define FOLL_TOUCH	0x02	/* mark page accessed */
-#define FOLL_GET	0x04	/* do get_page on page */
-#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
-#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
-#define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
-				 * and return without waiting upon it */
-#define FOLL_NOFAULT	0x80	/* do not fault in pages */
-#define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
-#define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
-#define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
-#define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
-#define FOLL_ANON	0x8000	/* don't do file mappings */
-#define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
-#define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
-#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
-#define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
-
-/*
- * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
- * other.  Here is what they mean, and how to use them:
- *
- * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control.  This is in contrast to
- * iov_iter_get_pages(), whose usages are transient.
- *
- * FIXME: For pages which are part of a filesystem, mappings are subject to the
- * lifetime enforced by the filesystem and we need guarantees that longterm
- * users like RDMA and V4L2 only establish mappings which coordinate usage with
- * the filesystem.  Ideas for this coordination include revoking the longterm
- * pin, delaying writeback, bounce buffer page writeback, etc.  As FS DAX was
- * added after the problem with filesystems was found FS DAX VMAs are
- * specifically failed.  Filesystem pages are still subject to bugs and use of
- * FOLL_LONGTERM should be avoided on those pages.
- *
- * FIXME: Also NOTE that FOLL_LONGTERM is not supported in every GUP call.
- * Currently only get_user_pages() and get_user_pages_fast() support this flag
- * and calls to get_user_pages_[un]locked are specifically not allowed.  This
- * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY.
- *
- * In the CMA case: long term pins in a CMA region would unnecessarily fragment
- * that region.  And so, CMA attempts to migrate the page before pinning, when
- * FOLL_LONGTERM is specified.
- *
- * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
- * but an additional pin counting system) will be invoked. This is intended for
- * anything that gets a page reference and then touches page data (for example,
- * Direct IO). This lets the filesystem know that some non-file-system entity is
- * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
- * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
- * a call to unpin_user_page().
- *
- * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
- * and separate refcounting mechanisms, however, and that means that each has
- * its own acquire and release mechanisms:
- *
- *     FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
- *
- *     FOLL_PIN: pin_user_pages*() to acquire, and unpin_user_pages to release.
- *
- * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
- * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
- * calls applied to them, and that's perfectly OK. This is a constraint on the
- * callers, not on the pages.)
- *
- * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
- * directly by the caller. That's in order to help avoid mismatches when
- * releasing pages: get_user_pages*() pages must be released via put_page(),
- * while pin_user_pages*() pages must be released via unpin_user_page().
- *
- * Please see Documentation/core-api/pin_user_pages.rst for more information.
- */
-
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 {
	if (vm_fault & VM_FAULT_OOM)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..0c80a5ad6e6a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1003,4 +1003,77 @@ enum fault_flag {
 
 typedef unsigned int __bitwise zap_flags_t;
 
+/*
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other.  Here is what they mean, and how to use them:
+ *
+ * FOLL_LONGTERM indicates that the page will be held for an indefinite time
+ * period _often_ under userspace control.  This is in contrast to
+ * iov_iter_get_pages(), whose usages are transient.
+ *
+ * FIXME: For pages which are part of a filesystem, mappings are subject to the
+ * lifetime enforced by the filesystem and we need guarantees that longterm
+ * users like RDMA and V4L2 only establish mappings which coordinate usage with
+ * the filesystem.  Ideas for this coordination include revoking the longterm
+ * pin, delaying writeback, bounce buffer page writeback, etc.  As FS DAX was
+ * added after the problem with filesystems was found FS DAX VMAs are
+ * specifically failed.  Filesystem pages are still subject to bugs and use of
+ * FOLL_LONGTERM should be avoided on those pages.
+ *
+ * FIXME: Also NOTE that FOLL_LONGTERM is not supported in every GUP call.
+ * Currently only get_user_pages() and get_user_pages_fast() support this flag
+ * and calls to get_user_pages_[un]locked are specifically not allowed.  This
+ * is due to an incompatibility with the FS DAX check and
+ * FAULT_FLAG_ALLOW_RETRY.
+ *
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region.  And so, CMA attempts to migrate the page before pinning, when
+ * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO). This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
+ * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
+ * a call to unpin_user_page().
+ *
+ * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
+ * and separate refcounting mechanisms, however, and that means that each has
+ * its own acquire and release mechanisms:
+ *
+ *     FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
+ *
+ *     FOLL_PIN: pin_user_pages*() to acquire, and unpin_user_pages to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via unpin_user_page().
+ *
+ * Please see Documentation/core-api/pin_user_pages.rst for more information.
+ */
+#define FOLL_WRITE	0x01	/* check pte is writable */
+#define FOLL_TOUCH	0x02	/* mark page accessed */
+#define FOLL_GET	0x04	/* do get_page on page */
+#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
+#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
+#define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
+				 * and return without waiting upon it */
+#define FOLL_NOFAULT	0x80	/* do not fault in pages */
+#define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
+#define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
+#define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
+#define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
+#define FOLL_ANON	0x8000	/* don't do file mappings */
+#define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
+#define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
+#define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
+
 #endif /* _LINUX_MM_TYPES_H */
Subject: [PATCH v3 2/4] iov_iter: Add a function to extract a page list from an iterator
From: David Howells <dhowells@redhat.com>
To: Al Viro
Cc: Christoph Hellwig, John Hubbard, Matthew Wilcox, Jeff Layton,
    Logan Gunthorpe, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Date: Fri, 02 Dec 2022 09:43:36 +0000
Message-ID: <166997421646.9475.14837976344157464997.stgit@warthog.procyon.org.uk>

Add a function, iov_iter_extract_pages(), to extract a list of pages from
an iterator.  The pages may be returned with a reference added or a pin
added or neither, depending on the type of iterator and the direction of
transfer.

The function also indicates the mode of retention that was employed for an
iterator - and therefore how the caller should dispose of the pages later.
There are three cases:

 (1) Transfer *into* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have pins obtained on them (but not references)
     so that fork() doesn't CoW the pages incorrectly whilst the I/O is in
     progress.

     The indicated mode of retention will be FOLL_PIN for this case.  The
     caller should use something like unpin_user_page() to dispose of the
     page.

 (2) Transfer is *out of* an ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have references obtained on them, but not pins.

     The indicated mode of retention will be FOLL_GET.  The caller should
     use something like put_page() for page disposal.

 (3) Any other sort of iterator.

     No refs or pins are obtained on the page, the assumption is made that
     the caller will manage page retention.  The indicated mode of
     retention will be 0.  The pages don't need additional disposal.
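To make the disposal rules concrete, here is a rough caller sketch
(hypothetical code, not part of this patch); the cleanup step keys
entirely off the returned retention mode:

	struct page **pages = NULL;
	unsigned int cleanup_mode, npages, j;
	size_t offset;
	ssize_t n;

	n = iov_iter_extract_pages(iter, &pages, maxsize, maxpages,
				   &offset, &cleanup_mode);
	if (n < 0)
		return n;
	npages = DIV_ROUND_UP(offset + n, PAGE_SIZE);

	/* ... transfer data using pages[0..npages-1], starting at
	 * 'offset' into the first page ...
	 */

	for (j = 0; j < npages; j++) {
		if (cleanup_mode & FOLL_PIN)
			unpin_user_page(pages[j]);	/* case (1) */
		else if (cleanup_mode & FOLL_GET)
			put_page(pages[j]);		/* case (2) */
		/* case (3): cleanup_mode == 0, nothing to release */
	}
	kvfree(pages);	/* assuming the page list was allocated for us */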
Changes:
========
ver #3)
 - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
   to get/pin_user_pages_fast()[1].

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Al Viro
cc: Christoph Hellwig
cc: John Hubbard
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/Y3zFzdWnWlEJ8X8/@infradead.org/ [1]
Link: https://lore.kernel.org/r/166722777971.2555743.12953624861046741424.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166732025748.3186319.8314014902727092626.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869689451.3723671.18242195992447653092.stgit@warthog.procyon.org.uk/ # rfc
---
 include/linux/uio.h |   4 +
 lib/iov_iter.c      | 350 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 354 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 2e3134b14ffd..2fa3ef0f2da3 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -351,4 +351,8 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
	};
 }
 
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       size_t *offset0, unsigned int *cleanup_mode);
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index c3ca28ca68a6..f0f758950a54 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1892,3 +1892,353 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
	i->iov -= state->nr_segs - i->nr_segs;
	i->nr_segs = state->nr_segs;
 }
+
+/*
+ * Extract a list of contiguous pages from an ITER_PIPE iterator.  This does
+ * not get references of its own on the pages, nor does it get a pin on them.
+ * If there's a partial page, it adds that first and will then allocate and add
+ * pages into the pipe to make up the buffer space to the amount required.
+ *
+ * The caller must hold the pipe locked and only transferring into a pipe is
+ * supported.
+ */
+static ssize_t iov_iter_extract_pipe_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	unsigned int nr, offset, chunk, j;
+	struct page **p;
+	size_t left;
+
+	if (!sanity(i))
+		return -EFAULT;
+
+	offset = pipe_npages(i, &nr);
+	if (!nr)
+		return -EFAULT;
+	*offset0 = offset;
+
+	maxpages = min_t(size_t, nr, maxpages);
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	left = maxsize;
+	for (j = 0; j < maxpages; j++) {
+		struct page *page = append_pipe(i, left, &offset);
+		if (!page)
+			break;
+		chunk = min_t(size_t, left, PAGE_SIZE - offset);
+		left -= chunk;
+		*p++ = page;
+	}
+	if (!j)
+		return -EFAULT;
+	*cleanup_mode = 0;
+	return maxsize - left;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+					     struct page ***pages, size_t maxsize,
+					     unsigned int maxpages,
+					     size_t *offset0,
+					     unsigned int *cleanup_mode)
+{
+	struct page *page, **p;
+	unsigned int nr = 0, offset;
+	loff_t pos = i->xarray_start + i->iov_offset;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	XA_STATE(xas, i->xarray, index);
+
+	offset = pos & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	rcu_read_lock();
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+
+		/* Has the page moved or been split? */
+		if (unlikely(page != xas_reload(&xas))) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		p[nr++] = find_subpage(page, xas.xa_index);
+		if (nr == maxpages)
+			break;
+	}
+	rcu_read_unlock();
+
+	maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+	i->iov_offset += maxsize;
+	i->count -= maxsize;
+	*cleanup_mode = 0;
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	struct page **p, *page;
+	size_t skip = i->iov_offset, offset;
+	int k;
+
+	maxsize = min(maxsize, i->bvec->bv_len - skip);
+	skip += i->bvec->bv_offset;
+	page = i->bvec->bv_page + skip / PAGE_SIZE;
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+	for (k = 0; k < maxpages; k++)
+		p[k] = page + k;
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	i->count -= maxsize;
+	i->iov_offset += maxsize;
+	if (i->iov_offset == i->bvec->bv_len) {
+		i->iov_offset = 0;
+		i->bvec++;
+		i->nr_segs--;
+	}
+	*cleanup_mode = 0;
+	return maxsize;
+}
+
+/*
+ * Get the first segment from an ITER_UBUF or ITER_IOVEC iterator.  The
+ * iterator must not be empty.
+ */
+static unsigned long iov_iter_extract_first_user_segment(const struct iov_iter *i,
+							 size_t *size)
+{
+	size_t skip;
+	long k;
+
+	if (iter_is_ubuf(i))
+		return (unsigned long)i->ubuf + i->iov_offset;
+
+	for (k = 0, skip = i->iov_offset; k < i->nr_segs; k++, skip = 0) {
+		size_t len = i->iov[k].iov_len - skip;
+
+		if (unlikely(!len))
+			continue;
+		if (*size > len)
+			*size = len;
+		return (unsigned long)i->iov[k].iov_base + skip;
+	}
+	BUG(); // if it had been empty, we wouldn't get called
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get references
+ * on them.  This should only be used iff the iterator is user-backed
+ * (IOVEC/UBUF) and data is being transferred out of the buffer described by
+ * the iterator (ie. this is the source).
+ *
+ * The pages are returned with incremented refcounts that the caller must undo
+ * once the transfer is complete, but no additional pins are obtained.
+ *
+ * This is only safe to be used where background IO/DMA is not going to be
+ * modifying the buffer, and so won't cause a problem with CoW on fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_get(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   size_t *offset0,
+						   unsigned int *cleanup_mode)
+{
+	unsigned long addr;
+	unsigned int gup_flags = FOLL_GET;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(iov_iter_rw(i) != WRITE))
+		return -EFAULT;
+
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = iov_iter_extract_first_user_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = get_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	*cleanup_mode = FOLL_GET;
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them.  This should only be used iff the iterator is user-backed
+ * (IOVEC/UBUF) and data is being transferred into the buffer described by the
+ * iterator (ie. this is the destination).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref makes sure that CoW happens
+ * correctly in the parent during fork.
+ */
+static ssize_t iov_iter_extract_user_pages_and_pin(struct iov_iter *i,
+						   struct page ***pages,
+						   size_t maxsize,
+						   unsigned int maxpages,
+						   size_t *offset0,
+						   unsigned int *cleanup_mode)
+{
+	unsigned long addr;
+	unsigned int gup_flags = FOLL_PIN | FOLL_WRITE;
+	size_t offset;
+	int res;
+
+	if (WARN_ON_ONCE(iov_iter_rw(i) != READ))
+		return -EFAULT;
+
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = first_iovec_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	*cleanup_mode = FOLL_PIN;
+	return maxsize;
+}
+
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   size_t *offset0,
+					   unsigned int *cleanup_mode)
+{
+	if (i->data_source)
+		return iov_iter_extract_user_pages_and_get(i, pages, maxsize,
+							   maxpages, offset0,
+							   cleanup_mode);
+	else
+		return iov_iter_extract_user_pages_and_pin(i, pages, maxsize,
+							   maxpages, offset0,
+							   cleanup_mode);
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ * @cleanup_mode: Where to return the cleanup mode
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator.  The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base.
+ * If *@pages is not NULL, it will be assumed that the caller allocated a
+ * page list at least @maxpages in size and this will be filled in.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ *  (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *      transferred /OUT OF/ the described buffer, refs will be taken on the
+ *      pages, but pins will not be added.  This can be used for DMA from a
+ *      page; it cannot be used for DMA to a page, as it may cause page-COW
+ *      problems in fork.  *@cleanup_mode will be set to FOLL_GET.
+ *
+ *  (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF) and data is to be
+ *      transferred /INTO/ the described buffer, pins will be added to the
+ *      pages, but refs will not be taken.  This must be used for DMA to a
+ *      page.  *@cleanup_mode will be set to FOLL_PIN.
+ *
+ *  (*) If the iterator is ITER_PIPE, this must describe a destination for the
+ *      data.  Additional pages may be allocated and added to the pipe (which
+ *      will hold the refs), but neither refs nor pins will be obtained for the
+ *      caller.  The caller must hold the pipe lock.  *@cleanup_mode will be
+ *      set to 0.
+ *
+ *  (*) If the iterator is ITER_BVEC or ITER_XARRAY, the pages are merely
+ *      listed; no extra refs or pins are obtained.  *@cleanup_mode will be set
+ *      to 0.
+ *
+ * Note also:
+ *
+ *  (*) Use with ITER_KVEC is not supported as that may refer to memory that
+ *      doesn't have associated page structs.
+ *
+ *  (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated,
+ * sets *@offset0 to the offset into the first page and *@cleanup_mode to the
+ * cleanup required, and returns the amount of buffer space represented by the
+ * page list.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       size_t *offset0, unsigned int *cleanup_mode)
+{
+	maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+	if (!maxsize)
+		return 0;
+
+	if (likely(user_backed_iter(i)))
+		return iov_iter_extract_user_pages(i, pages, maxsize,
+						   maxpages, offset0,
+						   cleanup_mode);
+	if (iov_iter_is_bvec(i))
+		return iov_iter_extract_bvec_pages(i, pages, maxsize,
+						   maxpages, offset0,
+						   cleanup_mode);
+	if (iov_iter_is_pipe(i))
+		return iov_iter_extract_pipe_pages(i, pages, maxsize,
+						   maxpages, offset0,
+						   cleanup_mode);
+	if (iov_iter_is_xarray(i))
+		return iov_iter_extract_xarray_pages(i, pages, maxsize,
+						     maxpages, offset0,
+						     cleanup_mode);
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);
Subject: [PATCH v3 3/4] netfs: Add a function to extract a UBUF or IOVEC into a BVEC iterator
From: David Howells <dhowells@redhat.com>
To: Al Viro
Cc: Jeff Layton, Steve French, Shyam Prasad N, Rohith Surabattula,
    Christoph Hellwig, Matthew Wilcox, Logan Gunthorpe,
    linux-cachefs@redhat.com, linux-cifs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 02 Dec 2022 09:43:45 +0000
Message-ID: <166997422579.9475.12101700945635692496.stgit@warthog.procyon.org.uk>

Add a function to extract the pages from a user-space supplied iterator
(UBUF- or IOVEC-type) into a BVEC-type iterator, retaining the pages by
getting a ref on them (WRITE) or pinning them (READ) as we go.

This is useful in three situations:

 (1) A userspace thread may have a sibling that unmaps or remaps the
     process's VM during the operation, changing the assignment of the
     pages and potentially causing an error.  Retaining the pages keeps
     some pages around, even if this occurs; further, we find out at the
     point of extraction if EFAULT is going to be incurred.

 (2) Pages might get swapped out/discarded if not retained, so we want to
     retain them to avoid the reload causing a deadlock due to a DIO
     from/to an mmapped region on the same file.

 (3) The iterator may get passed to sendmsg() by the filesystem.  If a
     fault occurs, we may get a short write to a TCP stream that's then
     tricky to recover from.

We don't deal with other types of iterator here, leaving it to other
mechanisms to retain the pages (eg. PG_locked, PG_writeback and the pipe
lock).
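As a rough sketch of the intended use (hypothetical caller, error handling
elided), a write path might convert the user iterator up front and then
only ever touch the bvec iterator:

	struct iov_iter bvec_iter;
	unsigned int cleanup_mode;
	ssize_t nbv;

	/* Ref (WRITE) or pin (READ) the user pages and build a BVEC
	 * iterator over them.
	 */
	nbv = netfs_extract_user_iter(user_iter, count, &bvec_iter,
				      &cleanup_mode);
	if (nbv < 0)
		return nbv;

	/* ... issue the I/O against bvec_iter, eg. pass it to sendmsg();
	 * a fault in the original user buffer can no longer cause a
	 * short write mid-stream ...
	 */

	/* Afterwards, release each extracted page according to
	 * cleanup_mode (FOLL_GET -> put_page(), FOLL_PIN ->
	 * unpin_user_page()) and kvfree() the allocated bvec array.
	 */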
Changes:
========
ver #3)
 - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
   to get/pin_user_pages_fast()[1].

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/Y3zFzdWnWlEJ8X8/@infradead.org/ [1]
Link: https://lore.kernel.org/r/166697255265.61150.6289490555867717077.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166732026503.3186319.12020462741051772825.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869690376.3723671.8813331570219190705.stgit@warthog.procyon.org.uk/ # rfc
---
 fs/netfs/Makefile     |  1 +
 fs/netfs/iterator.c   | 99 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |  3 +
 3 files changed, 103 insertions(+)
 create mode 100644 fs/netfs/iterator.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index f684c0cd1ec5..386d6fb92793 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -3,6 +3,7 @@
 netfs-y := \
	buffered_read.o \
	io.o \
+	iterator.o \
	main.o \
	objects.o
 
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
new file mode 100644
index 000000000000..82a691b233ef
--- /dev/null
+++ b/fs/netfs/iterator.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Iterator helpers.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include <linux/export.h>
+#include <linux/slab.h>
+#include <linux/uio.h>
+#include <linux/netfs.h>
+#include "internal.h"
+
+/**
+ * netfs_extract_user_iter - Extract the pages from a user iterator into a bvec
+ * @orig: The original iterator
+ * @orig_len: The amount of iterator to copy
+ * @new: The iterator to be set up
+ * @cleanup_mode: Where to indicate the cleanup mode
+ *
+ * Extract the page fragments from the given amount of the source iterator and
+ * build up a second iterator that refers to all of those bits.  This allows
+ * the original iterator to be disposed of.
+ *
+ * On success, the number of elements in the bvec is returned, the original
+ * iterator will have been advanced by the amount extracted and @*cleanup_mode
+ * will have been set to FOLL_GET, FOLL_PIN or 0.
+ */
+ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
+				struct iov_iter *new, unsigned int *cleanup_mode)
+{
+	struct bio_vec *bv = NULL;
+	struct page **pages;
+	unsigned int cur_npages;
+	unsigned int max_pages;
+	unsigned int npages = 0;
+	unsigned int i;
+	ssize_t ret;
+	size_t count = orig_len, offset, len;
+	size_t bv_size, pg_size;
+
+	if (WARN_ON_ONCE(!iter_is_ubuf(orig) && !iter_is_iovec(orig)))
+		return -EIO;
+
+	max_pages = iov_iter_npages(orig, INT_MAX);
+	bv_size = array_size(max_pages, sizeof(*bv));
+	bv = kvmalloc(bv_size, GFP_KERNEL);
+	if (!bv)
+		return -ENOMEM;
+
+	*cleanup_mode = 0;
+
+	/* Put the page list at the end of the bvec list storage.  bvec
+	 * elements are larger than page pointers, so as long as we work
+	 * 0->last, we should be fine.
+	 */
+	pg_size = array_size(max_pages, sizeof(*pages));
+	pages = (void *)bv + bv_size - pg_size;
+
+	while (count && npages < max_pages) {
+		ret = iov_iter_extract_pages(orig, &pages, count,
+					     max_pages - npages, &offset,
+					     cleanup_mode);
+		if (ret < 0) {
+			pr_err("Couldn't get user pages (rc=%zd)\n", ret);
+			break;
+		}
+
+		if (ret > count) {
+			pr_err("get_pages rc=%zd more than %zu\n", ret, count);
+			break;
+		}
+
+		count -= ret;
+		ret += offset;
+		cur_npages = DIV_ROUND_UP(ret, PAGE_SIZE);
+
+		if (npages + cur_npages > max_pages) {
+			pr_err("Out of bvec array capacity (%u vs %u)\n",
+			       npages + cur_npages, max_pages);
+			break;
+		}
+
+		for (i = 0; i < cur_npages; i++) {
+			len = ret > PAGE_SIZE ? PAGE_SIZE : ret;
+			bv[npages + i].bv_page = *pages++;
+			bv[npages + i].bv_offset = offset;
+			bv[npages + i].bv_len = len - offset;
+			ret -= len;
+			offset = 0;
+		}
+
+		npages += cur_npages;
+	}
+
+	iov_iter_bvec(new, iov_iter_rw(orig), bv, npages, orig_len - count);
+	return npages;
+}
+EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index f2402ddeafbf..eed84474e4cf 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -288,6 +288,9 @@ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
 void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
			  bool was_async, enum netfs_sreq_ref_trace what);
 void netfs_stats_show(struct seq_file *);
+ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
+				struct iov_iter *new,
+				unsigned int *cleanup_mode);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
Subject: [PATCH v3 4/4] netfs: Add a function to extract an iterator into a scatterlist
From: David Howells <dhowells@redhat.com>
To: Al Viro
Cc: Jeff Layton, Steve French, Shyam Prasad N, Rohith Surabattula,
    Christoph Hellwig, Matthew Wilcox, Logan Gunthorpe,
    linux-cachefs@redhat.com, linux-cifs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Fri, 02 Dec 2022 09:43:55 +0000
Message-ID: <166997423514.9475.11145024341505464337.stgit@warthog.procyon.org.uk>

Provide a function for filling in a scatterlist from the list of pages
contained in an iterator.  If the iterator is UBUF- or IOVEC-type, the
pages have a ref (WRITE) or a pin (READ) taken on them.

If the iterator is BVEC-, KVEC- or XARRAY-type, no ref is taken on the
pages and it is left to the caller to manage their lifetime.  It cannot be
assumed that a ref can be validly taken, particularly in the case of a
KVEC iterator.
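As a rough sketch of the intended use (hypothetical caller; the capacity
is illustrative), the function appends to the table and leaves the end
mark to the caller:

	struct scatterlist sgl[16];		/* illustrative capacity */
	struct sg_table sgtable = { .sgl = sgl };
	unsigned int cleanup_mode;
	ssize_t got;

	sg_init_table(sgl, ARRAY_SIZE(sgl));
	sgtable.nents = 0;	/* elements are appended from here */

	got = netfs_extract_iter_to_sg(iter, len, &sgtable, ARRAY_SIZE(sgl),
				       &cleanup_mode);
	if (got < 0)
		return got;
	if (sgtable.nents > 0)	/* no end mark is placed for us */
		sg_mark_end(&sgl[sgtable.nents - 1]);

	/* ... map the list for DMA or hand it to the crypto layer ...
	 * then undo the retention per cleanup_mode: FOLL_PIN ->
	 * unpin_user_page() on each page, FOLL_GET -> put_page(),
	 * 0 -> nothing to release.
	 */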
Changes:
========
ver #3)
 - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
   to get/pin_user_pages_fast()[1].

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/Y3zFzdWnWlEJ8X8/@infradead.org/ [1]
Link: https://lore.kernel.org/r/166697255985.61150.16489950598033809487.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166732027275.3186319.5186488812166611598.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869691313.3723671.10714823767342163891.stgit@warthog.procyon.org.uk/ # rfc
---
 fs/netfs/iterator.c   | 268 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |   4 +
 mm/vmalloc.c          |   1 +
 3 files changed, 273 insertions(+)

diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 82a691b233ef..327694c3ad3b 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -7,7 +7,9 @@
 
 #include <linux/export.h>
 #include <linux/slab.h>
+#include <linux/mm.h>
 #include <linux/uio.h>
+#include <linux/scatterlist.h>
 #include <linux/netfs.h>
 #include "internal.h"
 
@@ -97,3 +99,269 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
	return npages;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
+
+/*
+ * Extract a list of up to sg_max pages from UBUF- or IOVEC-class iterators,
+ * pin or get refs on them as appropriate and add them to the scatterlist.
+ */
+static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter,
+					ssize_t maxsize,
+					struct sg_table *sgtable,
+					unsigned int sg_max,
+					unsigned int *cleanup_mode)
+{
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	struct page **pages;
+	unsigned int npages;
+	ssize_t ret = 0, res;
+	size_t len, off;
+
+	*cleanup_mode = 0;
+
+	/* We decant the page list into the tail of the scatterlist */
+	pages = (void *)sgtable->sgl + array_size(sg_max, sizeof(struct scatterlist));
+	pages -= sg_max;
+
+	do {
+		res = iov_iter_extract_pages(iter, &pages, maxsize, sg_max, &off,
+					     cleanup_mode);
+		if (res < 0)
+			goto failed;
+
+		len = res;
+		maxsize -= len;
+		ret += len;
+		npages = DIV_ROUND_UP(off + len, PAGE_SIZE);
+		sg_max -= npages;
+
+		for (; npages > 0; npages--) {
+			struct page *page = *pages;
+			size_t seg = min_t(size_t, PAGE_SIZE - off, len);
+
+			*pages++ = NULL;
+			sg_set_page(sg, page, seg, off);
+			sgtable->nents++;
+			sg++;
+			len -= seg;
+			off = 0;
+		}
+	} while (maxsize > 0 && sg_max > 0);
+
+	return ret;
+
+failed:
+	while (sgtable->nents > sgtable->orig_nents)
+		put_page(sg_page(&sgtable->sgl[--sgtable->nents]));
+	return res;
+}
+
+/*
+ * Extract up to sg_max pages from a BVEC-type iterator and add them to the
+ * scatterlist.  The pages are not pinned.
+ */
+static ssize_t netfs_extract_bvec_to_sg(struct iov_iter *iter,
+					ssize_t maxsize,
+					struct sg_table *sgtable,
+					unsigned int sg_max,
+					unsigned int *cleanup_mode)
+{
+	const struct bio_vec *bv = iter->bvec;
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	unsigned long start = iter->iov_offset;
+	unsigned int i;
+	ssize_t ret = 0;
+
+	for (i = 0; i < iter->nr_segs; i++) {
+		size_t off, len;
+
+		len = bv[i].bv_len;
+		if (start >= len) {
+			start -= len;
+			continue;
+		}
+
+		len = min_t(size_t, maxsize, len - start);
+		off = bv[i].bv_offset + start;
+
+		sg_set_page(sg, bv[i].bv_page, len, off);
+		sgtable->nents++;
+		sg++;
+		sg_max--;
+
+		ret += len;
+		maxsize -= len;
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+		start = 0;
+	}
+
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	*cleanup_mode = 0;
+	return ret;
+}
+
+/*
+ * Extract up to sg_max pages from a KVEC-type iterator and add them to the
+ * scatterlist.  This can deal with vmalloc'd buffers as well as kmalloc'd or
+ * static buffers.  The pages are not pinned.
+ */
+static ssize_t netfs_extract_kvec_to_sg(struct iov_iter *iter,
+					ssize_t maxsize,
+					struct sg_table *sgtable,
+					unsigned int sg_max,
+					unsigned int *cleanup_mode)
+{
+	const struct kvec *kv = iter->kvec;
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	unsigned long start = iter->iov_offset;
+	unsigned int i;
+	ssize_t ret = 0;
+
+	for (i = 0; i < iter->nr_segs; i++) {
+		struct page *page;
+		unsigned long kaddr;
+		size_t off, len, seg;
+
+		len = kv[i].iov_len;
+		if (start >= len) {
+			start -= len;
+			continue;
+		}
+
+		kaddr = (unsigned long)kv[i].iov_base + start;
+		off = kaddr & ~PAGE_MASK;
+		len = min_t(size_t, maxsize, len - start);
+		kaddr &= PAGE_MASK;
+
+		maxsize -= len;
+		ret += len;
+		do {
+			seg = min_t(size_t, len, PAGE_SIZE - off);
+			if (is_vmalloc_or_module_addr((void *)kaddr))
+				page = vmalloc_to_page((void *)kaddr);
+			else
+				page = virt_to_page(kaddr);
+
+			sg_set_page(sg, page, seg, off);
+			sgtable->nents++;
+			sg++;
+			sg_max--;
+
+			len -= seg;
+			kaddr += PAGE_SIZE;
+			off = 0;
+		} while (len > 0 && sg_max > 0);
+
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+		start = 0;
+	}
+
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	*cleanup_mode = 0;
+	return ret;
+}
+
+/*
+ * Extract up to sg_max folios from an XARRAY-type iterator and add them to
+ * the scatterlist.  The pages are not pinned.
+ */
+static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter,
+					  ssize_t maxsize,
+					  struct sg_table *sgtable,
+					  unsigned int sg_max,
+					  unsigned int *cleanup_mode)
+{
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	struct xarray *xa = iter->xarray;
+	struct folio *folio;
+	loff_t start = iter->xarray_start + iter->iov_offset;
+	pgoff_t index = start / PAGE_SIZE;
+	ssize_t ret = 0;
+	size_t offset, len;
+	XA_STATE(xas, xa, index);
+
+	rcu_read_lock();
+
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		if (xas_retry(&xas, folio))
+			continue;
+		if (WARN_ON(xa_is_value(folio)))
+			break;
+		if (WARN_ON(folio_test_hugetlb(folio)))
+			break;
+
+		offset = offset_in_folio(folio, start);
+		len = min_t(size_t, maxsize, folio_size(folio) - offset);
+
+		sg_set_page(sg, folio_page(folio, 0), len, offset);
+		sgtable->nents++;
+		sg++;
+		sg_max--;
+
+		maxsize -= len;
+		ret += len;
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+	}
+
+	rcu_read_unlock();
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	*cleanup_mode = 0;
+	return ret;
+}
+
+/**
+ * netfs_extract_iter_to_sg - Extract pages from an iterator and add to an sglist
+ * @iter: The iterator to extract from
+ * @maxsize: The amount of iterator to copy
+ * @sgtable: The scatterlist table to fill in
+ * @sg_max: Maximum number of elements in @sgtable that may be filled
+ * @cleanup_mode: Where to return the cleanup mode
+ *
+ * Extract the page fragments from the given amount of the source iterator and
+ * add them to a scatterlist that refers to all of those bits, to a maximum
+ * addition of @sg_max elements.
+ *
+ * The pages referred to by UBUF- and IOVEC-type iterators are extracted and
+ * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE-
+ * and DISCARD-type are not supported.
+ *
+ * No end mark is placed on the scatterlist; that's left to the caller.
+ *
+ * If successful, @sgtable->nents is updated to include the number of elements
+ * added and the number of bytes added is returned.  @sgtable->orig_nents is
+ * left unaltered.
+ */
+ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
+				 struct sg_table *sgtable, unsigned int sg_max,
+				 unsigned int *cleanup_mode)
+{
+	if (maxsize == 0)
+		return 0;
+
+	switch (iov_iter_type(iter)) {
+	case ITER_UBUF:
+	case ITER_IOVEC:
+		return netfs_extract_user_to_sg(iter, maxsize, sgtable, sg_max,
+						cleanup_mode);
+	case ITER_BVEC:
+		return netfs_extract_bvec_to_sg(iter, maxsize, sgtable, sg_max,
+						cleanup_mode);
+	case ITER_KVEC:
+		return netfs_extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
+						cleanup_mode);
+	case ITER_XARRAY:
+		return netfs_extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
+						  cleanup_mode);
+	default:
+		pr_err("netfs_extract_iter_to_sg(%u) unsupported\n",
+		       iov_iter_type(iter));
+		WARN_ON_ONCE(1);
+		return -EIO;
+	}
+}
+EXPORT_SYMBOL_GPL(netfs_extract_iter_to_sg);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index eed84474e4cf..e1b225a17388 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -291,6 +291,10 @@ void netfs_stats_show(struct seq_file *);
 ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
				struct iov_iter *new,
				unsigned int *cleanup_mode);
+struct sg_table;
+ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t len,
+				 struct sg_table *sgtable, unsigned int sg_max,
+				 unsigned int *cleanup_mode);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..b13ac142685b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -653,6 +653,7 @@ int is_vmalloc_or_module_addr(const void *x)
 #endif
	return is_vmalloc_addr(x);
 }
+EXPORT_SYMBOL_GPL(is_vmalloc_or_module_addr);
 
 /*
  * Walk a vmap address to the struct page it maps.  Huge vmap mappings will