This will pin pages or leave them unaltered, rather than getting a ref on
them, as appropriate to the iterator.
The pages need to be pinned for DIO rather than having refs taken on them to
prevent VM copy-on-write from malfunctioning during a concurrent fork() (the
result of the I/O could otherwise end up being affected by/visible to the
child process).
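
[Aside: bio_release_page(), used throughout the diff below, is defined in a
preceding patch of this series. As a rough sketch of the idea, assuming the
BIO_PAGE_PINNED flag that the series introduces, the cleanup side only needs
to undo a pin; pages that were neither pinned nor ref'd are left untouched:

	#include <linux/bio.h>
	#include <linux/mm.h>

	/* Illustrative sketch, not the series' exact helper: release a page
	 * in whatever way matches how it was obtained for this bio.  Pinned
	 * pages are unpinned; otherwise there is nothing to undo.
	 */
	static inline void bio_release_page(struct bio *bio, struct page *page)
	{
		if (bio_flagged(bio, BIO_PAGE_PINNED))
			unpin_user_page(page);
	}

The exact definition lives in the earlier patch; treat the above as a sketch.]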
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Jan Kara <jack@suse.cz>
cc: Christoph Hellwig <hch@lst.de>
cc: Matthew Wilcox <willy@infradead.org>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-block@vger.kernel.org
---
Notes:
ver #8)
- Split the patch up a bit [hch].
- We should only be using pinned/non-pinned pages and not ref'd pages,
so adjust the comments appropriately.
ver #7)
- Don't treat BIO_PAGE_REFFED/PINNED as being the same as FOLL_GET/PIN.
ver #5)
- Transcribe the FOLL_* flags returned by iov_iter_extract_pages() to
  BIO_* flags and get rid of bi_cleanup_mode.
- Replace BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.
block/bio.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index fc45aaa97696..936301519e6c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1212,7 +1212,7 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
}
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1226,7 +1226,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
queue_max_zone_append_sectors(q), &same_page) != len)
return -EINVAL;
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1237,10 +1237,10 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
* @bio: bio to add pages to
* @iter: iov iterator describing the region to be mapped
*
- * Pins pages from *iter and appends them to @bio's bvec array. The
- * pages will have to be released using put_page() when done.
- * For multi-segment *iter, this function only adds pages from the
- * next non-empty segment of the iov iterator.
+ * Extracts pages from *iter and appends them to @bio's bvec array. The pages
+ * will have to be cleaned up in the way indicated by the BIO_PAGE_PINNED flag.
+ * For a multi-segment *iter, this function only adds pages from the next
+ * non-empty segment of the iov iterator.
*/
static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
@@ -1272,9 +1272,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
- size = iov_iter_get_pages(iter, pages,
- UINT_MAX - bio->bi_iter.bi_size,
- nr_pages, &offset, extraction_flags);
+ size = iov_iter_extract_pages(iter, &pages,
+ UINT_MAX - bio->bi_iter.bi_size,
+ nr_pages, extraction_flags, &offset);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
@@ -1307,7 +1307,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
iov_iter_revert(iter, left);
out:
while (i < nr_pages)
- put_page(pages[i++]);
+ bio_release_page(bio, pages[i++]);
return ret;
}
@@ -1342,7 +1342,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
return 0;
}
- bio_set_flag(bio, BIO_PAGE_REFFED);
+ bio_set_cleanup_mode(bio, iter);
do {
ret = __bio_iov_iter_get_pages(bio, iter);
} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
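
[Aside: bio_set_cleanup_mode(), called in the last hunk, is likewise added
earlier in the series. Roughly, it records whether the extraction will pin
the pages, assuming the iov_iter_extract_will_pin() helper from the iov_iter
extraction work; a sketch:

	/* Illustrative sketch: if extracting from this iterator pins user
	 * pages, mark the bio so that completion unpins them.  Kernel-backed
	 * iterators need no cleanup, so no flag is set for them.
	 */
	static void bio_set_cleanup_mode(struct bio *bio, struct iov_iter *iter)
	{
		if (iov_iter_extract_will_pin(iter))
			bio_set_flag(bio, BIO_PAGE_PINNED);
	}

The authoritative definition is in the preceding patch.]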
On 1/24/23 09:01, David Howells wrote:
[...]
> -	size = iov_iter_get_pages(iter, pages,
> -				  UINT_MAX - bio->bi_iter.bi_size,
> -				  nr_pages, &offset, extraction_flags);
> +	size = iov_iter_extract_pages(iter, &pages,
> +				      UINT_MAX - bio->bi_iter.bi_size,
> +				      nr_pages, extraction_flags, &offset);

A quite minor point: it seems like the last two args got reversed more
or less by accident. It's not worth re-spinning or anything, but it
seems better to leave the order the same between these two routines.

Either way, though,

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
--
John Hubbard
NVIDIA
John Hubbard <jhubbard@nvidia.com> wrote:

> A quite minor point: it seems like the last two args got reversed more
> or less by accident. It's not worth re-spinning or anything, but it
> seems better to leave the order the same between these two routines.

I pushed the extra return value to the end.  It seems better that way.

David
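
[Aside: for context on the argument order being discussed, the prototype that
iov_iter_extract_pages() ended up with in the merged extraction API keeps the
returned offset as the final parameter, as David describes; the exact types
belong to the final series rather than this posting:

	ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
				       size_t maxsize, unsigned int maxpages,
				       iov_iter_extraction_t extraction_flags,
				       size_t *offset0);
]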
On Tue, Jan 24, 2023 at 05:01:07PM +0000, David Howells wrote:
> This will pin pages or leave them unaltered, rather than getting a ref on
> them, as appropriate to the iterator.
>
> The pages need to be pinned for DIO rather than having refs taken on them to
> prevent VM copy-on-write from malfunctioning during a concurrent fork() (the
> result of the I/O could otherwise end up being affected by/visible to the
> child process).

Reviewed-by: Christoph Hellwig <hch@lst.de>