From: David Howells <dhowells@redhat.com>
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton,
    David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    syzbot+a440341a59e3b7142895@syzkaller.appspotmail.com,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v14 03/17] splice: Add a func to do a splice from an O_DIRECT file without ITER_PIPE
Date: Tue, 14 Feb 2023 17:13:16 +0000
Message-Id: <20230214171330.2722188-4-dhowells@redhat.com>
In-Reply-To: <20230214171330.2722188-1-dhowells@redhat.com>
References: <20230214171330.2722188-1-dhowells@redhat.com>

Implement a function, direct_splice_read(), that deals with the problem
described below by using an ITER_BVEC iterator instead of an ITER_PIPE
iterator, as the former won't free its buffers when reverted.  The
function bulk allocates all the buffers it thinks it is going to use in
advance, does the read synchronously and only then trims the buffer down.
The pages we did use get pushed into the pipe.
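To make the sizing and trim arithmetic concrete, here is a small
standalone sketch (plain userspace C with made-up numbers, not code from
this patch; PAGE_SIZE is assumed to be 4096):

	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	int main(void)
	{
		unsigned long used = 4, max_usage = 16;	/* 12 free pipe slots */
		unsigned long len = 0x1dd00;		/* caller asked for ~119KiB */
		unsigned long npages, ret, reclaim;

		npages = max_usage - used;
		if (len > npages * PAGE_SIZE)
			len = npages * PAGE_SIZE;	/* clamp to 48KiB */
		npages = (len + PAGE_SIZE - 1) / PAGE_SIZE; /* bulk-allocate 12 pages */

		ret = 0x5200;	/* pretend the synchronous DIO read came up short */
		reclaim = (npages * PAGE_SIZE - ret) / PAGE_SIZE;
		printf("alloc %lu pages; free %lu untouched; push %lu into the pipe\n",
		       npages, reclaim, npages - reclaim);	/* 12, 6, 6 */
		return 0;
	}

Note that the division rounds down, so the last page pushed may be only
partially filled; only whole pages beyond the read result are released.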
This fixes a problem with the upcoming iov_iter_extract_pages() function,
whereby pages extracted from a non-user-backed iterator such as ITER_PIPE
aren't pinned.  __iomap_dio_rw(), however, calls iov_iter_revert() to
shorten the iterator to just the bufferage it is going to use - which has
the side-effect of freeing the excess pipe buffers, even though they're
attached to a bio and may get written to by DMA (thanks to Hillf Danton
for spotting this[1]).

This then causes memory corruption that is particularly noticeable when
the syzbot test[2] is run.  The test boils down to:

	out = creat(argv[1], 0666);
	ftruncate(out, 0x800);
	lseek(out, 0x200, SEEK_SET);
	in = open(argv[1], O_RDONLY | O_DIRECT | O_NOFOLLOW);
	sendfile(out, in, NULL, 0x1dd00);

run repeatedly in parallel.  What I think is happening is that ftruncate()
occasionally shortens the DIO read that's about to be made by sendfile's
splice core by reducing i_size.

This should be more efficient for a DIO read by virtue of doing a bulk
page allocation, but slightly less efficient by ignoring any partial page
in the pipe.

Reported-by: syzbot+a440341a59e3b7142895@syzkaller.appspotmail.com
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jens Axboe
cc: Christoph Hellwig
cc: Al Viro
cc: David Hildenbrand
cc: John Hubbard
cc: linux-mm@kvack.org
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/20230207094731.1390-1-hdanton@sina.com/ [1]
Link: https://lore.kernel.org/r/000000000000b0b3c005f3a09383@google.com/ [2]
---

Notes:
    ver #14)
     - Use alloc_pages_bulk_array() rather than alloc_pages_bulk_list().
     - Use release_pages() rather than a loop calling __free_page().
     - Rename to direct_splice_read().
     - Don't call from generic_file_splice_read() yet.

    ver #13)
     - Don't completely replace generic_file_splice_read(), but rather
       only use this if we're doing a splice from an O_DIRECT file fd.

 fs/splice.c               | 92 +++++++++++++++++++++++++++++++++++++++
 include/linux/fs.h        |  3 ++
 include/linux/pipe_fs_i.h | 20 +++++++++
 lib/iov_iter.c            |  6 ---
 4 files changed, 115 insertions(+), 6 deletions(-)

diff --git a/fs/splice.c b/fs/splice.c
index 5969b7a1d353..4c6332854b63 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -282,6 +282,98 @@ void splice_shrink_spd(struct splice_pipe_desc *spd)
 	kfree(spd->partial);
 }
 
+/*
+ * Splice data from an O_DIRECT file into pages and then add them to the
+ * output pipe.
+ */
+ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+			   struct pipe_inode_info *pipe,
+			   size_t len, unsigned int flags)
+{
+	struct iov_iter to;
+	struct bio_vec *bv;
+	struct kiocb kiocb;
+	struct page **pages;
+	ssize_t ret;
+	size_t used, npages, chunk, remain, reclaim;
+	int i;
+
+	/* Work out how much data we can actually add into the pipe */
+	used = pipe_occupancy(pipe->head, pipe->tail);
+	npages = max_t(ssize_t, pipe->max_usage - used, 0);
+	len = min_t(size_t, len, npages * PAGE_SIZE);
+	npages = DIV_ROUND_UP(len, PAGE_SIZE);
+
+	bv = kzalloc(array_size(npages, sizeof(bv[0])) +
+		     array_size(npages, sizeof(struct page *)), GFP_KERNEL);
+	if (!bv)
+		return -ENOMEM;
+
+	pages = (void *)(bv + npages);
+	npages = alloc_pages_bulk_array(GFP_USER, npages, pages);
+	if (!npages) {
+		kfree(bv);
+		return -ENOMEM;
+	}
+
+	remain = len = min_t(size_t, len, npages * PAGE_SIZE);
+
+	for (i = 0; i < npages; i++) {
+		chunk = min_t(size_t, PAGE_SIZE, remain);
+		bv[i].bv_page = pages[i];
+		bv[i].bv_offset = 0;
+		bv[i].bv_len = chunk;
+		remain -= chunk;
+	}
+
+	/* Do the I/O */
+	iov_iter_bvec(&to, ITER_DEST, bv, npages, len);
+	init_sync_kiocb(&kiocb, in);
+	kiocb.ki_pos = *ppos;
+	ret = call_read_iter(in, &kiocb, &to);
+
+	reclaim = npages * PAGE_SIZE;
+	remain = 0;
+	if (ret > 0) {
+		reclaim -= ret;
+		remain = ret;
+		*ppos = kiocb.ki_pos;
+		file_accessed(in);
+	} else if (ret < 0) {
+		/*
+		 * callers of ->splice_read() expect -EAGAIN on
+		 * "can't put anything in there", rather than -EFAULT.
+		 */
+		if (ret == -EFAULT)
+			ret = -EAGAIN;
+	}
+
+	/* Free any pages that didn't get touched at all. */
+	reclaim /= PAGE_SIZE;
+	if (reclaim) {
+		npages -= reclaim;
+		release_pages(pages + npages, reclaim);
+	}
+
+	/* Push the remaining pages into the pipe. */
+	for (i = 0; i < npages; i++) {
+		struct pipe_buffer *buf = pipe_head_buf(pipe);
+
+		chunk = min_t(size_t, remain, PAGE_SIZE);
+		*buf = (struct pipe_buffer) {
+			.ops	= &default_pipe_buf_ops,
+			.page	= bv[i].bv_page,
+			.offset	= 0,
+			.len	= chunk,
+		};
+		pipe->head++;
+		remain -= chunk;
+	}
+
+	kfree(bv);
+	return ret;
+}
+
 /**
  * generic_file_splice_read - splice data from file to a pipe
  * @in:		file to splice from
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 28743e38df91..551c9403f9b3 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3166,6 +3166,9 @@ ssize_t vfs_iocb_iter_write(struct file *file, struct kiocb *iocb,
 ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
 			    struct pipe_inode_info *pipe,
 			    size_t len, unsigned int flags);
+ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+			   struct pipe_inode_info *pipe,
+			   size_t len, unsigned int flags);
 extern ssize_t generic_file_splice_read(struct file *, loff_t *,
 					struct pipe_inode_info *, size_t, unsigned int);
 extern ssize_t iter_file_splice_write(struct pipe_inode_info *,
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index 6cb65df3e3ba..d2c3f16cf6b1 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -156,6 +156,26 @@ static inline bool pipe_full(unsigned int head, unsigned int tail,
 	return pipe_occupancy(head, tail) >= limit;
 }
 
+/**
+ * pipe_buf - Return the pipe buffer for the specified slot in the pipe ring
+ * @pipe: The pipe to access
+ * @slot: The slot of interest
+ */
+static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe,
+					   unsigned int slot)
+{
+	return &pipe->bufs[slot & (pipe->ring_size - 1)];
+}
+
+/**
+ * pipe_head_buf - Return the pipe buffer at the head of the pipe ring
+ * @pipe: The pipe to access
+ */
+static inline struct pipe_buffer *pipe_head_buf(const struct pipe_inode_info *pipe)
+{
+	return pipe_buf(pipe, pipe->head);
+}
+
 /**
  * pipe_buf_get - get a reference to a pipe_buffer
  * @pipe:	the pipe that the buffer belongs to
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f9a3ff37ecd1..47c484551c59 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -186,12 +186,6 @@ static int copyin(void *to, const void __user *from, size_t n)
 	return res;
 }
 
-static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe,
-					   unsigned int slot)
-{
-	return &pipe->bufs[slot & (pipe->ring_size - 1)];
-}
-
 #ifdef PIPE_PARANOIA
 static bool sanity(const struct iov_iter *i)
 {
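
For context, the notes above say this helper is deliberately not called
from generic_file_splice_read() yet and is only meant for splicing from an
O_DIRECT file fd.  A dispatch site might eventually look like the sketch
below (illustrative only; the example_ name and the placement are
assumptions, not part of this patch):

	/* Hypothetical ->splice_read() dispatch - not part of this patch. */
	static ssize_t example_file_splice_read(struct file *in, loff_t *ppos,
						struct pipe_inode_info *pipe,
						size_t len, unsigned int flags)
	{
		/* O_DIRECT reads mustn't go through ITER_PIPE; see above. */
		if (in->f_flags & O_DIRECT)
			return direct_splice_read(in, ppos, pipe, len, flags);
		return generic_file_splice_read(in, ppos, pipe, len, flags);
	}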