From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, "Paulo Alcantara (Red Hat)"
Subject: [RFC PATCH 01/17] netfs: Fix unbuffered/DIO writes to dispatch subrequests in strict sequence
Date: Wed, 4 Mar 2026 14:03:08 +0000
Message-ID: <20260304140328.112636-2-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References:
 <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Fix netfslib so that, when making an unbuffered or DIO write, it sends each
subrequest strictly sequentially, waiting until the previous one has been
'committed' before sending the next.  This prevents pieces landing out of
order and potentially leaving a hole if an error occurs (ENOSPC, for
example).

This is done by copying in just those bits of issuing, collecting and
retrying subrequests that are necessary to do one subrequest at a time.
Retrying, in particular, is simpler because if the current subrequest needs
retrying, the source iterator can just be copied again and the subrequest
prepped and issued again, without needing to be concerned about whether it
needs merging with the previous or next in the sequence.

Note that the issuing loop waits for a subrequest to complete right after
issuing it, but this wait could be moved elsewhere, allowing preparatory
steps to be performed whilst the subrequest is in progress.  In particular,
once content encryption is available in netfslib, that could be done whilst
waiting, as could cleanup of buffers that have been completed.
Fixes: 153a9961b551 ("netfs: Implement unbuffered/DIO write support")
Signed-off-by: David Howells
Reviewed-by: Paulo Alcantara (Red Hat)
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/direct_write.c      | 228 ++++++++++++++++++++++++++++++++---
 fs/netfs/internal.h          |   4 +-
 fs/netfs/write_collect.c     |  21 ----
 fs/netfs/write_issue.c       |  41 +------
 include/trace/events/netfs.h |   4 +-
 5 files changed, 221 insertions(+), 77 deletions(-)

diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index a9d1c3b2c084..dd1451bf7543 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -9,6 +9,202 @@
 #include
 #include "internal.h"

+/*
+ * Perform the cleanup rituals after an unbuffered write is complete.
+ */
+static void netfs_unbuffered_write_done(struct netfs_io_request *wreq)
+{
+	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+
+	_enter("R=%x", wreq->debug_id);
+
+	/* Okay, declare that all I/O is complete. */
+	trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+
+	if (!wreq->error)
+		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
+
+	if (wreq->origin == NETFS_DIO_WRITE &&
+	    wreq->mapping->nrpages) {
+		/* mmap may have got underfoot and we may now have folios
+		 * locally covering the region we just wrote.  Attempt to
+		 * discard the folios, but leave in place any modified locally.
+		 * ->write_iter() is prevented from interfering by the DIO
+		 * counter.
+		 */
+		pgoff_t first = wreq->start >> PAGE_SHIFT;
+		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
+
+		invalidate_inode_pages2_range(wreq->mapping, first, last);
+	}
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_end(wreq->inode);
+
+	_debug("finished");
+	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
+	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+
+	if (wreq->iocb) {
+		size_t written = umin(wreq->transferred, wreq->len);
+
+		wreq->iocb->ki_pos += written;
+		if (wreq->iocb->ki_complete) {
+			trace_netfs_rreq(wreq, netfs_rreq_trace_ki_complete);
+			wreq->iocb->ki_complete(wreq->iocb, wreq->error ?: written);
+		}
+		wreq->iocb = VFS_PTR_POISON;
+	}
+
+	netfs_clear_subrequests(wreq);
+}
+
+/*
+ * Collect the subrequest results of unbuffered write subrequests.
+ */
+static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
+					   struct netfs_io_stream *stream,
+					   struct netfs_io_subrequest *subreq)
+{
+	trace_netfs_collect_sreq(wreq, subreq);
+
+	spin_lock(&wreq->lock);
+	list_del_init(&subreq->rreq_link);
+	spin_unlock(&wreq->lock);
+
+	wreq->transferred += subreq->transferred;
+	iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+	stream->collected_to = subreq->start + subreq->transferred;
+	wreq->collected_to = stream->collected_to;
+	netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+
+	trace_netfs_collect_stream(wreq, stream);
+	trace_netfs_collect_state(wreq, wreq->collected_to, 0);
+}
+
+/*
+ * Write data to the server without going through the pagecache and without
+ * writing it to the local cache.  We dispatch the subrequests serially and
+ * wait for each to complete before dispatching the next, lest we leave a gap
+ * in the data written due to a failure such as ENOSPC.  We could, however,
+ * attempt to do preparation such as content encryption for the next subreq
+ * whilst the current is in progress.
+ */
+static int netfs_unbuffered_write(struct netfs_io_request *wreq)
+{
+	struct netfs_io_subrequest *subreq = NULL;
+	struct netfs_io_stream *stream = &wreq->io_streams[0];
+	int ret;
+
+	_enter("%llx", wreq->len);
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_begin(wreq->inode);
+
+	stream->collected_to = wreq->start;
+
+	for (;;) {
+		bool retry = false;
+
+		if (!subreq) {
+			netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
+			subreq = stream->construct;
+			stream->construct = NULL;
+			stream->front = NULL;
+		}
+
+		/* Check if (re-)preparation failed. */
+		if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) {
+			netfs_write_subrequest_terminated(subreq, subreq->error);
+			wreq->error = subreq->error;
+			break;
+		}
+
+		iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred);
+		if (!iov_iter_count(&subreq->io_iter))
+			break;
+
+		subreq->len = netfs_limit_iter(&subreq->io_iter, 0,
+					       stream->sreq_max_len,
+					       stream->sreq_max_segs);
+		iov_iter_truncate(&subreq->io_iter, subreq->len);
+		stream->submit_extendable_to = subreq->len;
+
+		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+		stream->issue_write(subreq);
+
+		/* Async, need to wait. */
+		netfs_wait_for_in_progress_stream(wreq, stream);
+
+		if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+			retry = true;
+		} else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
+			ret = subreq->error;
+			wreq->error = ret;
+			netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed);
+			subreq = NULL;
+			break;
+		}
+		ret = 0;
+
+		if (!retry) {
+			netfs_unbuffered_write_collect(wreq, stream, subreq);
+			subreq = NULL;
+			if (wreq->transferred >= wreq->len)
+				break;
+			if (!wreq->iocb && signal_pending(current)) {
+				ret = wreq->transferred ? -EINTR : -ERESTARTSYS;
+				trace_netfs_rreq(wreq, netfs_rreq_trace_intr);
+				break;
+			}
+			continue;
+		}
+
+		/* We need to retry the last subrequest, so first reset the
+		 * iterator, taking into account what, if anything, we managed
+		 * to transfer.
+		 */
+		subreq->error = -EAGAIN;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+		if (subreq->transferred > 0)
+			iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+		if (stream->source == NETFS_UPLOAD_TO_SERVER &&
+		    wreq->netfs_ops->retry_request)
+			wreq->netfs_ops->retry_request(wreq, stream);
+
+		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+		subreq->io_iter = wreq->buffer.iter;
+		subreq->start = wreq->start + wreq->transferred;
+		subreq->len = wreq->len - wreq->transferred;
+		subreq->transferred = 0;
+		subreq->retry_count += 1;
+		stream->sreq_max_len = UINT_MAX;
+		stream->sreq_max_segs = INT_MAX;
+
+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+		stream->prepare_write(subreq);
+
+		__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+		netfs_stat(&netfs_n_wh_retry_write_subreq);
+	}
+
+	netfs_unbuffered_write_done(wreq);
+	_leave(" = %d", ret);
+	return ret;
+}
+
+static void netfs_unbuffered_write_async(struct work_struct *work)
+{
+	struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+
+	netfs_unbuffered_write(wreq);
+	netfs_put_request(wreq, netfs_rreq_trace_put_complete);
+}
+
 /*
  * Perform an unbuffered write where we may have to do an RMW operation on an
  * encrypted file.  This can also be used for direct I/O writes.
@@ -70,35 +266,35 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
 		 */
 		wreq->buffer.iter = *iter;
 	}
+
+	wreq->len = iov_iter_count(&wreq->buffer.iter);
 	}

 	__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
-	if (async)
-		__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);

 	/* Copy the data into the bounce buffer and encrypt it. */
 	// TODO

 	/* Dispatch the write. */
 	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
-	if (async)
-		wreq->iocb = iocb;
-	wreq->len = iov_iter_count(&wreq->buffer.iter);
-	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
-	if (ret < 0) {
-		_debug("begin = %zd", ret);
-		goto out;
-	}

-	if (!async) {
-		ret = netfs_wait_for_write(wreq);
-		if (ret > 0)
-			iocb->ki_pos += ret;
-	} else {
+	if (async) {
+		INIT_WORK(&wreq->work, netfs_unbuffered_write_async);
+		wreq->iocb = iocb;
+		queue_work(system_dfl_wq, &wreq->work);
 		ret = -EIOCBQUEUED;
+	} else {
+		ret = netfs_unbuffered_write(wreq);
+		if (ret < 0) {
+			_debug("begin = %zd", ret);
+		} else {
+			iocb->ki_pos += wreq->transferred;
+			ret = wreq->transferred ?: wreq->error;
+		}
+
+		netfs_put_request(wreq, netfs_rreq_trace_put_complete);
 	}

-out:
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	return ret;

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 4319611f5354..d436e20d3418 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -198,6 +198,9 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 						struct file *file,
 						loff_t start,
 						enum netfs_io_origin origin);
+void netfs_prepare_write(struct netfs_io_request *wreq,
+			 struct netfs_io_stream *stream,
+			 loff_t start);
 void netfs_reissue_write(struct netfs_io_stream *stream,
 			 struct netfs_io_subrequest *subreq,
 			 struct iov_iter *source);
@@ -212,7 +215,6 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c
 			      struct folio **writethrough_cache);
 ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
 			       struct folio *writethrough_cache);
-int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len);

 /*
  * write_retry.c

diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 61eab34ea67e..83eb3dc1adf8 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -399,27 +399,6 @@ bool netfs_write_collection(struct netfs_io_request *wreq)
 			ictx->ops->invalidate_cache(wreq);
 	}

-	if ((wreq->origin == NETFS_UNBUFFERED_WRITE ||
-	     wreq->origin == NETFS_DIO_WRITE) &&
-	    !wreq->error)
-		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
-
-	if (wreq->origin == NETFS_DIO_WRITE &&
-	    wreq->mapping->nrpages) {
-		/* mmap may have got underfoot and we may now have folios
-		 * locally covering the region we just wrote.  Attempt to
-		 * discard the folios, but leave in place any modified locally.
-		 * ->write_iter() is prevented from interfering by the DIO
-		 * counter.
-		 */
-		pgoff_t first = wreq->start >> PAGE_SHIFT;
-		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
-		invalidate_inode_pages2_range(wreq->mapping, first, last);
-	}
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_end(wreq->inode);
-
 	_debug("finished");
 	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
 	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */

diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 34894da5a23e..437268f65640 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -154,9 +154,9 @@ EXPORT_SYMBOL(netfs_prepare_write_failed);
  * Prepare a write subrequest.  We need to allocate a new subrequest
  * if we don't have one.
 */
-static void netfs_prepare_write(struct netfs_io_request *wreq,
-				struct netfs_io_stream *stream,
-				loff_t start)
+void netfs_prepare_write(struct netfs_io_request *wreq,
+			 struct netfs_io_stream *stream,
+			 loff_t start)
 {
 	struct netfs_io_subrequest *subreq;
 	struct iov_iter *wreq_iter = &wreq->buffer.iter;
@@ -698,41 +698,6 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
 	return ret;
 }

-/*
- * Write data to the server without going through the pagecache and without
- * writing it to the local cache.
- */
-int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len)
-{
-	struct netfs_io_stream *upload = &wreq->io_streams[0];
-	ssize_t part;
-	loff_t start = wreq->start;
-	int error = 0;
-
-	_enter("%zx", len);
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_begin(wreq->inode);
-
-	while (len) {
-		// TODO: Prepare content encryption
-
-		_debug("unbuffered %zx", len);
-		part = netfs_advance_write(wreq, upload, start, len, false);
-		start += part;
-		len -= part;
-		rolling_buffer_advance(&wreq->buffer, part);
-		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags))
-			netfs_wait_for_paused_write(wreq);
-		if (test_bit(NETFS_RREQ_FAILED, &wreq->flags))
-			break;
-	}
-
-	netfs_end_issue_write(wreq);
-	_leave(" = %d", error);
-	return error;
-}
-
 /*
  * Write some of a pending folio data back to the server and/or the cache.
 */

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 64a382fbc31a..2d366be46a1c 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -57,6 +57,7 @@
 	EM(netfs_rreq_trace_done, "DONE ") \
 	EM(netfs_rreq_trace_end_copy_to_cache, "END-C2C") \
 	EM(netfs_rreq_trace_free, "FREE ") \
+	EM(netfs_rreq_trace_intr, "INTR ") \
 	EM(netfs_rreq_trace_ki_complete, "KI-CMPL") \
 	EM(netfs_rreq_trace_recollect, "RECLLCT") \
 	EM(netfs_rreq_trace_redirty, "REDIRTY") \
@@ -169,7 +170,8 @@
 	EM(netfs_sreq_trace_put_oom, "PUT OOM ") \
 	EM(netfs_sreq_trace_put_wip, "PUT WIP ") \
 	EM(netfs_sreq_trace_put_work, "PUT WORK ") \
-	E_(netfs_sreq_trace_put_terminated, "PUT TERM ")
+	EM(netfs_sreq_trace_put_terminated, "PUT TERM ") \
+	E_(netfs_sreq_trace_see_failed, "SEE FAILED ")

 #define netfs_folio_traces \
 	EM(netfs_folio_is_uptodate, "mod-uptodate") \

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Namjae Jeon, Tom Talpey, Chuck Lever
Subject: [RFC PATCH 02/17] vfs: Implement a FIEMAP callback
Date: Wed, 4 Mar 2026 14:03:09 +0000
Message-ID: <20260304140328.112636-3-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement a callback in the internal kernel FIEMAP API so that kernel users
can make use of it; as it stands, the filler function expects to write to
userspace.  This allows the FIEMAP data to be captured and parsed in the
kernel.

This is useful for cachefiles, and potentially also for knfsd and ksmbd to
implement their equivalents of FIEMAP remotely rather than using
SEEK_DATA/SEEK_HOLE.
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: Namjae Jeon
cc: Tom Talpey
cc: Chuck Lever
cc: linux-cifs@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/ioctl.c             | 29 ++++++++++++++++++++---------
 include/linux/fiemap.h |  3 +++
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/fs/ioctl.c b/fs/ioctl.c
index 1c152c2b1b67..f0513e282eb7 100644
--- a/fs/ioctl.c
+++ b/fs/ioctl.c
@@ -93,6 +93,21 @@ static int ioctl_fibmap(struct file *filp, int __user *p)
 	return error;
 }

+static int fiemap_fill(struct fiemap_extent_info *fieinfo,
+		       const struct fiemap_extent *extent)
+{
+	struct fiemap_extent __user *dest = fieinfo->fi_extents_start;
+
+	dest += fieinfo->fi_extents_mapped;
+	if (copy_to_user(dest, extent, sizeof(*extent)))
+		return -EFAULT;
+
+	fieinfo->fi_extents_mapped++;
+	if (fieinfo->fi_extents_mapped >= fieinfo->fi_extents_max)
+		return 1;
+	return 0;
+}
+
 /**
  * fiemap_fill_next_extent - Fiemap helper function
  * @fieinfo:	Fiemap context passed into ->fiemap
@@ -112,7 +127,7 @@ int fiemap_fill_next_extent(struct fiemap_extent_info *fieinfo, u64 logical,
 			    u64 phys, u64 len, u32 flags)
 {
 	struct fiemap_extent extent;
-	struct fiemap_extent __user *dest = fieinfo->fi_extents_start;
+	int ret;

 	/* only count the extents */
 	if (fieinfo->fi_extents_max == 0) {
@@ -140,13 +155,9 @@ int fiemap_fill_next_extent(struct fiemap_extent_info *fieinfo, u64 logical,
 	extent.fe_length = len;
 	extent.fe_flags = flags;

-	dest += fieinfo->fi_extents_mapped;
-	if (copy_to_user(dest, &extent, sizeof(extent)))
-		return -EFAULT;
-
-	fieinfo->fi_extents_mapped++;
-	if (fieinfo->fi_extents_mapped == fieinfo->fi_extents_max)
-		return 1;
+	ret = fieinfo->fi_fill(fieinfo, &extent);
+	if (ret != 0)
+		return ret; /* 1 to stop. */

 	return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
 }
 EXPORT_SYMBOL(fiemap_fill_next_extent);
@@ -199,7 +210,7 @@ EXPORT_SYMBOL(fiemap_prep);
 static int ioctl_fiemap(struct file *filp, struct fiemap __user *ufiemap)
 {
 	struct fiemap fiemap;
-	struct fiemap_extent_info fieinfo = { 0, };
+	struct fiemap_extent_info fieinfo = { .fi_fill = fiemap_fill, };
 	struct inode *inode = file_inode(filp);
 	int error;

diff --git a/include/linux/fiemap.h b/include/linux/fiemap.h
index 966092ffa89a..01929ca4b834 100644
--- a/include/linux/fiemap.h
+++ b/include/linux/fiemap.h
@@ -11,12 +11,15 @@
  * @fi_extents_mapped:	Number of mapped extents
  * @fi_extents_max:	Size of fiemap_extent array
  * @fi_extents_start:	Start of fiemap_extent array
+ * @fi_fill:		Function to fill the extents array
  */
 struct fiemap_extent_info {
 	unsigned int fi_flags;
 	unsigned int fi_extents_mapped;
 	unsigned int fi_extents_max;
 	struct fiemap_extent __user *fi_extents_start;
+	int (*fi_fill)(struct fiemap_extent_info *fiefinfo,
+		       const struct fiemap_extent *extent);
 };

 int fiemap_prep(struct inode *inode, struct fiemap_extent_info *fieinfo,

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 03/17] iov_iter: Add a segmented queue of bio_vec[]
Date: Wed, 4 Mar 2026 14:03:10 +0000
Message-ID: <20260304140328.112636-4-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add the concept of a segmented queue of bio_vec[] arrays.  This allows an
indefinite quantity of elements to be handled and allows things like
network filesystems and crypto drivers to glue bits on the ends without
having to reallocate the array.

The bvecq struct that defines each segment also carries capacity/usage
information, along with flags indicating whether the constituent memory
regions need freeing or unpinning, and the file position of the first
element in a segment.

The bvecq structs are refcounted to allow a queue to be extracted in
batches and split between a number of subrequests.
The bvecq can have the bio_vec[] that it manages allocated along with it,
but this is not required.  A flag indicates whether this is the case, as
comparing ->bv to ->__bv is not sufficient to detect it.

Add an iterator type, ITER_BVECQ, for it.  This is intended to replace
ITER_FOLIOQ (and ITER_XARRAY).

Note that the prev pointer is only really needed for iov_iter_revert() and
could be dispensed with if struct iov_iter contained the head information
as well as the current point.

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-block@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 include/linux/bvec.h       |  33 +++++
 include/linux/iov_iter.h   |  61 +++++++-
 include/linux/uio.h        |  11 ++
 lib/iov_iter.c             | 288 ++++++++++++++++++++++++++++++++++++-
 lib/scatterlist.c          |  65 +++++++++
 lib/tests/kunit_iov_iter.c | 179 +++++++++++++++++++++++
 6 files changed, 633 insertions(+), 4 deletions(-)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 06fb60471aaf..d3c897270d40 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -308,4 +308,37 @@ static inline phys_addr_t bvec_phys(const struct bio_vec *bvec)
 	return page_to_phys(bvec->bv_page) + bvec->bv_offset;
 }

+/*
+ * Segmented bio_vec queue.
+ *
+ * These can be linked together to form messages of indefinite length and
+ * iterated over with an ITER_BVECQ iterator.  The list is non-circular; next
+ * and prev are NULL at the ends.
+ *
+ * The bv pointer points to the segment array; this may be __bv if allocated
+ * together.  The caller is responsible for determining whether or not this is
+ * the case, as the array pointed to by bv may follow on directly from the
+ * bvecq by accident of allocation (ie. ->bv == ->__bv is *not* sufficient to
+ * determine this).
+ *
+ * The file position and discontiguity flag allow non-contiguous data sets to
+ * be chained together, but still teased apart without the need to convert the
+ * info in the bio_vec back into a folio pointer.
+ */
+struct bvecq {
+	struct bvecq	*next;		/* Next bvec in the list or NULL */
+	struct bvecq	*prev;		/* Prev bvec in the list or NULL */
+	unsigned long long fpos;	/* File position */
+	refcount_t	ref;
+	u32		priv;		/* Private data */
+	u16		nr_segs;	/* Number of elements in bv[] used */
+	u16		max_segs;	/* Number of elements allocated in bv[] */
+	bool		inline_bv:1;	/* T if __bv[] is being used */
+	bool		free:1;		/* T if the pages need freeing */
+	bool		unpin:1;	/* T if the pages need unpinning, not freeing */
+	bool		discontig:1;	/* T if not contiguous with previous bvecq */
+	struct bio_vec	*bv;		/* Pointer to array of page fragments */
+	struct bio_vec	__bv[];		/* Default array (if ->inline_bv) */
+};
+
 #endif /* __LINUX_BVEC_H */
diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index f9a17fbbd398..e0c129a3ca63 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -141,6 +141,59 @@ size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
 	return progress;
 }
 
+/*
+ * Handle ITER_BVECQ.
+ */
+static __always_inline
+size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		     iov_step_f step)
+{
+	const struct bvecq *bq = iter->bvecq;
+	unsigned int slot = iter->bvecq_slot;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	if (!iter->count)
+		return 0;
+	if (slot == bq->nr_segs) {
+		/* The iterator may have been extended.
 */
+		bq = bq->next;
+		slot = 0;
+	}
+
+	do {
+		const struct bio_vec *bvec = &bq->bv[slot];
+		struct page *page = bvec->bv_page + (bvec->bv_offset + skip) / PAGE_SIZE;
+		size_t part, remain, consumed;
+		size_t poff = (bvec->bv_offset + skip) % PAGE_SIZE;
+		void *base;
+
+		part = umin(umin(bvec->bv_len - skip, PAGE_SIZE - poff), len);
+		base = kmap_local_page(page) + poff;
+		remain = step(base, progress, part, priv, priv2);
+		kunmap_local(base);
+		consumed = part - remain;
+		len -= consumed;
+		progress += consumed;
+		skip += consumed;
+		if (skip >= bvec->bv_len) {
+			skip = 0;
+			slot++;
+			if (slot >= bq->nr_segs && bq->next) {
+				bq = bq->next;
+				slot = 0;
+			}
+		}
+		if (remain)
+			break;
+	} while (len);
+
+	iter->bvecq_slot = slot;
+	iter->bvecq = bq;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
 /*
  * Handle ITER_FOLIOQ.
  */
@@ -306,6 +359,8 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
 		return iterate_bvec(iter, len, priv, priv2, step);
 	if (iov_iter_is_kvec(iter))
 		return iterate_kvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_bvecq(iter))
+		return iterate_bvecq(iter, len, priv, priv2, step);
 	if (iov_iter_is_folioq(iter))
 		return iterate_folioq(iter, len, priv, priv2, step);
 	if (iov_iter_is_xarray(iter))
@@ -342,8 +397,8 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
  * buffer is presented in segments, which for kernel iteration are broken up by
  * physical pages and mapped, with the mapped address being presented.
  *
- * [!] Note This will only handle BVEC, KVEC, FOLIOQ, XARRAY and DISCARD-type
- * iterators; it will not handle UBUF or IOVEC-type iterators.
+ * [!] Note This will only handle BVEC, KVEC, BVECQ, FOLIOQ, XARRAY and
+ * DISCARD-type iterators; it will not handle UBUF or IOVEC-type iterators.
 *
 * A step function, @step, must be provided, one for handling mapped kernel
 * addresses and the other is given user addresses which have the potential to
@@ -370,6 +425,8 @@ size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv,
 		return iterate_bvec(iter, len, priv, priv2, step);
 	if (iov_iter_is_kvec(iter))
 		return iterate_kvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_bvecq(iter))
+		return iterate_bvecq(iter, len, priv, priv2, step);
 	if (iov_iter_is_folioq(iter))
 		return iterate_folioq(iter, len, priv, priv2, step);
 	if (iov_iter_is_xarray(iter))
diff --git a/include/linux/uio.h b/include/linux/uio.h
index a9bc5b3067e3..aa50d348dfcc 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -27,6 +27,7 @@ enum iter_type {
 	ITER_BVEC,
 	ITER_KVEC,
 	ITER_FOLIOQ,
+	ITER_BVECQ,
 	ITER_XARRAY,
 	ITER_DISCARD,
 };
@@ -69,6 +70,7 @@ struct iov_iter {
 		const struct kvec *kvec;
 		const struct bio_vec *bvec;
 		const struct folio_queue *folioq;
+		const struct bvecq *bvecq;
 		struct xarray *xarray;
 		void __user *ubuf;
 	};
@@ -78,6 +80,7 @@ struct iov_iter {
 		union {
 			unsigned long nr_segs;
 			u8 folioq_slot;
+			u16 bvecq_slot;
 			loff_t xarray_start;
 		};
 	};
@@ -150,6 +153,11 @@ static inline bool iov_iter_is_folioq(const struct iov_iter *i)
 	return iov_iter_type(i) == ITER_FOLIOQ;
 }
 
+static inline bool iov_iter_is_bvecq(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_BVECQ;
+}
+
 static inline bool iov_iter_is_xarray(const struct iov_iter *i)
 {
 	return iov_iter_type(i) == ITER_XARRAY;
@@ -298,6 +306,9 @@ void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
 void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
 			  const struct folio_queue *folioq,
 			  unsigned int first_slot, unsigned int offset, size_t count);
+void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction,
+			 const struct bvecq *bvecq,
+			 unsigned int first_slot, unsigned int offset, size_t count);
 void iov_iter_xarray(struct
iov_iter *i, unsigned int direction, struct xa= rray *xarray, loff_t start, size_t count); ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages, diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 0a63c7fba313..df8d037894b1 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -571,6 +571,39 @@ static void iov_iter_folioq_advance(struct iov_iter *i= , size_t size) i->folioq =3D folioq; } =20 +static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by) +{ + const struct bvecq *bq =3D i->bvecq; + unsigned int slot =3D i->bvecq_slot; + + if (!i->count) + return; + i->count -=3D by; + + if (slot >=3D bq->nr_segs) { + bq =3D bq->next; + slot =3D 0; + } + + by +=3D i->iov_offset; /* From beginning of current segment. */ + do { + size_t len =3D bq->bv[slot].bv_len; + + if (likely(by < len)) + break; + by -=3D len; + slot++; + if (slot >=3D bq->nr_segs && bq->next) { + bq =3D bq->next; + slot =3D 0; + } + } while (by); + + i->iov_offset =3D by; + i->bvecq_slot =3D slot; + i->bvecq =3D bq; +} + void iov_iter_advance(struct iov_iter *i, size_t size) { if (unlikely(i->count < size)) @@ -585,6 +618,8 @@ void iov_iter_advance(struct iov_iter *i, size_t size) iov_iter_bvec_advance(i, size); } else if (iov_iter_is_folioq(i)) { iov_iter_folioq_advance(i, size); + } else if (iov_iter_is_bvecq(i)) { + iov_iter_bvecq_advance(i, size); } else if (iov_iter_is_discard(i)) { i->count -=3D size; } @@ -617,6 +652,32 @@ static void iov_iter_folioq_revert(struct iov_iter *i,= size_t unroll) i->folioq =3D folioq; } =20 +static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll) +{ + const struct bvecq *bq =3D i->bvecq; + unsigned int slot =3D i->bvecq_slot; + + for (;;) { + size_t len; + + if (slot =3D=3D 0) { + bq =3D bq->prev; + slot =3D bq->nr_segs; + } + slot--; + + len =3D bq->bv[slot].bv_len; + if (unroll <=3D len) { + i->iov_offset =3D len - unroll; + break; + } + unroll -=3D len; + } + + i->bvecq_slot =3D slot; + i->bvecq =3D bq; +} + void 
iov_iter_revert(struct iov_iter *i, size_t unroll)
{
	if (!unroll)
@@ -651,6 +712,9 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
 	} else if (iov_iter_is_folioq(i)) {
 		i->iov_offset = 0;
 		iov_iter_folioq_revert(i, unroll);
+	} else if (iov_iter_is_bvecq(i)) {
+		i->iov_offset = 0;
+		iov_iter_bvecq_revert(i, unroll);
 	} else { /* same logics for iovec and kvec */
 		const struct iovec *iov = iter_iov(i);
 		while (1) {
@@ -678,9 +742,12 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i)
 		if (iov_iter_is_bvec(i))
 			return min(i->count, i->bvec->bv_len - i->iov_offset);
 	}
+	if (!i->count)
+		return 0;
 	if (unlikely(iov_iter_is_folioq(i)))
-		return !i->count ? 0 :
-			umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
+		return umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
+	if (unlikely(iov_iter_is_bvecq(i)))
+		return min(i->count, i->bvecq->bv[i->bvecq_slot].bv_len - i->iov_offset);
 	return i->count;
 }
 EXPORT_SYMBOL(iov_iter_single_seg_count);
@@ -747,6 +814,35 @@ void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_folio_queue);
 
+/**
+ * iov_iter_bvec_queue - Initialise an I/O iterator to use a segmented bvec queue
+ * @i: The iterator to initialise.
+ * @direction: The direction of the transfer.
+ * @bvecq: The starting point in the bvec queue.
+ * @first_slot: The first slot in the bvec queue to use
+ * @offset: The offset into the bvec in the first slot to start at
+ * @count: The size of the I/O buffer in bytes.
+ *
+ * Set up an I/O iterator to either draw data out of the buffers attached to an
+ * inode or to inject data into those buffers.  The pages *must* be prevented
+ * from evaporation, e.g. by the caller taking a ref or a pin on them.
+ */ +void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction, + const struct bvecq *bvecq, unsigned int first_slot, + unsigned int offset, size_t count) +{ + WARN_ON(direction & ~(READ | WRITE)); + *i =3D (struct iov_iter) { + .iter_type =3D ITER_BVECQ, + .data_source =3D direction, + .bvecq =3D bvecq, + .bvecq_slot =3D first_slot, + .count =3D count, + .iov_offset =3D offset, + }; +} +EXPORT_SYMBOL(iov_iter_bvec_queue); + /** * iov_iter_xarray - Initialise an I/O iterator to use the pages in an xar= ray * @i: The iterator to initialise. @@ -839,6 +935,37 @@ static unsigned long iov_iter_alignment_bvec(const str= uct iov_iter *i) return res; } =20 +static unsigned long iov_iter_alignment_bvecq(const struct iov_iter *iter) +{ + const struct bvecq *bq; + unsigned long res =3D 0; + unsigned int slot =3D iter->bvecq_slot; + size_t skip =3D iter->iov_offset; + size_t size =3D iter->count; + + if (!size) + return res; + + for (bq =3D iter->bvecq; bq; bq =3D bq->next) { + for (; slot < bq->nr_segs; slot++) { + const struct bio_vec *bvec =3D &bq->bv[slot]; + size_t part =3D umin(bvec->bv_len - skip, size); + + res |=3D bvec->bv_offset + skip; + res |=3D part; + + size -=3D part; + if (size =3D=3D 0) + return res; + skip =3D 0; + } + + slot =3D 0; + } + + return res; +} + unsigned long iov_iter_alignment(const struct iov_iter *i) { if (likely(iter_is_ubuf(i))) { @@ -858,6 +985,8 @@ unsigned long iov_iter_alignment(const struct iov_iter = *i) /* With both xarray and folioq types, we're dealing with whole folios. 
*/ if (iov_iter_is_folioq(i)) return i->iov_offset | i->count; + if (iov_iter_is_bvecq(i)) + return iov_iter_alignment_bvecq(i); if (iov_iter_is_xarray(i)) return (i->xarray_start + i->iov_offset) | i->count; =20 @@ -1124,6 +1253,7 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_= iter *i, return iter_folioq_get_pages(i, pages, maxsize, maxpages, start); if (iov_iter_is_xarray(i)) return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); + WARN_ON_ONCE(iov_iter_is_bvecq(i)); return -EFAULT; } =20 @@ -1192,6 +1322,36 @@ static int bvec_npages(const struct iov_iter *i, int= maxpages) return npages; } =20 +static size_t iov_npages_bvecq(const struct iov_iter *iter, size_t maxpage= s) +{ + const struct bvecq *bq; + unsigned int slot =3D iter->bvecq_slot; + size_t npages =3D 0; + size_t skip =3D iter->iov_offset; + size_t size =3D iter->count; + + for (bq =3D iter->bvecq; bq; bq =3D bq->next) { + for (; slot < bq->nr_segs; slot++) { + const struct bio_vec *bvec =3D &bq->bv[slot]; + size_t offs =3D (bvec->bv_offset + skip) % PAGE_SIZE; + size_t part =3D umin(bvec->bv_len - skip, size); + + npages +=3D DIV_ROUND_UP(offs + part, PAGE_SIZE); + if (npages >=3D maxpages) + goto out; + + size -=3D part; + if (!size) + goto out; + skip =3D 0; + } + + slot =3D 0; + } +out: + return umin(npages, maxpages); +} + int iov_iter_npages(const struct iov_iter *i, int maxpages) { if (unlikely(!i->count)) @@ -1211,6 +1371,8 @@ int iov_iter_npages(const struct iov_iter *i, int max= pages) int npages =3D DIV_ROUND_UP(offset + i->count, PAGE_SIZE); return min(npages, maxpages); } + if (iov_iter_is_bvecq(i)) + return iov_npages_bvecq(i, maxpages); if (iov_iter_is_xarray(i)) { unsigned offset =3D (i->xarray_start + i->iov_offset) % PAGE_SIZE; int npages =3D DIV_ROUND_UP(offset + i->count, PAGE_SIZE); @@ -1554,6 +1716,124 @@ static ssize_t iov_iter_extract_folioq_pages(struct= iov_iter *i, return extracted; } =20 +/* + * Extract a list of virtually contiguous pages from an 
ITER_BVECQ iterato= r. + * This does not get references on the pages, nor does it get a pin on the= m. + */ +static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter, + struct page ***pages, size_t maxsize, + unsigned int maxpages, + iov_iter_extraction_t extraction_flags, + size_t *offset0) +{ + const struct bvecq *bvecq =3D iter->bvecq; + struct page **p; + unsigned int seg =3D iter->bvecq_slot, count =3D 0, nr =3D 0; + size_t extracted =3D 0, offset =3D iter->iov_offset; + + if (seg >=3D bvecq->nr_segs) { + bvecq =3D bvecq->next; + if (WARN_ON_ONCE(!bvecq)) + return 0; + seg =3D 0; + } + + /* First, we count the run of virtually contiguous pages. */ + do { + const struct bio_vec *bv =3D &bvecq->bv[seg]; + size_t boff =3D bv->bv_offset, blen =3D bv->bv_len; + + if (!bv->bv_page) + blen =3D 0; + if (extracted > 0 && boff % PAGE_SIZE) + break; + + if (offset < blen) { + size_t part =3D umin(maxsize - extracted, blen - offset); + size_t poff =3D (boff + offset) % PAGE_SIZE; + size_t pcount =3D DIV_ROUND_UP(poff + blen, PAGE_SIZE); + + offset +=3D part; + extracted +=3D part; + count +=3D pcount; + if ((boff + blen) % PAGE_SIZE) + break; + } + + if (offset >=3D blen) { + offset =3D 0; + seg++; + if (seg >=3D bvecq->nr_segs) { + if (!bvecq->next) { + WARN_ON_ONCE(extracted < iter->count); + break; + } + bvecq =3D bvecq->next; + seg =3D 0; + } + } + } while (count < maxpages && extracted < maxsize); + + maxpages =3D umin(maxpages, count); + + if (!*pages) { + *pages =3D kvmalloc_array(maxpages, sizeof(struct page *), GFP_KERNEL); + if (!*pages) + return -ENOMEM; + } + + p =3D *pages; + + /* Now transcribe the page pointers. 
*/ + extracted =3D 0; + bvecq =3D iter->bvecq; + offset =3D iter->iov_offset; + seg =3D iter->bvecq_slot; + + do { + const struct bio_vec *bv =3D &bvecq->bv[seg]; + size_t boff =3D bv->bv_offset, blen =3D bv->bv_len; + + if (!bv->bv_page) + blen =3D 0; + + if (offset < blen) { + size_t part =3D umin(maxsize - extracted, blen - offset); + size_t poff =3D (boff + offset) % PAGE_SIZE; + size_t pix =3D (boff + offset) / PAGE_SIZE; + + if (poff + part > PAGE_SIZE) + part =3D PAGE_SIZE - poff; + + if (!extracted) + *offset0 =3D poff; + + p[nr++] =3D bv->bv_page + pix; + offset +=3D part; + extracted +=3D part; + } + + if (offset >=3D blen) { + offset =3D 0; + seg++; + if (seg >=3D bvecq->nr_segs) { + if (!bvecq->next) { + WARN_ON_ONCE(extracted < iter->count); + break; + } + bvecq =3D bvecq->next; + seg =3D 0; + } + } + } while (nr < maxpages && extracted < maxsize); + + iter->bvecq =3D bvecq; + iter->bvecq_slot =3D seg; + iter->iov_offset =3D offset; + iter->count -=3D extracted; + return extracted; +} + /* * Extract a list of contiguous pages from an ITER_XARRAY iterator. This = does not * get references on the pages, nor does it get a pin on them. @@ -1838,6 +2118,10 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i, return iov_iter_extract_folioq_pages(i, pages, maxsize, maxpages, extraction_flags, offset0); + if (iov_iter_is_bvecq(i)) + return iov_iter_extract_bvecq_pages(i, pages, maxsize, + maxpages, extraction_flags, + offset0); if (iov_iter_is_xarray(i)) return iov_iter_extract_xarray_pages(i, pages, maxsize, maxpages, extraction_flags, diff --git a/lib/scatterlist.c b/lib/scatterlist.c index d773720d11bf..61ca42ac53f3 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -1328,6 +1328,68 @@ static ssize_t extract_folioq_to_sg(struct iov_iter = *iter, return ret; } =20 +/* + * Extract up to sg_max folios from an BVECQ-type iterator and add them to + * the scatterlist. The pages are not pinned. 
+ */ +static ssize_t extract_bvecq_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) +{ + const struct bvecq *bvecq =3D iter->bvecq; + struct scatterlist *sg =3D sgtable->sgl + sgtable->nents; + unsigned int seg =3D iter->bvecq_slot; + ssize_t ret =3D 0; + size_t offset =3D iter->iov_offset; + + if (seg >=3D bvecq->nr_segs) { + bvecq =3D bvecq->next; + if (WARN_ON_ONCE(!bvecq)) + return 0; + seg =3D 0; + } + + do { + const struct bio_vec *bv =3D &bvecq->bv[seg]; + size_t blen =3D bv->bv_len; + + if (!bv->bv_page) + blen =3D 0; + + if (offset < blen) { + size_t part =3D umin(maxsize - ret, blen - offset); + + sg_set_page(sg, bv->bv_page, part, bv->bv_offset + offset); + sgtable->nents++; + sg++; + sg_max--; + offset +=3D part; + ret +=3D part; + } + + if (offset >=3D blen) { + offset =3D 0; + seg++; + if (seg >=3D bvecq->nr_segs) { + if (!bvecq->next) { + WARN_ON_ONCE(ret < iter->count); + break; + } + bvecq =3D bvecq->next; + seg =3D 0; + } + } + } while (sg_max > 0 && ret < maxsize); + + iter->bvecq =3D bvecq; + iter->bvecq_slot =3D seg; + iter->iov_offset =3D offset; + iter->count -=3D ret; + return ret; +} + /* * Extract up to sg_max folios from an XARRAY-type iterator and add them to * the scatterlist. The pages are not pinned. 
@@ -1426,6 +1488,9 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, siz= e_t maxsize, case ITER_FOLIOQ: return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); + case ITER_BVECQ: + return extract_bvecq_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_XARRAY: return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); diff --git a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c index bb847e5010eb..644a1b9eb2d3 100644 --- a/lib/tests/kunit_iov_iter.c +++ b/lib/tests/kunit_iov_iter.c @@ -536,6 +536,183 @@ static void __init iov_kunit_copy_from_folioq(struct = kunit *test) KUNIT_SUCCEED(test); } =20 +static void iov_kunit_destroy_bvecq(void *data) +{ + struct bvecq *bq, *next; + + for (bq =3D data; bq; bq =3D next) { + next =3D bq->next; + for (int i =3D 0; i < bq->nr_segs; i++) + if (bq->bv[i].bv_page) + put_page(bq->bv[i].bv_page); + kfree(bq); + } +} + +static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned in= t max_segs) +{ + struct bvecq *bq; + + bq =3D kzalloc(struct_size(bq, __bv, max_segs), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bq); + bq->max_segs =3D max_segs; + return bq; +} + +static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned i= nt max_segs) +{ + struct bvecq *bq; + + bq =3D iov_kunit_alloc_bvecq(test, max_segs); + kunit_add_action_or_reset(test, iov_kunit_destroy_bvecq, bq); + return bq; +} + +static void __init iov_kunit_load_bvecq(struct kunit *test, + struct iov_iter *iter, int dir, + struct bvecq *bq_head, + struct page **pages, size_t npages) +{ + struct bvecq *bq =3D bq_head; + size_t size =3D 0; + + for (int i =3D 0; i < npages; i++) { + if (bq->nr_segs >=3D bq->max_segs) { + bq->next =3D iov_kunit_alloc_bvecq(test, 8); + bq->next->prev =3D bq; + bq =3D bq->next; + } + bvec_set_page(&bq->bv[bq->nr_segs], pages[i], PAGE_SIZE, 0); + bq->nr_segs++; + size +=3D PAGE_SIZE; + } + iov_iter_bvec_queue(iter, dir, bq_head, 0, 0, size); 
+} + +/* + * Test copying to a ITER_BVECQ-type iterator. + */ +static void __init iov_kunit_copy_to_bvecq(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct bvecq *bq; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, patt; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + bq =3D iov_kunit_create_bvecq(test, 8); + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + for (i =3D 0; i < bufsize; i++) + scratch[i] =3D pattern(i); + + buffer =3D iov_kunit_create_buffer(test, &bpages, npages); + memset(buffer, 0, bufsize); + + iov_kunit_load_bvecq(test, &iter, READ, bq, bpages, npages); + + i =3D 0; + for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { + size =3D pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_bvec_queue(&iter, READ, bq, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied =3D copy_to_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + i +=3D size; + if (test->status =3D=3D KUNIT_FAILURE) + goto stop; + } + + /* Build the expected image in the scratch buffer. */ + patt =3D 0; + memset(scratch, 0, bufsize); + for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) + for (i =3D pr->from; i < pr->to; i++) + scratch[i] =3D pattern(patt++); + + /* Compare the images */ + for (i =3D 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=3D%x", i); + if (buffer[i] !=3D scratch[i]) + return; + } + +stop: + KUNIT_SUCCEED(test); +} + +/* + * Test copying from a ITER_BVECQ-type iterator. 
+ */ +static void __init iov_kunit_copy_from_bvecq(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct bvecq *bq; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, j; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + bq =3D iov_kunit_create_bvecq(test, 8); + + buffer =3D iov_kunit_create_buffer(test, &bpages, npages); + for (i =3D 0; i < bufsize; i++) + buffer[i] =3D pattern(i); + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + memset(scratch, 0, bufsize); + + iov_kunit_load_bvecq(test, &iter, READ, bq, bpages, npages); + + i =3D 0; + for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { + size =3D pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_bvec_queue(&iter, WRITE, bq, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied =3D copy_from_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + i +=3D size; + } + + /* Build the expected image in the main buffer. 
 */
+	i = 0;
+	memset(buffer, 0, bufsize);
+	for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+		for (j = pr->from; j < pr->to; j++) {
+			buffer[i++] = pattern(j);
+			if (i >= bufsize)
+				goto stop;
+		}
+	}
+stop:
+
+	/* Compare the images */
+	for (i = 0; i < bufsize; i++) {
+		KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
+		if (scratch[i] != buffer[i])
+			return;
+	}
+
+	KUNIT_SUCCEED(test);
+}
+
 static void iov_kunit_destroy_xarray(void *data)
 {
 	struct xarray *xarray = data;
@@ -1016,6 +1193,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
 	KUNIT_CASE(iov_kunit_copy_from_bvec),
 	KUNIT_CASE(iov_kunit_copy_to_folioq),
 	KUNIT_CASE(iov_kunit_copy_from_folioq),
+	KUNIT_CASE(iov_kunit_copy_to_bvecq),
+	KUNIT_CASE(iov_kunit_copy_from_bvecq),
 	KUNIT_CASE(iov_kunit_copy_to_xarray),
 	KUNIT_CASE(iov_kunit_copy_from_xarray),
 	KUNIT_CASE(iov_kunit_extract_pages_kvec),

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara, linux-block@vger.kernel.org
Subject: [RFC PATCH 04/17] Add a function to kmap one page of a multipage bio_vec
Date: Wed, 4 Mar 2026 14:03:11 +0000
Message-ID: <20260304140328.112636-5-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add a function to kmap one page of a multipage bio_vec by offset (which is
added to the offset in the bio_vec internally).  The caller is responsible
for calculating how much of the page is then available.
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-block@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 include/linux/bvec.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index d3c897270d40..01292021f51e 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -308,6 +308,27 @@ static inline phys_addr_t bvec_phys(const struct bio_vec *bvec)
 	return page_to_phys(bvec->bv_page) + bvec->bv_offset;
 }
 
+/**
+ * kmap_local_bvec - Map part of a bvec into the kernel virtual address space
+ * @bvec: bvec to map
+ * @offset: Offset into bvec
+ *
+ * Map the page containing the byte at @offset into the kernel virtual address
+ * space.  The caller is responsible for making sure this doesn't overrun.
+ *
+ * Call kunmap_local on the returned address to unmap.
+ */
+static inline void *kmap_local_bvec(struct bio_vec *bvec, size_t offset)
+{
+#ifdef CONFIG_HIGHMEM
+	offset += bvec->bv_offset;
+
+	return kmap_local_page(bvec->bv_page + offset / PAGE_SIZE) + offset % PAGE_SIZE;
+#else
+	return bvec_virt(bvec) + offset;
+#endif
+}
+
 /*
  * Segmented bio_vec queue.
 *

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara
Subject: [RFC PATCH 05/17] netfs: Add some tools for managing bvecq chains
Date: Wed, 4 Mar 2026 14:03:12 +0000
Message-ID: <20260304140328.112636-6-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
Provide a selection of tools for managing bvec queue chains.  This includes:

 (1) Allocation, prepopulation, expansion, shortening and refcounting of
     bvecqs and bvecq chains.  This can be used to do things like creating
     an encryption buffer in cifs or a directory content buffer in afs.
     The memory segments will be appropriately disposed of according to the
     flags on the bvecq.

 (2) Management of a bvecq chain as a rolling buffer and the management of
     positions within it.

 (3) Loading folios, slicing chains and clearing content.

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/Makefile            |   1 +
 fs/netfs/bvecq.c             | 634 +++++++++++++++++++++++++++++++++++
 fs/netfs/internal.h          |  87 +++++
 fs/netfs/stats.c             |   4 +-
 include/linux/netfs.h        |  24 ++
 include/trace/events/netfs.h |  24 ++
 6 files changed, 773 insertions(+), 1 deletion(-)
 create mode 100644 fs/netfs/bvecq.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index b43188d64bd8..e1f12ecb5abf 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -3,6 +3,7 @@
 netfs-y := \
	buffered_read.o \
	buffered_write.o \
+	bvecq.o \
	direct_read.o \
	direct_write.o \
	iterator.o \
diff --git a/fs/netfs/bvecq.c b/fs/netfs/bvecq.c
new file mode 100644
index 000000000000..e223beb6661b
--- /dev/null
+++ b/fs/netfs/bvecq.c
@@ -0,0 +1,634 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Buffering helpers for bvec queues
+ *
+ * Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include "internal.h"
+
+void dump_bvecq(const struct bvecq *bq)
+{
+	int b = 0;
+
+	for (; bq; bq = bq->next, b++) {
+		int skipz = 0;
+
+		pr_notice("BQ[%u] %u/%u fp=%llx\n", b, bq->nr_segs, bq->max_segs, bq->fpos);
+		for (int s = 0; s < bq->nr_segs; s++) {
+			const struct bio_vec *bv = &bq->bv[s];
+
+			if (!bv->bv_page && !bv->bv_len && skipz < 2) {
+				skipz = 1;
+				continue;
+			}
+			if (skipz == 1)
+				pr_notice("BQ[%u:00-%02u] ...\n", b, s - 1);
+			skipz = 2;
+			pr_notice("BQ[%u:%02u] %10lx %04x %04x %u\n",
+				  b, s,
+				  bv->bv_page ? page_to_pfn(bv->bv_page) : 0,
+				  bv->bv_offset, bv->bv_len,
+				  bv->bv_page ? page_count(bv->bv_page) : 0);
+		}
+	}
+}
+
+/*
+ * Allocate a single bvecq chain element and initialise the header.
+ */
+struct bvecq *netfs_alloc_one_bvecq(size_t nr_slots, gfp_t gfp)
+{
+	struct bvecq *bq;
+	const size_t max_size = 512;
+	const size_t max_segs = (max_size - sizeof(*bq)) / sizeof(bq->__bv[0]);
+	size_t part = umin(nr_slots, max_segs);
+	size_t size = roundup_pow_of_two(struct_size(bq, __bv, part));
+
+	bq = kmalloc(size, gfp);
+	if (bq) {
+		*bq = (struct bvecq) {
+			.ref		= REFCOUNT_INIT(1),
+			.bv		= bq->__bv,
+			.inline_bv	= true,
+			.max_segs	= (size - sizeof(*bq)) / sizeof(bq->__bv[0]),
+		};
+		netfs_stat(&netfs_n_bvecq);
+	}
+	return bq;
+}
+
+/**
+ * netfs_alloc_bvecq - Allocate an unpopulated bvec queue
+ * @nr_slots: Number of slots to allocate
+ * @gfp: The allocation constraints.
+ *
+ * Allocate a chain of bvecq buffers providing at least the requested
+ * cumulative number of slots.
+ */
+struct bvecq *netfs_alloc_bvecq(size_t nr_slots, gfp_t gfp)
+{
+	struct bvecq *head = NULL, *tail = NULL;
+
+	_enter("%zu", nr_slots);
+
+	for (;;) {
+		struct bvecq *bq;
+
+		bq = netfs_alloc_one_bvecq(nr_slots, gfp);
+		if (!bq)
+			goto oom;
+
+		if (tail) {
+			tail->next = bq;
+			bq->prev = tail;
+		} else {
+			head = bq;
+		}
+		tail = bq;
+		if (tail->max_segs >= nr_slots)
+			break;
+		nr_slots -= tail->max_segs;
+	}
+
+	return head;
+oom:
+	netfs_free_bvecq_buffer(head);
+	return NULL;
+}
+EXPORT_SYMBOL(netfs_alloc_bvecq);
+
+/**
+ * netfs_alloc_bvecq_buffer - Allocate buffer space into a bvec queue
+ * @size: Target size of the buffer (can be 0 for an empty buffer)
+ * @pre_slots: Number of preamble slots to set aside
+ * @gfp: The allocation constraints.
+ */
+struct bvecq *netfs_alloc_bvecq_buffer(size_t size, unsigned int pre_slots, gfp_t gfp)
+{
+	struct bvecq *head = NULL, *tail = NULL, *p = NULL;
+	size_t count = DIV_ROUND_UP(size, PAGE_SIZE);
+
+	_enter("%zx,%zx,%u", size, count, pre_slots);
+
+	do {
+		struct page **pages;
+		int want, got;
+
+		p = netfs_alloc_one_bvecq(umin(count, 32 - 3), gfp);
+		if (!p)
+			goto oom;
+
+		p->free = true;
+
+		if (tail) {
+			tail->next = p;
+			p->prev = tail;
+		} else {
+			head = p;
+		}
+		tail = p;
+		if (!count)
+			break;
+
+		pages = (struct page **)&p->bv[p->max_segs];
+		pages -= p->max_segs - pre_slots;
+
+		want = umin(count, p->max_segs - pre_slots);
+		got = alloc_pages_bulk(gfp, want, pages);
+		if (got < want) {
+			for (int i = 0; i < got; i++)
+				__free_page(pages[i]);
+			goto oom;
+		}
+
+		tail->nr_segs = pre_slots + got;
+		for (int i = 0; i < got; i++) {
+			int j = pre_slots + i;
+
+			set_page_count(pages[i], 1);
+			bvec_set_page(&tail->bv[j], pages[i], PAGE_SIZE, 0);
+		}
+
+		count -= got;
+		pre_slots = 0;
+	} while (count > 0);
+
+	return head;
+oom:
+	netfs_free_bvecq_buffer(head);
+	return NULL;
+}
+EXPORT_SYMBOL(netfs_alloc_bvecq_buffer);
+
+/**
+ * netfs_expand_bvecq_buffer - Expand buffer space in a bvec queue
+ * @_buffer: Pointer to the bvec queue to add to (may point to a NULL; updated).
+ * @_cur_size: Current size of the buffer (updated).
+ * @size: Target size of the buffer.
+ * @gfp: The allocation constraints.
+ */
+int netfs_expand_bvecq_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp)
+{
+	struct bvecq *tail = *_buffer, *p;
+	const size_t max_segs = 32;
+
+	size = round_up(size, PAGE_SIZE);
+	if (*_cur_size >= size)
+		return 0;
+
+	if (tail)
+		while (tail->next)
+			tail = tail->next;
+
+	do {
+		struct page *page;
+		int order = 0;
+
+		if (!tail || bvecq_is_full(tail)) {
+			p = netfs_alloc_one_bvecq(max_segs, gfp);
+			if (!p)
+				return -ENOMEM;
+			if (tail) {
+				tail->next = p;
+				p->prev = tail;
+			} else {
+				*_buffer = p;
+			}
+			tail = p;
+		}
+
+		if (size - *_cur_size > PAGE_SIZE)
+			order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
+				     MAX_PAGECACHE_ORDER);
+
+		page = alloc_pages(gfp | __GFP_COMP, order);
+		if (!page && order > 0)
+			page = alloc_pages(gfp | __GFP_COMP, 0);
+		if (!page)
+			return -ENOMEM;
+
+		bvec_set_page(&tail->bv[tail->nr_segs++], page, PAGE_SIZE << order, 0);
+		*_cur_size += PAGE_SIZE << order;
+	} while (*_cur_size < size);
+
+	return 0;
+}
+EXPORT_SYMBOL(netfs_expand_bvecq_buffer);
+
+static void netfs_bvecq_free_seg(struct bvecq *bq, unsigned int seg)
+{
+	if (!bq->free ||
+	    !bq->bv[seg].bv_page)
+		return;
+
+	if (bq->unpin)
+		unpin_user_page(bq->bv[seg].bv_page);
+	else
+		__free_page(bq->bv[seg].bv_page);
+}
+
+/**
+ * netfs_free_bvecq_buffer - Free a bvec queue
+ * @bq: The start of the bvec queue to free
+ *
+ * Free up a chain of bvecqs and the pages it points to.
+ */
+void netfs_free_bvecq_buffer(struct bvecq *bq)
+{
+	struct bvecq *next;
+
+	for (; bq; bq = next) {
+		for (int seg = 0; seg < bq->nr_segs; seg++)
+			netfs_bvecq_free_seg(bq, seg);
+		next = bq->next;
+		netfs_stat_d(&netfs_n_bvecq);
+		kfree(bq);
+	}
+}
+EXPORT_SYMBOL(netfs_free_bvecq_buffer);
+
+/**
+ * netfs_put_bvecq - Put a bvec queue
+ * @bq: The start of the bvec queue to put
+ *
+ * Put the ref(s) on the nodes in a bvec queue, freeing up the node and the
+ * page fragments it points to as the refcounts become zero.
+ */
+void netfs_put_bvecq(struct bvecq *bq)
+{
+	struct bvecq *next;
+
+	for (; bq; bq = next) {
+		if (!refcount_dec_and_test(&bq->ref))
+			break;
+		for (int seg = 0; seg < bq->nr_segs; seg++)
+			netfs_bvecq_free_seg(bq, seg);
+		next = bq->next;
+		netfs_stat_d(&netfs_n_bvecq);
+		kfree(bq);
+	}
+}
+EXPORT_SYMBOL(netfs_put_bvecq);
+
+/**
+ * netfs_shorten_bvecq_buffer - Shorten a bvec queue buffer
+ * @bq: The start of the buffer to shorten
+ * @seg: The slot to start from
+ * @size: The size to retain
+ *
+ * Shorten the content of a bvec queue down to the minimum number of segments,
+ * starting at the specified segment, to retain the specified size.  An error
+ * will be reported if the bvec queue is undersized.
+ */
+int netfs_shorten_bvecq_buffer(struct bvecq *bq, unsigned int seg, size_t size)
+{
+	ssize_t retain = size;
+
+	/* Skip through the segments we want to keep. */
+	for (; bq; bq = bq->next) {
+		for (; seg < bq->nr_segs; seg++) {
+			retain -= bq->bv[seg].bv_len;
+			if (retain < 0)
+				goto found;
+		}
+		seg = 0;
+	}
+	if (WARN_ON_ONCE(retain > 0))
+		return -EMSGSIZE;
+	return 0;
+
+found:
+	/* Shorten the entry to be retained and clean the rest of this bvecq. */
+	bq->bv[seg].bv_len += retain;
+	seg++;
+	for (int i = seg; i < bq->nr_segs; i++)
+		netfs_bvecq_free_seg(bq, i);
+	bq->nr_segs = seg;
+
+	/* Free the queue tail. */
+	netfs_free_bvecq_buffer(bq->next);
+	bq->next = NULL;
+	return 0;
+}
+
+/*
+ * Initialise a rolling buffer.  We allocate an empty bvecq struct so that
+ * the pointers can be independently driven by the producer and the consumer.
+ */
+int bvecq_buffer_init(struct bvecq_pos *pos, unsigned int rreq_id)
+{
+	struct bvecq *bq;
+
+	bq = netfs_alloc_bvecq(14, GFP_NOFS);
+	if (!bq)
+		return -ENOMEM;
+
+	pos->bvecq = bq; /* Comes with a ref. */
+	pos->slot = 0;
+	pos->offset = 0;
+	return 0;
+}
+
+/*
+ * Add a new segment on to the rolling buffer, either because the previous one
+ * is full or because we have a discontiguity to contend with.
+ */
+int bvecq_buffer_make_space(struct bvecq_pos *pos)
+{
+	struct bvecq *bq, *head = pos->bvecq;
+
+	bq = netfs_alloc_bvecq(14, GFP_NOFS);
+	if (!bq)
+		return -ENOMEM;
+	bq->prev = head;
+
+	pos->bvecq = netfs_get_bvecq(bq);
+	pos->slot = 0;
+	pos->offset = 0;
+
+	/* Make sure the initialisation is stored before the next pointer.
+	 *
+	 * [!] NOTE: After we set head->next, the consumer is at liberty to
+	 * immediately delete the old head.
+	 */
+	smp_store_release(&head->next, bq);
+	netfs_put_bvecq(head);
+	return 0;
+}
+
+/*
+ * Advance a bvecq position by the given amount of data.
+ */
+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount)
+{
+	struct bvecq *bq = pos->bvecq;
+	unsigned int slot = pos->slot;
+	size_t offset = pos->offset;
+
+	if (slot >= bq->nr_segs) {
+		bq = bq->next;
+		slot = 0;
+	}
+
+	while (amount) {
+		const struct bio_vec *bv = &bq->bv[slot];
+		size_t part = umin(bv->bv_len - offset, amount);
+
+		if (likely(part < bv->bv_len)) {
+			offset += part;
+			break;
+		}
+		amount -= part;
+		offset = 0;
+		slot++;
+		if (slot >= bq->nr_segs) {
+			if (!bq->next)
+				break;
+			bq = bq->next;
+			slot = 0;
+		}
+	}
+
+	pos->slot = slot;
+	pos->offset = offset;
+	bvecq_pos_move(pos, bq);
+}
+
+/*
+ * Clear memory fragments pointed to by a bvec queue, advancing the position.
+ */
+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount)
+{
+	struct bvecq *bq = pos->bvecq;
+	unsigned int slot = pos->slot;
+	ssize_t cleared = 0;
+	size_t offset = pos->offset;
+
+	if (WARN_ON_ONCE(!bq))
+		return 0;
+
+	if (slot >= bq->nr_segs) {
+		bq = bq->next;
+		if (WARN_ON_ONCE(!bq))
+			return 0;
+		slot = 0;
+	}
+
+	do {
+		const struct bio_vec *bv = &bq->bv[slot];
+
+		if (offset < bv->bv_len) {
+			size_t part = umin(amount - cleared, bv->bv_len - offset);
+
+			memzero_page(bv->bv_page, bv->bv_offset + offset, part);
+
+			offset += part;
+			cleared += part;
+		}
+
+		if (offset >= bv->bv_len) {
+			offset = 0;
+			slot++;
+			if (slot >= bq->nr_segs) {
+				if (!bq->next)
+					break;
+				bq = bq->next;
+				slot = 0;
+			}
+		}
+	} while (cleared < amount);
+
+	bvecq_pos_move(pos, bq);
+	pos->slot = slot;
+	pos->offset = offset;
+	return cleared;
+}
+
+/*
+ * Determine the size and number of segments that can be obtained from the next
+ * slice of the bvec queue, up to the maximum size and segment count specified.
+ * The position cursor is updated to the end of the slice.
+ */
+size_t bvecq_slice(struct bvecq_pos *pos, size_t max_size,
+		   unsigned int max_segs, unsigned int *_nr_segs)
+{
+	struct bvecq *bq;
+	unsigned int slot = pos->slot, nsegs = 0;
+	size_t size = 0;
+	size_t offset = pos->offset;
+
+	bq = pos->bvecq;
+	for (;;) {
+		for (; slot < bq->nr_segs; slot++) {
+			const struct bio_vec *bvec = &bq->bv[slot];
+
+			if (offset < bvec->bv_len && bvec->bv_page) {
+				size_t part = umin(bvec->bv_len - offset, max_size);
+
+				size += part;
+				offset += part;
+				max_size -= part;
+				nsegs++;
+				if (!max_size || nsegs >= max_segs)
+					goto out;
+			}
+			offset = 0;
+		}
+
+		/* pos->bvecq isn't allowed to go NULL as the queue may get
+		 * extended and we would lose our place.
+		 */
+		if (!bq->next)
+			break;
+		slot = 0;
+		bq = bq->next;
+	}
+
+out:
+	*_nr_segs = nsegs;
+	if (slot == bq->nr_segs && bq->next) {
+		bq = bq->next;
+		slot = 0;
+		offset = 0;
+	}
+	bvecq_pos_move(pos, bq);
+	pos->slot = slot;
+	pos->offset = offset;
+	return size;
+}
+
+/*
+ * Extract page fragments from a bvec queue position into another bvecq, which
+ * we allocate.  The position is advanced.
+ */
+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t amount,
+		      unsigned int max_segs, struct bvecq **to)
+{
+	struct bvecq_pos tmp_pos;
+	struct bvecq *src, *dst = NULL;
+	unsigned int slot = pos->slot, nsegs;
+	ssize_t extracted = 0;
+	size_t offset = pos->offset;
+
+	*to = NULL;
+	if (!max_segs)
+		max_segs = UINT_MAX;
+
+	bvecq_pos_attach(&tmp_pos, pos);
+	amount = bvecq_slice(&tmp_pos, amount, max_segs, &nsegs);
+	bvecq_pos_detach(&tmp_pos);
+	if (nsegs == 0)
+		return -EIO;
+
+	dst = netfs_alloc_bvecq(nsegs, GFP_KERNEL);
+	if (!dst)
+		return -ENOMEM;
+	*to = dst;
+
+	/* Transcribe the segments */
+	src = pos->bvecq;
+	for (;;) {
+		for (; slot < src->nr_segs; slot++) {
+			const struct bio_vec *sv = &src->bv[slot];
+			struct bio_vec *dv = &dst->bv[dst->nr_segs];
+
+			_debug("EXTR sl=%x off=%zx am=%zx p=%lx",
+			       slot, offset, amount, page_to_pfn(sv->bv_page));
+
+			if (offset < sv->bv_len && sv->bv_page) {
+				size_t part = umin(sv->bv_len - offset, amount);
+
+				bvec_set_page(dv, sv->bv_page, part,
+					      sv->bv_offset + offset);
+				extracted += part;
+				amount -= part;
+				offset += part;
+				trace_netfs_bv_slot(dst, dst->nr_segs);
+				dst->nr_segs++;
+				if (bvecq_is_full(dst))
+					dst = dst->next;
+				if (nsegs >= max_segs)
+					goto out;
+				if (amount == 0)
+					goto out;
+			}
+			offset = 0;
+		}
+
+		/* pos->bvecq isn't allowed to go NULL as the queue may get
+		 * extended and we would lose our place.
+		 */
+		if (!src->next)
+			break;
+		slot = 0;
+		src = src->next;
+	}
+
+out:
+	if (slot == src->nr_segs && src->next) {
+		src = src->next;
+		slot = 0;
+		offset = 0;
+	}
+	bvecq_pos_move(pos, src);
+	pos->slot = slot;
+	pos->offset = offset;
+	return extracted;
+}
+
+/*
+ * Decant part of the list of folios to read onto a bvecq.  The list must be
+ * pre-seeded with a bvecq object.
+ */
+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos,
+			   struct readahead_control *ractl,
+			   struct folio_batch *put_batch)
+{
+	struct folio **folios;
+	struct bvecq *bq = pos->bvecq;
+	unsigned int space;
+	ssize_t loaded = 0;
+	int nr;
+
+	if (bvecq_is_full(bq)) {
+		bq = netfs_alloc_bvecq(14, GFP_NOFS);
+		if (!bq)
+			return -ENOMEM;
+		bq->prev = pos->bvecq;
+	}
+
+	space = bq->max_segs - bq->nr_segs;
+
+	folios = (struct folio **)(bq->bv + bq->max_segs);
+	folios -= space;
+
+	nr = __readahead_batch(ractl, (struct page **)folios, space);
+
+	_enter("%u/%u %u/%u", bq->nr_segs, bq->max_segs, nr, space);
+
+	bq->fpos = folio_pos(folios[0]);
+
+	for (int i = 0; i < nr; i++) {
+		struct folio *folio = folios[i];
+		size_t len = folio_size(folio);
+
+		loaded += len;
+		bvec_set_folio(&bq->bv[bq->nr_segs + i], folio, len, 0);
+
+		trace_netfs_folio(folio, netfs_folio_trace_read);
+		if (!folio_batch_add(put_batch, folio))
+			folio_batch_release(put_batch);
+	}
+
+	/* Update the counter after setting the slots. */
+	smp_store_release(&bq->nr_segs, bq->nr_segs + nr);
+
+	if (bq != pos->bvecq) {
+		/* Write the next pointer after initialisation. */
+		smp_store_release(&pos->bvecq->next, bq);
+		bvecq_pos_move(pos, bq);
+	}
+	return loaded;
+}
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index d436e20d3418..89ebeb49e969 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -33,6 +33,92 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
			 loff_t pos, size_t copied);
 
+/*
+ * bvecq.c
+ */
+struct bvecq *netfs_alloc_one_bvecq(size_t nr_slots, gfp_t gfp);
+int bvecq_buffer_init(struct bvecq_pos *pos, unsigned int rreq_id);
+int bvecq_buffer_make_space(struct bvecq_pos *pos);
+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount);
+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount);
+size_t bvecq_slice(struct bvecq_pos *pos, size_t max_size,
+		   unsigned int max_segs, unsigned int *_nr_segs);
+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t amount,
+		      unsigned int max_segs, struct bvecq **to);
+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos,
+			   struct readahead_control *ractl,
+			   struct folio_batch *put_batch);
+
+struct bvecq *netfs_get_bvecq(struct bvecq *bq);
+
+static inline void bvecq_pos_attach(struct bvecq_pos *pos, const struct bvecq_pos *at)
+{
+	*pos = *at;
+	netfs_get_bvecq(pos->bvecq);
+}
+
+static inline void bvecq_pos_detach(struct bvecq_pos *pos)
+{
+	netfs_put_bvecq(pos->bvecq);
+	pos->bvecq = NULL;
+	pos->slot = 0;
+	pos->offset = 0;
+}
+
+static inline void bvecq_pos_transfer(struct bvecq_pos *pos, struct bvecq_pos *from)
+{
+	*pos = *from;
+	from->bvecq = NULL;
+	from->slot = 0;
+	from->offset = 0;
+}
+
+static inline void bvecq_pos_move(struct bvecq_pos *pos, struct bvecq *to)
+{
+	struct bvecq *old = pos->bvecq;
+
+	if (old != to) {
+		pos->bvecq = netfs_get_bvecq(to);
+		netfs_put_bvecq(old);
+	}
+}
+
+static inline bool bvecq_buffer_step(struct bvecq_pos *pos)
+{
+	struct bvecq *bq = pos->bvecq;
+
+	pos->slot++;
+	pos->offset = 0;
+	if (pos->slot <= bq->nr_segs)
+		return true;
+	if (!bq->next)
+		return false;
+	bvecq_pos_move(pos, bq->next);
+	return true;
+}
+
+static inline struct bvecq *bvecq_buffer_delete_spent(struct bvecq_pos *pos)
+{
+	struct bvecq *spent = pos->bvecq;
+	/* Read the contents of the queue segment after the pointer to it. */
+	struct bvecq *next = smp_load_acquire(&spent->next);
+
+	if (!next)
+		return NULL;
+	next->prev = NULL;
+	spent->next = NULL;
+	netfs_put_bvecq(spent);
+	pos->bvecq = next; /* We take spent's ref */
+	pos->slot = 0;
+	pos->offset = 0;
+	return next;
+}
+
+static inline bool bvecq_is_full(const struct bvecq *bvecq)
+{
+	return bvecq->nr_segs >= bvecq->max_segs;
+}
+
 /*
  * main.c
  */
@@ -166,6 +252,7 @@ extern atomic_t netfs_n_wh_retry_write_subreq;
 extern atomic_t netfs_n_wb_lock_skip;
 extern atomic_t netfs_n_wb_lock_wait;
 extern atomic_t netfs_n_folioq;
+extern atomic_t netfs_n_bvecq;
 
 int netfs_stats_show(struct seq_file *m, void *v);
 
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index ab6b916addc4..84c2a4bcc762 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -48,6 +48,7 @@ atomic_t netfs_n_wh_retry_write_subreq;
 atomic_t netfs_n_wb_lock_skip;
 atomic_t netfs_n_wb_lock_wait;
 atomic_t netfs_n_folioq;
+atomic_t netfs_n_bvecq;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
@@ -90,9 +91,10 @@ int netfs_stats_show(struct seq_file *m, void *v)
		   atomic_read(&netfs_n_rh_retry_read_subreq),
		   atomic_read(&netfs_n_wh_retry_write_req),
		   atomic_read(&netfs_n_wh_retry_write_subreq));
-	seq_printf(m, "Objs : rr=%u sr=%u foq=%u wsc=%u\n",
+	seq_printf(m, "Objs : rr=%u sr=%u bq=%u foq=%u wsc=%u\n",
		   atomic_read(&netfs_n_rh_rreq),
		   atomic_read(&netfs_n_rh_sreq),
+		   atomic_read(&netfs_n_bvecq),
		   atomic_read(&netfs_n_folioq),
		   atomic_read(&netfs_n_wh_wstream_conflict));
	seq_printf(m, "WbLock : skip=%u wait=%u\n",
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 72ee7d210a74..f360b25ceb31 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -17,12 +17,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
 enum netfs_sreq_ref_trace;
 typedef struct mempool mempool_t;
 struct folio_queue;
+struct bvecq;
 
 /**
  * folio_start_private_2 - Start an fscache write on a folio. [DEPRECATED]
@@ -40,6 +42,16 @@ static inline void folio_start_private_2(struct folio *folio)
	folio_set_private_2(folio);
 }
 
+/*
+ * Position in a bio_vec queue.  The bvecq_pos holds a ref on the queue
+ * segment it points to.
+ */
+struct bvecq_pos {
+	struct bvecq	*bvecq;		/* The first bvecq */
+	unsigned int	offset;		/* The offset within the starting slot */
+	u16		slot;		/* The starting slot */
+};
+
 enum netfs_io_source {
	NETFS_SOURCE_UNKNOWN,
	NETFS_FILL_WITH_ZEROES,
@@ -462,6 +474,12 @@ int netfs_alloc_folioq_buffer(struct address_space *mapping,
			      struct folio_queue **_buffer, size_t *_cur_size,
			      ssize_t size, gfp_t gfp);
 void netfs_free_folioq_buffer(struct folio_queue *fq);
+void dump_bvecq(const struct bvecq *bq);
+struct bvecq *netfs_alloc_bvecq(size_t nr_slots, gfp_t gfp);
+struct bvecq *netfs_alloc_bvecq_buffer(size_t size, unsigned int pre_slots, gfp_t gfp);
+void netfs_free_bvecq_buffer(struct bvecq *bq);
+void netfs_put_bvecq(struct bvecq *bq);
+int netfs_shorten_bvecq_buffer(struct bvecq *bq, unsigned int seg, size_t size);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
@@ -552,4 +570,10 @@ static inline void netfs_wait_for_outstanding_io(struct inode *inode)
	wait_var_event(&ictx->io_count, atomic_read(&ictx->io_count) == 0);
 }
 
+static inline struct bvecq *netfs_get_bvecq(struct bvecq *bq)
+{
+	refcount_inc(&bq->ref);
+	return bq;
+}
+
 #endif /* _LINUX_NETFS_H */
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 2d366be46a1c..2523adc3ad85 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -778,6 +778,30 @@ TRACE_EVENT(netfs_folioq,
		      __print_symbolic(__entry->trace, netfs_folioq_traces))
	);
 
+TRACE_EVENT(netfs_bv_slot,
+	    TP_PROTO(const struct bvecq *bq, int slot),
+
+	    TP_ARGS(bq, slot),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned long, pfn)
+		    __field(unsigned int, offset)
+		    __field(unsigned int, len)
+		    __field(unsigned int, slot)
+		    ),
+
+	    TP_fast_assign(
+		    __entry->slot	= slot;
+		    __entry->pfn	= page_to_pfn(bq->bv[slot].bv_page);
+		    __entry->offset	= bq->bv[slot].bv_offset;
+		    __entry->len	= bq->bv[slot].bv_len;
+		    ),
+
+	    TP_printk("bq[%x] p=%lx %x-%x",
+		      __entry->slot,
+		      __entry->pfn, __entry->offset, __entry->offset + __entry->len)
+	    );
+
 #undef EM
 #undef E_
 #endif /* _TRACE_NETFS_H */

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara, Marc Dionne
Subject: [RFC PATCH 06/17] afs: Use a bvecq to hold dir content rather than folioq
Date: Wed, 4 Mar 2026 14:03:13 +0000
Message-ID: <20260304140328.112636-7-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Use a bvecq to hold the contents of a directory rather than the folioq so
that the latter can be phased out.
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/dir.c           |  37 +++++-----
 fs/afs/dir_edit.c      |  41 +++++------
 fs/afs/dir_search.c    |  33 ++++-----
 fs/afs/inode.c         |  18 ++---
 fs/afs/internal.h      |   6 +-
 fs/netfs/write_issue.c | 156 ++++++-----------------------------------
 include/linux/netfs.h  |   1 +
 7 files changed, 87 insertions(+), 205 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 78caef3f1338..1d1be7e5923f 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -136,9 +136,9 @@ static void afs_dir_dump(struct afs_vnode *dvnode)
	pr_warn("DIR %llx:%llx is=%llx\n",
		dvnode->fid.vid, dvnode->fid.vnode, i_size);
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
-	iterate_folioq(&iter, iov_iter_count(&iter), NULL, NULL,
-		       afs_dir_dump_step);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	iterate_bvecq(&iter, iov_iter_count(&iter), NULL, NULL,
+		      afs_dir_dump_step);
 }
 
 /*
@@ -199,9 +199,9 @@ static int afs_dir_check(struct afs_vnode *dvnode)
	if (unlikely(!i_size))
		return 0;
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
-	checked = iterate_folioq(&iter, iov_iter_count(&iter), dvnode, NULL,
-				 afs_dir_check_step);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	checked = iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, NULL,
+				afs_dir_check_step);
	if (checked != i_size) {
		afs_dir_dump(dvnode);
		return -EIO;
@@ -255,15 +255,14 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
	if (dvnode->directory_size < i_size) {
		size_t cur_size = dvnode->directory_size;
 
-		ret = netfs_alloc_folioq_buffer(NULL,
-						&dvnode->directory, &cur_size, i_size,
+		ret = netfs_expand_bvecq_buffer(&dvnode->directory, &cur_size, i_size,
						mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
		dvnode->directory_size = cur_size;
		if (ret < 0)
			return ret;
	}
 
-	iov_iter_folio_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
+	iov_iter_bvec_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
 
	/* AFS requires us to perform the read of a directory synchronously as
	 * a single unit to avoid issues with the directory contents being
@@ -282,9 +281,9 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
 
		if (ret2 < 0)
			ret = ret2;
-	} else if (i_size < folioq_folio_size(dvnode->directory, 0)) {
+	} else if (i_size < PAGE_SIZE) {
		/* NUL-terminate a symlink. */
-		char *symlink = kmap_local_folio(folioq_folio(dvnode->directory, 0), 0);
+		char *symlink = kmap_local_bvec(&dvnode->directory->bv[0], 0);
 
		symlink[i_size] = 0;
		kunmap_local(symlink);
@@ -305,8 +304,8 @@ ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file)
 }
 
 /*
- * Read the directory into a folio_queue buffer in one go, scrubbing the
- * previous contents.  We return -ESTALE if the caller needs to call us again.
+ * Read the directory into the buffer in one go, scrubbing the previous
+ * contents.  We return -ESTALE if the caller needs to call us again.
 */
 ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
	__acquires(&dvnode->validate_lock)
@@ -487,7 +486,7 @@ static size_t afs_dir_iterate_step(void *iter_base, size_t progress, size_t len,
 }
 
 /*
- * Iterate through the directory folios.
+ * Iterate through the directory content.
 */
 static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_ctx)
 {
@@ -502,11 +501,11 @@ static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_c
	if (i_size <= 0 || dir_ctx->pos >= i_size)
		return 0;
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
	iov_iter_advance(&iter, round_down(dir_ctx->pos, AFS_DIR_BLOCK_SIZE));
 
-	iterate_folioq(&iter, iov_iter_count(&iter), dvnode, &ctx,
-		       afs_dir_iterate_step);
+	iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, &ctx,
+		      afs_dir_iterate_step);
 
	if (ctx.error == -ESTALE)
		afs_invalidate_dir(dvnode, afs_dir_invalid_iter_stale);
@@ -2211,8 +2210,8 @@ int afs_single_writepages(struct address_space *mapping,
	if (is_dir ?
	    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) :
	    atomic64_read(&dvnode->cb_expires_at) != AFS_NO_CB_PROMISE) {
-		iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
-				     i_size_read(&dvnode->netfs.inode));
+		iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
+				    i_size_read(&dvnode->netfs.inode));
		ret = netfs_writeback_single(mapping, wbc, &iter);
	}
 
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index fd3aa9f97ce6..ef9066659438 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -110,9 +110,8 @@ static void afs_clear_contig_bits(union afs_xdr_dir_block *block,
 */
 static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, size_t block)
 {
-	struct folio_queue *fq;
	struct afs_vnode *dvnode = iter->dvnode;
-	struct folio *folio;
+	struct bvecq *bq;
	size_t blpos = block * AFS_DIR_BLOCK_SIZE;
	size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
	int ret;
@@ -120,41 +119,39 @@ static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, siz
	if (dvnode->directory_size < blend) {
		size_t cur_size = dvnode->directory_size;
 
-		ret = netfs_alloc_folioq_buffer(
-			NULL, &dvnode->directory, &cur_size, blend,
+		ret = netfs_expand_bvecq_buffer(
+			&dvnode->directory, &cur_size, blend,
			mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
		dvnode->directory_size = cur_size;
		if (ret < 0)
			goto fail;
	}
 
-	fq = iter->fq;
-	if (!fq)
-		fq = dvnode->directory;
+	bq = iter->bq;
+	if (!bq)
+		bq = dvnode->directory;
 
-	/* Search the folio queue for the folio containing the block... */
-	for (; fq; fq = fq->next) {
-		for (int s = iter->fq_slot; s < folioq_count(fq); s++) {
-			size_t fsize = folioq_folio_size(fq, s);
+	/* Search the contents for the region containing the block... */
+	for (; bq; bq = bq->next) {
+		for (int s = iter->bq_slot; s < bq->nr_segs; s++) {
+			struct bio_vec *bv = &bq->bv[s];
+			size_t bsize = bv->bv_len;
 
-			if (blend <= fpos + fsize) {
+			if (blend <= fpos + bsize) {
				/* ... and then return the mapped block. */
-				folio = folioq_folio(fq, s);
-				if (WARN_ON_ONCE(folio_pos(folio) != fpos))
-					goto fail;
-				iter->fq = fq;
-				iter->fq_slot = s;
+				iter->bq = bq;
+				iter->bq_slot = s;
				iter->fpos = fpos;
-				return kmap_local_folio(folio, blpos - fpos);
+				return kmap_local_bvec(bv, blpos - fpos);
			}
-			fpos += fsize;
+			fpos += bsize;
		}
-		iter->fq_slot = 0;
+		iter->bq_slot = 0;
	}
 
 fail:
-	iter->fq = NULL;
-	iter->fq_slot = 0;
+	iter->bq = NULL;
+	iter->bq_slot = 0;
	afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
	return NULL;
 }
diff --git a/fs/afs/dir_search.c b/fs/afs/dir_search.c
index d2516e55b5ed..1088b2c4db6e 100644
--- a/fs/afs/dir_search.c
+++ b/fs/afs/dir_search.c
@@ -66,12 +66,11 @@ bool afs_dir_init_iter(struct afs_dir_iter *iter, const struct qstr *name)
 */
 union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t block)
 {
-	struct folio_queue *fq = iter->fq;
	struct afs_vnode *dvnode = iter->dvnode;
-	struct folio *folio;
+	struct bvecq *bq = iter->bq;
	size_t blpos = block * AFS_DIR_BLOCK_SIZE;
	size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
-	int slot = iter->fq_slot;
+	int slot = iter->bq_slot;
 
	_enter("%zx,%d", block, slot);
 
@@ -83,36 +82,34 @@ union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t bl
	if (dvnode->directory_size < blend)
		goto fail;
 
-	if (!fq || blpos < fpos) {
-		fq = dvnode->directory;
+	if (!bq || blpos < fpos) {
+		bq = dvnode->directory;
		slot = 0;
		fpos = 0;
	}
 
	/* Search the folio queue for the folio containing the block... */
-	for (; fq; fq = fq->next) {
-		for (; slot < folioq_count(fq); slot++) {
-			size_t fsize = folioq_folio_size(fq, slot);
+	for (; bq; bq = bq->next) {
+		for (; slot < bq->max_segs; slot++) {
+			struct bio_vec *bv = &bq->bv[slot];
+			size_t bsize = bv->bv_len;
 
-			if (blend <= fpos + fsize) {
+			if (blend <= fpos + bsize) {
				/* ... and then return the mapped block. */
-				folio = folioq_folio(fq, slot);
-				if (WARN_ON_ONCE(folio_pos(folio) != fpos))
-					goto fail;
-				iter->fq = fq;
-				iter->fq_slot = slot;
+				iter->bq = bq;
+				iter->bq_slot = slot;
				iter->fpos = fpos;
-				iter->block = kmap_local_folio(folio, blpos - fpos);
+				iter->block = kmap_local_bvec(bv, blpos - fpos);
				return iter->block;
			}
-			fpos += fsize;
+			fpos += bsize;
		}
		slot = 0;
	}
 
 fail:
-	iter->fq = NULL;
-	iter->fq_slot = 0;
+	iter->bq = NULL;
+	iter->bq_slot = 0;
	afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
	return NULL;
 }
diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index dde1857fcabb..1a4e90d7ed01 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -31,12 +31,12 @@ void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
	size_t dsize = 0;
	char *p;
 
-	if (netfs_alloc_folioq_buffer(NULL, &vnode->directory, &dsize, size,
+	if (netfs_expand_bvecq_buffer(&vnode->directory, &dsize, size,
				      mapping_gfp_mask(vnode->netfs.inode.i_mapping)) < 0)
		return;
 
	vnode->directory_size = dsize;
-
p =3D kmap_local_folio(folioq_folio(vnode->directory, 0), 0); + p =3D kmap_local_bvec(&vnode->directory->bv[0], 0); memcpy(p, op->create.symlink, size); kunmap_local(p); set_bit(AFS_VNODE_DIR_READ, &vnode->flags); @@ -45,17 +45,17 @@ void afs_init_new_symlink(struct afs_vnode *vnode, stru= ct afs_operation *op) =20 static void afs_put_link(void *arg) { - struct folio *folio =3D virt_to_folio(arg); + struct page *page =3D virt_to_page(arg); =20 kunmap_local(arg); - folio_put(folio); + put_page(page); } =20 const char *afs_get_link(struct dentry *dentry, struct inode *inode, struct delayed_call *callback) { struct afs_vnode *vnode =3D AFS_FS_I(inode); - struct folio *folio; + struct page *page; char *content; ssize_t ret; =20 @@ -84,9 +84,9 @@ const char *afs_get_link(struct dentry *dentry, struct in= ode *inode, set_bit(AFS_VNODE_DIR_READ, &vnode->flags); =20 good: - folio =3D folioq_folio(vnode->directory, 0); - folio_get(folio); - content =3D kmap_local_folio(folio, 0); + page =3D vnode->directory->bv[0].bv_page; + get_page(page); + content =3D kmap_local_page(page); set_delayed_call(callback, afs_put_link, content); return content; } @@ -761,7 +761,7 @@ void afs_evict_inode(struct inode *inode) =20 netfs_wait_for_outstanding_io(inode); truncate_inode_pages_final(&inode->i_data); - netfs_free_folioq_buffer(vnode->directory); + netfs_free_bvecq_buffer(vnode->directory); =20 afs_set_cache_aux(vnode, &aux); netfs_clear_inode_writeback(inode, &aux); diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 009064b8d661..9bf5d2f1dbc4 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -710,7 +710,7 @@ struct afs_vnode { #define AFS_VNODE_MODIFYING 10 /* Set if we're performing a modification = op */ #define AFS_VNODE_DIR_READ 11 /* Set if we've read a dir's contents */ =20 - struct folio_queue *directory; /* Directory contents */ + struct bvecq *directory; /* Directory contents */ struct list_head wb_keys; /* List of keys available for writeback */ struct 
list_head pending_locks; /* locks waiting to be granted */ struct list_head granted_locks; /* locks granted on this file */ @@ -983,9 +983,9 @@ static inline void afs_invalidate_cache(struct afs_vnod= e *vnode, unsigned int fl struct afs_dir_iter { struct afs_vnode *dvnode; union afs_xdr_dir_block *block; - struct folio_queue *fq; + struct bvecq *bq; unsigned int fpos; - int fq_slot; + int bq_slot; unsigned int loop_check; u8 nr_slots; u8 bucket; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 437268f65640..fd4dc89d9d8d 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -698,124 +698,11 @@ ssize_t netfs_end_writethrough(struct netfs_io_reque= st *wreq, struct writeback_c return ret; } =20 -/* - * Write some of a pending folio data back to the server and/or the cache. - */ -static int netfs_write_folio_single(struct netfs_io_request *wreq, - struct folio *folio) -{ - struct netfs_io_stream *upload =3D &wreq->io_streams[0]; - struct netfs_io_stream *cache =3D &wreq->io_streams[1]; - struct netfs_io_stream *stream; - size_t iter_off =3D 0; - size_t fsize =3D folio_size(folio), flen; - loff_t fpos =3D folio_pos(folio); - bool to_eof =3D false; - bool no_debug =3D false; - - _enter(""); - - flen =3D folio_size(folio); - if (flen > wreq->i_size - fpos) { - flen =3D wreq->i_size - fpos; - folio_zero_segment(folio, flen, fsize); - to_eof =3D true; - } else if (flen =3D=3D wreq->i_size - fpos) { - to_eof =3D true; - } - - _debug("folio %zx/%zx", flen, fsize); - - if (!upload->avail && !cache->avail) { - trace_netfs_folio(folio, netfs_folio_trace_cancel_store); - return 0; - } - - if (!upload->construct) - trace_netfs_folio(folio, netfs_folio_trace_store); - else - trace_netfs_folio(folio, netfs_folio_trace_store_plus); - - /* Attach the folio to the rolling buffer. 
*/ - folio_get(folio); - rolling_buffer_append(&wreq->buffer, folio, NETFS_ROLLBUF_PUT_MARK); - - /* Move the submission point forward to allow for write-streaming data - * not starting at the front of the page. We don't do write-streaming - * with the cache as the cache requires DIO alignment. - * - * Also skip uploading for data that's been read and just needs copying - * to the cache. - */ - for (int s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - stream->submit_off =3D 0; - stream->submit_len =3D flen; - if (!stream->avail) { - stream->submit_off =3D UINT_MAX; - stream->submit_len =3D 0; - } - } - - /* Attach the folio to one or more subrequests. For a big folio, we - * could end up with thousands of subrequests if the wsize is small - - * but we might need to wait during the creation of subrequests for - * network resources (eg. SMB credits). - */ - for (;;) { - ssize_t part; - size_t lowest_off =3D ULONG_MAX; - int choose_s =3D -1; - - /* Always add to the lowest-submitted stream first. */ - for (int s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - if (stream->submit_len > 0 && - stream->submit_off < lowest_off) { - lowest_off =3D stream->submit_off; - choose_s =3D s; - } - } - - if (choose_s < 0) - break; - stream =3D &wreq->io_streams[choose_s]; - - /* Advance the iterator(s). 
*/ - if (stream->submit_off > iter_off) { - rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off); - iter_off =3D stream->submit_off; - } - - atomic64_set(&wreq->issued_to, fpos + stream->submit_off); - stream->submit_extendable_to =3D fsize - stream->submit_off; - part =3D netfs_advance_write(wreq, stream, fpos + stream->submit_off, - stream->submit_len, to_eof); - stream->submit_off +=3D part; - if (part > stream->submit_len) - stream->submit_len =3D 0; - else - stream->submit_len -=3D part; - if (part > 0) - no_debug =3D true; - } - - wreq->buffer.iter.iov_offset =3D 0; - if (fsize > iter_off) - rolling_buffer_advance(&wreq->buffer, fsize - iter_off); - atomic64_set(&wreq->issued_to, fpos + fsize); - - if (!no_debug) - kdebug("R=3D%x: No submit", wreq->debug_id); - _leave(" =3D 0"); - return 0; -} - /** * netfs_writeback_single - Write back a monolithic payload * @mapping: The mapping to write from * @wbc: Hints from the VM - * @iter: Data to write, must be ITER_FOLIOQ. + * @iter: Data to write. * * Write a monolithic, non-pagecache object back to the server and/or * the cache. 
@@ -826,13 +713,8 @@ int netfs_writeback_single(struct address_space *mappi= ng, { struct netfs_io_request *wreq; struct netfs_inode *ictx =3D netfs_inode(mapping->host); - struct folio_queue *fq; - size_t size =3D iov_iter_count(iter); int ret; =20 - if (WARN_ON_ONCE(!iov_iter_is_folioq(iter))) - return -EIO; - if (!mutex_trylock(&ictx->wb_lock)) { if (wbc->sync_mode =3D=3D WB_SYNC_NONE) { netfs_stat(&netfs_n_wb_lock_skip); @@ -848,6 +730,9 @@ int netfs_writeback_single(struct address_space *mappin= g, goto couldnt_start; } =20 + wreq->buffer.iter =3D *iter; + wreq->len =3D iov_iter_count(iter); + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); trace_netfs_write(wreq, netfs_write_trace_writeback_single); netfs_stat(&netfs_n_wh_writepages); @@ -855,31 +740,34 @@ int netfs_writeback_single(struct address_space *mapp= ing, if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags)) wreq->netfs_ops->begin_writeback(wreq); =20 - for (fq =3D (struct folio_queue *)iter->folioq; fq; fq =3D fq->next) { - for (int slot =3D 0; slot < folioq_count(fq); slot++) { - struct folio *folio =3D folioq_folio(fq, slot); - size_t part =3D umin(folioq_folio_size(fq, slot), size); + for (int s =3D 0; s < NR_IO_STREAMS; s++) { + struct netfs_io_subrequest *subreq; + struct netfs_io_stream *stream =3D &wreq->io_streams[s]; + + if (!stream->avail) + continue; =20 - _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to)= ); + netfs_prepare_write(wreq, stream, 0); =20 - ret =3D netfs_write_folio_single(wreq, folio); - if (ret < 0) - goto stop; - size -=3D part; - if (size <=3D 0) - goto stop; - } + subreq =3D stream->construct; + subreq->len =3D wreq->len; + stream->submit_len =3D subreq->len; + stream->submit_extendable_to =3D round_up(wreq->len, PAGE_SIZE); + + netfs_issue_write(wreq, stream); } =20 -stop: - for (int s =3D 0; s < NR_IO_STREAMS; s++) - netfs_issue_write(wreq, &wreq->io_streams[s]); smp_wmb(); /* Write lists before ALL_QUEUED. 
*/ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); =20 mutex_unlock(&ictx->wb_lock); netfs_wake_collector(wreq); =20 + /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to + * wait before modifying. + */ + ret =3D netfs_wait_for_write(wreq); + netfs_put_request(wreq, netfs_rreq_trace_put_return); _leave(" =3D %d", ret); return ret; diff --git a/include/linux/netfs.h b/include/linux/netfs.h index f360b25ceb31..f9ad067a0a0c 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -477,6 +477,7 @@ void netfs_free_folioq_buffer(struct folio_queue *fq); void dump_bvecq(const struct bvecq *bq); struct bvecq *netfs_alloc_bvecq(size_t nr_slots, gfp_t gfp); struct bvecq *netfs_alloc_bvecq_buffer(size_t size, unsigned int pre_slots= , gfp_t gfp); +int netfs_expand_bvecq_buffer(struct bvecq **_buffer, size_t *_cur_size, s= size_t size, gfp_t gfp); void netfs_free_bvecq_buffer(struct bvecq *bq); void netfs_put_bvecq(struct bvecq *bq); int netfs_shorten_bvecq_buffer(struct bvecq *bq, unsigned int seg, size_t = size); From nobody Mon Apr 13 21:40:01 2026 From: David Howells Subject: [RFC PATCH 07/17] netfs: Add a function to extract from an iter into a bvecq Date: Wed, 4 Mar 2026 14:03:14 +0000 Message-ID: <20260304140328.112636-8-dhowells@redhat.com> In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com> References: <20260304140328.112636-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a function to extract a slice of data from an iterator of any type into a bvec queue chain.
Signed-off-by: David Howells cc: Paulo Alcantara cc: Matthew Wilcox cc: Christoph Hellwig cc: Steve French cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/iterator.c | 122 ++++++++++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 3 ++ 2 files changed, 125 insertions(+) diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index 72a435e5fc6d..faf4f0a3b33d 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -13,6 +13,128 @@ #include #include "internal.h" =20 +/** + * netfs_extract_iter - Extract the pages from an iterator into a bvecq + * @orig: The original iterator + * @orig_len: The amount of the iterator to copy + * @max_segs: Maximum number of contiguous segments + * @fpos: Starting file position to label the bvecq with + * @_bvecq_head: Where to cache the bvec queue + * @extraction_flags: Flags to qualify the request + * + * Extract the page fragments from the given amount of the source iterator= and + * build a bvec queue that refers to all of those bits. This allows the + * original iterator to be disposed of. + * + * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-pee= r DMA be + * allowed on the pages extracted. + * + * On success, the amount of data in the bvec is returned and the original + * iterator will have been advanced by the amount extracted. + * + * The bvecq segments are marked with indications of how to clean up the + * extracted fragments. + */ +ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t = max_segs, + unsigned long long fpos, struct bvecq **_bvecq_head, + iov_iter_extraction_t extraction_flags) +{ + struct bvecq *bq_tail =3D NULL; + ssize_t ret =3D 0; + size_t segs_per_bq; + size_t extracted =3D 0; + + _enter("{%u,%zx},%zx", orig->iter_type, orig->count, orig_len); + + if (max_segs =3D=3D 0) + max_segs =3D ULONG_MAX; + + /* We want the biggest pow-of-2 size that has at most 255 segs and that + * won't exceed a 4K page.
+ */ + segs_per_bq =3D (4096 - sizeof(*bq_tail)) / sizeof(bq_tail->__bv[0]); + if (segs_per_bq > 255) + segs_per_bq =3D (2048 - sizeof(*bq_tail)) / sizeof(bq_tail->__bv[0]); + + do { + struct bvecq *bq; + size_t nr_slots =3D iov_iter_npages(orig, umin(segs_per_bq, max_segs)); + + if (WARN_ON(nr_slots =3D=3D 0 && extracted < orig_len) || + WARN_ON(nr_slots > max_segs)) + break; + max_segs -=3D nr_slots; + + bq =3D netfs_alloc_one_bvecq(nr_slots, GFP_NOFS); + if (!bq) { + ret =3D -ENOMEM; + break; + } + bq->free =3D user_backed_iter(orig); + bq->unpin =3D iov_iter_extract_will_pin(orig); + bq->prev =3D bq_tail; + bq->fpos =3D fpos + extracted; + + if (bq_tail) + bq_tail->next =3D bq; + else + *_bvecq_head =3D bq; + bq_tail =3D bq; + + if (extracted >=3D orig_len) + break; + + /* Put the page list at the end of the bvec list storage. bvec + * elements are larger than page pointers, so as long as we + * work 0->last, we should be fine. + */ + struct bio_vec *bv =3D bq->bv; + struct page **pages; + size_t bv_size =3D array_size(bq->max_segs, sizeof(*bv)); + size_t pg_size =3D array_size(bq->max_segs, sizeof(*pages)); + + pages =3D (void *)bv + bv_size - pg_size; + + do { + unsigned int cur_npages; + ssize_t got; + size_t offset; + + got =3D iov_iter_extract_pages(orig, &pages, orig_len - extracted, + bq->max_segs - bq->nr_segs, + extraction_flags, &offset); + if (got < 0) { + pr_err("Couldn't get user pages (rc=3D%zd)\n", got); + ret =3D got; + break; + } + + if (got > orig_len - extracted) { + pr_err("get_pages rc=3D%zd more than %zu\n", + got, orig_len - extracted); + break; + } + + extracted +=3D got; + got +=3D offset; + cur_npages =3D DIV_ROUND_UP(got, PAGE_SIZE); + + for (unsigned int i =3D 0; i < cur_npages; i++) { + size_t len =3D umin(got, PAGE_SIZE); + + bvec_set_page(&bq->bv[bq->nr_segs], + *pages++, len - offset, offset); + bq->nr_segs++; + got -=3D len; + offset =3D 0; + } + } while (extracted < orig_len && !bvecq_is_full(bq)); + } while (extracted < 
orig_len && max_segs > 0); + + return extracted ?: ret; +} +EXPORT_SYMBOL_GPL(netfs_extract_iter); + /** * netfs_extract_user_iter - Extract the pages from a user iterator into a= bvec * @orig: The original iterator diff --git a/include/linux/netfs.h b/include/linux/netfs.h index f9ad067a0a0c..b146aeaaf6c9 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -448,6 +448,9 @@ void netfs_get_subrequest(struct netfs_io_subrequest *s= ubreq, enum netfs_sreq_ref_trace what); void netfs_put_subrequest(struct netfs_io_subrequest *subreq, enum netfs_sreq_ref_trace what); +ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t = max_segs, + unsigned long long fpos, struct bvecq **_bvecq_head, + iov_iter_extraction_t extraction_flags); ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len, struct iov_iter *new, iov_iter_extraction_t extraction_flags); From nobody Mon Apr 13 21:40:01 2026 From: David Howells Subject: [RFC PATCH 08/17] cifs: Use a bvecq for buffering instead of a folioq Date: Wed, 4 Mar 2026 14:03:15 +0000 Message-ID: <20260304140328.112636-9-dhowells@redhat.com> In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com> References: <20260304140328.112636-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use a bvecq for internal buffering for crypto purposes instead of a folioq so that the latter can be phased out.
Signed-off-by: David Howells cc: Paulo Alcantara cc: Matthew Wilcox cc: Christoph Hellwig cc: Steve French cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/smb/client/cifsglob.h | 2 +- fs/smb/client/smb2ops.c | 70 +++++++++++++++++++--------------------- 2 files changed, 34 insertions(+), 38 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 080ea601c209..12202d9537e0 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -290,7 +290,7 @@ struct smb_rqst { struct kvec *rq_iov; /* array of kvecs */ unsigned int rq_nvec; /* number of kvecs in array */ struct iov_iter rq_iter; /* Data iterator */ - struct folio_queue *rq_buffer; /* Buffer for encryption */ + struct bvecq *rq_buffer; /* Buffer for encryption */ }; =20 struct mid_q_entry; diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index fea9a35caa57..76baf21404df 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4517,19 +4517,17 @@ crypt_message(struct TCP_Server_Info *server, int n= um_rqst, } =20 /* - * Copy data from an iterator to the folios in a folio queue buffer. + * Copy data from an iterator to the pages in a bvec queue buffer. 
*/ -static bool cifs_copy_iter_to_folioq(struct iov_iter *iter, size_t size, - struct folio_queue *buffer) +static bool cifs_copy_iter_to_bvecq(struct iov_iter *iter, size_t size, + struct bvecq *buffer) { for (; buffer; buffer =3D buffer->next) { - for (int s =3D 0; s < folioq_count(buffer); s++) { - struct folio *folio =3D folioq_folio(buffer, s); - size_t part =3D folioq_folio_size(buffer, s); + for (int s =3D 0; s < buffer->nr_segs; s++) { + struct bio_vec *bv =3D &buffer->bv[s]; + size_t part =3D umin(bv->bv_len, size); =20 - part =3D umin(part, size); - - if (copy_folio_from_iter(folio, 0, part, iter) !=3D part) + if (copy_page_from_iter(bv->bv_page, 0, part, iter) !=3D part) return false; size -=3D part; } @@ -4541,7 +4539,7 @@ void smb3_free_compound_rqst(int num_rqst, struct smb_rqst *rqst) { for (int i =3D 0; i < num_rqst; i++) - netfs_free_folioq_buffer(rqst[i].rq_buffer); + netfs_free_bvecq_buffer(rqst[i].rq_buffer); } =20 /* @@ -4568,7 +4566,7 @@ smb3_init_transform_rq(struct TCP_Server_Info *server= , int num_rqst, for (int i =3D 1; i < num_rqst; i++) { struct smb_rqst *old =3D &old_rq[i - 1]; struct smb_rqst *new =3D &new_rq[i]; - struct folio_queue *buffer =3D NULL; + struct bvecq *buffer =3D NULL; size_t size =3D iov_iter_count(&old->rq_iter); =20 orig_len +=3D smb_rqst_len(server, old); @@ -4576,17 +4574,16 @@ smb3_init_transform_rq(struct TCP_Server_Info *serv= er, int num_rqst, new->rq_nvec =3D old->rq_nvec; =20 if (size > 0) { - size_t cur_size =3D 0; - rc =3D netfs_alloc_folioq_buffer(NULL, &buffer, &cur_size, - size, GFP_NOFS); - if (rc < 0) + rc =3D -ENOMEM; + buffer =3D netfs_alloc_bvecq_buffer(size, 0, GFP_NOFS); + if (!buffer) goto err_free; =20 new->rq_buffer =3D buffer; - iov_iter_folio_queue(&new->rq_iter, ITER_SOURCE, - buffer, 0, 0, size); + iov_iter_bvec_queue(&new->rq_iter, ITER_SOURCE, + buffer, 0, 0, size); =20 - if (!cifs_copy_iter_to_folioq(&old->rq_iter, size, buffer)) { + if (!cifs_copy_iter_to_bvecq(&old->rq_iter, size, 
buffer)) { rc =3D smb_EIO1(smb_eio_trace_tx_copy_iter_to_buf, size); goto err_free; } @@ -4676,16 +4673,15 @@ decrypt_raw_data(struct TCP_Server_Info *server, ch= ar *buf, } =20 static int -cifs_copy_folioq_to_iter(struct folio_queue *folioq, size_t data_size, - size_t skip, struct iov_iter *iter) +cifs_copy_bvecq_to_iter(struct bvecq *bq, size_t data_size, + size_t skip, struct iov_iter *iter) { - for (; folioq; folioq =3D folioq->next) { - for (int s =3D 0; s < folioq_count(folioq); s++) { - struct folio *folio =3D folioq_folio(folioq, s); - size_t fsize =3D folio_size(folio); - size_t n, len =3D umin(fsize - skip, data_size); + for (; bq; bq =3D bq->next) { + for (int s =3D 0; s < bq->nr_segs; s++) { + struct bio_vec *bv =3D &bq->bv[s]; + size_t n, len =3D umin(bv->bv_len - skip, data_size); =20 - n =3D copy_folio_to_iter(folio, skip, len, iter); + n =3D copy_page_to_iter(bv->bv_page, bv->bv_offset + skip, len, iter); if (n !=3D len) { cifs_dbg(VFS, "%s: something went wrong\n", __func__); return smb_EIO2(smb_eio_trace_rx_copy_to_iter, @@ -4701,7 +4697,7 @@ cifs_copy_folioq_to_iter(struct folio_queue *folioq, = size_t data_size, =20 static int handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, - char *buf, unsigned int buf_len, struct folio_queue *buffer, + char *buf, unsigned int buf_len, struct bvecq *buffer, unsigned int buffer_len, bool is_offloaded) { unsigned int data_offset; @@ -4810,8 +4806,8 @@ handle_read_data(struct TCP_Server_Info *server, stru= ct mid_q_entry *mid, } =20 /* Copy the data to the output I/O iterator. 
*/ - rdata->result =3D cifs_copy_folioq_to_iter(buffer, buffer_len, - cur_off, &rdata->subreq.io_iter); + rdata->result =3D cifs_copy_bvecq_to_iter(buffer, buffer_len, + cur_off, &rdata->subreq.io_iter); if (rdata->result !=3D 0) { if (is_offloaded) mid->mid_state =3D MID_RESPONSE_MALFORMED; @@ -4849,7 +4845,7 @@ handle_read_data(struct TCP_Server_Info *server, stru= ct mid_q_entry *mid, struct smb2_decrypt_work { struct work_struct decrypt; struct TCP_Server_Info *server; - struct folio_queue *buffer; + struct bvecq *buffer; char *buf; unsigned int len; }; @@ -4863,7 +4859,7 @@ static void smb2_decrypt_offload(struct work_struct *= work) struct mid_q_entry *mid; struct iov_iter iter; =20 - iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len); + iov_iter_bvec_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len); rc =3D decrypt_raw_data(dw->server, dw->buf, dw->server->vals->read_rsp_s= ize, &iter, true); if (rc) { @@ -4912,7 +4908,7 @@ static void smb2_decrypt_offload(struct work_struct *= work) } =20 free_pages: - netfs_free_folioq_buffer(dw->buffer); + netfs_free_bvecq_buffer(dw->buffer); cifs_small_buf_release(dw->buf); kfree(dw); } @@ -4950,12 +4946,12 @@ receive_encrypted_read(struct TCP_Server_Info *serv= er, struct mid_q_entry **mid, dw->len =3D len; len =3D round_up(dw->len, PAGE_SIZE); =20 - size_t cur_size =3D 0; - rc =3D netfs_alloc_folioq_buffer(NULL, &dw->buffer, &cur_size, len, GFP_N= OFS); - if (rc < 0) + rc =3D -ENOMEM; + dw->buffer =3D netfs_alloc_bvecq_buffer(len, 0, GFP_NOFS); + if (!dw->buffer) goto discard_data; =20 - iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len); + iov_iter_bvec_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len); =20 /* Read the data into the buffer and clear excess bufferage. 
	 */
	rc = cifs_read_iter_from_socket(server, &iter, dw->len);
@@ -5013,7 +5009,7 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid,
 	}
 
 free_pages:
-	netfs_free_folioq_buffer(dw->buffer);
+	netfs_free_bvecq_buffer(dw->buffer);
 free_dw:
	kfree(dw);
	return rc;

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara, Steve French, Shyam Prasad N, Tom Talpey
Subject: [RFC PATCH 09/17] cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma()
Date: Wed, 4 Mar 2026 14:03:16 +0000
Message-ID: <20260304140328.112636-10-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Add support for ITER_BVECQ to smb_extract_iter_to_rdma().

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: Shyam Prasad N
cc: Tom Talpey
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/smb/client/smbdirect.c | 60 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index c79304012b08..0c6262010cd2 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -3298,6 +3298,63 @@ static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
 	return ret;
 }
 
+/*
+ * Extract memory fragments from a BVECQ-class iterator and add them to an RDMA
+ * list.  The folios are not pinned.
+ */
+static ssize_t smb_extract_bvecq_to_rdma(struct iov_iter *iter,
+					 struct smb_extract_to_rdma *rdma,
+					 ssize_t maxsize)
+{
+	const struct bvecq *bq = iter->bvecq;
+	unsigned int slot = iter->bvecq_slot;
+	ssize_t ret = 0;
+	size_t offset = iter->iov_offset;
+
+	if (slot >= bq->nr_segs) {
+		bq = bq->next;
+		if (WARN_ON_ONCE(!bq))
+			return -EIO;
+		slot = 0;
+	}
+
+	do {
+		struct bio_vec *bv = &bq->bv[slot];
+		struct page *page = bv->bv_page;
+		size_t bsize = bv->bv_len;
+
+		if (offset < bsize) {
+			size_t part = umin(maxsize, bsize - offset);
+
+			if (!smb_set_sge(rdma, page, bv->bv_offset + offset, part))
+				return -EIO;
+
+			offset += part;
+			ret += part;
+			maxsize -= part;
+		}
+
+		if (offset >= bsize) {
+			offset = 0;
+			slot++;
+			if (slot >= bq->nr_segs) {
+				if (!bq->next) {
+					WARN_ON_ONCE(ret < iter->count);
+					break;
+				}
+				bq = bq->next;
+				slot = 0;
+			}
+		}
+	} while (rdma->nr_sge < rdma->max_sge && maxsize > 0);
+
+	iter->bvecq = bq;
+	iter->bvecq_slot = slot;
+	iter->iov_offset = offset;
+	iter->count -= ret;
+	return ret;
+}
+
+/*
+ * Extract page fragments from up to the given amount of the source iterator
+ * and build up an RDMA list that refers to all of those bits.
The RDMA list
@@ -3325,6 +3382,9 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
 	case ITER_FOLIOQ:
 		ret = smb_extract_folioq_to_rdma(iter, rdma, len);
 		break;
+	case ITER_BVECQ:
+		ret = smb_extract_bvecq_to_rdma(iter, rdma, len);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		return -EIO;

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara, Steve French, Shyam Prasad N, Tom Talpey
Subject: [RFC PATCH 10/17] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer
Date: Wed, 4 Mar 2026 14:03:17 +0000
Message-ID: <20260304140328.112636-11-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Switch netfslib to using bvecq, a segmented bio_vec[] queue, instead of the
folio_queue and rolling_buffer constructs, to keep track of the regions of
memory it is performing I/O upon.  Each bvecq struct in the chain is marked
with the starting file position of that sequence so that discontiguities
can be handled (the contents of each individual bvecq must be contiguous).

For buffered I/O, the folios are added to the queue as the operation
proceeds, much as it does now with folio_queues.  For unbuffered/direct
I/O, the iterator is extracted into the queue up front.  The bvecq structs
are marked with information as to how the regions contained therein should
be disposed of (unlock-only, free, unpin).

When setting up a subrequest, netfslib will furnish it with a slice of the
main buffer queue as a pointer to the starting bvecq, slot and offset and,
for the moment, an ITER_BVECQ iterator is set to cover the slice in
subreq->io_iter.

Notes on the implementation:

 (1) This patch uses the concept of a 'bvecq position', which is a tuple of
     { bvecq, slot, offset }.  This is lighter weight than using a full
     iov_iter, though that would also suffice.

     If not NULL, the position also holds a reference on the bvecq it is
     pointing to.  This is probably overkill as only the hindmost position
     (that of collection) needs to hold a reference.
 (2) There are three positions on the netfs_io_request struct.  Not all are
     used by every request type.

     Firstly, there's ->load_cursor, which is used by buffered read and
     write to point to the next slot to have a folio inserted into it
     (either loaded from the readahead_control or from writeback_iter()).

     Secondly, there's ->dispatch_cursor, which is used to provide the
     position in the buffer from which we start dispatching a subrequest.

     Thirdly, there's ->collect_cursor, which is used by the collection
     routines to point to the next memory region to be cleaned up.

 (3) There are two positions on the netfs_io_subrequest struct.

     Firstly, there's ->dispatch_pos, which indicates the position at which
     a subrequest's buffer begins.  This is used as the base of the
     position from which to retry (advanced by ->transferred).

     Secondly, there's ->content, which is normally the same as
     ->dispatch_pos, but if the bvecq chain got duplicated or the content
     got copied, this will point to the copy, and that copy will be
     disposed of on retry.

 (4) Maintenance of the position structs is done with helper functions,
     such as bvecq_pos_attach(), to hide the refcounting.

 (5) When sending a write to the cache, the bvecq will be duplicated and
     the ends rounded up/down to the backing file's DIO block alignment.

 (6) bvecq_slice() is used to select a slice of the source buffer and
     assign it to a subrequest.  The source buffer position is advanced.

 (7) netfs_extract_iter() is used by unbuffered/direct I/O API functions to
     decant a chunk of the iov_iter supplied by the VFS into a bvecq chain
     and to label the bvecqs with appropriate disposal information
     (e.g. unpin, free, nothing).

There are further options that can be explored in the future:

 (1) Allow the provision of a duplicated bvecq chain for just that region
     so that the filesystem can add bits on either end (such as adding
     protocol headers and trailers and gluing several things together into
     a compound operation).
 (2) If a filesystem supports vectored/sparse read and write ops, it can be
     given a chain with discontiguities in it to perform in a single op
     (Ceph, for example, can do this).

 (3) Because each bvecq notes the start file position of the regions
     contained therein, there's no need to translate the info in the
     bio_vec into folio pointers in order to unlock the page after I/O.
     Instead, the inode's pagecache can be iterated over and the xarray
     marks cleared en masse.

 (4) Make MSG_SPLICE_PAGES handling read the disposal info in the bvecq
     and use that to indicate how it should get rid of the stuff it pasted
     into a sk_buff.

 (5) If a bounce buffer is needed (encryption, for example), the bounce
     buffer can be held in a bvecq and sliced up instead of the main
     buffer queue.

 (6) Get rid of subreq->io_iter and move the iov_iter stuff down into the
     filesystem.  The I/O iterators are normally only needed transitorily,
     and the one currently in netfs_io_subrequest is unnecessary most of
     the time.

folio_queue and rolling_buffer will be removed in a follow-up patch.
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: Shyam Prasad N
cc: Tom Talpey
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/cachefiles/io.c           |  13 ---
 fs/netfs/Makefile            |   1 -
 fs/netfs/buffered_read.c     | 153 ++++++++++++++++++++++-----------
 fs/netfs/direct_read.c       |  73 ++++++----------
 fs/netfs/direct_write.c      |  72 +++++++++------
 fs/netfs/internal.h          |  10 +--
 fs/netfs/iterator.c          |   2 +
 fs/netfs/misc.c              |  20 +----
 fs/netfs/objects.c           |  16 ++--
 fs/netfs/read_collect.c      |  83 +++++++++++-------
 fs/netfs/read_pgpriv2.c      |  68 ++++++++++-----
 fs/netfs/read_retry.c        |  59 ++++++++-----
 fs/netfs/read_single.c       |  18 ++--
 fs/netfs/stats.c             |   4 +-
 fs/netfs/write_collect.c     |  40 +++++----
 fs/netfs/write_issue.c       | 162 +++++++++++++++++++++++++----------
 fs/netfs/write_retry.c       |  45 ++++++----
 include/linux/netfs.h        |  26 +++---
 include/trace/events/netfs.h |  46 +++++-----
 19 files changed, 530 insertions(+), 381 deletions(-)

diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index eaf47851c65f..2c3edc91a5b0 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -648,7 +648,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 	struct netfs_cache_resources *cres = &wreq->cache_resources;
 	struct cachefiles_object *object = cachefiles_cres_object(cres);
 	struct cachefiles_cache *cache = object->volume->cache;
-	struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
 	const struct cred *saved_cred;
 	size_t off, pre, post, len = subreq->len;
 	loff_t start = subreq->start;
@@ -672,18 +671,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 		iov_iter_advance(&subreq->io_iter, pre);
 	}
 
-	/* We also need to end on the cache granularity boundary */
-	if (start + len == wreq->i_size) {
-		size_t part = len % CACHEFILES_DIO_BLOCK_SIZE;
-		size_t need = CACHEFILES_DIO_BLOCK_SIZE - part;
-
-		if (part && stream->submit_extendable_to >= need) {
-			len += need;
-			subreq->len += need;
-			subreq->io_iter.count += need;
-		}
-	}
-
 	post = len & (CACHEFILES_DIO_BLOCK_SIZE - 1);
 	if (post) {
 		len -= post;
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index e1f12ecb5abf..0621e6870cbd 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -15,7 +15,6 @@ netfs-y := \
 	read_pgpriv2.o \
 	read_retry.o \
 	read_single.o \
-	rolling_buffer.o \
 	write_collect.o \
 	write_issue.o \
 	write_retry.o
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 88a0d801525f..d5d5a7520cbe 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -54,6 +54,28 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
 	}
 }
 
+static void netfs_clear_to_ra_end(struct netfs_io_request *rreq,
+				  struct readahead_control *ractl)
+{
+	struct folio_batch batch;
+
+	folio_batch_init(&batch);
+
+	for (;;) {
+		batch.nr = __readahead_batch(ractl, (struct page **)batch.folios,
+					     PAGEVEC_SIZE);
+		if (!batch.nr)
+			break;
+		for (int i = 0; i < batch.nr; i++) {
+			struct folio *folio = batch.folios[i];
+
+			trace_netfs_folio(folio, netfs_folio_trace_zero_ra);
+			folio_zero_segment(folio, 0, folio_size(folio));
+		}
+		folio_batch_release(&batch);
+	}
+}
+
 /*
  * Begin an operation, and fetch the stored zero point value from the cookie if
  * available.
@@ -82,14 +104,16 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
 					   struct readahead_control *ractl)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
+	struct netfs_io_stream *stream = &rreq->io_streams[0];
+	ssize_t extracted;
 	size_t rsize = subreq->len;
 
 	if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)
-		rsize = umin(rsize, rreq->io_streams[0].sreq_max_len);
+		rsize = umin(rsize, stream->sreq_max_len);
 
 	if (ractl) {
 		/* If we don't have sufficient folios in the rolling buffer,
-		 * extract a folioq's worth from the readahead region at a time
+		 * extract a bvecq's worth from the readahead region at a time
 		 * into the buffer.  Note that this acquires a ref on each page
 		 * that we will need to release later - but we don't want to do
 		 * that until after we've started the I/O.
@@ -100,8 +124,8 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
 		while (rreq->submitted < subreq->start + rsize) {
 			ssize_t added;
 
-			added = rolling_buffer_load_from_ra(&rreq->buffer, ractl,
-							    &put_batch);
+			added = bvecq_load_from_ra(&rreq->load_cursor, ractl,
+						   &put_batch);
 			if (added < 0)
 				return added;
 			rreq->submitted += added;
@@ -109,21 +133,16 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
 		folio_batch_release(&put_batch);
 	}
 
-	subreq->len = rsize;
-	if (unlikely(rreq->io_streams[0].sreq_max_segs)) {
-		size_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,
-						rreq->io_streams[0].sreq_max_segs);
-
-		if (limit < rsize) {
-			subreq->len = limit;
-			trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
-		}
+	bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor);
+	extracted = bvecq_slice(&rreq->dispatch_cursor, subreq->len,
+				stream->sreq_max_segs, &subreq->nr_segs);
+	if (extracted < 0)
+		return extracted;
+	if (extracted < rsize) {
+		subreq->len = extracted;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
 	}
 
-	subreq->io_iter = rreq->buffer.iter;
-
-	iov_iter_truncate(&subreq->io_iter, subreq->len);
-	rolling_buffer_advance(&rreq->buffer, subreq->len);
 	return subreq->len;
 }
 
@@ -188,8 +207,13 @@ static void netfs_queue_read(struct netfs_io_request *rreq,
 }
 
 static void netfs_issue_read(struct netfs_io_request *rreq,
-			     struct netfs_io_subrequest *subreq)
+			     struct netfs_io_subrequest *subreq,
+			     struct readahead_control *ractl)
 {
+	bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
+	iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
 	switch (subreq->source) {
 	case NETFS_DOWNLOAD_FROM_SERVER:
 		rreq->netfs_ops->issue_read(subreq);
@@ -198,11 +222,14 @@ static void netfs_issue_read(struct netfs_io_request *rreq,
 		netfs_read_cache_to_pagecache(rreq, subreq);
 		break;
 	default:
-		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+		bvecq_zero(&rreq->dispatch_cursor, subreq->len);
+		subreq->transferred = subreq->len;
 		subreq->error = 0;
 		iov_iter_zero(subreq->len, &subreq->io_iter);
 		subreq->transferred = subreq->len;
 		netfs_read_subreq_terminated(subreq);
+		if (ractl)
+			netfs_clear_to_ra_end(rreq, ractl);
 		break;
 	}
 }
@@ -220,6 +247,11 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
 	ssize_t size = rreq->len;
 	int ret = 0;
 
+	_enter("R=%08x", rreq->debug_id);
+
+	bvecq_pos_attach(&rreq->dispatch_cursor, &rreq->load_cursor);
+	bvecq_pos_attach(&rreq->collect_cursor, &rreq->dispatch_cursor);
+
 	do {
 		struct netfs_io_subrequest *subreq;
 		enum netfs_io_source source = NETFS_SOURCE_UNKNOWN;
@@ -234,6 +266,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
 		subreq->start = start;
 		subreq->len = size;
 
+		rreq->io_streams[0].sreq_max_len = MAX_RW_COUNT;
+		rreq->io_streams[0].sreq_max_segs = INT_MAX;
+
 		source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
 		subreq->source = source;
 		if (source == NETFS_DOWNLOAD_FROM_SERVER) {
@@ -307,7 +342,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
 		start += slice;
 
 		netfs_queue_read(rreq, subreq, size <= 0);
-		netfs_issue_read(rreq, subreq);
+		netfs_issue_read(rreq, subreq, ractl);
 		cond_resched();
 	} while (size > 0);
 
@@ -319,6 +354,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
 
 	/* Defer error return as we may need to wait for outstanding I/O. */
 	cmpxchg(&rreq->error, 0, ret);
+
+	bvecq_pos_detach(&rreq->load_cursor);
+	bvecq_pos_detach(&rreq->dispatch_cursor);
 }
 
 /**
@@ -362,7 +400,7 @@ void netfs_readahead(struct readahead_control *ractl)
 	netfs_rreq_expand(rreq, ractl);
 
 	rreq->submitted = rreq->start;
-	if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0)
+	if (bvecq_buffer_init(&rreq->load_cursor, rreq->debug_id) < 0)
 		goto cleanup_free;
 	netfs_read_to_pagecache(rreq, ractl);
 
@@ -374,20 +412,19 @@ void netfs_readahead(struct readahead_control *ractl)
 EXPORT_SYMBOL(netfs_readahead);
 
 /*
- * Create a rolling buffer with a single occupying folio.
+ * Create a buffer queue with a single occupying folio.
 */
-static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio,
-					unsigned int rollbuf_flags)
+static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio)
 {
-	ssize_t added;
+	struct bvecq *bq;
+	size_t fsize = folio_size(folio);
 
-	if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0)
+	if (bvecq_buffer_init(&rreq->load_cursor, rreq->debug_id) < 0)
 		return -ENOMEM;
 
-	added = rolling_buffer_append(&rreq->buffer, folio, rollbuf_flags);
-	if (added < 0)
-		return added;
-	rreq->submitted = rreq->start + added;
+	bq = rreq->load_cursor.bvecq;
+	bvec_set_folio(&bq->bv[bq->nr_segs++], folio, fsize, 0);
+	rreq->submitted = rreq->start + fsize;
 	return 0;
 }
 
@@ -400,11 +437,11 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
 	struct address_space *mapping = folio->mapping;
 	struct netfs_folio *finfo = netfs_folio_info(folio);
 	struct netfs_inode *ctx = netfs_inode(mapping->host);
-	struct folio *sink = NULL;
-	struct bio_vec *bvec;
+	struct bvecq *bq = NULL;
+	struct page *sink = NULL;
 	unsigned int from = finfo->dirty_offset;
 	unsigned int to = from + finfo->dirty_len;
-	unsigned int off = 0, i = 0;
+	unsigned int off = 0;
 	size_t flen = folio_size(folio);
 	size_t nr_bvec = flen / PAGE_SIZE + 2;
 	size_t part;
@@ -429,38 +466,47 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
 	 * end get copied to, but the middle is discarded.
 	 */
 	ret = -ENOMEM;
-	bvec = kmalloc_objs(*bvec, nr_bvec);
-	if (!bvec)
+	bq = netfs_alloc_bvecq(nr_bvec, GFP_KERNEL);
+	if (!bq)
 		goto discard;
+	rreq->load_cursor.bvecq = bq;
 
-	sink = folio_alloc(GFP_KERNEL, 0);
-	if (!sink) {
-		kfree(bvec);
+	sink = alloc_page(GFP_KERNEL);
+	if (!sink)
 		goto discard;
-	}
 
 	trace_netfs_folio(folio, netfs_folio_trace_read_gaps);
 
-	rreq->direct_bv = bvec;
-	rreq->direct_bv_count = nr_bvec;
+	for (struct bvecq *p = bq; p; p = p->next)
+		p->free = true;
+
 	if (from > 0) {
-		bvec_set_folio(&bvec[i++], folio, from, 0);
+		folio_get(folio);
+		bvec_set_folio(&bq->bv[bq->nr_segs++], folio, from, 0);
 		off = from;
 	}
 	while (off < to) {
-		part = min_t(size_t, to - off, PAGE_SIZE);
-		bvec_set_folio(&bvec[i++], sink, part, 0);
+		if (bvecq_is_full(bq))
+			bq = bq->next;
+		part = umin(to - off, PAGE_SIZE);
+		get_page(sink);
+		bvec_set_page(&bq->bv[bq->nr_segs++], sink, part, 0);
 		off += part;
 	}
-	if (to < flen)
-		bvec_set_folio(&bvec[i++], folio, flen - to, to);
-	iov_iter_bvec(&rreq->buffer.iter, ITER_DEST, bvec, i, rreq->len);
+	if (to < flen) {
+		if (bvecq_is_full(bq))
+			bq = bq->next;
+		folio_get(folio);
+		bvec_set_folio(&bq->bv[bq->nr_segs++], folio, flen - to, to);
+	}
+
+	dump_bvecq(bq);
+
 	rreq->submitted = rreq->start + flen;
 
 	netfs_read_to_pagecache(rreq, NULL);
 
-	if (sink)
-		folio_put(sink);
+	put_page(sink);
 
 	ret = netfs_wait_for_read(rreq);
 	if (ret >= 0) {
@@ -472,6 +518,8 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
 	return ret < 0 ? ret : 0;
 
 discard:
+	if (sink)
+		put_page(sink);
 	netfs_put_failed_request(rreq);
 alloc_error:
 	folio_unlock(folio);
@@ -522,7 +570,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
 	trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
 
 	/* Set up the output buffer */
-	ret = netfs_create_singular_buffer(rreq, folio, 0);
+	ret = netfs_create_singular_buffer(rreq, folio);
 	if (ret < 0)
 		goto discard;
 
@@ -679,7 +727,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin);
 
 	/* Set up the output buffer */
-	ret = netfs_create_singular_buffer(rreq, folio, 0);
+	ret = netfs_create_singular_buffer(rreq, folio);
 	if (ret < 0)
 		goto error_put;
 
@@ -744,9 +792,10 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 	trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write);
 
 	/* Set up the output buffer */
-	ret = netfs_create_singular_buffer(rreq, folio, NETFS_ROLLBUF_PAGECACHE_MARK);
+	ret = netfs_create_singular_buffer(rreq, folio);
 	if (ret < 0)
 		goto error_put;
+	rreq->load_cursor.bvecq->free = true;
 
 	netfs_read_to_pagecache(rreq, NULL);
 	ret = netfs_wait_for_read(rreq);
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index a498ee8d6674..c8704c4a95a9 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -16,31 +16,6 @@
 #include
 #include "internal.h"
 
-static void netfs_prepare_dio_read_iterator(struct netfs_io_subrequest *subreq)
-{
-	struct netfs_io_request *rreq = subreq->rreq;
-	size_t rsize;
-
-	rsize = umin(subreq->len, rreq->io_streams[0].sreq_max_len);
-	subreq->len = rsize;
-
-	if (unlikely(rreq->io_streams[0].sreq_max_segs)) {
-		size_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,
-						rreq->io_streams[0].sreq_max_segs);
-
-		if (limit < rsize) {
-			subreq->len = limit;
-			trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
-		}
-	}
-
-	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-
-	subreq->io_iter = rreq->buffer.iter;
-	iov_iter_truncate(&subreq->io_iter, subreq->len);
-	iov_iter_advance(&rreq->buffer.iter, subreq->len);
-}
-
 /*
  * Perform a read to a buffer from the server, slicing up the region to be read
  * according to the network rsize.
@@ -52,9 +27,10 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 	ssize_t size = rreq->len;
 	int ret = 0;
 
+	bvecq_pos_attach(&rreq->dispatch_cursor, &rreq->load_cursor);
+
 	do {
 		struct netfs_io_subrequest *subreq;
-		ssize_t slice;
 
 		subreq = netfs_alloc_subrequest(rreq);
 		if (!subreq) {
@@ -90,16 +66,24 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 			}
 		}
 
-		netfs_prepare_dio_read_iterator(subreq);
-		slice = subreq->len;
-		size -= slice;
-		start += slice;
-		rreq->submitted += slice;
+		bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor);
+		bvecq_pos_attach(&subreq->content, &rreq->dispatch_cursor);
+		subreq->len = bvecq_slice(&rreq->dispatch_cursor,
+					  umin(size, stream->sreq_max_len),
+					  stream->sreq_max_segs,
+					  &subreq->nr_segs);
+
+		size -= subreq->len;
+		start += subreq->len;
+		rreq->submitted += subreq->len;
 		if (size <= 0) {
 			smp_wmb(); /* Write lists before ALL_QUEUED. */
 			set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
 		}
 
+		iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+				    subreq->content.slot, subreq->content.offset, subreq->len);
+
 		rreq->netfs_ops->issue_read(subreq);
 
 		if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
@@ -115,6 +99,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 		netfs_wake_collector(rreq);
 	}
 
+	bvecq_pos_detach(&rreq->dispatch_cursor);
 	return ret;
 }
 
@@ -198,25 +183,15 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
 	 * buffer for ourselves as the caller's iterator will be trashed when
 	 * we return.
 	 *
-	 * In such a case, extract an iterator to represent as much of the the
-	 * output buffer as we can manage.  Note that the extraction might not
-	 * be able to allocate a sufficiently large bvec array and may shorten
-	 * the request.
+	 * Extract a buffer queue to represent as much of the output buffer as
+	 * we can manage.  The fragments are extracted into a bvecq which will
+	 * have sufficient nodes allocated to hold all the data, though this
+	 * may end up truncated if ENOMEM is encountered.
 	 */
-	if (user_backed_iter(iter)) {
-		ret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);
-		if (ret < 0)
-			goto error_put;
-		rreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;
-		rreq->direct_bv_count = ret;
-		rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
-		rreq->len = iov_iter_count(&rreq->buffer.iter);
-	} else {
-		rreq->buffer.iter = *iter;
-		rreq->len = orig_count;
-		rreq->direct_bv_unpin = false;
-		iov_iter_advance(iter, orig_count);
-	}
+	ret = netfs_extract_iter(iter, rreq->len, INT_MAX, iocb->ki_pos,
+				 &rreq->load_cursor.bvecq, 0);
+	if (ret < 0)
+		goto error_put;
 
 	// TODO: Set up bounce buffer if needed
 
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index dd1451bf7543..bb224d837b78 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -73,7 +73,11 @@ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
 	spin_unlock(&wreq->lock);
 
 	wreq->transferred += subreq->transferred;
-	iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+	if (subreq->transferred < subreq->len) {
+		bvecq_pos_detach(&wreq->dispatch_cursor);
+		bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);
+		bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);
+	}
 
 	stream->collected_to = subreq->start + subreq->transferred;
 	wreq->collected_to = stream->collected_to;
@@ -99,6 +103,9 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
 
_enter("%llx", wreq->len); =20 + bvecq_pos_attach(&wreq->dispatch_cursor, &wreq->load_cursor); + bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + if (wreq->origin =3D=3D NETFS_DIO_WRITE) inode_dio_begin(wreq->inode); =20 @@ -121,16 +128,19 @@ static int netfs_unbuffered_write(struct netfs_io_req= uest *wreq) break; } =20 - iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred); + bvecq_pos_attach(&subreq->dispatch_pos, &wreq->dispatch_cursor); + subreq->len =3D bvecq_slice(&wreq->dispatch_cursor, stream->sreq_max_len, + stream->sreq_max_segs, &subreq->nr_segs); + bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); + + iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE, + subreq->content.bvecq, subreq->content.slot, + subreq->content.offset, + subreq->len); + if (!iov_iter_count(&subreq->io_iter)) break; =20 - subreq->len =3D netfs_limit_iter(&subreq->io_iter, 0, - stream->sreq_max_len, - stream->sreq_max_segs); - iov_iter_truncate(&subreq->io_iter, subreq->len); - stream->submit_extendable_to =3D subreq->len; - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); stream->issue_write(subreq); =20 @@ -167,8 +177,15 @@ static int netfs_unbuffered_write(struct netfs_io_requ= est *wreq) */ subreq->error =3D -EAGAIN; trace_netfs_sreq(subreq, netfs_sreq_trace_retry); - if (subreq->transferred > 0) - iov_iter_advance(&wreq->buffer.iter, subreq->transferred); + + bvecq_pos_detach(&subreq->content); + bvecq_pos_detach(&wreq->dispatch_cursor); + bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos); + + if (subreq->transferred > 0) { + wreq->transferred +=3D subreq->transferred; + bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred); + } =20 if (stream->source =3D=3D NETFS_UPLOAD_TO_SERVER && wreq->netfs_ops->retry_request) @@ -177,7 +194,6 @@ static int netfs_unbuffered_write(struct netfs_io_reque= st *wreq) __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); 
__clear_bit(NETFS_SREQ_FAILED, &subreq->flags); - subreq->io_iter =3D wreq->buffer.iter; subreq->start =3D wreq->start + wreq->transferred; subreq->len =3D wreq->len - wreq->transferred; subreq->transferred =3D 0; @@ -192,6 +208,8 @@ static int netfs_unbuffered_write(struct netfs_io_reque= st *wreq) netfs_stat(&netfs_n_wh_retry_write_subreq); } =20 + bvecq_pos_detach(&wreq->dispatch_cursor); + bvecq_pos_detach(&wreq->load_cursor); netfs_unbuffered_write_done(wreq); _leave(" =3D %d", ret); return ret; @@ -210,12 +228,12 @@ static void netfs_unbuffered_write_async(struct work_= struct *work) * encrypted file. This can also be used for direct I/O writes. */ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_= iter *iter, - struct netfs_group *netfs_group) + struct netfs_group *netfs_group) { struct netfs_io_request *wreq; unsigned long long start =3D iocb->ki_pos; unsigned long long end =3D start + iov_iter_count(iter); - ssize_t ret, n; + ssize_t ret; size_t len =3D iov_iter_count(iter); bool async =3D !is_sync_kiocb(iocb); =20 @@ -249,25 +267,17 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kio= cb *iocb, struct iov_iter * * allocate a sufficiently large bvec array and may shorten the * request. */ - if (user_backed_iter(iter)) { - n =3D netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0); - if (n < 0) { - ret =3D n; - goto error_put; - } - wreq->direct_bv =3D (struct bio_vec *)wreq->buffer.iter.bvec; - wreq->direct_bv_count =3D n; - wreq->direct_bv_unpin =3D iov_iter_extract_will_pin(iter); - } else { - /* If this is a kernel-generated async DIO request, - * assume that any resources the iterator points to - * (eg. a bio_vec array) will persist till the end of - * the op. 
- */ - wreq->buffer.iter =3D *iter; - } + ssize_t n =3D netfs_extract_iter(iter, len, INT_MAX, iocb->ki_pos, + &wreq->load_cursor.bvecq, 0); =20 - wreq->len =3D iov_iter_count(&wreq->buffer.iter); + if (n < 0) { + ret =3D n; + goto error_put; + } + wreq->len =3D n; + _debug("dio-write %zx/%zx %u/%u", + n, len, wreq->load_cursor.bvecq->nr_segs, + wreq->load_cursor.bvecq->max_segs); } =20 __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags); diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 89ebeb49e969..19d1e31b840b 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -7,7 +7,6 @@ =20 #include #include -#include #include #include #include @@ -151,9 +150,8 @@ static inline void netfs_proc_del_rreq(struct netfs_io_= request *rreq) {} /* * misc.c */ -struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq, - enum netfs_folioq_trace trace); -void netfs_reset_iter(struct netfs_io_subrequest *subreq); +struct bvecq *netfs_buffer_make_space(struct netfs_io_request *rreq, + enum netfs_bvecq_trace trace); void netfs_wake_collector(struct netfs_io_request *rreq); void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq); void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq, @@ -251,7 +249,6 @@ extern atomic_t netfs_n_wh_retry_write_req; extern atomic_t netfs_n_wh_retry_write_subreq; extern atomic_t netfs_n_wb_lock_skip; extern atomic_t netfs_n_wb_lock_wait; -extern atomic_t netfs_n_folioq; extern atomic_t netfs_n_bvecq; =20 int netfs_stats_show(struct seq_file *m, void *v); @@ -289,8 +286,7 @@ void netfs_prepare_write(struct netfs_io_request *wreq, struct netfs_io_stream *stream, loff_t start); void netfs_reissue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq, - struct iov_iter *source); + struct netfs_io_subrequest *subreq); void netfs_issue_write(struct netfs_io_request *wreq, struct netfs_io_stream *stream); size_t netfs_advance_write(struct netfs_io_request *wreq, diff --git 
a/fs/netfs/iterator.c b/fs/netfs/iterator.c index faf4f0a3b33d..2b0a511d6db7 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -135,6 +135,7 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_= t orig_len, size_t max_se } EXPORT_SYMBOL_GPL(netfs_extract_iter); =20 +#if 0 /** * netfs_extract_user_iter - Extract the pages from a user iterator into a= bvec * @orig: The original iterator @@ -370,3 +371,4 @@ size_t netfs_limit_iter(const struct iov_iter *iter, si= ze_t start_offset, BUG(); } EXPORT_SYMBOL(netfs_limit_iter); +#endif diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c index 6df89c92b10b..ab142cbaad35 100644 --- a/fs/netfs/misc.c +++ b/fs/netfs/misc.c @@ -8,6 +8,7 @@ #include #include "internal.h" =20 +#if 0 /** * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue * @mapping: Address space to set on the folio (or NULL). @@ -103,24 +104,7 @@ void netfs_free_folioq_buffer(struct folio_queue *fq) folio_batch_release(&fbatch); } EXPORT_SYMBOL(netfs_free_folioq_buffer); - -/* - * Reset the subrequest iterator to refer just to the region remaining to = be - * read. The iterator may or may not have been advanced by socket ops or - * extraction ops to an extent that may or may not match the amount actual= ly - * read. 
- */ -void netfs_reset_iter(struct netfs_io_subrequest *subreq) -{ - struct iov_iter *io_iter =3D &subreq->io_iter; - size_t remain =3D subreq->len - subreq->transferred; - - if (io_iter->count > remain) - iov_iter_advance(io_iter, io_iter->count - remain); - else if (io_iter->count < remain) - iov_iter_revert(io_iter, remain - io_iter->count); - iov_iter_truncate(&subreq->io_iter, remain); -} +#endif =20 /** * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeba= ck diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index b8c4918d3dcd..c92cdbad04de 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -119,7 +119,6 @@ static void netfs_free_request_rcu(struct rcu_head *rcu) static void netfs_deinit_request(struct netfs_io_request *rreq) { struct netfs_inode *ictx =3D netfs_inode(rreq->inode); - unsigned int i; =20 trace_netfs_rreq(rreq, netfs_rreq_trace_free); =20 @@ -134,16 +133,9 @@ static void netfs_deinit_request(struct netfs_io_reque= st *rreq) rreq->netfs_ops->free_request(rreq); if (rreq->cache_resources.ops) rreq->cache_resources.ops->end_operation(&rreq->cache_resources); - if (rreq->direct_bv) { - for (i =3D 0; i < rreq->direct_bv_count; i++) { - if (rreq->direct_bv[i].bv_page) { - if (rreq->direct_bv_unpin) - unpin_user_page(rreq->direct_bv[i].bv_page); - } - } - kvfree(rreq->direct_bv); - } - rolling_buffer_clear(&rreq->buffer); + bvecq_pos_detach(&rreq->load_cursor); + bvecq_pos_detach(&rreq->dispatch_cursor); + bvecq_pos_detach(&rreq->collect_cursor); =20 if (atomic_dec_and_test(&ictx->io_count)) wake_up_var(&ictx->io_count); @@ -236,6 +228,8 @@ static void netfs_free_subrequest(struct netfs_io_subre= quest *subreq) trace_netfs_sreq(subreq, netfs_sreq_trace_free); if (rreq->netfs_ops->free_subrequest) rreq->netfs_ops->free_subrequest(subreq); + bvecq_pos_detach(&subreq->dispatch_pos); + bvecq_pos_detach(&subreq->content); mempool_free(subreq, rreq->netfs_ops->subrequest_pool ?: &netfs_subreques= t_pool); 
netfs_stat_d(&netfs_n_rh_sreq); netfs_put_request(rreq, netfs_rreq_trace_put_subreq); diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c index 137f0e28a44c..3b5978832369 100644 --- a/fs/netfs/read_collect.c +++ b/fs/netfs/read_collect.c @@ -27,9 +27,13 @@ */ static void netfs_clear_unread(struct netfs_io_subrequest *subreq) { - netfs_reset_iter(subreq); - WARN_ON_ONCE(subreq->len - subreq->transferred !=3D iov_iter_count(&subre= q->io_iter)); - iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); + struct iov_iter iter; + + iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq, + subreq->content.slot, subreq->content.offset, subreq->len); + iov_iter_advance(&iter, subreq->transferred); + iov_iter_zero(subreq->len, &iter); + if (subreq->start + subreq->transferred >=3D subreq->rreq->i_size) __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); } @@ -40,11 +44,11 @@ static void netfs_clear_unread(struct netfs_io_subreque= st *subreq) * dirty and let writeback handle it. */ static void netfs_unlock_read_folio(struct netfs_io_request *rreq, - struct folio_queue *folioq, + struct bvecq *bvecq, int slot) { struct netfs_folio *finfo; - struct folio *folio =3D folioq_folio(folioq, slot); + struct folio *folio =3D page_folio(bvecq->bv[slot].bv_page); =20 if (unlikely(folio_pos(folio) < rreq->abandon_to)) { trace_netfs_folio(folio, netfs_folio_trace_abandon); @@ -75,7 +79,7 @@ static void netfs_unlock_read_folio(struct netfs_io_reque= st *rreq, trace_netfs_folio(folio, netfs_folio_trace_read_done); } =20 - folioq_clear(folioq, slot); + bvecq->bv[slot].bv_page =3D NULL; } else { // TODO: Use of PG_private_2 is deprecated. 
if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags)) @@ -91,7 +95,7 @@ static void netfs_unlock_read_folio(struct netfs_io_reque= st *rreq, folio_unlock(folio); } =20 - folioq_clear(folioq, slot); + bvecq->bv[slot].bv_page =3D NULL; } =20 /* @@ -100,18 +104,24 @@ static void netfs_unlock_read_folio(struct netfs_io_r= equest *rreq, static void netfs_read_unlock_folios(struct netfs_io_request *rreq, unsigned int *notes) { - struct folio_queue *folioq =3D rreq->buffer.tail; + struct bvecq *bvecq =3D rreq->collect_cursor.bvecq; unsigned long long collected_to =3D rreq->collected_to; - unsigned int slot =3D rreq->buffer.first_tail_slot; + unsigned int slot =3D rreq->collect_cursor.slot; =20 if (rreq->cleaned_to >=3D rreq->collected_to) return; =20 // TODO: Begin decryption =20 - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&rreq->buffer); - if (!folioq) { + if (slot >=3D bvecq->nr_segs) { + /* We need to be very careful - the cleanup can catch the + * dispatcher, which could lead to us having nothing left in + * the queue, causing the front and back pointers to end up on + * different tracks. To avoid this, we must always keep at + * least one segment in the queue. 
+ */ + bvecq =3D bvecq_buffer_delete_spent(&rreq->collect_cursor); + if (!bvecq) { rreq->front_folio_order =3D 0; return; } @@ -127,13 +137,13 @@ static void netfs_read_unlock_folios(struct netfs_io_= request *rreq, if (*notes & COPY_TO_CACHE) set_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags); =20 - folio =3D folioq_folio(folioq, slot); + folio =3D page_folio(bvecq->bv[slot].bv_page); if (WARN_ONCE(!folio_test_locked(folio), "R=3D%08x: folio %lx is not locked\n", rreq->debug_id, folio->index)) trace_netfs_folio(folio, netfs_folio_trace_not_locked); =20 - order =3D folioq_folio_order(folioq, slot); + order =3D folio_order(folio); rreq->front_folio_order =3D order; fsize =3D PAGE_SIZE << order; fpos =3D folio_pos(folio); @@ -145,33 +155,32 @@ static void netfs_read_unlock_folios(struct netfs_io_= request *rreq, if (collected_to < fend) break; =20 - netfs_unlock_read_folio(rreq, folioq, slot); + netfs_unlock_read_folio(rreq, bvecq, slot); WRITE_ONCE(rreq->cleaned_to, fpos + fsize); *notes |=3D MADE_PROGRESS; =20 clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags); =20 - /* Clean up the head folioq. If we clear an entire folioq, then - * we can get rid of it provided it's not also the tail folioq - * being filled by the issuer. + /* Clean up the head bvecq segment. If we clear an entire + * segment, then we can get rid of it provided it's not also + * the tail segment being filled by the issuer. 
*/ - folioq_clear(folioq, slot); slot++; - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&rreq->buffer); - if (!folioq) + if (slot >=3D bvecq->nr_segs) { + bvecq =3D bvecq_buffer_delete_spent(&rreq->collect_cursor); + if (!bvecq) goto done; slot =3D 0; - trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress); + //trace_netfs_bvecq(bvecq, netfs_trace_folioq_read_progress); } =20 if (fpos + fsize >=3D collected_to) break; } =20 - rreq->buffer.tail =3D folioq; + bvecq_pos_move(&rreq->collect_cursor, bvecq); done: - rreq->buffer.first_tail_slot =3D slot; + rreq->collect_cursor.slot =3D slot; } =20 /* @@ -346,12 +355,14 @@ static void netfs_rreq_assess_dio(struct netfs_io_req= uest *rreq) =20 if (rreq->origin =3D=3D NETFS_UNBUFFERED_READ || rreq->origin =3D=3D NETFS_DIO_READ) { - for (i =3D 0; i < rreq->direct_bv_count; i++) { - flush_dcache_page(rreq->direct_bv[i].bv_page); - // TODO: cifs marks pages in the destination buffer - // dirty under some circumstances after a read. Do we - // need to do that too? - set_page_dirty(rreq->direct_bv[i].bv_page); + for (struct bvecq *bq =3D rreq->collect_cursor.bvecq; bq; bq =3D bq->nex= t) { + for (i =3D 0; i < bq->nr_segs; i++) { + flush_dcache_page(bq->bv[i].bv_page); + // TODO: cifs marks pages in the destination buffer + // dirty under some circumstances after a read. Do we + // need to do that too? 
+ set_page_dirty(bq->bv[i].bv_page); + } } } =20 @@ -442,7 +453,15 @@ bool netfs_read_collection(struct netfs_io_request *rr= eq) =20 trace_netfs_rreq(rreq, netfs_rreq_trace_done); netfs_clear_subrequests(rreq); - netfs_unlock_abandoned_read_pages(rreq); + switch (rreq->origin) { + case NETFS_READAHEAD: + case NETFS_READPAGE: + case NETFS_READ_FOR_WRITE: + netfs_unlock_abandoned_read_pages(rreq); + break; + default: + break; + } if (unlikely(rreq->copy_to_cache)) netfs_pgpriv2_end_copy_to_cache(rreq); return true; diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c index a1489aa29f78..faf6a4fcdf26 100644 --- a/fs/netfs/read_pgpriv2.c +++ b/fs/netfs/read_pgpriv2.c @@ -19,6 +19,9 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct= folio *folio) { struct netfs_io_stream *cache =3D &creq->io_streams[1]; + struct bvecq *queue; + unsigned int slot; + size_t dio_size =3D PAGE_SIZE; size_t fsize =3D folio_size(folio), flen =3D fsize; loff_t fpos =3D folio_pos(folio), i_size; bool to_eof =3D false; @@ -48,17 +51,40 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_re= quest *creq, struct folio to_eof =3D true; } =20 + flen =3D round_up(flen, dio_size); + _debug("folio %zx %zx", flen, fsize); =20 trace_netfs_folio(folio, netfs_folio_trace_store_copy); =20 - /* Attach the folio to the rolling buffer. */ - if (rolling_buffer_append(&creq->buffer, folio, 0) < 0) { - clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags); - return; + + /* Institute a new bvec queue segment if the current one is full or if + * we encounter a discontiguity. The discontiguity break is important + * when it comes to bulk unlocking folios by file range. 
+ */ + queue =3D creq->load_cursor.bvecq; + if (bvecq_is_full(queue) || + (fpos !=3D creq->last_end && creq->last_end > 0)) { + if (bvecq_buffer_make_space(&creq->load_cursor) < 0) { + clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags); + return; + } + + queue =3D creq->load_cursor.bvecq; + queue->fpos =3D fpos; + if (fpos !=3D creq->last_end) + queue->discontig =3D true; } =20 - cache->submit_extendable_to =3D fsize; + /* Attach the folio to the rolling buffer. */ + slot =3D queue->nr_segs; + bvec_set_folio(&queue->bv[slot], folio, fsize, 0); + /* Order incrementing the slot counter after the slot is filled. */ + smp_store_release(&queue->nr_segs, slot + 1); + creq->load_cursor.slot =3D slot + 1; + creq->load_cursor.offset =3D 0; + trace_netfs_bv_slot(queue, slot); + cache->submit_off =3D 0; cache->submit_len =3D flen; =20 @@ -70,10 +96,9 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_req= uest *creq, struct folio do { ssize_t part; =20 - creq->buffer.iter.iov_offset =3D cache->submit_off; + creq->dispatch_cursor.offset =3D cache->submit_off; =20 atomic64_set(&creq->issued_to, fpos + cache->submit_off); - cache->submit_extendable_to =3D fsize - cache->submit_off; part =3D netfs_advance_write(creq, cache, fpos + cache->submit_off, cache->submit_len, to_eof); cache->submit_off +=3D part; @@ -83,8 +108,7 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_req= uest *creq, struct folio cache->submit_len -=3D part; } while (cache->submit_len > 0); =20 - creq->buffer.iter.iov_offset =3D 0; - rolling_buffer_advance(&creq->buffer, fsize); + bvecq_buffer_step(&creq->dispatch_cursor); atomic64_set(&creq->issued_to, fpos + fsize); =20 if (flen < fsize) @@ -110,6 +134,10 @@ static struct netfs_io_request *netfs_pgpriv2_begin_co= py_to_cache( if (!creq->io_streams[1].avail) goto cancel_put; =20 + bvecq_buffer_init(&creq->load_cursor, creq->debug_id); + bvecq_pos_attach(&creq->dispatch_cursor, &creq->load_cursor); + bvecq_pos_attach(&creq->collect_cursor, 
&creq->dispatch_cursor); + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags); trace_netfs_copy2cache(rreq, creq); trace_netfs_write(creq, netfs_write_trace_copy_to_cache); @@ -170,22 +198,23 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_= request *rreq) */ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq) { - struct folio_queue *folioq =3D creq->buffer.tail; + struct bvecq *bq =3D creq->collect_cursor.bvecq; unsigned long long collected_to =3D creq->collected_to; - unsigned int slot =3D creq->buffer.first_tail_slot; + unsigned int slot; bool made_progress =3D false; =20 - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&creq->buffer); + if (bvecq_is_full(bq)) { + bq =3D bvecq_buffer_delete_spent(&creq->collect_cursor); slot =3D 0; } + slot =3D creq->collect_cursor.slot; =20 for (;;) { struct folio *folio; unsigned long long fpos, fend; size_t fsize, flen; =20 - folio =3D folioq_folio(folioq, slot); + folio =3D page_folio(bq->bv[slot].bv_page); if (WARN_ONCE(!folio_test_private_2(folio), "R=3D%08x: folio %lx is not marked private_2\n", creq->debug_id, folio->index)) @@ -212,11 +241,11 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_= io_request *creq) * we can get rid of it provided it's not also the tail folioq * being filled by the issuer. 
*/ - folioq_clear(folioq, slot); + bq->bv[slot].bv_page =3D NULL; slot++; - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&creq->buffer); - if (!folioq) + if (slot >=3D bq->nr_segs) { + bq =3D bvecq_buffer_delete_spent(&creq->collect_cursor); + if (!bq) goto done; slot =3D 0; } @@ -225,8 +254,7 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io= _request *creq) break; } =20 - creq->buffer.tail =3D folioq; done: - creq->buffer.first_tail_slot =3D slot; + creq->collect_cursor.slot =3D slot; return made_progress; } diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c index 7793ba5e3e8f..68a5fece9012 100644 --- a/fs/netfs/read_retry.c +++ b/fs/netfs/read_retry.c @@ -12,6 +12,11 @@ static void netfs_reissue_read(struct netfs_io_request *rreq, struct netfs_io_subrequest *subreq) { + bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); + iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq, + subreq->content.slot, subreq->content.offset, subreq->len); + iov_iter_advance(&subreq->io_iter, subreq->transferred); + subreq->error =3D 0; __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); @@ -27,6 +32,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_= request *rreq) { struct netfs_io_subrequest *subreq; struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + struct bvecq_pos dispatch_cursor =3D {}; struct list_head *next; =20 _enter("R=3D%x", rreq->debug_id); @@ -48,7 +54,6 @@ static void netfs_retry_read_subrequests(struct netfs_io_= request *rreq) if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); subreq->retry_count++; - netfs_reset_iter(subreq); netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); netfs_reissue_read(rreq, subreq); } @@ -74,11 +79,12 @@ static void netfs_retry_read_subrequests(struct netfs_i= o_request *rreq) =20 do { struct 
netfs_io_subrequest *from, *to, *tmp; - struct iov_iter source; unsigned long long start, len; size_t part; bool boundary =3D false, subreq_superfluous =3D false; =20 + bvecq_pos_detach(&dispatch_cursor); + /* Go through the subreqs and find the next span of contiguous * buffer that we then rejig (cifs, for example, needs the * rsize renegotiating) and reissue. @@ -111,9 +117,8 @@ static void netfs_retry_read_subrequests(struct netfs_i= o_request *rreq) /* Determine the set of buffers we're going to use. Each * subreq gets a subset of a single overall contiguous buffer. */ - netfs_reset_iter(from); - source =3D from->io_iter; - source.count =3D len; + bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos); + bvecq_pos_advance(&dispatch_cursor, from->transferred); =20 /* Work through the sublist. */ subreq =3D from; @@ -129,10 +134,14 @@ static void netfs_retry_read_subrequests(struct netfs= _io_request *rreq) __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); subreq->retry_count++; =20 + bvecq_pos_detach(&subreq->dispatch_pos); + bvecq_pos_detach(&subreq->content); + trace_netfs_sreq(subreq, netfs_sreq_trace_retry); =20 /* Renegotiate max_len (rsize) */ stream->sreq_max_len =3D subreq->len; + stream->sreq_max_segs =3D INT_MAX; if (rreq->netfs_ops->prepare_read && rreq->netfs_ops->prepare_read(subreq) < 0) { trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed); @@ -140,13 +149,13 @@ static void netfs_retry_read_subrequests(struct netfs= _io_request *rreq) goto abandon; } =20 - part =3D umin(len, stream->sreq_max_len); - if (unlikely(stream->sreq_max_segs)) - part =3D netfs_limit_iter(&source, 0, part, stream->sreq_max_segs); + bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor); + part =3D bvecq_slice(&dispatch_cursor, + umin(len, stream->sreq_max_len), + stream->sreq_max_segs, + &subreq->nr_segs); subreq->len =3D subreq->transferred + part; - subreq->io_iter =3D source; - iov_iter_truncate(&subreq->io_iter, part); - iov_iter_advance(&source, part); 
+ len -=3D part; start +=3D part; if (!len) { @@ -205,9 +214,7 @@ static void netfs_retry_read_subrequests(struct netfs_i= o_request *rreq) trace_netfs_sreq(subreq, netfs_sreq_trace_retry); =20 stream->sreq_max_len =3D umin(len, rreq->rsize); - stream->sreq_max_segs =3D 0; - if (unlikely(stream->sreq_max_segs)) - part =3D netfs_limit_iter(&source, 0, part, stream->sreq_max_segs); + stream->sreq_max_segs =3D INT_MAX; =20 netfs_stat(&netfs_n_rh_download); if (rreq->netfs_ops->prepare_read(subreq) < 0) { @@ -216,11 +223,12 @@ static void netfs_retry_read_subrequests(struct netfs= _io_request *rreq) goto abandon; } =20 - part =3D umin(len, stream->sreq_max_len); + bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor); + part =3D bvecq_slice(&dispatch_cursor, + umin(len, stream->sreq_max_len), + stream->sreq_max_segs, + &subreq->nr_segs); subreq->len =3D subreq->transferred + part; - subreq->io_iter =3D source; - iov_iter_truncate(&subreq->io_iter, part); - iov_iter_advance(&source, part); =20 len -=3D part; start +=3D part; @@ -234,6 +242,8 @@ static void netfs_retry_read_subrequests(struct netfs_i= o_request *rreq) =20 } while (!list_is_head(next, &stream->subrequests)); =20 +out: + bvecq_pos_detach(&dispatch_cursor); return; =20 /* If we hit an error, fail all remaining incomplete subrequests */ @@ -250,6 +260,7 @@ static void netfs_retry_read_subrequests(struct netfs_i= o_request *rreq) __set_bit(NETFS_SREQ_FAILED, &subreq->flags); __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); } + goto out; } =20 /* @@ -278,13 +289,15 @@ void netfs_retry_reads(struct netfs_io_request *rreq) */ void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq) { - struct folio_queue *p; + struct bvecq *p; =20 - for (p =3D rreq->buffer.tail; p; p =3D p->next) { - for (int slot =3D 0; slot < folioq_count(p); slot++) { - struct folio *folio =3D folioq_folio(p, slot); + for (p =3D rreq->collect_cursor.bvecq; p; p =3D p->next) { + if (!p->free) + continue; + for (int slot =3D 
0; slot < p->nr_segs; slot++) { + if (p->bv[slot].bv_page) { + struct folio *folio =3D page_folio(p->bv[slot].bv_page); =20 - if (folio && !folioq_is_marked2(p, slot)) { trace_netfs_folio(folio, netfs_folio_trace_abandon); folio_unlock(folio); } diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c index 8e6264f62a8f..0f49d6aab874 100644 --- a/fs/netfs/read_single.c +++ b/fs/netfs/read_single.c @@ -97,10 +97,15 @@ static int netfs_single_dispatch_read(struct netfs_io_r= equest *rreq) if (!subreq) return -ENOMEM; =20 - subreq->source =3D NETFS_SOURCE_UNKNOWN; - subreq->start =3D 0; - subreq->len =3D rreq->len; - subreq->io_iter =3D rreq->buffer.iter; + subreq->source =3D NETFS_SOURCE_UNKNOWN; + subreq->start =3D 0; + subreq->len =3D rreq->len; + + bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor); + bvecq_pos_attach(&subreq->content, &rreq->dispatch_cursor); + + iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq, + subreq->content.slot, subreq->content.offset, subreq->len); =20 __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); =20 @@ -174,6 +179,10 @@ ssize_t netfs_read_single(struct inode *inode, struct = file *file, struct iov_ite if (IS_ERR(rreq)) return PTR_ERR(rreq); =20 + ret =3D netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->dispatch_c= ursor.bvecq, 0); + if (ret < 0) + goto cleanup_free; + ret =3D netfs_single_begin_cache_read(rreq, ictx); if (ret =3D=3D -ENOMEM || ret =3D=3D -EINTR || ret =3D=3D -ERESTARTSYS) goto cleanup_free; @@ -181,7 +190,6 @@ ssize_t netfs_read_single(struct inode *inode, struct f= ile *file, struct iov_ite netfs_stat(&netfs_n_rh_read_single); trace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single); =20 - rreq->buffer.iter =3D *iter; netfs_single_dispatch_read(rreq); =20 ret =3D netfs_wait_for_read(rreq); diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c index 84c2a4bcc762..1dfb5667b931 100644 --- a/fs/netfs/stats.c +++ b/fs/netfs/stats.c @@ -47,7 +47,6 @@ atomic_t 
netfs_n_wh_retry_write_req; atomic_t netfs_n_wh_retry_write_subreq; atomic_t netfs_n_wb_lock_skip; atomic_t netfs_n_wb_lock_wait; -atomic_t netfs_n_folioq; atomic_t netfs_n_bvecq; =20 int netfs_stats_show(struct seq_file *m, void *v) @@ -91,11 +90,10 @@ int netfs_stats_show(struct seq_file *m, void *v) atomic_read(&netfs_n_rh_retry_read_subreq), atomic_read(&netfs_n_wh_retry_write_req), atomic_read(&netfs_n_wh_retry_write_subreq)); - seq_printf(m, "Objs : rr=3D%u sr=3D%u bq=3D%u foq=3D%u wsc=3D%u\n", + seq_printf(m, "Objs : rr=3D%u sr=3D%u bq=3D%u wsc=3D%u\n", atomic_read(&netfs_n_rh_rreq), atomic_read(&netfs_n_rh_sreq), atomic_read(&netfs_n_bvecq), - atomic_read(&netfs_n_folioq), atomic_read(&netfs_n_wh_wstream_conflict)); seq_printf(m, "WbLock : skip=3D%u wait=3D%u\n", atomic_read(&netfs_n_wb_lock_skip), diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 83eb3dc1adf8..ed11086346b0 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -111,12 +111,12 @@ int netfs_folio_written_back(struct folio *folio) static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, unsigned int *notes) { - struct folio_queue *folioq =3D wreq->buffer.tail; + struct bvecq *bvecq =3D wreq->collect_cursor.bvecq; unsigned long long collected_to =3D wreq->collected_to; - unsigned int slot =3D wreq->buffer.first_tail_slot; + unsigned int slot =3D wreq->collect_cursor.slot; =20 - if (WARN_ON_ONCE(!folioq)) { - pr_err("[!] Writeback unlock found empty rolling buffer!\n"); + if (WARN_ON_ONCE(!bvecq)) { + pr_err("[!] 
Writeback unlock found empty buffer!\n"); netfs_dump_request(wreq); return; } @@ -127,9 +127,15 @@ static void netfs_writeback_unlock_folios(struct netfs= _io_request *wreq, return; } =20 - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&wreq->buffer); - if (!folioq) + if (slot >=3D bvecq->nr_segs) { + /* We need to be very careful - the cleanup can catch the + * dispatcher, which could lead to us having nothing left in + * the queue, causing the front and back pointers to end up on + * different tracks. To avoid this, we must always keep at + * least one segment in the queue. + */ + bvecq =3D bvecq_buffer_delete_spent(&wreq->collect_cursor); + if (!bvecq) return; slot =3D 0; } @@ -140,7 +146,7 @@ static void netfs_writeback_unlock_folios(struct netfs_= io_request *wreq, unsigned long long fpos, fend; size_t fsize, flen; =20 - folio =3D folioq_folio(folioq, slot); + folio =3D page_folio(bvecq->bv[slot].bv_page); if (WARN_ONCE(!folio_test_writeback(folio), "R=3D%08x: folio %lx is not under writeback\n", wreq->debug_id, folio->index)) @@ -163,15 +169,15 @@ static void netfs_writeback_unlock_folios(struct netf= s_io_request *wreq, wreq->cleaned_to =3D fpos + fsize; *notes |=3D MADE_PROGRESS; =20 - /* Clean up the head folioq. If we clear an entire folioq, then - * we can get rid of it provided it's not also the tail folioq + /* Clean up the head bvecq. If we clear an entire bvecq, then + * we can get rid of it provided it's not also the tail bvecq * being filled by the issuer. 
*/ - folioq_clear(folioq, slot); + bvecq->bv[slot].bv_page =3D NULL; slot++; - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D rolling_buffer_delete_spent(&wreq->buffer); - if (!folioq) + if (slot >=3D bvecq->nr_segs) { + bvecq =3D bvecq_buffer_delete_spent(&wreq->collect_cursor); + if (!bvecq) goto done; slot =3D 0; } @@ -180,9 +186,8 @@ static void netfs_writeback_unlock_folios(struct netfs_= io_request *wreq, break; } =20 - wreq->buffer.tail =3D folioq; done: - wreq->buffer.first_tail_slot =3D slot; + wreq->collect_cursor.slot =3D slot; } =20 /* @@ -207,7 +212,8 @@ static void netfs_collect_write_results(struct netfs_io= _request *wreq) trace_netfs_rreq(wreq, netfs_rreq_trace_collect); =20 reassess_streams: - issued_to =3D atomic64_read(&wreq->issued_to); + /* Order reading the issued_to point before reading the queue it refers t= o. */ + issued_to =3D atomic64_read_acquire(&wreq->issued_to); smp_rmb(); collected_to =3D ULLONG_MAX; if (wreq->origin =3D=3D NETFS_WRITEBACK || diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index fd4dc89d9d8d..5d4d8dbfe877 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -108,8 +108,6 @@ struct netfs_io_request *netfs_create_write_req(struct = address_space *mapping, ictx =3D netfs_inode(wreq->inode); if (is_cacheable && netfs_is_cache_enabled(ictx)) fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ict= x)); - if (rolling_buffer_init(&wreq->buffer, wreq->debug_id, ITER_SOURCE) < 0) - goto nomem; =20 wreq->cleaned_to =3D wreq->start; =20 @@ -132,9 +130,6 @@ struct netfs_io_request *netfs_create_write_req(struct = address_space *mapping, } =20 return wreq; -nomem: - netfs_put_failed_request(wreq); - return ERR_PTR(-ENOMEM); } =20 /** @@ -159,21 +154,13 @@ void netfs_prepare_write(struct netfs_io_request *wre= q, loff_t start) { struct netfs_io_subrequest *subreq; - struct iov_iter *wreq_iter =3D &wreq->buffer.iter; - - /* Make sure we don't point the iterator at a used-up 
folio_queue - * struct being used as a placeholder to prevent the queue from - * collapsing. In such a case, extend the queue. - */ - if (iov_iter_is_folioq(wreq_iter) && - wreq_iter->folioq_slot >=3D folioq_nr_slots(wreq_iter->folioq)) - rolling_buffer_make_space(&wreq->buffer); =20 subreq =3D netfs_alloc_subrequest(wreq); subreq->source =3D stream->source; subreq->start =3D start; subreq->stream_nr =3D stream->stream_nr; - subreq->io_iter =3D *wreq_iter; + + bvecq_pos_attach(&subreq->dispatch_pos, &wreq->dispatch_cursor); =20 _enter("R=3D%x[%x]", wreq->debug_id, subreq->debug_index); =20 @@ -239,15 +226,15 @@ static void netfs_do_issue_write(struct netfs_io_stre= am *stream, } =20 void netfs_reissue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq, - struct iov_iter *source) + struct netfs_io_subrequest *subreq) { - size_t size =3D subreq->len - subreq->transferred; - // TODO: Use encrypted buffer - subreq->io_iter =3D *source; - iov_iter_advance(source, size); - iov_iter_truncate(&subreq->io_iter, size); + bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); + iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE, + subreq->content.bvecq, subreq->content.slot, + subreq->content.offset, + subreq->len); + iov_iter_advance(&subreq->io_iter, subreq->transferred); =20 subreq->retry_count++; subreq->error =3D 0; @@ -264,8 +251,58 @@ void netfs_issue_write(struct netfs_io_request *wreq, =20 if (!subreq) return; + + /* If we have a write to the cache, we need to round out the first and + * last entries (only those as the data will be on virtually contiguous + * folios) to cache DIO boundaries. 
+ */ + if (subreq->source =3D=3D NETFS_WRITE_TO_CACHE) { + struct bvecq_pos tmp_pos; + struct bio_vec *bv; + struct bvecq *bq; + size_t dio_size =3D PAGE_SIZE; + size_t disp, len; + int ret; + + bvecq_pos_attach(&tmp_pos, &subreq->dispatch_pos); + ret =3D bvecq_extract(&tmp_pos, subreq->len, INT_MAX, &subreq->content.b= vecq); + bvecq_pos_detach(&tmp_pos); + if (ret < 0) { + netfs_write_subrequest_terminated(subreq, -ENOMEM); + return; + } + + /* Round the first entry down. */ + bq =3D subreq->content.bvecq; + bv =3D &bq->bv[0]; + disp =3D bv->bv_offset & (dio_size - 1); + if (disp) { + bv->bv_offset -=3D disp; + bv->bv_len +=3D disp; + bq->fpos -=3D disp; + subreq->start -=3D disp; + subreq->len +=3D disp; + } + + /* Round the end of the last entry up. */ + while (bq->next) + bq =3D bq->next; + bv =3D &bq->bv[bq->nr_segs - 1]; + len =3D round_up(bv->bv_len, dio_size); + if (len > bv->bv_len) { + subreq->len +=3D len - bv->bv_len; + bv->bv_len =3D len; + } + } else { + bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); + } + + iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE, + subreq->content.bvecq, subreq->content.slot, + subreq->content.offset, + subreq->len); + stream->construct =3D NULL; - subreq->io_iter.count =3D subreq->len; netfs_do_issue_write(stream, subreq); } =20 @@ -302,7 +339,6 @@ size_t netfs_advance_write(struct netfs_io_request *wre= q, _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, l= en); subreq->len +=3D part; subreq->nr_segs++; - stream->submit_extendable_to -=3D part; =20 if (subreq->len >=3D stream->sreq_max_len || subreq->nr_segs >=3D stream->sreq_max_segs || @@ -326,16 +362,35 @@ static int netfs_write_folio(struct netfs_io_request = *wreq, struct netfs_io_stream *stream; struct netfs_group *fgroup; /* TODO: Use this with ceph */ struct netfs_folio *finfo; - size_t iter_off =3D 0; + struct bvecq *queue =3D wreq->load_cursor.bvecq; + unsigned int slot; size_t fsize =3D folio_size(folio), flen =3D fsize,
foff =3D 0; loff_t fpos =3D folio_pos(folio), i_size; bool to_eof =3D false, streamw =3D false; bool debug =3D false; + int ret; =20 _enter(""); =20 - if (rolling_buffer_make_space(&wreq->buffer) < 0) - return -ENOMEM; + /* Institute a new bvec queue segment if the current one is full or if + * we encounter a discontiguity. The discontiguity break is important + * when it comes to bulk unlocking folios by file range. + */ + if (bvecq_is_full(queue) || + (fpos !=3D wreq->last_end && wreq->last_end > 0)) { + ret =3D bvecq_buffer_make_space(&wreq->load_cursor); + if (ret < 0) { + folio_unlock(folio); + return ret; + } + + queue =3D wreq->load_cursor.bvecq; + queue->fpos =3D fpos; + if (fpos !=3D wreq->last_end) + queue->discontig =3D true; + bvecq_pos_move(&wreq->dispatch_cursor, queue); + wreq->dispatch_cursor.slot =3D 0; + } =20 /* netfs_perform_write() may shift i_size around the page or from out * of the page to beyond it, but cannot move i_size into or through the @@ -441,7 +496,12 @@ static int netfs_write_folio(struct netfs_io_request *= wreq, } =20 /* Attach the folio to the rolling buffer. */ - rolling_buffer_append(&wreq->buffer, folio, 0); + slot =3D queue->nr_segs; + bvec_set_folio(&queue->bv[slot], folio, flen, foff); + queue->nr_segs =3D slot + 1; + wreq->load_cursor.slot =3D slot + 1; + wreq->load_cursor.offset =3D 0; + trace_netfs_bv_slot(queue, slot); =20 /* Move the submission point forward to allow for write-streaming data * not starting at the front of the page. We don't do write-streaming @@ -487,14 +547,10 @@ static int netfs_write_folio(struct netfs_io_request = *wreq, break; stream =3D &wreq->io_streams[choose_s]; =20 - /* Advance the iterator(s). */ - if (stream->submit_off > iter_off) { - rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off); - iter_off =3D stream->submit_off; - } + /* Advance the cursor. 
*/ + wreq->dispatch_cursor.offset =3D stream->submit_off; =20 atomic64_set(&wreq->issued_to, fpos + stream->submit_off); - stream->submit_extendable_to =3D fsize - stream->submit_off; part =3D netfs_advance_write(wreq, stream, fpos + stream->submit_off, stream->submit_len, to_eof); stream->submit_off +=3D part; @@ -506,9 +562,9 @@ static int netfs_write_folio(struct netfs_io_request *w= req, debug =3D true; } =20 - if (fsize > iter_off) - rolling_buffer_advance(&wreq->buffer, fsize - iter_off); - atomic64_set(&wreq->issued_to, fpos + fsize); + bvecq_buffer_step(&wreq->dispatch_cursor); + /* Order loading the queue before updating the issue_to point */ + atomic64_set_release(&wreq->issued_to, fpos + fsize); =20 if (!debug) kdebug("R=3D%x: No submit", wreq->debug_id); @@ -576,6 +632,11 @@ int netfs_writepages(struct address_space *mapping, goto couldnt_start; } =20 + if (bvecq_buffer_init(&wreq->load_cursor, wreq->debug_id) < 0) + goto nomem; + bvecq_pos_attach(&wreq->dispatch_cursor, &wreq->load_cursor); + bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); trace_netfs_write(wreq, netfs_write_trace_writeback); netfs_stat(&netfs_n_wh_writepages); @@ -600,12 +661,17 @@ int netfs_writepages(struct address_space *mapping, netfs_end_issue_write(wreq); =20 mutex_unlock(&ictx->wb_lock); + bvecq_pos_detach(&wreq->load_cursor); + bvecq_pos_detach(&wreq->dispatch_cursor); netfs_wake_collector(wreq); =20 netfs_put_request(wreq, netfs_rreq_trace_put_return); _leave(" =3D %d", error); return error; =20 +nomem: + error =3D -ENOMEM; + netfs_put_failed_request(wreq); couldnt_start: netfs_kill_dirty_pages(mapping, wbc, folio); out: @@ -647,8 +713,8 @@ int netfs_advance_writethrough(struct netfs_io_request = *wreq, struct writeback_c struct folio *folio, size_t copied, bool to_page_end, struct folio **writethrough_cache) { - _enter("R=3D%x ic=3D%zu ws=3D%u cp=3D%zu tp=3D%u", - wreq->debug_id, 
wreq->buffer.iter.count, wreq->wsize, copied, to_p= age_end); + _enter("R=3D%x ws=3D%u cp=3D%zu tp=3D%u", + wreq->debug_id, wreq->wsize, copied, to_page_end); =20 if (!*writethrough_cache) { if (folio_test_dirty(folio)) @@ -705,7 +771,7 @@ ssize_t netfs_end_writethrough(struct netfs_io_request = *wreq, struct writeback_c * @iter: Data to write. * * Write a monolithic, non-pagecache object back to the server and/or - * the cache. + * the cache. There's a maximum of one subrequest per stream. */ int netfs_writeback_single(struct address_space *mapping, struct writeback_control *wbc, @@ -729,10 +795,18 @@ int netfs_writeback_single(struct address_space *mapp= ing, ret =3D PTR_ERR(wreq); goto couldnt_start; } - - wreq->buffer.iter =3D *iter; wreq->len =3D iov_iter_count(iter); =20 + ret =3D netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_c= ursor.bvecq, 0); + if (ret < 0) + goto cleanup_free; + if (ret < wreq->len) { + ret =3D -EIO; + goto cleanup_free; + } + + bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); trace_netfs_write(wreq, netfs_write_trace_writeback_single); netfs_stat(&netfs_n_wh_writepages); @@ -752,11 +826,11 @@ int netfs_writeback_single(struct address_space *mapp= ing, subreq =3D stream->construct; subreq->len =3D wreq->len; stream->submit_len =3D subreq->len; - stream->submit_extendable_to =3D round_up(wreq->len, PAGE_SIZE); =20 netfs_issue_write(wreq, stream); } =20 + wreq->submitted =3D wreq->len; smp_wmb(); /* Write lists before ALL_QUEUED. 
*/ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); =20 @@ -772,6 +846,8 @@ int netfs_writeback_single(struct address_space *mappin= g, _leave(" =3D %d", ret); return ret; =20 +cleanup_free: + netfs_put_request(wreq, netfs_rreq_trace_put_return); couldnt_start: mutex_unlock(&ictx->wb_lock); _leave(" =3D %d", ret); diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c index 29489a23a220..b9352bf45c4b 100644 --- a/fs/netfs/write_retry.c +++ b/fs/netfs/write_retry.c @@ -17,6 +17,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, struct netfs_io_stream *stream) { + struct bvecq_pos dispatch_cursor =3D {}; struct list_head *next; =20 _enter("R=3D%x[%x:]", wreq->debug_id, stream->stream_nr); @@ -39,12 +40,8 @@ static void netfs_retry_write_stream(struct netfs_io_req= uest *wreq, if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) break; if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { - struct iov_iter source; - - netfs_reset_iter(subreq); - source =3D subreq->io_iter; netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq, &source); + netfs_reissue_write(stream, subreq); } } return; @@ -54,11 +51,12 @@ static void netfs_retry_write_stream(struct netfs_io_re= quest *wreq, =20 do { struct netfs_io_subrequest *subreq =3D NULL, *from, *to, *tmp; - struct iov_iter source; unsigned long long start, len; size_t part; bool boundary =3D false; =20 + bvecq_pos_detach(&dispatch_cursor); + /* Go through the stream and find the next span of contiguous * data that we then rejig (cifs, for example, needs the wsize * renegotiating) and reissue. 
@@ -70,7 +68,7 @@ static void netfs_retry_write_stream(struct netfs_io_requ= est *wreq, =20 if (test_bit(NETFS_SREQ_FAILED, &from->flags) || !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) - return; + goto out; =20 list_for_each_continue(next, &stream->subrequests) { subreq =3D list_entry(next, struct netfs_io_subrequest, rreq_link); @@ -85,9 +83,8 @@ static void netfs_retry_write_stream(struct netfs_io_requ= est *wreq, /* Determine the set of buffers we're going to use. Each * subreq gets a subset of a single overall contiguous buffer. */ - netfs_reset_iter(from); - source =3D from->io_iter; - source.count =3D len; + bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos); + bvecq_pos_advance(&dispatch_cursor, from->transferred); =20 /* Work through the sublist. */ subreq =3D from; @@ -100,14 +97,20 @@ static void netfs_retry_write_stream(struct netfs_io_r= equest *wreq, __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); trace_netfs_sreq(subreq, netfs_sreq_trace_retry); =20 + bvecq_pos_detach(&subreq->dispatch_pos); + bvecq_pos_detach(&subreq->content); + /* Renegotiate max_len (wsize) */ stream->sreq_max_len =3D len; + stream->sreq_max_segs =3D INT_MAX; stream->prepare_write(subreq); =20 - part =3D umin(len, stream->sreq_max_len); - if (unlikely(stream->sreq_max_segs)) - part =3D netfs_limit_iter(&source, 0, part, stream->sreq_max_segs); - subreq->len =3D part; + bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor); + part =3D bvecq_slice(&dispatch_cursor, + umin(len, stream->sreq_max_len), + stream->sreq_max_segs, + &subreq->nr_segs); + subreq->len =3D subreq->transferred + part; subreq->transferred =3D 0; len -=3D part; start +=3D part; @@ -116,7 +119,7 @@ static void netfs_retry_write_stream(struct netfs_io_re= quest *wreq, boundary =3D true; =20 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq, &source); + netfs_reissue_write(stream, subreq); if (subreq =3D=3D to) break; } @@ -173,8 +176,13 @@ static 
void netfs_retry_write_stream(struct netfs_io_r= equest *wreq, =20 stream->prepare_write(subreq); =20 - part =3D umin(len, stream->sreq_max_len); + bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor); + part =3D bvecq_slice(&dispatch_cursor, + umin(len, stream->sreq_max_len), + stream->sreq_max_segs, + &subreq->nr_segs); subreq->len =3D subreq->transferred + part; + len -=3D part; start +=3D part; if (!len && boundary) { @@ -182,13 +190,16 @@ static void netfs_retry_write_stream(struct netfs_io_= request *wreq, boundary =3D false; } =20 - netfs_reissue_write(stream, subreq, &source); + netfs_reissue_write(stream, subreq); if (!len) break; =20 } while (len); =20 } while (!list_is_head(next, &stream->subrequests)); + +out: + bvecq_pos_detach(&dispatch_cursor); } =20 /* diff --git a/include/linux/netfs.h b/include/linux/netfs.h index b146aeaaf6c9..a48f03e85b6a 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -19,12 +19,13 @@ #include #include #include -#include =20 enum netfs_sreq_ref_trace; typedef struct mempool mempool_t; -struct folio_queue; -struct bvecq; +struct readahead_control; +struct netfs_io_request; +struct netfs_io_subrequest; +struct fscache_occupancy; =20 /** * folio_start_private_2 - Start an fscache write on a folio. 
[DEPRECATED] @@ -147,7 +148,6 @@ struct netfs_io_stream { unsigned int sreq_max_segs; /* 0 or max number of segments in an iterato= r */ unsigned int submit_off; /* Folio offset we're submitting from */ unsigned int submit_len; /* Amount of data left to submit */ - unsigned int submit_extendable_to; /* Amount I/O can be rounded up to */ void (*prepare_write)(struct netfs_io_subrequest *subreq); void (*issue_write)(struct netfs_io_subrequest *subreq); /* Collection tracking */ @@ -187,6 +187,8 @@ struct netfs_io_subrequest { struct netfs_io_request *rreq; /* Supervising I/O request */ struct work_struct work; struct list_head rreq_link; /* Link in rreq->subrequests */ + struct bvecq_pos dispatch_pos; /* Bookmark in the combined queue of the s= tart */ + struct bvecq_pos content; /* The (copied) content of the subrequest */ struct iov_iter io_iter; /* Iterator for this subrequest */ unsigned long long start; /* Where to start the I/O */ size_t len; /* Size of the I/O */ @@ -248,13 +250,13 @@ struct netfs_io_request { struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operatio= ns */ #define NR_IO_STREAMS 2 //wreq->nr_io_streams struct netfs_group *group; /* Writeback group being written back */ - struct rolling_buffer buffer; /* Unencrypted buffer */ -#define NETFS_ROLLBUF_PUT_MARK ROLLBUF_MARK_1 -#define NETFS_ROLLBUF_PAGECACHE_MARK ROLLBUF_MARK_2 + struct bvecq_pos collect_cursor; /* Clear-up point of I/O buffer */ + struct bvecq_pos load_cursor; /* Point at which new folios are loaded in = */ + struct bvecq_pos dispatch_cursor; /* Point from which buffers are dispatc= hed */ wait_queue_head_t waitq; /* Processor waiter */ void *netfs_priv; /* Private data for the netfs */ void *netfs_priv2; /* Private data for the netfs */ - struct bio_vec *direct_bv; /* DIO buffer list (when handling iovec-iter)= */ + unsigned long long last_end; /* End pos of last folio submitted */ unsigned long long submitted; /* Amount submitted for I/O so far */ unsigned long 
long len; /* Length of the request */ size_t transferred; /* Amount to be indicated as transferred */ @@ -266,7 +268,6 @@ struct netfs_io_request { unsigned long long cleaned_to; /* Position we've cleaned folios to */ unsigned long long abandon_to; /* Position to abandon folios to */ pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ - unsigned int direct_bv_count; /* Number of elements in direct_bv[] */ unsigned int debug_id; unsigned int rsize; /* Maximum read size (0 for none) */ unsigned int wsize; /* Maximum write size (0 for none) */ @@ -275,7 +276,6 @@ struct netfs_io_request { spinlock_t lock; /* Lock for queuing subreqs */ unsigned char front_folio_order; /* Order (size) of front folio */ enum netfs_io_origin origin; /* Origin of the request */ - bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ refcount_t ref; unsigned long flags; #define NETFS_RREQ_IN_PROGRESS 0 /* Unlocked when the request completes (= has ref) */ @@ -466,12 +466,6 @@ void netfs_end_io_write(struct inode *inode); int netfs_start_io_direct(struct inode *inode); void netfs_end_io_direct(struct inode *inode); =20 -/* Miscellaneous APIs. */ -struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp, - unsigned int trace /*enum netfs_folioq_trace*/); -void netfs_folioq_free(struct folio_queue *folioq, - unsigned int trace /*enum netfs_trace_folioq*/); - /* Buffer wrangling helpers API. 
*/ int netfs_alloc_folioq_buffer(struct address_space *mapping, struct folio_queue **_buffer, diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 2523adc3ad85..861dc7849067 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -212,7 +212,9 @@ EM(netfs_folio_trace_store_copy, "store-copy") \ EM(netfs_folio_trace_store_plus, "store+") \ EM(netfs_folio_trace_wthru, "wthru") \ - E_(netfs_folio_trace_wthru_plus, "wthru+") + EM(netfs_folio_trace_wthru_plus, "wthru+") \ + EM(netfs_folio_trace_zero, "zero") \ + E_(netfs_folio_trace_zero_ra, "zero-ra") =20 #define netfs_collect_contig_traces \ EM(netfs_contig_trace_collect, "Collect") \ @@ -225,13 +227,13 @@ EM(netfs_trace_donate_to_next, "to-next") \ E_(netfs_trace_donate_to_deferred_next, "defer-next") =20 -#define netfs_folioq_traces \ - EM(netfs_trace_folioq_alloc_buffer, "alloc-buf") \ - EM(netfs_trace_folioq_clear, "clear") \ - EM(netfs_trace_folioq_delete, "delete") \ - EM(netfs_trace_folioq_make_space, "make-space") \ - EM(netfs_trace_folioq_rollbuf_init, "roll-init") \ - E_(netfs_trace_folioq_read_progress, "r-progress") +#define netfs_bvecq_traces \ + EM(netfs_trace_bvecq_alloc_buffer, "alloc-buf") \ + EM(netfs_trace_bvecq_clear, "clear") \ + EM(netfs_trace_bvecq_delete, "delete") \ + EM(netfs_trace_bvecq_make_space, "make-space") \ + EM(netfs_trace_bvecq_rollbuf_init, "roll-init") \ + E_(netfs_trace_bvecq_read_progress, "r-progress") =20 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY @@ -251,7 +253,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __m= ode(byte); enum netfs_folio_trace { netfs_folio_traces } __mode(byte); enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byt= e); enum netfs_donate_trace { netfs_donate_traces } __mode(byte); -enum netfs_folioq_trace { netfs_folioq_traces } __mode(byte); +enum netfs_bvecq_trace { netfs_bvecq_traces } __mode(byte); =20 #endif =20 @@ -275,7 
+277,7 @@ netfs_sreq_ref_traces; netfs_folio_traces; netfs_collect_contig_traces; netfs_donate_traces; -netfs_folioq_traces; +netfs_bvecq_traces; =20 /* * Now redefine the EM() and E_() macros to map the enums to the strings t= hat @@ -377,10 +379,10 @@ TRACE_EVENT(netfs_sreq, __entry->len =3D sreq->len; __entry->transferred =3D sreq->transferred; __entry->start =3D sreq->start; - __entry->slot =3D sreq->io_iter.folioq_slot; + __entry->slot =3D sreq->dispatch_pos.slot; ), =20 - TP_printk("R=3D%08x[%x] %s %s f=3D%03x s=3D%llx %zx/%zx s=3D%u e=3D%d= ", + TP_printk("R=3D%08x[%x] %s %s f=3D%03x s=3D%llx %zx/%zx qs=3D%u e=3D%= d", __entry->rreq, __entry->index, __print_symbolic(__entry->source, netfs_sreq_sources), __print_symbolic(__entry->what, netfs_sreq_traces), @@ -755,27 +757,25 @@ TRACE_EVENT(netfs_collect_stream, __entry->collected_to, __entry->front) ); =20 -TRACE_EVENT(netfs_folioq, - TP_PROTO(const struct folio_queue *fq, - enum netfs_folioq_trace trace), +TRACE_EVENT(netfs_bvecq, + TP_PROTO(const struct bvecq *bq, + enum netfs_bvecq_trace trace), =20 - TP_ARGS(fq, trace), + TP_ARGS(bq, trace), =20 TP_STRUCT__entry( - __field(unsigned int, rreq) __field(unsigned int, id) - __field(enum netfs_folioq_trace, trace) + __field(enum netfs_bvecq_trace, trace) ), =20 TP_fast_assign( - __entry->rreq =3D fq ? fq->rreq_id : 0; - __entry->id =3D fq ? fq->debug_id : 0; + __entry->id =3D bq ? 
bq->priv : 0; __entry->trace =3D trace; ), =20 - TP_printk("R=3D%08x fq=3D%x %s", - __entry->rreq, __entry->id, - __print_symbolic(__entry->trace, netfs_folioq_traces)) + TP_printk("fq=3D%x %s", + __entry->id, + __print_symbolic(__entry->trace, netfs_bvecq_traces)) ); =20 TRACE_EVENT(netfs_bv_slot, From nobody Mon Apr 13 21:40:01 2026 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E74C3B4E96 for ; Wed, 4 Mar 2026 14:05:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772633104; cv=none; b=C/pJY+HdcOBz45XBzffHhceoA0onw4MfgQHyybrnrslZuzJyvqkanBKmQkGjWDxY2oaANVY71TPK4ThJr6MJBijTk+ijcc4yHhz7ppC5Bpy8JyyFG612rBV7dOFrxMx291iAGzrmQnS/TKocYvhOgGiyH+17ZRd1zINlJQSfsE4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772633104; c=relaxed/simple; bh=6rmUXwRPi/9deMgHZ/4BLEmdII9DkKGjuMVzFhZqJ0Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PTlTP7dMG22rAHvunWVZhf825wVej/4xpkcEbSQP2OPRIeV8YXCEPcK1gVIQrR1jfP4FXxDssoPAdmeRPISt9PIR4IdoQfJIQ5h7tCyfNuPl7SNM5OHW99zzaM/sS93E1qyN9llEWWGxhT4NNhho8DnTclV07bP2DCKt1lQux6g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=ct4M5E/F; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com 
header.i=@redhat.com header.b="ct4M5E/F" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1772633102; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Yx8pwT7ge+YPd/TZGpnf3p6bgkYOOp5xdZ3o7PjHdZY=; b=ct4M5E/F8cGAbRF2IhbEquItkeakiydbEX5XoDgEeRTvZtOHrKlV6u/BSeH1vqPxVXrE+J UM3mKsG704NkfNdgKAntf1oAM7ut30XZnWSPlzd9Bb3TstrATnDTLgB8Fc8Ydzt5U3ly1m /saPjFNNAucJcn6vlcsCP6ucpxJNwqk= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-314-2Cpe5SxAPcK2ILBk9xIC2A-1; Wed, 04 Mar 2026 09:04:59 -0500 X-MC-Unique: 2Cpe5SxAPcK2ILBk9xIC2A-1 X-Mimecast-MFC-AGG-ID: 2Cpe5SxAPcK2ILBk9xIC2A_1772633097 Received: from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E65161956094; Wed, 4 Mar 2026 14:04:56 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.44.32.194]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 969BA195608E; Wed, 4 Mar 2026 14:04:51 +0000 (UTC) From: David Howells To: Matthew Wilcox , Christoph Hellwig , Jens Axboe , Leon Romanovsky Cc: David Howells , Christian Brauner , Paulo Alcantara , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Steve 
French , Paulo Alcantara , Shyam Prasad N , Tom Talpey Subject: [RFC PATCH 11/17] cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from smb_extract_iter_to_rdma() Date: Wed, 4 Mar 2026 14:03:18 +0000 Message-ID: <20260304140328.112636-12-dhowells@redhat.com> In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com> References: <20260304140328.112636-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 Content-Type: text/plain; charset="utf-8" netfslib now only presents a bvecq queue and an associated ITER_BVECQ iterator to the filesystem, so it isn't going to see ITER_KVEC, ITER_BVEC or ITER_FOLIOQ iterators. So remove that code. Signed-off-by: David Howells cc: Steve French cc: Paulo Alcantara cc: Shyam Prasad N cc: Tom Talpey cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/smb/client/smbdirect.c | 165 -------------------------------------- 1 file changed, 165 deletions(-) diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c index 0c6262010cd2..682df21c5ad2 100644 --- a/fs/smb/client/smbdirect.c +++ b/fs/smb/client/smbdirect.c @@ -3142,162 +3142,6 @@ static bool smb_set_sge(struct smb_extract_to_rdma = *rdma, return true; } =20 -/* - * Extract page fragments from a BVEC-class iterator and add them to an RD= MA - * element list. The pages are not pinned.
- */ -static ssize_t smb_extract_bvec_to_rdma(struct iov_iter *iter, - struct smb_extract_to_rdma *rdma, - ssize_t maxsize) -{ - const struct bio_vec *bv =3D iter->bvec; - unsigned long start =3D iter->iov_offset; - unsigned int i; - ssize_t ret =3D 0; - - for (i =3D 0; i < iter->nr_segs; i++) { - size_t off, len; - - len =3D bv[i].bv_len; - if (start >=3D len) { - start -=3D len; - continue; - } - - len =3D min_t(size_t, maxsize, len - start); - off =3D bv[i].bv_offset + start; - - if (!smb_set_sge(rdma, bv[i].bv_page, off, len)) - return -EIO; - - ret +=3D len; - maxsize -=3D len; - if (rdma->nr_sge >=3D rdma->max_sge || maxsize <=3D 0) - break; - start =3D 0; - } - - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - -/* - * Extract fragments from a KVEC-class iterator and add them to an RDMA li= st. - * This can deal with vmalloc'd buffers as well as kmalloc'd or static buf= fers. - * The pages are not pinned. - */ -static ssize_t smb_extract_kvec_to_rdma(struct iov_iter *iter, - struct smb_extract_to_rdma *rdma, - ssize_t maxsize) -{ - const struct kvec *kv =3D iter->kvec; - unsigned long start =3D iter->iov_offset; - unsigned int i; - ssize_t ret =3D 0; - - for (i =3D 0; i < iter->nr_segs; i++) { - struct page *page; - unsigned long kaddr; - size_t off, len, seg; - - len =3D kv[i].iov_len; - if (start >=3D len) { - start -=3D len; - continue; - } - - kaddr =3D (unsigned long)kv[i].iov_base + start; - off =3D kaddr & ~PAGE_MASK; - len =3D min_t(size_t, maxsize, len - start); - kaddr &=3D PAGE_MASK; - - maxsize -=3D len; - do { - seg =3D min_t(size_t, len, PAGE_SIZE - off); - - if (is_vmalloc_or_module_addr((void *)kaddr)) - page =3D vmalloc_to_page((void *)kaddr); - else - page =3D virt_to_page((void *)kaddr); - - if (!smb_set_sge(rdma, page, off, seg)) - return -EIO; - - ret +=3D seg; - len -=3D seg; - kaddr +=3D PAGE_SIZE; - off =3D 0; - } while (len > 0 && rdma->nr_sge < rdma->max_sge); - - if (rdma->nr_sge >=3D rdma->max_sge || maxsize <=3D 0) 
- break; - start =3D 0; - } - - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - -/* - * Extract folio fragments from a FOLIOQ-class iterator and add them to an= RDMA - * list. The folios are not pinned. - */ -static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter, - struct smb_extract_to_rdma *rdma, - ssize_t maxsize) -{ - const struct folio_queue *folioq =3D iter->folioq; - unsigned int slot =3D iter->folioq_slot; - ssize_t ret =3D 0; - size_t offset =3D iter->iov_offset; - - BUG_ON(!folioq); - - if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D folioq->next; - if (WARN_ON_ONCE(!folioq)) - return -EIO; - slot =3D 0; - } - - do { - struct folio *folio =3D folioq_folio(folioq, slot); - size_t fsize =3D folioq_folio_size(folioq, slot); - - if (offset < fsize) { - size_t part =3D umin(maxsize, fsize - offset); - - if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part)) - return -EIO; - - offset +=3D part; - ret +=3D part; - maxsize -=3D part; - } - - if (offset >=3D fsize) { - offset =3D 0; - slot++; - if (slot >=3D folioq_nr_slots(folioq)) { - if (!folioq->next) { - WARN_ON_ONCE(ret < iter->count); - break; - } - folioq =3D folioq->next; - slot =3D 0; - } - } - } while (rdma->nr_sge < rdma->max_sge && maxsize > 0); - - iter->folioq =3D folioq; - iter->folioq_slot =3D slot; - iter->iov_offset =3D offset; - iter->count -=3D ret; - return ret; -} - /* * Extract memory fragments from a BVECQ-class iterator and add them to an= RDMA * list. The folios are not pinned. 
@@ -3373,15 +3217,6 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_i= ter *iter, size_t len, int before =3D rdma->nr_sge; =20 switch (iov_iter_type(iter)) { - case ITER_BVEC: - ret =3D smb_extract_bvec_to_rdma(iter, rdma, len); - break; - case ITER_KVEC: - ret =3D smb_extract_kvec_to_rdma(iter, rdma, len); - break; - case ITER_FOLIOQ: - ret =3D smb_extract_folioq_to_rdma(iter, rdma, len); - break; case ITER_BVECQ: ret =3D smb_extract_bvecq_to_rdma(iter, rdma, len); break; From nobody Mon Apr 13 21:40:01 2026 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 25C983BD62C for ; Wed, 4 Mar 2026 14:05:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772633111; cv=none; b=FVIB6z+7erQpPnFUhICoVZ71GEHP94JaKvTRM+M1Pl0j7rl/wk36eGcU0dPLOQkNuTMtOwOCXHcr2nxcnMw5pvRxUDcuKJ2wCvdyHZt6HzSzljIUe/75JucJXKH6GeWErtbQApn6DToId3JbbdqNEr0knOMEipoVE5K6zZuKNGQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772633111; c=relaxed/simple; bh=57dWZEhyDFUlzqKRV/6JCGGq4V/AfRANMCP2O6+NUPk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mGIIcCYig6eNJqlGHgIwvjhHebSiZ93QYsqBu9kEyqS9pCSGqlUCngIg3x/843bF1krKrVqbeHClwUXq6Uv7CR/L3ikxNXwEgkiOi1yZVovq6XvYKQJbXdw4a8t96hc9tdelPjC4q2LZqPgxJi+lkp0dgkzhyH6Vigz8tXUh4TQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=gfnsa4oP; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com 
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="gfnsa4oP" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1772633109; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2FdAONutYZfTMq21H+B5hhNlT9a5FjHMqQE+GvjNNa4=; b=gfnsa4oPT36EkEEbNbaXRS382ZC7I/SAS/+9jiV/eKCjGd7FuUyvYApRNq041dnrFRp2UU Tyc3DX8YCEPgwrXqaTO93lGU3tr2prCRyH25JHSO2f75jPqwkcW5RcNTJ7noZMX8A9seu4 JpS4WiConhm5O9XFtLPW9DMoaL5leXs= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-515-n0yev59CP2GCuS8n3IxcAg-1; Wed, 04 Mar 2026 09:05:05 -0500 X-MC-Unique: n0yev59CP2GCuS8n3IxcAg-1 X-Mimecast-MFC-AGG-ID: n0yev59CP2GCuS8n3IxcAg_1772633103 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 6C0021956094; Wed, 4 Mar 2026 14:05:03 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.44.32.194]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id A882D1958DC2; Wed, 4 Mar 2026 14:04:58 +0000 (UTC) From: David Howells To: Matthew Wilcox , Christoph Hellwig , Jens Axboe , Leon Romanovsky Cc: David Howells , Christian Brauner , Paulo Alcantara , netfs@lists.linux.dev, 
  linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
  linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
  linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
  Paulo Alcantara, Steve French
Subject: [RFC PATCH 12/17] netfs: Remove netfs_alloc/free_folioq_buffer()
Date: Wed, 4 Mar 2026 14:03:19 +0000
Message-ID: <20260304140328.112636-13-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Remove netfs_alloc/free_folioq_buffer() as these have been replaced with
netfs_alloc/free_bvecq_buffer().

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/dir_edit.c         |  1 -
 fs/netfs/misc.c           | 98 ---------------------------------------
 fs/smb/client/smb2ops.c   |  1 -
 fs/smb/client/smbdirect.c |  1 -
 include/linux/netfs.h     |  4 ----
 5 files changed, 105 deletions(-)

diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index ef9066659438..ebe8cfd050a9 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -10,7 +10,6 @@
 #include
 #include
 #include
-#include
 #include "internal.h"
 #include "xdr_fs.h"
 
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index ab142cbaad35..a19724389147 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,104 +8,6 @@
 #include
 #include "internal.h"
 
-#if 0
-/**
- * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue
- * @mapping: Address space to set on the folio (or NULL).
- * @_buffer: Pointer to the folio queue to add to (may point to a NULL; updated).
- * @_cur_size: Current size of the buffer (updated).
- * @size: Target size of the buffer.
- * @gfp: The allocation constraints.
- */
-int netfs_alloc_folioq_buffer(struct address_space *mapping,
-			      struct folio_queue **_buffer,
-			      size_t *_cur_size, ssize_t size, gfp_t gfp)
-{
-	struct folio_queue *tail = *_buffer, *p;
-
-	size = round_up(size, PAGE_SIZE);
-	if (*_cur_size >= size)
-		return 0;
-
-	if (tail)
-		while (tail->next)
-			tail = tail->next;
-
-	do {
-		struct folio *folio;
-		int order = 0, slot;
-
-		if (!tail || folioq_full(tail)) {
-			p = netfs_folioq_alloc(0, GFP_NOFS, netfs_trace_folioq_alloc_buffer);
-			if (!p)
-				return -ENOMEM;
-			if (tail) {
-				tail->next = p;
-				p->prev = tail;
-			} else {
-				*_buffer = p;
-			}
-			tail = p;
-		}
-
-		if (size - *_cur_size > PAGE_SIZE)
-			order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
-				     MAX_PAGECACHE_ORDER);
-
-		folio = folio_alloc(gfp, order);
-		if (!folio && order > 0)
-			folio = folio_alloc(gfp, 0);
-		if (!folio)
-			return -ENOMEM;
-
-		folio->mapping = mapping;
-		folio->index = *_cur_size / PAGE_SIZE;
-		trace_netfs_folio(folio, netfs_folio_trace_alloc_buffer);
-		slot = folioq_append_mark(tail, folio);
-		*_cur_size += folioq_folio_size(tail, slot);
-	} while (*_cur_size < size);
-
-	return 0;
-}
-EXPORT_SYMBOL(netfs_alloc_folioq_buffer);
-
-/**
- * netfs_free_folioq_buffer - Free a folio queue.
- * @fq: The start of the folio queue to free
- *
- * Free up a chain of folio_queues and, if marked, the marked folios they point
- * to.
- */
-void netfs_free_folioq_buffer(struct folio_queue *fq)
-{
-	struct folio_queue *next;
-	struct folio_batch fbatch;
-
-	folio_batch_init(&fbatch);
-
-	for (; fq; fq = next) {
-		for (int slot = 0; slot < folioq_count(fq); slot++) {
-			struct folio *folio = folioq_folio(fq, slot);
-
-			if (!folio ||
-			    !folioq_is_marked(fq, slot))
-				continue;
-
-			trace_netfs_folio(folio, netfs_folio_trace_put);
-			if (folio_batch_add(&fbatch, folio))
-				folio_batch_release(&fbatch);
-		}
-
-		netfs_stat_d(&netfs_n_folioq);
-		next = fq->next;
-		kfree(fq);
-	}
-
-	folio_batch_release(&fbatch);
-}
-EXPORT_SYMBOL(netfs_free_folioq_buffer);
-#endif
-
 /**
  * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback
  * @mapping: The mapping the folio belongs to.
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 76baf21404df..7223a8deaa58 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -13,7 +13,6 @@
 #include
 #include
 #include
-#include
 #include
 #include "cifsfs.h"
 #include "cifsglob.h"
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index 682df21c5ad2..8ffb5d1eba62 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -6,7 +6,6 @@
  */
 #include
 #include
-#include
 #define __SMBDIRECT_SOCKET_DISCONNECT(__sc) smbd_disconnect_rdma_connection(__sc)
 #include "../common/smbdirect/smbdirect_pdu.h"
 #include "smbdirect.h"
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a48f03e85b6a..e49cb8ffb811 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -467,10 +467,6 @@ int netfs_start_io_direct(struct inode *inode);
 void netfs_end_io_direct(struct inode *inode);
 
 /* Buffer wrangling helpers API.
  */
-int netfs_alloc_folioq_buffer(struct address_space *mapping,
-			      struct folio_queue **_buffer,
-			      size_t *_cur_size, ssize_t size, gfp_t gfp);
-void netfs_free_folioq_buffer(struct folio_queue *fq);
 void dump_bvecq(const struct bvecq *bq);
 struct bvecq *netfs_alloc_bvecq(size_t nr_slots, gfp_t gfp);
 struct bvecq *netfs_alloc_bvecq_buffer(size_t size, unsigned int pre_slots, gfp_t gfp);

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev,
  linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
  ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
  linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
  Paulo Alcantara, Steve French
Subject: [RFC PATCH 13/17] netfs: Remove netfs_extract_user_iter()
Date: Wed, 4 Mar 2026 14:03:20 +0000
Message-ID: <20260304140328.112636-14-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Remove netfs_extract_user_iter() as it has been replaced with
netfs_extract_iter().

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/iterator.c   | 89 -------------------------------------------
 include/linux/netfs.h |  3 ---
 2 files changed, 92 deletions(-)

diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 2b0a511d6db7..5ae9279a2dfb 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -136,95 +136,6 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
 EXPORT_SYMBOL_GPL(netfs_extract_iter);
 
 #if 0
-/**
- * netfs_extract_user_iter - Extract the pages from a user iterator into a bvec
- * @orig: The original iterator
- * @orig_len: The amount of iterator to copy
- * @new: The iterator to be set up
- * @extraction_flags: Flags to qualify the request
- *
- * Extract the page fragments from the given amount of the source iterator and
- * build up a second iterator that refers to all of those bits.  This allows
- * the original iterator to disposed of.
- *
- * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA be
- * allowed on the pages extracted.
- *
- * On success, the number of elements in the bvec is returned, the original
- * iterator will have been advanced by the amount extracted.
- *
- * The iov_iter_extract_mode() function should be used to query how cleanup
- * should be performed.
- */
-ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
-				struct iov_iter *new,
-				iov_iter_extraction_t extraction_flags)
-{
-	struct bio_vec *bv = NULL;
-	struct page **pages;
-	unsigned int cur_npages;
-	unsigned int max_pages;
-	unsigned int npages = 0;
-	unsigned int i;
-	ssize_t ret;
-	size_t count = orig_len, offset, len;
-	size_t bv_size, pg_size;
-
-	if (WARN_ON_ONCE(!iter_is_ubuf(orig) && !iter_is_iovec(orig)))
-		return -EIO;
-
-	max_pages = iov_iter_npages(orig, INT_MAX);
-	bv_size = array_size(max_pages, sizeof(*bv));
-	bv = kvmalloc(bv_size, GFP_KERNEL);
-	if (!bv)
-		return -ENOMEM;
-
-	/* Put the page list at the end of the bvec list storage.  bvec
-	 * elements are larger than page pointers, so as long as we work
-	 * 0->last, we should be fine.
-	 */
-	pg_size = array_size(max_pages, sizeof(*pages));
-	pages = (void *)bv + bv_size - pg_size;
-
-	while (count && npages < max_pages) {
-		ret = iov_iter_extract_pages(orig, &pages, count,
-					     max_pages - npages, extraction_flags,
-					     &offset);
-		if (ret < 0) {
-			pr_err("Couldn't get user pages (rc=%zd)\n", ret);
-			break;
-		}
-
-		if (ret > count) {
-			pr_err("get_pages rc=%zd more than %zu\n", ret, count);
-			break;
-		}
-
-		count -= ret;
-		ret += offset;
-		cur_npages = DIV_ROUND_UP(ret, PAGE_SIZE);
-
-		if (npages + cur_npages > max_pages) {
-			pr_err("Out of bvec array capacity (%u vs %u)\n",
-			       npages + cur_npages, max_pages);
-			break;
-		}
-
-		for (i = 0; i < cur_npages; i++) {
-			len = ret > PAGE_SIZE ?
PAGE_SIZE : ret;
-			bvec_set_page(bv + npages + i, *pages++, len - offset, offset);
-			ret -= len;
-			offset = 0;
-		}
-
-		npages += cur_npages;
-	}
-
-	iov_iter_bvec(new, orig->data_source, bv, npages, orig_len - count);
-	return npages;
-}
-EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
-
 /*
  * Select the span of a bvec iterator we're going to use.  Limit it by both maximum
  * size and maximum number of segments.  Returns the size of the span in bytes.
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index e49cb8ffb811..05abb3425962 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -451,9 +451,6 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
 ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
			    unsigned long long fpos, struct bvecq **_bvecq_head,
			    iov_iter_extraction_t extraction_flags);
-ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
-				struct iov_iter *new,
-				iov_iter_extraction_t extraction_flags);
 size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
			 size_t max_size, size_t max_segs);
 void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev,
  linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
  linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
  linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
  Paulo Alcantara, Steve French
Subject: [RFC PATCH 14/17] iov_iter: Remove ITER_FOLIOQ
Date: Wed, 4 Mar 2026 14:03:21 +0000
Message-ID: <20260304140328.112636-15-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>

Remove ITER_FOLIOQ as it's no longer used.
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 include/linux/iov_iter.h   |  65 +---------
 include/linux/uio.h        |  12 --
 lib/iov_iter.c             | 235 +---------------------------------
 lib/scatterlist.c          |  67 +---------
 lib/tests/kunit_iov_iter.c | 256 -------------------------------------
 5 files changed, 5 insertions(+), 630 deletions(-)

diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index e0c129a3ca63..4b47454c5ca8 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -10,7 +10,6 @@
 
 #include
 #include
-#include
 
 typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len,
			      void *priv, void *priv2);
@@ -194,62 +193,6 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
 	return progress;
 }
 
-/*
- * Handle ITER_FOLIOQ.
- */
-static __always_inline
-size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
-		      iov_step_f step)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	unsigned int slot = iter->folioq_slot;
-	size_t progress = 0, skip = iter->iov_offset;
-
-	if (slot == folioq_nr_slots(folioq)) {
-		/* The iterator may have been extended.
-		 */
-		folioq = folioq->next;
-		slot = 0;
-	}
-
-	do {
-		struct folio *folio = folioq_folio(folioq, slot);
-		size_t part, remain = 0, consumed;
-		size_t fsize;
-		void *base;
-
-		if (!folio)
-			break;
-
-		fsize = folioq_folio_size(folioq, slot);
-		if (skip < fsize) {
-			base = kmap_local_folio(folio, skip);
-			part = umin(len, PAGE_SIZE - skip % PAGE_SIZE);
-			remain = step(base, progress, part, priv, priv2);
-			kunmap_local(base);
-			consumed = part - remain;
-			len -= consumed;
-			progress += consumed;
-			skip += consumed;
-		}
-		if (skip >= fsize) {
-			skip = 0;
-			slot++;
-			if (slot == folioq_nr_slots(folioq) && folioq->next) {
-				folioq = folioq->next;
-				slot = 0;
-			}
-		}
-		if (remain)
-			break;
-	} while (len);
-
-	iter->folioq_slot = slot;
-	iter->folioq = folioq;
-	iter->iov_offset = skip;
-	iter->count -= progress;
-	return progress;
-}
-
 /*
  * Handle ITER_XARRAY.
  */
@@ -361,8 +304,6 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
 		return iterate_kvec(iter, len, priv, priv2, step);
 	if (iov_iter_is_bvecq(iter))
 		return iterate_bvecq(iter, len, priv, priv2, step);
-	if (iov_iter_is_folioq(iter))
-		return iterate_folioq(iter, len, priv, priv2, step);
 	if (iov_iter_is_xarray(iter))
 		return iterate_xarray(iter, len, priv, priv2, step);
 	return iterate_discard(iter, len, priv, priv2, step);
@@ -397,8 +338,8 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
  * buffer is presented in segments, which for kernel iteration are broken up by
  * physical pages and mapped, with the mapped address being presented.
  *
- * [!] Note This will only handle BVEC, KVEC, BVECQ, FOLIOQ, XARRAY and
- * DISCARD-type iterators; it will not handle UBUF or IOVEC-type iterators.
+ * [!] Note This will only handle BVEC, KVEC, BVECQ, XARRAY and DISCARD-type
+ * iterators; it will not handle UBUF or IOVEC-type iterators.
  *
  * A step functions, @step, must be provided, one for handling mapped kernel
  * addresses and the other is given user addresses which have the potential to
@@ -427,8 +368,6 @@ size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv,
 		return iterate_kvec(iter, len, priv, priv2, step);
 	if (iov_iter_is_bvecq(iter))
 		return iterate_bvecq(iter, len, priv, priv2, step);
-	if (iov_iter_is_folioq(iter))
-		return iterate_folioq(iter, len, priv, priv2, step);
 	if (iov_iter_is_xarray(iter))
 		return iterate_xarray(iter, len, priv, priv2, step);
 	return iterate_discard(iter, len, priv, priv2, step);
diff --git a/include/linux/uio.h b/include/linux/uio.h
index aa50d348dfcc..e84a0c4f28c6 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -11,7 +11,6 @@
 #include
 
 struct page;
-struct folio_queue;
 
 typedef unsigned int __bitwise iov_iter_extraction_t;
 
@@ -26,7 +25,6 @@ enum iter_type {
 	ITER_IOVEC,
 	ITER_BVEC,
 	ITER_KVEC,
-	ITER_FOLIOQ,
 	ITER_BVECQ,
 	ITER_XARRAY,
 	ITER_DISCARD,
@@ -69,7 +67,6 @@ struct iov_iter {
 		const struct iovec *__iov;
 		const struct kvec *kvec;
 		const struct bio_vec *bvec;
-		const struct folio_queue *folioq;
 		const struct bvecq *bvecq;
 		struct xarray *xarray;
 		void __user *ubuf;
@@ -79,7 +76,6 @@ struct iov_iter {
 	};
 	union {
 		unsigned long nr_segs;
-		u8 folioq_slot;
 		u16 bvecq_slot;
 		loff_t xarray_start;
 	};
@@ -148,11 +144,6 @@ static inline bool iov_iter_is_discard(const struct iov_iter *i)
 	return iov_iter_type(i) == ITER_DISCARD;
 }
 
-static inline bool iov_iter_is_folioq(const struct iov_iter *i)
-{
-	return iov_iter_type(i) == ITER_FOLIOQ;
-}
-
 static inline bool iov_iter_is_bvecq(const struct iov_iter *i)
 {
 	return iov_iter_type(i) == ITER_BVECQ;
@@ -303,9 +294,6 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec
 void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec,
		    unsigned long nr_segs, size_t count);
 void iov_iter_discard(struct
iov_iter *i, unsigned int direction, size_t count);
-void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
-			  const struct folio_queue *folioq,
-			  unsigned int first_slot, unsigned int offset, size_t count);
 void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction,
			  const struct bvecq *bvecq, unsigned int first_slot,
			  unsigned int offset, size_t count);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index df8d037894b1..d5a4f5e5a107 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -538,39 +538,6 @@ static void iov_iter_iovec_advance(struct iov_iter *i, size_t size)
 	i->__iov = iov;
 }
 
-static void iov_iter_folioq_advance(struct iov_iter *i, size_t size)
-{
-	const struct folio_queue *folioq = i->folioq;
-	unsigned int slot = i->folioq_slot;
-
-	if (!i->count)
-		return;
-	i->count -= size;
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-	}
-
-	size += i->iov_offset; /* From beginning of current segment. */
-	do {
-		size_t fsize = folioq_folio_size(folioq, slot);
-
-		if (likely(size < fsize))
-			break;
-		size -= fsize;
-		slot++;
-		if (slot >= folioq_nr_slots(folioq) && folioq->next) {
-			folioq = folioq->next;
-			slot = 0;
-		}
-	} while (size);
-
-	i->iov_offset = size;
-	i->folioq_slot = slot;
-	i->folioq = folioq;
-}
-
 static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)
 {
 	const struct bvecq *bq = i->bvecq;
@@ -616,8 +583,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
 		iov_iter_iovec_advance(i, size);
 	} else if (iov_iter_is_bvec(i)) {
 		iov_iter_bvec_advance(i, size);
-	} else if (iov_iter_is_folioq(i)) {
-		iov_iter_folioq_advance(i, size);
 	} else if (iov_iter_is_bvecq(i)) {
 		iov_iter_bvecq_advance(i, size);
 	} else if (iov_iter_is_discard(i)) {
@@ -626,32 +591,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
 }
 EXPORT_SYMBOL(iov_iter_advance);
 
-static void iov_iter_folioq_revert(struct iov_iter *i, size_t unroll)
-{
-	const struct
folio_queue *folioq = i->folioq;
-	unsigned int slot = i->folioq_slot;
-
-	for (;;) {
-		size_t fsize;
-
-		if (slot == 0) {
-			folioq = folioq->prev;
-			slot = folioq_nr_slots(folioq);
-		}
-		slot--;
-
-		fsize = folioq_folio_size(folioq, slot);
-		if (unroll <= fsize) {
-			i->iov_offset = fsize - unroll;
-			break;
-		}
-		unroll -= fsize;
-	}
-
-	i->folioq_slot = slot;
-	i->folioq = folioq;
-}
-
 static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll)
 {
 	const struct bvecq *bq = i->bvecq;
@@ -709,9 +648,6 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
 		}
 		unroll -= n;
 	}
-	} else if (iov_iter_is_folioq(i)) {
-		i->iov_offset = 0;
-		iov_iter_folioq_revert(i, unroll);
 	} else if (iov_iter_is_bvecq(i)) {
 		i->iov_offset = 0;
 		iov_iter_bvecq_revert(i, unroll);
@@ -744,8 +680,6 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i)
 	}
 	if (!i->count)
 		return 0;
-	if (unlikely(iov_iter_is_folioq(i)))
-		return umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
 	if (unlikely(iov_iter_is_bvecq(i)))
 		return min(i->count, i->bvecq->bv[i->bvecq_slot].bv_len - i->iov_offset);
 	return i->count;
@@ -784,36 +718,6 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_bvec);
 
-/**
- * iov_iter_folio_queue - Initialise an I/O iterator to use the folios in a folio queue
- * @i: The iterator to initialise.
- * @direction: The direction of the transfer.
- * @folioq: The starting point in the folio queue.
- * @first_slot: The first slot in the folio queue to use
- * @offset: The offset into the folio in the first slot to start at
- * @count: The size of the I/O buffer in bytes.
- *
- * Set up an I/O iterator to either draw data out of the pages attached to an
- * inode or to inject data into those pages.  The pages *must* be prevented
- * from evaporation, either by taking a ref on them or locking them by the
- * caller.
- */
-void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
-			  const struct folio_queue *folioq, unsigned int first_slot,
-			  unsigned int offset, size_t count)
-{
-	BUG_ON(direction & ~1);
-	*i = (struct iov_iter) {
-		.iter_type = ITER_FOLIOQ,
-		.data_source = direction,
-		.folioq = folioq,
-		.folioq_slot = first_slot,
-		.count = count,
-		.iov_offset = offset,
-	};
-}
-EXPORT_SYMBOL(iov_iter_folio_queue);
-
 /**
  * iov_iter_bvec_queue - Initialise an I/O iterator to use a segmented bvec queue
  * @i: The iterator to initialise.
@@ -982,9 +886,6 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
 	if (iov_iter_is_bvec(i))
 		return iov_iter_alignment_bvec(i);
 
-	/* With both xarray and folioq types, we're dealing with whole folios. */
-	if (iov_iter_is_folioq(i))
-		return i->iov_offset | i->count;
 	if (iov_iter_is_bvecq(i))
 		return iov_iter_alignment_bvecq(i);
 	if (iov_iter_is_xarray(i))
@@ -1039,65 +940,6 @@ static int want_pages_array(struct page ***res, size_t size,
 	return count;
 }
 
-static ssize_t iter_folioq_get_pages(struct iov_iter *iter,
-				     struct page ***ppages, size_t maxsize,
-				     unsigned maxpages, size_t *_start_offset)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	struct page **pages;
-	unsigned int slot = iter->folioq_slot;
-	size_t extracted = 0, count = iter->count, iov_offset = iter->iov_offset;
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-		if (WARN_ON(iov_offset != 0))
-			return -EIO;
-	}
-
-	maxpages = want_pages_array(ppages, maxsize, iov_offset & ~PAGE_MASK, maxpages);
-	if (!maxpages)
-		return -ENOMEM;
-	*_start_offset = iov_offset & ~PAGE_MASK;
-	pages = *ppages;
-
-	for (;;) {
-		struct folio *folio = folioq_folio(folioq, slot);
-		size_t offset = iov_offset, fsize = folioq_folio_size(folioq, slot);
-		size_t part = PAGE_SIZE - offset % PAGE_SIZE;
-
-		if (offset < fsize) {
-			part = umin(part, umin(maxsize - extracted, fsize -
offset));
-			count -= part;
-			iov_offset += part;
-			extracted += part;
-
-			*pages = folio_page(folio, offset / PAGE_SIZE);
-			get_page(*pages);
-			pages++;
-			maxpages--;
-		}
-
-		if (maxpages == 0 || extracted >= maxsize)
-			break;
-
-		if (iov_offset >= fsize) {
-			iov_offset = 0;
-			slot++;
-			if (slot == folioq_nr_slots(folioq) && folioq->next) {
-				folioq = folioq->next;
-				slot = 0;
-			}
-		}
-	}
-
-	iter->count = count;
-	iter->iov_offset = iov_offset;
-	iter->folioq = folioq;
-	iter->folioq_slot = slot;
-	return extracted;
-}
-
 static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa,
					   pgoff_t index, unsigned int nr_pages)
 {
@@ -1249,8 +1091,6 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		}
 		return maxsize;
 	}
-	if (iov_iter_is_folioq(i))
-		return iter_folioq_get_pages(i, pages, maxsize, maxpages, start);
 	if (iov_iter_is_xarray(i))
 		return iter_xarray_get_pages(i, pages, maxsize, maxpages, start);
 	WARN_ON_ONCE(iov_iter_is_bvecq(i));
@@ -1366,11 +1206,6 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
 		return iov_npages(i, maxpages);
 	if (iov_iter_is_bvec(i))
 		return bvec_npages(i, maxpages);
-	if (iov_iter_is_folioq(i)) {
-		unsigned offset = i->iov_offset % PAGE_SIZE;
-		int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
-		return min(npages, maxpages);
-	}
 	if (iov_iter_is_bvecq(i))
 		return iov_npages_bvecq(i, maxpages);
 	if (iov_iter_is_xarray(i)) {
@@ -1654,68 +1489,6 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	i->nr_segs = state->nr_segs;
 }
 
-/*
- * Extract a list of contiguous pages from an ITER_FOLIOQ iterator.  This does
- * not get references on the pages, nor does it get a pin on them.
- */
-static ssize_t iov_iter_extract_folioq_pages(struct iov_iter *i,
-					     struct page ***pages, size_t maxsize,
-					     unsigned int maxpages,
-					     iov_iter_extraction_t extraction_flags,
-					     size_t *offset0)
-{
-	const struct folio_queue *folioq = i->folioq;
-	struct page **p;
-	unsigned int nr = 0;
-	size_t extracted = 0, offset, slot = i->folioq_slot;
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-		if (WARN_ON(i->iov_offset != 0))
-			return -EIO;
-	}
-
-	offset = i->iov_offset & ~PAGE_MASK;
-	*offset0 = offset;
-
-	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
-	if (!maxpages)
-		return -ENOMEM;
-	p = *pages;
-
-	for (;;) {
-		struct folio *folio = folioq_folio(folioq, slot);
-		size_t offset = i->iov_offset, fsize = folioq_folio_size(folioq, slot);
-		size_t part = PAGE_SIZE - offset % PAGE_SIZE;
-
-		if (offset < fsize) {
-			part = umin(part, umin(maxsize - extracted, fsize - offset));
-			i->count -= part;
-			i->iov_offset += part;
-			extracted += part;
-
-			p[nr++] = folio_page(folio, offset / PAGE_SIZE);
-		}
-
-		if (nr >= maxpages || extracted >= maxsize)
-			break;
-
-		if (i->iov_offset >= fsize) {
-			i->iov_offset = 0;
-			slot++;
-			if (slot == folioq_nr_slots(folioq) && folioq->next) {
-				folioq = folioq->next;
-				slot = 0;
-			}
-		}
-	}
-
-	i->folioq = folioq;
-	i->folioq_slot = slot;
-	return extracted;
-}
-
 /*
  * Extract a list of virtually contiguous pages from an ITER_BVECQ iterator.
  * This does not get references on the pages, nor does it get a pin on them.
@@ -2078,8 +1851,8 @@ static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
  *     added to the pages, but refs will not be taken.
  *     iov_iter_extract_will_pin() will return true.
  *
- *  (*) If the iterator is ITER_KVEC, ITER_BVEC, ITER_FOLIOQ or ITER_XARRAY, the
- *      pages are merely listed; no extra refs or pins are obtained.
+ * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ *     merely listed; no extra refs or pins are obtained.
  *	iov_iter_extract_will_pin() will return 0.
  *
  * Note also:
@@ -2114,10 +1887,6 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i,
		return iov_iter_extract_bvec_pages(i, pages, maxsize,
						   maxpages, extraction_flags,
						   offset0);
-	if (iov_iter_is_folioq(i))
-		return iov_iter_extract_folioq_pages(i, pages, maxsize,
-						     maxpages, extraction_flags,
-						     offset0);
	if (iov_iter_is_bvecq(i))
		return iov_iter_extract_bvecq_pages(i, pages, maxsize,
						    maxpages, extraction_flags,
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 61ca42ac53f3..84a6e2983f2a 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 
 /**
  * sg_nents - return total count of entries in scatterlist
@@ -1267,67 +1266,6 @@ static ssize_t extract_kvec_to_sg(struct iov_iter *iter,
	return ret;
 }
 
-/*
- * Extract up to sg_max folios from an FOLIOQ-type iterator and add them to
- * the scatterlist.  The pages are not pinned.
- */
-static ssize_t extract_folioq_to_sg(struct iov_iter *iter,
-				    ssize_t maxsize,
-				    struct sg_table *sgtable,
-				    unsigned int sg_max,
-				    iov_iter_extraction_t extraction_flags)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
-	unsigned int slot = iter->folioq_slot;
-	ssize_t ret = 0;
-	size_t offset = iter->iov_offset;
-
-	BUG_ON(!folioq);
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		if (WARN_ON_ONCE(!folioq))
-			return 0;
-		slot = 0;
-	}
-
-	do {
-		struct folio *folio = folioq_folio(folioq, slot);
-		size_t fsize = folioq_folio_size(folioq, slot);
-
-		if (offset < fsize) {
-			size_t part = umin(maxsize - ret, fsize - offset);
-
-			sg_set_page(sg, folio_page(folio, 0), part, offset);
-			sgtable->nents++;
-			sg++;
-			sg_max--;
-			offset += part;
-			ret += part;
-		}
-
-		if (offset >= fsize) {
-			offset = 0;
-			slot++;
-			if (slot >= folioq_nr_slots(folioq)) {
-				if (!folioq->next) {
-					WARN_ON_ONCE(ret < iter->count);
-					break;
-				}
-				folioq = folioq->next;
-				slot = 0;
-			}
-		}
-	} while (sg_max > 0 && ret < maxsize);
-
-	iter->folioq = folioq;
-	iter->folioq_slot = slot;
-	iter->iov_offset = offset;
-	iter->count -= ret;
-	return ret;
-}
-
 /*
  * Extract up to sg_max folios from an BVECQ-type iterator and add them to
  * the scatterlist.  The pages are not pinned.
@@ -1452,7 +1390,7 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter,
  * addition of @sg_max elements.
  *
  * The pages referred to by UBUF- and IOVEC-type iterators are extracted and
- * pinned; BVEC-, KVEC-, FOLIOQ- and XARRAY-type are extracted but aren't
+ * pinned; BVEC-, KVEC-, BVECQ- and XARRAY-type are extracted but aren't
  * pinned; DISCARD-type is not supported.
  *
  * No end mark is placed on the scatterlist; that's left to the caller.
@@ -1485,9 +1423,6 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
	case ITER_KVEC:
		return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
					  extraction_flags);
-	case ITER_FOLIOQ:
-		return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max,
-					    extraction_flags);
	case ITER_BVECQ:
		return extract_bvecq_to_sg(iter, maxsize, sgtable, sg_max,
					   extraction_flags);
diff --git a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c
index 644a1b9eb2d3..7ab915f77732 100644
--- a/lib/tests/kunit_iov_iter.c
+++ b/lib/tests/kunit_iov_iter.c
@@ -12,7 +12,6 @@
 #include
 #include
 #include
-#include
 #include
 
 MODULE_DESCRIPTION("iov_iter testing");
@@ -363,179 +362,6 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
	KUNIT_SUCCEED(test);
 }
 
-static void iov_kunit_destroy_folioq(void *data)
-{
-	struct folio_queue *folioq, *next;
-
-	for (folioq = data; folioq; folioq = next) {
-		next = folioq->next;
-		for (int i = 0; i < folioq_nr_slots(folioq); i++)
-			if (folioq_folio(folioq, i))
-				folio_put(folioq_folio(folioq, i));
-		kfree(folioq);
-	}
-}
-
-static void __init iov_kunit_load_folioq(struct kunit *test,
-					 struct iov_iter *iter, int dir,
-					 struct folio_queue *folioq,
-					 struct page **pages, size_t npages)
-{
-	struct folio_queue *p = folioq;
-	size_t size = 0;
-	int i;
-
-	for (i = 0; i < npages; i++) {
-		if (folioq_full(p)) {
-			p->next = kzalloc_obj(struct folio_queue);
-			KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next);
-			folioq_init(p->next, 0);
-			p->next->prev = p;
-			p = p->next;
-		}
-		folioq_append(p, page_folio(pages[i]));
-		size += PAGE_SIZE;
-	}
-	iov_iter_folio_queue(iter, dir, folioq, 0, 0, size);
-}
-
-static struct folio_queue *iov_kunit_create_folioq(struct kunit *test)
-{
-	struct folio_queue *folioq;
-
-	folioq = kzalloc_obj(struct folio_queue);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folioq);
-	kunit_add_action_or_reset(test, iov_kunit_destroy_folioq, folioq);
-	folioq_init(folioq, 0);
-	return folioq;
-}
-
-/*
- * Test copying to a ITER_FOLIOQ-type iterator.
- */
-static void __init iov_kunit_copy_to_folioq(struct kunit *test)
-{
-	const struct kvec_test_range *pr;
-	struct iov_iter iter;
-	struct folio_queue *folioq;
-	struct page **spages, **bpages;
-	u8 *scratch, *buffer;
-	size_t bufsize, npages, size, copied;
-	int i, patt;
-
-	bufsize = 0x100000;
-	npages = bufsize / PAGE_SIZE;
-
-	folioq = iov_kunit_create_folioq(test);
-
-	scratch = iov_kunit_create_buffer(test, &spages, npages);
-	for (i = 0; i < bufsize; i++)
-		scratch[i] = pattern(i);
-
-	buffer = iov_kunit_create_buffer(test, &bpages, npages);
-	memset(buffer, 0, bufsize);
-
-	iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
-	i = 0;
-	for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
-		size = pr->to - pr->from;
-		KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
-		iov_iter_folio_queue(&iter, READ, folioq, 0, 0, pr->to);
-		iov_iter_advance(&iter, pr->from);
-		copied = copy_to_iter(scratch + i, size, &iter);
-
-		KUNIT_EXPECT_EQ(test, copied, size);
-		KUNIT_EXPECT_EQ(test, iter.count, 0);
-		KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE);
-		i += size;
-		if (test->status == KUNIT_FAILURE)
-			goto stop;
-	}
-
-	/* Build the expected image in the scratch buffer. */
-	patt = 0;
-	memset(scratch, 0, bufsize);
-	for (pr = kvec_test_ranges; pr->from >= 0; pr++)
-		for (i = pr->from; i < pr->to; i++)
-			scratch[i] = pattern(patt++);
-
-	/* Compare the images */
-	for (i = 0; i < bufsize; i++) {
-		KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
-		if (buffer[i] != scratch[i])
-			return;
-	}
-
-stop:
-	KUNIT_SUCCEED(test);
-}
-
-/*
- * Test copying from a ITER_FOLIOQ-type iterator.
- */
-static void __init iov_kunit_copy_from_folioq(struct kunit *test)
-{
-	const struct kvec_test_range *pr;
-	struct iov_iter iter;
-	struct folio_queue *folioq;
-	struct page **spages, **bpages;
-	u8 *scratch, *buffer;
-	size_t bufsize, npages, size, copied;
-	int i, j;
-
-	bufsize = 0x100000;
-	npages = bufsize / PAGE_SIZE;
-
-	folioq = iov_kunit_create_folioq(test);
-
-	buffer = iov_kunit_create_buffer(test, &bpages, npages);
-	for (i = 0; i < bufsize; i++)
-		buffer[i] = pattern(i);
-
-	scratch = iov_kunit_create_buffer(test, &spages, npages);
-	memset(scratch, 0, bufsize);
-
-	iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
-	i = 0;
-	for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
-		size = pr->to - pr->from;
-		KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
-		iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to);
-		iov_iter_advance(&iter, pr->from);
-		copied = copy_from_iter(scratch + i, size, &iter);
-
-		KUNIT_EXPECT_EQ(test, copied, size);
-		KUNIT_EXPECT_EQ(test, iter.count, 0);
-		KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE);
-		i += size;
-	}
-
-	/* Build the expected image in the main buffer. */
-	i = 0;
-	memset(buffer, 0, bufsize);
-	for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
-		for (j = pr->from; j < pr->to; j++) {
-			buffer[i++] = pattern(j);
-			if (i >= bufsize)
-				goto stop;
-		}
-	}
-stop:
-
-	/* Compare the images */
-	for (i = 0; i < bufsize; i++) {
-		KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
-		if (scratch[i] != buffer[i])
-			return;
-	}
-
-	KUNIT_SUCCEED(test);
-}
-
 static void iov_kunit_destroy_bvecq(void *data)
 {
	struct bvecq *bq, *next;
@@ -1028,85 +854,6 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
	KUNIT_SUCCEED(test);
 }
 
-/*
- * Test the extraction of ITER_FOLIOQ-type iterators.
- */
-static void __init iov_kunit_extract_pages_folioq(struct kunit *test)
-{
-	const struct kvec_test_range *pr;
-	struct folio_queue *folioq;
-	struct iov_iter iter;
-	struct page **bpages, *pagelist[8], **pages = pagelist;
-	ssize_t len;
-	size_t bufsize, size = 0, npages;
-	int i, from;
-
-	bufsize = 0x100000;
-	npages = bufsize / PAGE_SIZE;
-
-	folioq = iov_kunit_create_folioq(test);
-
-	iov_kunit_create_buffer(test, &bpages, npages);
-	iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
-	for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
-		from = pr->from;
-		size = pr->to - from;
-		KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
-		iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to);
-		iov_iter_advance(&iter, from);
-
-		do {
-			size_t offset0 = LONG_MAX;
-
-			for (i = 0; i < ARRAY_SIZE(pagelist); i++)
-				pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL;
-
-			len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
-						     ARRAY_SIZE(pagelist), 0, &offset0);
-			KUNIT_EXPECT_GE(test, len, 0);
-			if (len < 0)
-				break;
-			KUNIT_EXPECT_LE(test, len, size);
-			KUNIT_EXPECT_EQ(test, iter.count, size - len);
-			if (len == 0)
-				break;
-			size -= len;
-			KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0);
-			KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE);
-
-			for (i = 0; i < ARRAY_SIZE(pagelist); i++) {
-				struct page *p;
-				ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0);
-				int ix;
-
-				KUNIT_ASSERT_GE(test, part, 0);
-				ix = from / PAGE_SIZE;
-				KUNIT_ASSERT_LT(test, ix, npages);
-				p = bpages[ix];
-				KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p);
-				KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE);
-				from += part;
-				len -= part;
-				KUNIT_ASSERT_GE(test, len, 0);
-				if (len == 0)
-					break;
-				offset0 = 0;
-			}
-
-			if (test->status == KUNIT_FAILURE)
-				goto stop;
-		} while (iov_iter_count(&iter) > 0);
-
-		KUNIT_EXPECT_EQ(test, size, 0);
-		KUNIT_EXPECT_EQ(test, iter.count, 0);
-	}
-
-stop:
-	KUNIT_SUCCEED(test);
-}
-
 /*
  * Test the
  * extraction of ITER_XARRAY-type iterators.
  */
@@ -1191,15 +938,12 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
	KUNIT_CASE(iov_kunit_copy_from_kvec),
	KUNIT_CASE(iov_kunit_copy_to_bvec),
	KUNIT_CASE(iov_kunit_copy_from_bvec),
-	KUNIT_CASE(iov_kunit_copy_to_folioq),
-	KUNIT_CASE(iov_kunit_copy_from_folioq),
	KUNIT_CASE(iov_kunit_copy_to_bvecq),
	KUNIT_CASE(iov_kunit_copy_from_bvecq),
	KUNIT_CASE(iov_kunit_copy_to_xarray),
	KUNIT_CASE(iov_kunit_copy_from_xarray),
	KUNIT_CASE(iov_kunit_extract_pages_kvec),
	KUNIT_CASE(iov_kunit_extract_pages_bvec),
-	KUNIT_CASE(iov_kunit_extract_pages_folioq),
	KUNIT_CASE(iov_kunit_extract_pages_xarray),
	{}
};

From nobody Mon Apr 13 21:40:01 2026
From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Paulo Alcantara, Steve French
Subject: [RFC PATCH 15/17] netfs: Remove folio_queue and rolling_buffer
Date: Wed, 4 Mar 2026 14:03:22 +0000
Message-ID: <20260304140328.112636-16-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Remove folio_queue and rolling_buffer as they're no longer used.

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 Documentation/core-api/folio_queue.rst | 209 ------------------
 Documentation/core-api/index.rst       |   1 -
 fs/netfs/iterator.c                    | 149 -------------
 fs/netfs/rolling_buffer.c              | 222 -------------------
 include/linux/folio_queue.h            | 282 -------------------------
 include/linux/rolling_buffer.h         |  61 ------
 6 files changed, 924 deletions(-)
 delete mode 100644 Documentation/core-api/folio_queue.rst
 delete mode 100644 fs/netfs/rolling_buffer.c
 delete mode 100644 include/linux/folio_queue.h
 delete mode 100644 include/linux/rolling_buffer.h

diff --git a/Documentation/core-api/folio_queue.rst b/Documentation/core-api/folio_queue.rst
deleted file mode 100644
index b7628896d2b6..000000000000
--- a/Documentation/core-api/folio_queue.rst
+++ /dev/null
@@ -1,209 +0,0 @@
-..
SPDX-License-Identifier: GPL-2.0+
-
-===========
-Folio Queue
-===========
-
-:Author: David Howells
-
-.. Contents:
-
- * Overview
- * Initialisation
- * Adding and removing folios
- * Querying information about a folio
- * Querying information about a folio_queue
- * Folio queue iteration
- * Folio marks
- * Lockless simultaneous production/consumption issues
-
-
-Overview
-========
-
-The folio_queue struct forms a single segment in a segmented list of folios
-that can be used to form an I/O buffer.  As such, the list can be iterated over
-using the ITER_FOLIOQ iov_iter type.
-
-The publicly accessible members of the structure are::
-
-	struct folio_queue {
-		struct folio_queue *next;
-		struct folio_queue *prev;
-		...
-	};
-
-A pair of pointers are provided, ``next`` and ``prev``, that point to the
-segments on either side of the segment being accessed.  Whilst this is a
-doubly-linked list, it is intentionally not a circular list; the outward
-sibling pointers in terminal segments should be NULL.
-
-Each segment in the list also stores:
-
- * an ordered sequence of folio pointers,
- * the size of each folio and
- * three 1-bit marks per folio,
-
-but these should not be accessed directly as the underlying data structure may
-change, but rather the access functions outlined below should be used.
-
-The facility can be made accessible by::
-
-	#include
-
-and to use the iterator::
-
-	#include
-
-
-Initialisation
-==============
-
-A segment should be initialised by calling::
-
-	void folioq_init(struct folio_queue *folioq);
-
-with a pointer to the segment to be initialised.  Note that this will not
-necessarily initialise all the folio pointers, so care must be taken to check
-the number of folios added.
-
-
-Adding and removing folios
-==========================
-
-Folios can be set in the next unused slot in a segment struct by calling one
-of::
-
-	unsigned int folioq_append(struct folio_queue *folioq,
-				   struct folio *folio);
-
-	unsigned int folioq_append_mark(struct folio_queue *folioq,
-					struct folio *folio);
-
-Both functions update the stored folio count, store the folio and note its
-size.  The second function also sets the first mark for the folio added.  Both
-functions return the number of the slot used.  [!] Note that no attempt is made
-to check that the capacity wasn't overrun and the list will not be extended
-automatically.
-
-A folio can be excised by calling::
-
-	void folioq_clear(struct folio_queue *folioq, unsigned int slot);
-
-This clears the slot in the array and also clears all the marks for that folio,
-but doesn't change the folio count - so future accesses of that slot must check
-if the slot is occupied.
-
-
-Querying information about a folio
-==================================
-
-Information about the folio in a particular slot may be queried by the
-following function::
-
-	struct folio *folioq_folio(const struct folio_queue *folioq,
-				   unsigned int slot);
-
-If a folio has not yet been set in that slot, this may yield an undefined
-pointer.  The size of the folio in a slot may be queried with either of::
-
-	unsigned int folioq_folio_order(const struct folio_queue *folioq,
-					unsigned int slot);
-
-	size_t folioq_folio_size(const struct folio_queue *folioq,
-				 unsigned int slot);
-
-The first function returns the size as an order and the second as a number of
-bytes.
-
-
-Querying information about a folio_queue
-========================================
-
-Information may be retrieved about a particular segment with the following
-functions::
-
-	unsigned int folioq_nr_slots(const struct folio_queue *folioq);
-
-	unsigned int folioq_count(struct folio_queue *folioq);
-
-	bool folioq_full(struct folio_queue *folioq);
-
-The first function returns the maximum capacity of a segment.  It must not be
-assumed that this won't vary between segments.  The second returns the number
-of folios added to a segment and the third is a shorthand to indicate if the
-segment has been filled to capacity.
-
-Note that the count and fullness are not affected by clearing folios from the
-segment.  These are more about indicating how many slots in the array have
-been initialised, and it is assumed that slots won't get reused, but rather
-the segment will get discarded as the queue is consumed.
-
-
-Folio marks
-===========
-
-Folios within a queue can also have marks assigned to them.  These marks can be
-used to note information such as whether a folio needs folio_put() calling upon
-it.  There are three marks available to be set for each folio.
-
-The marks can be set by::
-
-	void folioq_mark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_mark2(struct folio_queue *folioq, unsigned int slot);
-
-cleared by::
-
-	void folioq_unmark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_unmark2(struct folio_queue *folioq, unsigned int slot);
-
-and queried by::
-
-	bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot);
-	bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot);
-
-The marks can be used for any purpose and are not interpreted by this API.
-
-
-Folio queue iteration
-=====================
-
-A list of segments may be iterated over using the I/O iterator facility using
-an ``iov_iter`` iterator of ``ITER_FOLIOQ`` type.  The iterator may be
-initialised with::
-
-	void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
-				  const struct folio_queue *folioq,
-				  unsigned int first_slot, unsigned int offset,
-				  size_t count);
-
-This may be told to start at a particular segment, slot and offset within a
-queue.  The iov iterator functions will follow the next pointers when advancing
-and prev pointers when reverting when needed.
-
-
-Lockless simultaneous production/consumption issues
-===================================================
-
-If properly managed, the list can be extended by the producer at the head end
-and shortened by the consumer at the tail end simultaneously without the need
-to take locks.  The ITER_FOLIOQ iterator inserts appropriate barriers to aid
-with this.
-
-Care must be taken when simultaneously producing and consuming a list.  If the
-last segment is reached and the folios it refers to are entirely consumed by
-the IOV iterators, an iov_iter struct will be left pointing to the last segment
-with a slot number equal to the capacity of that segment.  The iterator will
-try to continue on from this if there's another segment available when it is
-used again, but care must be taken lest the segment got removed and freed by
-the consumer before the iterator was advanced.
-
-It is recommended that the queue always contain at least one segment, even if
-that segment has never been filled or is entirely spent.  This prevents the
-head and tail pointers from collapsing.
-
-
-API Function Reference
-======================
-
-..
kernel-doc:: include/linux/folio_queue.h
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 13769d5c40bf..16c529a33ac4 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -39,7 +39,6 @@ Library functionality that is used throughout the kernel.
    kref
    cleanup
    assoc_array
-   folio_queue
    xarray
    maple_tree
    idr
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 5ae9279a2dfb..eda6e2ca02e7 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -134,152 +134,3 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
	return extracted ?: ret;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_iter);
-
-#if 0
-/*
- * Select the span of a bvec iterator we're going to use.  Limit it by both
- * maximum size and maximum number of segments.  Returns the size of the span
- * in bytes.
- */
-static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
-			       size_t max_size, size_t max_segs)
-{
-	const struct bio_vec *bvecs = iter->bvec;
-	unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
-	size_t len, span = 0, n = iter->count;
-	size_t skip = iter->iov_offset + start_offset;
-
-	if (WARN_ON(!iov_iter_is_bvec(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-
-	while (n && ix < nbv && skip) {
-		len = bvecs[ix].bv_len;
-		if (skip < len)
-			break;
-		skip -= len;
-		n -= len;
-		ix++;
-	}
-
-	while (n && ix < nbv) {
-		len = min3(n, bvecs[ix].bv_len - skip, max_size);
-		span += len;
-		nsegs++;
-		ix++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-		skip = 0;
-		n -= len;
-	}
-
-	return min(span, max_size);
-}
-
-/*
- * Select the span of an xarray iterator we're going to use.  Limit it by both
- * maximum size and maximum number of segments.  It is assumed that segments
- * can be larger than a page in size, provided they're physically contiguous.
- * Returns the size of the span in bytes.
- */
-static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	struct folio *folio;
-	unsigned int nsegs = 0;
-	loff_t pos = iter->xarray_start + iter->iov_offset;
-	pgoff_t index = pos / PAGE_SIZE;
-	size_t span = 0, n = iter->count;
-
-	XA_STATE(xas, iter->xarray, index);
-
-	if (WARN_ON(!iov_iter_is_xarray(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = min(max_size, n - start_offset);
-
-	rcu_read_lock();
-	xas_for_each(&xas, folio, ULONG_MAX) {
-		size_t offset, flen, len;
-		if (xas_retry(&xas, folio))
-			continue;
-		if (WARN_ON(xa_is_value(folio)))
-			break;
-		if (WARN_ON(folio_test_hugetlb(folio)))
-			break;
-
-		flen = folio_size(folio);
-		offset = offset_in_folio(folio, pos);
-		len = min(max_size, flen - offset);
-		span += len;
-		nsegs++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-	}
-
-	rcu_read_unlock();
-	return min(span, max_size);
-}
-
-/*
- * Select the span of a folio queue iterator we're going to use.  Limit it by
- * both maximum size and maximum number of segments.  Returns the size of the
- * span in bytes.
- */
-static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	unsigned int nsegs = 0;
-	unsigned int slot = iter->folioq_slot;
-	size_t span = 0, n = iter->count;
-
-	if (WARN_ON(!iov_iter_is_folioq(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = umin(max_size, n - start_offset);
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-	}
-
-	start_offset += iter->iov_offset;
-	do {
-		size_t flen = folioq_folio_size(folioq, slot);
-
-		if (start_offset < flen) {
-			span += flen - start_offset;
-			nsegs++;
-			start_offset = 0;
-		} else {
-			start_offset -= flen;
-		}
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-
-		slot++;
-		if (slot >= folioq_nr_slots(folioq)) {
-			folioq = folioq->next;
-			slot = 0;
-		}
-	} while (folioq);
-
-	return umin(span, max_size);
-}
-
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
-			size_t max_size, size_t max_segs)
-{
-	if (iov_iter_is_folioq(iter))
-		return netfs_limit_folioq(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_bvec(iter))
-		return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_xarray(iter))
-		return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
-	BUG();
-}
-EXPORT_SYMBOL(netfs_limit_iter);
-#endif
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
deleted file mode 100644
index a17fbf9853a4..000000000000
--- a/fs/netfs/rolling_buffer.c
+++ /dev/null
@@ -1,222 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/* Rolling buffer helpers
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#include
-#include
-#include
-#include
-#include "internal.h"
-
-static atomic_t debug_ids;
-
-/**
- * netfs_folioq_alloc - Allocate a folio_queue struct
- * @rreq_id: Associated debugging ID for tracing purposes
- * @gfp: Allocation constraints
- * @trace: Trace tag to indicate the purpose of the allocation
- *
- * Allocate, initialise and account the folio_queue struct and log a trace line
- * to mark the allocation.
- */
-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
-				       unsigned int /*enum netfs_folioq_trace*/ trace)
-{
-	struct folio_queue *fq;
-
-	fq = kmalloc_obj(*fq, gfp);
-	if (fq) {
-		netfs_stat(&netfs_n_folioq);
-		folioq_init(fq, rreq_id);
-		fq->debug_id = atomic_inc_return(&debug_ids);
-		trace_netfs_folioq(fq, trace);
-	}
-	return fq;
-}
-EXPORT_SYMBOL(netfs_folioq_alloc);
-
-/**
- * netfs_folioq_free - Free a folio_queue struct
- * @folioq: The object to free
- * @trace: Trace tag to indicate which free
- *
- * Free and unaccount the folio_queue struct.
- */
-void netfs_folioq_free(struct folio_queue *folioq,
-		       unsigned int /*enum netfs_trace_folioq*/ trace)
-{
-	trace_netfs_folioq(folioq, trace);
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(folioq);
-}
-EXPORT_SYMBOL(netfs_folioq_free);
-
-/*
- * Initialise a rolling buffer.  We allocate an empty folio queue struct so
- * that the pointers can be independently driven by the producer and the
- * consumer.
- */
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction)
-{
-	struct folio_queue *fq;
-
-	fq = netfs_folioq_alloc(rreq_id, GFP_NOFS, netfs_trace_folioq_rollbuf_init);
-	if (!fq)
-		return -ENOMEM;
-
-	roll->head = fq;
-	roll->tail = fq;
-	iov_iter_folio_queue(&roll->iter, direction, fq, 0, 0, 0);
-	return 0;
-}
-
-/*
- * Add another folio_queue to a rolling buffer if there's no space left.
- */
-int rolling_buffer_make_space(struct rolling_buffer *roll)
-{
-	struct folio_queue *fq, *head = roll->head;
-
-	if (!folioq_full(head))
-		return 0;
-
-	fq = netfs_folioq_alloc(head->rreq_id, GFP_NOFS, netfs_trace_folioq_make_space);
-	if (!fq)
-		return -ENOMEM;
-	fq->prev = head;
-
-	roll->head = fq;
-	if (folioq_full(head)) {
-		/* Make sure we don't leave the master iterator pointing to a
-		 * block that might get immediately consumed.
-		 */
-		if (roll->iter.folioq == head &&
-		    roll->iter.folioq_slot == folioq_nr_slots(head)) {
-			roll->iter.folioq = fq;
-			roll->iter.folioq_slot = 0;
-		}
-	}
-
-	/* Make sure the initialisation is stored before the next pointer.
-	 *
-	 * [!] NOTE: After we set head->next, the consumer is at liberty to
-	 *     immediately delete the old head.
-	 */
-	smp_store_release(&head->next, fq);
-	return 0;
-}
-
-/*
- * Decant the list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch)
-{
-	struct folio_queue *fq;
-	struct page **vec;
-	int nr, ix, to;
-	ssize_t size = 0;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	fq = roll->head;
-	vec = (struct page **)fq->vec.folios;
-	nr = __readahead_batch(ractl, vec + folio_batch_count(&fq->vec),
-			       folio_batch_space(&fq->vec));
-	ix = fq->vec.nr;
-	to = ix + nr;
-	fq->vec.nr = to;
-	for (; ix < to; ix++) {
-		struct folio *folio = folioq_folio(fq, ix);
-		unsigned int order = folio_order(folio);
-
-		fq->orders[ix] = order;
-		size += PAGE_SIZE << order;
-		trace_netfs_folio(folio, netfs_folio_trace_read);
-		if (!folio_batch_add(put_batch, folio))
-			folio_batch_release(put_batch);
-	}
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, to);
-	return size;
-}
-
-/*
- * Append a folio to the rolling buffer.
- */
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags)
-{
-	ssize_t size = folio_size(folio);
-	int slot;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	slot = folioq_append(roll->head, folio);
-	if (flags & ROLLBUF_MARK_1)
-		folioq_mark(roll->head, slot);
-	if (flags & ROLLBUF_MARK_2)
-		folioq_mark2(roll->head, slot);
-
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, slot);
-	return size;
-}
-
-/*
- * Delete a spent buffer from a rolling queue and return the next in line.  We
- * don't return the last buffer to keep the pointers independent, but return
- * NULL instead.
- */
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll)
-{
-	struct folio_queue *spent = roll->tail, *next = READ_ONCE(spent->next);
-
-	if (!next)
-		return NULL;
-	next->prev = NULL;
-	netfs_folioq_free(spent, netfs_trace_folioq_delete);
-	roll->tail = next;
-	return next;
-}
-
-/*
- * Clear out a rolling queue.  Folios that have mark 1 set are put.
- */
-void rolling_buffer_clear(struct rolling_buffer *roll)
-{
-	struct folio_batch fbatch;
-	struct folio_queue *p;
-
-	folio_batch_init(&fbatch);
-
-	while ((p = roll->tail)) {
-		roll->tail = p->next;
-		for (int slot = 0; slot < folioq_count(p); slot++) {
-			struct folio *folio = folioq_folio(p, slot);
-
-			if (!folio)
-				continue;
-			if (folioq_is_marked(p, slot)) {
-				trace_netfs_folio(folio, netfs_folio_trace_put);
-				if (!folio_batch_add(&fbatch, folio))
-					folio_batch_release(&fbatch);
-			}
-		}
-
-		netfs_folioq_free(p, netfs_trace_folioq_clear);
-	}
-
-	folio_batch_release(&fbatch);
-}
diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h
deleted file mode 100644
index adab609c972e..000000000000
--- a/include/linux/folio_queue.h
+++ /dev/null
@@ -1,282 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Queue of folios definitions
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- *
- * See:
- *
- *	Documentation/core-api/folio_queue.rst
- *
- * for a description of the API.
- */
-
-#ifndef _LINUX_FOLIO_QUEUE_H
-#define _LINUX_FOLIO_QUEUE_H
-
-#include 
-#include 
-
-/*
- * Segment in a queue of running buffers.  Each segment can hold a number of
- * folios and a portion of the queue can be referenced with the ITER_FOLIOQ
- * iterator.  The possibility exists of inserting non-folio elements into the
- * queue (such as gaps).
- *
- * Explicit prev and next pointers are used instead of a list_head to make it
- * easier to add segments to tail and remove them from the head without the
- * need for a lock.
- */
-struct folio_queue {
-	struct folio_batch	vec;		/* Folios in the queue segment */
-	u8			orders[PAGEVEC_SIZE]; /* Order of each folio */
-	struct folio_queue	*next;		/* Next queue segment or NULL */
-	struct folio_queue	*prev;		/* Previous queue segment of NULL */
-	unsigned long		marks;		/* 1-bit mark per folio */
-	unsigned long		marks2;		/* Second 1-bit mark per folio */
-#if PAGEVEC_SIZE > BITS_PER_LONG
-#error marks is not big enough
-#endif
-	unsigned int		rreq_id;
-	unsigned int		debug_id;
-};
-
-/**
- * folioq_init - Initialise a folio queue segment
- * @folioq: The segment to initialise
- * @rreq_id: The request identifier to use in tracelines.
- *
- * Initialise a folio queue segment and set an identifier to be used in traces.
- *
- * Note that the folio pointers are left uninitialised.
- */
-static inline void folioq_init(struct folio_queue *folioq, unsigned int rreq_id)
-{
-	folio_batch_init(&folioq->vec);
-	folioq->next = NULL;
-	folioq->prev = NULL;
-	folioq->marks = 0;
-	folioq->marks2 = 0;
-	folioq->rreq_id = rreq_id;
-	folioq->debug_id = 0;
-}
-
-/**
- * folioq_nr_slots: Query the capacity of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that a particular folio queue segment might hold.
- * [!] NOTE: This must not be assumed to be the same for every segment!
- */
-static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq)
-{
-	return PAGEVEC_SIZE;
-}
-
-/**
- * folioq_count: Query the occupancy of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that have been added to a folio queue segment.
- * Note that this is not decreased as folios are removed from a segment.
- */
-static inline unsigned int folioq_count(struct folio_queue *folioq)
-{
-	return folio_batch_count(&folioq->vec);
-}
-
-/**
- * folioq_full: Query if a folio queue segment is full
- * @folioq: The segment to query
- *
- * Query if a folio queue segment is fully occupied.  Note that this does not
- * change if folios are removed from a segment.
- */
-static inline bool folioq_full(struct folio_queue *folioq)
-{
-	//return !folio_batch_space(&folioq->vec);
-	return folioq_count(folioq) >= folioq_nr_slots(folioq);
-}
-
-/**
- * folioq_is_marked: Check first folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the first mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_mark: Set the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_unmark: Clear the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_is_marked2: Check second folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the second mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_mark2: Set the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_unmark2: Clear the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_append: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue and the marks are left
- * unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	return slot;
-}
-
-/**
- * folioq_append_mark: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue, the first mark is set
- * and and the second and third marks are left unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	folioq_mark(folioq, slot);
-	return slot;
-}
-
-/**
- * folioq_folio: Get a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the folio in the specified slot from a folio queue segment.  Note
- * that no bounds check is made and if the slot hasn't been added into yet, the
- * pointer will be undefined.  If the slot has been cleared, NULL will be
- * returned.
- */
-static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->vec.folios[slot];
-}
-
-/**
- * folioq_folio_order: Get the order of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the order of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the order returned will be 0.
- */
-static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->orders[slot];
-}
-
-/**
- * folioq_folio_size: Get the size of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the size of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the size returned will be PAGE_SIZE.
- */
-static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot)
-{
-	return PAGE_SIZE << folioq_folio_order(folioq, slot);
-}
-
-/**
- * folioq_clear: Clear a folio from a folio queue segment
- * @folioq: The segment to clear
- * @slot: The folio slot to clear
- *
- * Clear a folio from a sequence in a folio queue segment and clear its marks.
- * The occupancy count is left unchanged.
- */
-static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot)
-{
-	folioq->vec.folios[slot] = NULL;
-	folioq_unmark(folioq, slot);
-	folioq_unmark2(folioq, slot);
-}
-
-#endif /* _LINUX_FOLIO_QUEUE_H */
diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h
deleted file mode 100644
index ac15b1ffdd83..000000000000
--- a/include/linux/rolling_buffer.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Rolling buffer of folios
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#ifndef _ROLLING_BUFFER_H
-#define _ROLLING_BUFFER_H
-
-#include 
-#include 
-
-/*
- * Rolling buffer.  Whilst the buffer is live and in use, folios and folio
- * queue segments can be added to one end by one thread and removed from the
- * other end by another thread.  The buffer isn't allowed to be empty; it must
- * always have at least one folio_queue in it so that neither side has to
- * modify both queue pointers.
- *
- * The iterator in the buffer is extended as buffers are inserted.  It can be
- * snapshotted to use a segment of the buffer.
- */
-struct rolling_buffer {
-	struct folio_queue	*head;		/* Producer's insertion point */
-	struct folio_queue	*tail;		/* Consumer's removal point */
-	struct iov_iter		iter;		/* Iterator tracking what's left in the buffer */
-	u8			next_head_slot;	/* Next slot in ->head */
-	u8			first_tail_slot; /* First slot in ->tail */
-};
-
-/*
- * Snapshot of a rolling buffer.
- */
-struct rolling_buffer_snapshot {
-	struct folio_queue	*curr_folioq;	/* Queue segment in which current folio resides */
-	unsigned char		curr_slot;	/* Folio currently being read */
-	unsigned char		curr_order;	/* Order of folio */
-};
-
-/* Marks to store per-folio in the internal folio_queue structs. */
-#define ROLLBUF_MARK_1	BIT(0)
-#define ROLLBUF_MARK_2	BIT(1)
-
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction);
-int rolling_buffer_make_space(struct rolling_buffer *roll);
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch);
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags);
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll);
-void rolling_buffer_clear(struct rolling_buffer *roll);
-
-static inline void rolling_buffer_advance(struct rolling_buffer *roll, size_t amount)
-{
-	iov_iter_advance(&roll->iter, amount);
-}
-
-#endif /* _ROLLING_BUFFER_H */

From nobody Mon Apr 13 21:40:01 2026
From: David Howells 
To: Matthew Wilcox , Christoph Hellwig , Jens Axboe , Leon Romanovsky 
Cc: David Howells , Christian Brauner , Paulo Alcantara ,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paulo Alcantara 
Subject: [RFC PATCH 16/17] netfs: Check for too much data being read
Date: Wed, 4 Mar 2026 14:03:23 +0000
Message-ID: <20260304140328.112636-17-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Put in a check in read subreq termination to detect more data being read
for a subrequest than was requested.
Signed-off-by: David Howells 
cc: Paulo Alcantara 
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/read_collect.c      | 8 ++++++++
 include/trace/events/netfs.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 3b5978832369..20c80df8914f 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -545,6 +545,14 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
 		break;
 	}
 
+	if (subreq->transferred > subreq->len) {
+		subreq->transferred = 0;
+		__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_too_much);
+		subreq->error = -EIO;
+	}
+
 	/* Deal with retry requests, short reads and errors.  If we retry
 	 * but don't make progress, we abandon the attempt.
 	 */
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 861dc7849067..899b85d7ef92 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -124,6 +124,7 @@
 	EM(netfs_sreq_trace_submit,		"SUBMT")	\
 	EM(netfs_sreq_trace_superfluous,	"SPRFL")	\
 	EM(netfs_sreq_trace_terminated,		"TERM ")	\
+	EM(netfs_sreq_trace_too_much,		"!TOOM")	\
 	EM(netfs_sreq_trace_wait_for,		"_WAIT")	\
 	EM(netfs_sreq_trace_write,		"WRITE")	\
 	EM(netfs_sreq_trace_write_skip,		"SKIP ")	\

From nobody Mon Apr 13 21:40:01 2026
From: David Howells 
To: Matthew Wilcox , Christoph Hellwig , Jens Axboe , Leon Romanovsky 
Cc: David Howells , Christian Brauner , Paulo Alcantara ,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paulo Alcantara 
Subject: [RFC PATCH 17/17] netfs: Combine prepare and issue ops and grab the buffers on request
Date: Wed, 4 Mar 2026 14:03:24 +0000
Message-ID: <20260304140328.112636-18-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References: <20260304140328.112636-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

To try and simplify how subrequests are generated in netfslib, with the
move to bvecq for buffer handling, change netfslib in the following ways:

 (1) ->prepare_xxx(), buffer selection and ->issue_xxx() are now
     collapsed together such that one
     ->issue_xxx() call is made with the subrequest defined to the maximum
     extent; the filesystem then reduces the length of the subrequest and
     calls back to netfslib to grab a slice of the buffer, which may reduce
     the subrequest further if a maximum segment limit is set.  The
     filesystem can then dispatch the operation.

 (2) To allow buffer slicing to be done upon request by the filesystem, a
     dispatch context is now maintained by netfslib and this is passed to
     ->issue_xxx() which then calls netfs_prepare_xxx_buffer().  This also
     permits the context for retry to be kept separate from that of initial
     dispatch.

 (3) The use of iov_iter is pushed down to the filesystem.  Netfslib now
     provides the filesystem with a bvecq holding the buffer rather than an
     iov_iter.  The bvecq can be duplicated and headers/trailers attached
     to hold protocol and several bvecqs can be linked together to create a
     compound operation.

 (4) The ->issue_xxx() functions now return an error code that allows them
     to return an error without having to terminate the subrequest.
     Netfslib will handle the error immediately if it can but may request
     termination and punt responsibility to the result collector.
     ->issue_xxx() can return 0 if synchronously complete and -EIOCBQUEUED
     if the operation will complete (or already has completed)
     asynchronously.

 (5) During writeback, the code now builds up an accumulation of buffered
     data before issuing writes on each stream (one server, one cache).  It
     asks each stream for an estimate of how much data to accumulate before
     it starts generating subrequests on the stream.  It is not required to
     use up all the data accumulated on a stream at that time unless we hit
     the end of the pagecache.

 (6) During read-gaps, in which there are two gaps on either end of a dirty
     streaming write page that need to be filled, a buffer is constructed
     consisting of the two ends plus a sink page repeated to cover the
     middle portion.  This is passed to the server as a single write.
     For something like Ceph, this should probably be done either as a
     vectored/sparse read or as two separate reads (if different Ceph
     objects are involved).

 (7) During unbuffered/DIO read/write, there is a single contiguous file
     region to be written or read as a single stream.  The dispatching
     function just creates subrequests and calls ->issue_xxx() repeatedly
     to eat through the bufferage.

 (8) During buffered read, there is a single contiguous file region, to
     read as a single stream - however, this stream may be stitched
     together from subrequests to multiple sources.  Which sources are used
     where is now determined by querying the cache to find the next couple
     of extents in which it has data; netfslib uses this to direct the
     subrequests towards the appropriate sources.  Each subrequest is given
     the maximum length in the current extent and then ->issue_read() is
     called.  The filesystem then limits the size and slices off a piece of
     the buffer for that extent.

 (9) The cache now uses fiemap internally to find out the occupied regions
     of a cachefile rather than SEEK_DATA/SEEK_HOLE.  In future, it should
     keep track of the regions itself - including regions of zeros.
Signed-off-by: David Howells 
cc: Paulo Alcantara 
cc: Matthew Wilcox 
cc: Christoph Hellwig 
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/9p/vfs_addr.c             |  34 +-
 fs/afs/dir.c                 |   8 +-
 fs/afs/file.c                |  27 +-
 fs/afs/fsclient.c            |   8 +-
 fs/afs/internal.h            |  10 +-
 fs/afs/write.c               |  35 +-
 fs/afs/yfsclient.c           |   6 +-
 fs/cachefiles/io.c           | 339 ++++++++++------
 fs/ceph/addr.c               | 109 +++---
 fs/netfs/Makefile            |   2 +-
 fs/netfs/buffered_read.c     | 392 ++++++++++++-------
 fs/netfs/buffered_write.c    |   2 +-
 fs/netfs/direct_read.c       |  90 ++---
 fs/netfs/direct_write.c      | 153 +++++---
 fs/netfs/fscache_io.c        |   6 -
 fs/netfs/internal.h          |  71 +++-
 fs/netfs/iterator.c          |  13 +-
 fs/netfs/misc.c              |  31 ++
 fs/netfs/objects.c           |   3 -
 fs/netfs/read_collect.c      |  33 +-
 fs/netfs/read_retry.c        | 215 +++++-----
 fs/netfs/read_single.c       | 181 +++++----
 fs/netfs/write_collect.c     |  39 +-
 fs/netfs/write_issue.c       | 735 +++++++++++++++++++++--------------
 fs/netfs/write_retry.c       | 145 +++----
 fs/nfs/fscache.c             |  13 +-
 fs/smb/client/cifssmb.c      |  13 +-
 fs/smb/client/file.c         | 149 +++----
 fs/smb/client/smb2ops.c      |   9 +-
 fs/smb/client/smb2pdu.c      |  28 +-
 fs/smb/client/transport.c    |  15 +-
 include/linux/fscache.h      |  19 +
 include/linux/netfs.h        | 122 +++---
 include/trace/events/netfs.h |  43 +-
 net/9p/client.c              |   8 +-
 35 files changed, 1885 insertions(+), 1221 deletions(-)

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 862164181bac..66501514bc81 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -51,29 +51,54 @@ static void v9fs_begin_writeback(struct netfs_io_request *wreq)
 
 /*
  * Issue a subrequest to write to the server.
  */
-static void v9fs_issue_write(struct netfs_io_subrequest *subreq)
+static int v9fs_issue_write(struct netfs_io_subrequest *subreq,
+			    struct netfs_write_context *wctx)
 {
+	struct iov_iter iter;
 	struct p9_fid *fid = subreq->rreq->netfs_priv;
 	int err, len;
 
-	len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
+	subreq->len = umin(subreq->len, fid->clnt->msize - P9_IOHDRSZ);
+
+	err = netfs_prepare_write_buffer(subreq, wctx, INT_MAX);
+	if (err < 0)
+		return err;
+
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
+	len = p9_client_write(fid, subreq->start, &iter, &err);
 	if (len > 0)
 		__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
 	netfs_write_subrequest_terminated(subreq, len ?: err);
+	return err;
 }
 
 /**
  * v9fs_issue_read - Issue a read from 9P
  * @subreq: The read to make
+ * @rctx: Read generation context
  */
-static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
+static int v9fs_issue_read(struct netfs_io_subrequest *subreq,
+			   struct netfs_read_context *rctx)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
+	struct iov_iter iter;
 	struct p9_fid *fid = rreq->netfs_priv;
 	unsigned long long pos = subreq->start + subreq->transferred;
 	int total, err;
 
-	total = p9_client_read(fid, pos, &subreq->io_iter, &err);
+	err = netfs_prepare_read_buffer(subreq, rctx, INT_MAX);
+	if (err < 0)
+		return err;
+
+	iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
+
+	total = p9_client_read(fid, pos, &iter, &err);
 
 	/* if we just extended the file size, any portion not in
 	 * cache won't be on server and is zeroes */
@@ -89,6 +114,7 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
 
 	subreq->error = err;
 	netfs_read_subreq_terminated(subreq);
+	return -EIOCBQUEUED;
 }
 
 /**
diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 1d1be7e5923f..f8dbba5237f5 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -255,7 +255,8 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
 	if (dvnode->directory_size < i_size) {
 		size_t cur_size = dvnode->directory_size;
 
-		ret = netfs_expand_bvecq_buffer(&dvnode->directory, &cur_size, i_size,
+		ret = netfs_expand_bvecq_buffer(&dvnode->directory, &cur_size,
+						round_up(i_size, PAGE_SIZE),
 						mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
 		dvnode->directory_size = cur_size;
 		if (ret < 0)
@@ -2210,9 +2211,10 @@ int afs_single_writepages(struct address_space *mapping,
 	if (is_dir ?
 	    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) :
 	    atomic64_read(&dvnode->cb_expires_at) != AFS_NO_CB_PROMISE) {
+		size_t len = i_size_read(&dvnode->netfs.inode);
 		iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
-				    i_size_read(&dvnode->netfs.inode));
-		ret = netfs_writeback_single(mapping, wbc, &iter);
+				    round_up(len, PAGE_SIZE));
+		ret = netfs_writeback_single(mapping, wbc, &iter, len);
 	}
 
 	up_read(&dvnode->validate_lock);
diff --git a/fs/afs/file.c b/fs/afs/file.c
index f609366fd2ac..93830d08f0f4 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -329,11 +329,13 @@ void afs_fetch_data_immediate_cancel(struct afs_call *call)
 
 /*
  * Fetch file data from the volume.
  */
-static void afs_issue_read(struct netfs_io_subrequest *subreq)
+static int afs_issue_read(struct netfs_io_subrequest *subreq,
+			  struct netfs_read_context *rctx)
 {
 	struct afs_operation *op;
 	struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
 	struct key *key = subreq->rreq->netfs_priv;
+	int ret;
 
 	_enter("%s{%llx:%llu.%u},%x,,,",
 	       vnode->volume->name,
@@ -342,19 +344,21 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
 	       vnode->fid.unique,
 	       key_serial(key));
 
+	ret = netfs_prepare_read_buffer(subreq, rctx, INT_MAX);
+	if (ret < 0)
+		return ret;
+
 	op = afs_alloc_operation(key, vnode->volume);
-	if (IS_ERR(op)) {
-		subreq->error = PTR_ERR(op);
-		netfs_read_subreq_terminated(subreq);
-		return;
-	}
+	if (IS_ERR(op))
+		return PTR_ERR(op);
 
 	afs_op_set_vnode(op, 0, vnode);
 
 	op->fetch.subreq = subreq;
 	op->ops = &afs_fetch_data_operation;
 
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
 
 	if (subreq->rreq->origin == NETFS_READAHEAD ||
 	    subreq->rreq->iocb) {
@@ -363,18 +367,19 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
 		if (!afs_begin_vnode_operation(op)) {
 			subreq->error = afs_put_operation(op);
 			netfs_read_subreq_terminated(subreq);
-			return;
+			return -EIOCBQUEUED;
 		}
 
 		if (!afs_select_fileserver(op)) {
-			afs_end_read(op);
-			return;
+			afs_end_read(op); /* Error recorded here. */
+			return -EIOCBQUEUED;
 		}
 
 		afs_issue_read_call(op);
	} else {
 		afs_do_sync_operation(op);
	}
+	return -EIOCBQUEUED;
 }
 
 static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
@@ -454,7 +459,7 @@ const struct netfs_request_ops afs_req_ops = {
 	.update_i_size		= afs_update_i_size,
 	.invalidate_cache	= afs_netfs_invalidate_cache,
 	.begin_writeback	= afs_begin_writeback,
-	.prepare_write		= afs_prepare_write,
+	.estimate_write		= afs_estimate_write,
 	.issue_write		= afs_issue_write,
 	.retry_request		= afs_retry_request,
 };
diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
index 95494d5f2b8a..f59a9db4bb0e 100644
--- a/fs/afs/fsclient.c
+++ b/fs/afs/fsclient.c
@@ -339,7 +339,9 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
 		if (call->remaining == 0)
 			goto no_more_data;
 
-		call->iter = &subreq->io_iter;
+		iov_iter_bvec_queue(&call->def_iter, ITER_DEST, subreq->content.bvecq,
+				    subreq->content.slot, subreq->content.offset, subreq->len);
+		call->iov_len = umin(call->remaining, subreq->len - subreq->transferred);
 		call->unmarshall++;
 		fallthrough;
@@ -1085,7 +1087,7 @@ static void afs_fs_store_data64(struct afs_operation *op)
 	if (!call)
 		return afs_op_nomem(op);
 
-	call->write_iter = op->store.write_iter;
+	call->write_iter = &op->store.write_iter;
 
 	/* marshall the parameters */
 	bp = call->request;
@@ -1139,7 +1141,7 @@ void afs_fs_store_data(struct afs_operation *op)
 	if (!call)
 		return afs_op_nomem(op);
 
-	call->write_iter = op->store.write_iter;
+	call->write_iter = &op->store.write_iter;
 
 	/* marshall the parameters */
 	bp = call->request;
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 9bf5d2f1dbc4..ed4cf2c3891b 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -906,7 +906,7 @@ struct afs_operation {
 			afs_lock_type_t type;
 		} lock;
 		struct {
-			struct iov_iter *write_iter;
+			struct iov_iter	write_iter;
 			loff_t	pos;
 			loff_t	size;
 			loff_t	i_size;
@@ -1680,8 +1680,12 @@ extern int
afs_check_volume_status(struct afs_volume= *, struct afs_operation *); /* * write.c */ -void afs_prepare_write(struct netfs_io_subrequest *subreq); -void afs_issue_write(struct netfs_io_subrequest *subreq); +int afs_estimate_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + const struct netfs_write_context *wctx, + struct netfs_write_estimate *estimate); +int afs_issue_write(struct netfs_io_subrequest *subreq, + struct netfs_write_context *wctx); void afs_begin_writeback(struct netfs_io_request *wreq); void afs_retry_request(struct netfs_io_request *wreq, struct netfs_io_stre= am *stream); extern int afs_writepages(struct address_space *, struct writeback_control= *); diff --git a/fs/afs/write.c b/fs/afs/write.c index 93ad86ff3345..40af94a6ae0c 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -84,17 +84,21 @@ static const struct afs_operation_ops afs_store_data_op= eration =3D { }; =20 /* - * Prepare a subrequest to write to the server. This sets the max_len - * parameter. + * Estimate the maximum size of a write we can send to the server. 
  */
-void afs_prepare_write(struct netfs_io_subrequest *subreq)
+int afs_estimate_write(struct netfs_io_request *wreq,
+		       struct netfs_io_stream *stream,
+		       const struct netfs_write_context *wctx,
+		       struct netfs_write_estimate *estimate)
 {
-	struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr];
+	unsigned long long limit = ULLONG_MAX - wctx->issue_from;
+	unsigned long long max_len = 256 * 1024 * 1024;
 
 	//if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags))
-	//	subreq->max_len = 512 * 1024;
-	//else
-	stream->sreq_max_len = 256 * 1024 * 1024;
+	//	max_len = 512 * 1024;
+
+	estimate->issue_at = wctx->issue_from + umin(max_len, limit);
+	return 0;
 }
 
 /*
@@ -140,12 +144,15 @@ static void afs_issue_write_worker(struct work_struct *work)
 	op->flags	|= AFS_OPERATION_UNINTR;
 	op->ops		= &afs_store_data_operation;
 
+	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
 	afs_begin_vnode_operation(op);
 
-	op->store.write_iter = &subreq->io_iter;
 	op->store.i_size = umax(pos + len, vnode->netfs.remote_i_size);
 	op->mtime = inode_get_mtime(&vnode->netfs.inode);
 
+	iov_iter_bvec_queue(&op->store.write_iter, ITER_SOURCE, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
 	afs_wait_for_operation(op);
 	ret = afs_put_operation(op);
 	switch (ret) {
@@ -169,11 +176,19 @@ static void afs_issue_write_worker(struct work_struct *work)
 	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len);
 }
 
-void afs_issue_write(struct netfs_io_subrequest *subreq)
+int afs_issue_write(struct netfs_io_subrequest *subreq,
+		    struct netfs_write_context *wctx)
 {
+	int ret;
+
+	ret = netfs_prepare_write_buffer(subreq, wctx, INT_MAX);
+	if (ret < 0)
+		return ret;
+
 	subreq->work.func = afs_issue_write_worker;
 	if (!queue_work(system_dfl_wq, &subreq->work))
 		WARN_ON_ONCE(1);
+	return -EIOCBQUEUED;
 }
 
 /*
@@ -184,6 +199,8 @@ void afs_begin_writeback(struct netfs_io_request *wreq)
 {
 	if (S_ISREG(wreq->inode->i_mode))
 		afs_get_writeback_key(wreq);
+
+	wreq->io_streams[0].avail = true;
 }
 
 /*
diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
index 24fb562ebd33..ffd1d4c87290 100644
--- a/fs/afs/yfsclient.c
+++ b/fs/afs/yfsclient.c
@@ -385,7 +385,9 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
 	if (call->remaining == 0)
 		goto no_more_data;
 
-	call->iter = &subreq->io_iter;
+	iov_iter_bvec_queue(&call->def_iter, ITER_DEST, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+	call->iov_len = min(call->remaining, subreq->len - subreq->transferred);
 	call->unmarshall++;
 	fallthrough;
@@ -1357,7 +1359,7 @@ void yfs_fs_store_data(struct afs_operation *op)
 	if (!call)
 		return afs_op_nomem(op);
 
-	call->write_iter = op->store.write_iter;
+	call->write_iter = &op->store.write_iter;
 
 	/* marshall the parameters */
 	bp = call->request;
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 2c3edc91a5b0..a611769aa53a 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -26,7 +27,10 @@ struct cachefiles_kiocb {
 	};
 	struct cachefiles_object	*object;
 	netfs_io_terminated_t		term_func;
-	void				*term_func_priv;
+	union {
+		struct netfs_io_subrequest *subreq;
+		void			*term_func_priv;
+	};
 	bool				was_async;
 	unsigned int			inval_counter;	/* Copy of cookie->inval_counter */
 	u64				b_writing;
@@ -193,61 +197,208 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
 }
 
 /*
- * Query the occupancy of the cache in a region, returning where the next chunk
- * of data starts and how long it is.
+ * Handle completion of a read from the cache issued by netfslib.
  */
-static int cachefiles_query_occupancy(struct netfs_cache_resources *cres,
-				      loff_t start, size_t len, size_t granularity,
-				      loff_t *_data_start, size_t *_data_len)
+static void cachefiles_issue_read_complete(struct kiocb *iocb, long ret)
 {
+	struct cachefiles_kiocb *ki = container_of(iocb, struct cachefiles_kiocb, iocb);
+	struct netfs_io_subrequest *subreq = ki->subreq;
+	struct inode *inode = file_inode(ki->iocb.ki_filp);
+
+	_enter("%ld", ret);
+
+	if (ret < 0) {
+		subreq->error = -ESTALE;
+		trace_cachefiles_io_error(ki->object, inode, ret,
+					  cachefiles_trace_read_error);
+	}
+
+	if (ret >= 0) {
+		if (ki->object->cookie->inval_counter == ki->inval_counter) {
+			subreq->error = 0;
+			if (ret > 0) {
+				subreq->transferred += ret;
+				__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+			}
+		} else {
+			subreq->error = -ESTALE;
+		}
+	}
+
+	netfs_read_subreq_terminated(subreq);
+	cachefiles_put_kiocb(ki);
+}
+
+/*
+ * Issue a read operation to the cache.
+ */
+static int cachefiles_issue_read(struct netfs_io_subrequest *subreq,
+				 struct netfs_read_context *rctx)
+{
+	struct netfs_cache_resources *cres = &subreq->rreq->cache_resources;
 	struct cachefiles_object *object;
+	struct cachefiles_kiocb *ki;
+	struct iov_iter iter;
 	struct file *file;
-	loff_t off, off2;
-
-	*_data_start = -1;
-	*_data_len = 0;
+	unsigned int old_nofs;
+	ssize_t ret = -ENOBUFS;
 
 	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))
 		return -ENOBUFS;
 
+	fscache_count_read();
 	object = cachefiles_cres_object(cres);
 	file = cachefiles_cres_file(cres);
-	granularity = max_t(size_t, object->volume->cache->bsize, granularity);
 
 	_enter("%pD,%li,%llx,%zx/%llx",
-	       file, file_inode(file)->i_ino, start, len,
+	       file, file_inode(file)->i_ino, subreq->start, subreq->len,
 	       i_size_read(file_inode(file)));
 
-	off = cachefiles_inject_read_error();
-	if (off == 0)
-		off = vfs_llseek(file, start, SEEK_DATA);
-	if (off == -ENXIO)
-		return -ENODATA; /* Beyond EOF */
-	if (off < 0 && off >= (loff_t)-MAX_ERRNO)
-		return -ENOBUFS; /* Error. */
-	if (round_up(off, granularity) >= start + len)
-		return -ENODATA; /* No data in range */
-
-	off2 = cachefiles_inject_read_error();
-	if (off2 == 0)
-		off2 = vfs_llseek(file, off, SEEK_HOLE);
-	if (off2 == -ENXIO)
-		return -ENODATA; /* Beyond EOF */
-	if (off2 < 0 && off2 >= (loff_t)-MAX_ERRNO)
-		return -ENOBUFS; /* Error. */
-
-	/* Round away partial blocks */
-	off = round_up(off, granularity);
-	off2 = round_down(off2, granularity);
-	if (off2 <= off)
-		return -ENODATA;
-
-	*_data_start = off;
-	if (off2 > start + len)
-		*_data_len = len;
-	else
-		*_data_len = off2 - off;
-	return 0;
+	if (subreq->len > MAX_RW_COUNT)
+		subreq->len = MAX_RW_COUNT;
+
+	ret = netfs_prepare_read_buffer(subreq, rctx, BIO_MAX_VECS);
+	if (ret < 0)
+		return ret;
+
+	iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
+	ki = kzalloc_obj(struct cachefiles_kiocb);
+	if (!ki)
+		return -ENOMEM;
+
+	refcount_set(&ki->ki_refcnt, 2);
+	ki->iocb.ki_filp	= file;
+	ki->iocb.ki_pos		= subreq->start;
+	ki->iocb.ki_flags	= IOCB_DIRECT;
+	ki->iocb.ki_ioprio	= get_current_ioprio();
+	ki->iocb.ki_complete	= cachefiles_issue_read_complete;
+	ki->object		= object;
+	ki->inval_counter	= cres->inval_counter;
+	ki->subreq		= subreq;
+	ki->was_async		= true;
+
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
+
+	get_file(ki->iocb.ki_filp);
+	cachefiles_grab_object(object, cachefiles_obj_get_ioreq);
+
+	trace_cachefiles_read(object, file_inode(file), ki->iocb.ki_pos, subreq->len);
+	old_nofs = memalloc_nofs_save();
+	ret = cachefiles_inject_read_error();
+	if (ret == 0)
+		ret = vfs_iocb_iter_read(file, &ki->iocb, &iter);
+	memalloc_nofs_restore(old_nofs);
+
+	switch (ret) {
+	case -EIOCBQUEUED:
+		cachefiles_put_kiocb(ki);
+		break;
+
+	case -ERESTARTSYS:
+	case -ERESTARTNOINTR:
+	case -ERESTARTNOHAND:
+	case -ERESTART_RESTARTBLOCK:
+		/* There's no easy way to restart the syscall since other AIO's
+		 * may be already running. Just fail this IO with EINTR.
+		 */
+		ret = -EINTR;
+		fallthrough;
+	default:
+		ki->was_async = false;
+		cachefiles_issue_read_complete(&ki->iocb, ret);
+		break;
+	}
+
+	_leave(" = %zd", ret);
+	return -EIOCBQUEUED;
+}
+
+struct cachefiles_fiemap_info {
+	struct fiemap_extent_info fieinfo;
+	struct fscache_occupancy *occ;
+};
+
+/*
+ * Record a couple of logical extents in the read context.
+ */
+static int cachefiles_fiemap_fill(struct fiemap_extent_info *fieinfo,
+				  const struct fiemap_extent *extent)
+{
+	struct cachefiles_fiemap_info *cfie =
+		container_of(fieinfo, struct cachefiles_fiemap_info, fieinfo);
+	struct fscache_occupancy *occ = cfie->occ;
+	unsigned long long start = extent->fe_logical;
+	unsigned long long end = start + extent->fe_length;
+	int ex = occ->nr_extents;
+
+	_enter("%llx-%llx %x", start, end, extent->fe_flags);
+
+	if (start >= occ->query_to)
+		return 1;
+
+	if (ex == 0) {
+		occ->no_more_cache = false;
+		goto fill_extent;
+	}
+
+	if (start == occ->cached_to[ex - 1]) {
+		occ->cached_to[ex - 1] = end;
+		goto stop_check;
+	}
+
+	if (ex >= fieinfo->fi_extents_max)
+		return 1;
+
+fill_extent:
+	occ->cached_from[ex] = start;
+	occ->cached_to[ex] = end;
+	occ->cached_type[ex] = FSCACHE_EXTENT_DATA;
+	occ->nr_extents++;
+stop_check:
+	occ->query_from = end;
+	return end >= occ->query_to ? 1 : 0;
}
+
+/*
+ * Query the occupancy of the cache in a region, returning the extent of the
+ * next chunk of cached data and the next hole.
+ */
+static int cachefiles_query_occupancy(struct netfs_cache_resources *cres,
+				      struct fscache_occupancy *occ)
+{
+	struct cachefiles_fiemap_info cfie = {
+		.fieinfo.fi_fill	= cachefiles_fiemap_fill,
+		.fieinfo.fi_extents_max	= INT_MAX,
+		.occ			= occ,
+	};
+	struct cachefiles_object *object;
+	struct inode *inode;
+	struct file *file;
+	int ret;
+
+	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))
+		return -ENOBUFS;
+
+	object = cachefiles_cres_object(cres);
+	file = cachefiles_cres_file(cres);
+	inode = file_inode(file);
+	occ->granularity = umax(object->volume->cache->bsize, occ->granularity);
+
+	_enter("%pD,%li,%llx-%llx,%llx",
+	       file, file_inode(file)->i_ino, occ->query_from, occ->query_to,
+	       i_size_read(inode));
+
+	if (!inode->i_op->fiemap)
+		return -EOPNOTSUPP;
+
+	ret = cachefiles_inject_read_error();
+	if (ret == 0)
+		ret = inode->i_op->fiemap(inode, &cfie.fieinfo, occ->query_from,
+					  occ->query_to - occ->query_from);
+	return ret;
 }
 
 /*
@@ -489,18 +640,6 @@ cachefiles_do_prepare_read(struct netfs_cache_resources *cres,
 	return ret;
 }
 
-/*
- * Prepare a read operation, shortening it to a cached/uncached
- * boundary as appropriate.
- */
-static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
-						    unsigned long long i_size)
-{
-	return cachefiles_do_prepare_read(&subreq->rreq->cache_resources,
-					  subreq->start, &subreq->len, i_size,
-					  &subreq->flags, subreq->rreq->inode->i_ino);
-}
-
 /*
  * Prepare an on-demand read operation, shortening it to a cached/uncached
  * boundary as appropriate.
@@ -599,62 +738,46 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
 				   cachefiles_has_space_for_write);
 }
 
-static int cachefiles_prepare_write(struct netfs_cache_resources *cres,
-				    loff_t *_start, size_t *_len, size_t upper_len,
-				    loff_t i_size, bool no_space_allocated_yet)
+static int cachefiles_estimate_write(struct netfs_io_request *wreq,
+				     struct netfs_io_stream *stream,
+				     const struct netfs_write_context *wctx,
+				     struct netfs_write_estimate *estimate)
 {
-	struct cachefiles_object *object = cachefiles_cres_object(cres);
-	struct cachefiles_cache *cache = object->volume->cache;
-	const struct cred *saved_cred;
-	int ret;
-
-	if (!cachefiles_cres_file(cres)) {
-		if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
-			return -ENOBUFS;
-		if (!cachefiles_cres_file(cres))
-			return -ENOBUFS;
-	}
-
-	cachefiles_begin_secure(cache, &saved_cred);
-	ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
-					 _start, _len, upper_len,
-					 no_space_allocated_yet);
-	cachefiles_end_secure(cache, saved_cred);
-	return ret;
+	estimate->issue_at = wctx->issue_from + MAX_RW_COUNT;
+	estimate->max_segs = BIO_MAX_VECS;
+	return 0;
 }
 
-static void cachefiles_prepare_write_subreq(struct netfs_io_subrequest *subreq)
+static int cachefiles_issue_write(struct netfs_io_subrequest *subreq,
+				  struct netfs_write_context *wctx)
 {
 	struct netfs_io_request *wreq = subreq->rreq;
 	struct netfs_cache_resources *cres = &wreq->cache_resources;
-	struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
-
-	_enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start);
+	struct cachefiles_object *object = cachefiles_cres_object(cres);
+	struct cachefiles_cache *cache = object->volume->cache;
+	struct iov_iter iter;
+	const struct cred *saved_cred;
+	size_t off, pre, post, old_len = subreq->len, len;
+	loff_t start = subreq->start;
+	int ret;
 
-	stream->sreq_max_len = MAX_RW_COUNT;
-	stream->sreq_max_segs = BIO_MAX_VECS;
+	_enter("W=%x[%x] %llx-%llx",
+	       wreq->debug_id, subreq->debug_index, start, start + old_len - 1);
 
 	if (!cachefiles_cres_file(cres)) {
 		if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
-			return netfs_prepare_write_failed(subreq);
+			return -EINVAL;
 		if (!cachefiles_cres_file(cres))
-			return netfs_prepare_write_failed(subreq);
+			return -EINVAL;
 	}
-}
 
-static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
-{
-	struct netfs_io_request *wreq = subreq->rreq;
-	struct netfs_cache_resources *cres = &wreq->cache_resources;
-	struct cachefiles_object *object = cachefiles_cres_object(cres);
-	struct cachefiles_cache *cache = object->volume->cache;
-	const struct cred *saved_cred;
-	size_t off, pre, post, len = subreq->len;
-	loff_t start = subreq->start;
-	int ret;
+	ret = netfs_prepare_write_buffer(subreq, wctx, BIO_MAX_VECS);
+	if (ret < 0)
+		return ret;
 
-	_enter("W=%x[%x] %llx-%llx",
-	       wreq->debug_id, subreq->debug_index, start, start + len - 1);
+	len = subreq->len;
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
 
 	/* We need to start on the cache granularity boundary */
 	off = start & (CACHEFILES_DIO_BLOCK_SIZE - 1);
@@ -663,23 +786,24 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 		if (pre >= len) {
 			fscache_count_dio_misfit();
 			netfs_write_subrequest_terminated(subreq, len);
-			return;
+			return 0;
 		}
 		subreq->transferred += pre;
 		start += pre;
 		len -= pre;
-		iov_iter_advance(&subreq->io_iter, pre);
+		iov_iter_advance(&iter, pre);
 	}
 
+	/* We also need to end on the cache granularity boundary */
 	post = len & (CACHEFILES_DIO_BLOCK_SIZE - 1);
 	if (post) {
 		len -= post;
 		if (len == 0) {
 			fscache_count_dio_misfit();
 			netfs_write_subrequest_terminated(subreq, post);
-			return;
+			return 0;
 		}
-		iov_iter_truncate(&subreq->io_iter, len);
+		iov_iter_truncate(&iter, len);
 	}
 
 	trace_netfs_sreq(subreq, netfs_sreq_trace_cache_prepare);
@@ -687,15 +811,13 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 	ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
 					 &start, &len, len, true);
 	cachefiles_end_secure(cache, saved_cred);
-	if (ret < 0) {
-		netfs_write_subrequest_terminated(subreq, ret);
-		return;
-	}
+	if (ret < 0)
+		return ret;
 
 	trace_netfs_sreq(subreq, netfs_sreq_trace_cache_write);
-	cachefiles_write(&subreq->rreq->cache_resources,
-			 subreq->start, &subreq->io_iter,
+	cachefiles_write(&subreq->rreq->cache_resources, subreq->start, &iter,
 			 netfs_write_subrequest_terminated, subreq);
+	return -EIOCBQUEUED;
 }
 
 /*
@@ -714,10 +836,9 @@ static const struct netfs_cache_ops cachefiles_netfs_cache_ops = {
 	.end_operation		= cachefiles_end_operation,
 	.read			= cachefiles_read,
 	.write			= cachefiles_write,
+	.issue_read		= cachefiles_issue_read,
 	.issue_write		= cachefiles_issue_write,
-	.prepare_read		= cachefiles_prepare_read,
-	.prepare_write		= cachefiles_prepare_write,
-	.prepare_write_subreq	= cachefiles_prepare_write_subreq,
+	.estimate_write		= cachefiles_estimate_write,
 	.prepare_ondemand_read	= cachefiles_prepare_ondemand_read,
 	.query_occupancy	= cachefiles_query_occupancy,
 };
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e87b3bb94ee8..a9a8c01e171c 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -269,7 +269,8 @@ static void finish_netfs_read(struct ceph_osd_request *req)
 	ceph_dec_osd_stopping_blocker(fsc->mdsc);
 }
 
-static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
+static int ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq,
+				      struct netfs_read_context *rctx)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 	struct inode *inode = rreq->inode;
@@ -278,7 +279,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 	struct ceph_mds_request *req;
 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
 	struct ceph_inode_info *ci = ceph_inode(inode);
-	ssize_t err = 0;
+	ssize_t err;
 	size_t len;
 	int mode;
 
@@ -287,21 +288,29 @@ static int ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 	__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
 	__clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
 
-	if (subreq->start >= inode->i_size)
+	if (subreq->start >= inode->i_size) {
+		__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+		err = 0;
 		goto out;
+	}
+
+	err = netfs_subreq_get_buffer(subreq, rctx, UINT_MAX);
+	if (err < 0)
+		return err;
 
 	/* We need to fetch the inline data. */
 	mode = ceph_try_to_choose_auth_mds(inode, CEPH_STAT_CAP_INLINE_DATA);
 	req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR, mode);
-	if (IS_ERR(req)) {
-		err = PTR_ERR(req);
-		goto out;
-	}
+	if (IS_ERR(req))
+		return PTR_ERR(req);
+
 	req->r_ino1 = ci->i_vino;
 	req->r_args.getattr.mask = cpu_to_le32(CEPH_STAT_CAP_INLINE_DATA);
 	req->r_num_caps = 2;
 
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
+
 	err = ceph_mdsc_do_request(mdsc, NULL, req);
 	if (err < 0)
 		goto out;
@@ -311,7 +320,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 	if (iinfo->inline_version == CEPH_INLINE_NONE) {
 		/* The data got uninlined */
 		ceph_mdsc_put_request(req);
-		return false;
+		return 1;
 	}
 
 	len = min_t(size_t, iinfo->inline_len - subreq->start, subreq->len);
@@ -328,26 +337,11 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 	subreq->error = err;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
 	netfs_read_subreq_terminated(subreq);
-	return true;
+	return -EIOCBQUEUED;
 }
 
-static int ceph_netfs_prepare_read(struct netfs_io_subrequest *subreq)
-{
-	struct netfs_io_request *rreq = subreq->rreq;
-	struct inode *inode = rreq->inode;
-	struct ceph_inode_info *ci = ceph_inode(inode);
-	struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
-	u64 objno, objoff;
-	u32 xlen;
-
-	/* Truncate the extent at the end of the current block */
-	ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len,
-				      &objno, &objoff, &xlen);
-	rreq->io_streams[0].sreq_max_len = umin(xlen, fsc->mount_options->rsize);
-	return 0;
-}
-
-static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
+static int ceph_netfs_issue_read(struct netfs_io_subrequest *subreq,
+				 struct netfs_read_context *rctx)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 	struct inode *inode = rreq->inode;
@@ -356,48 +350,60 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
 	struct ceph_client *cl = fsc->client;
 	struct ceph_osd_request *req = NULL;
 	struct ceph_vino vino = ceph_vino(inode);
+	u64 objno, objoff, len, off = subreq->start;
+	u32 maxlen;
 	int err;
-	u64 len;
 	bool sparse = IS_ENCRYPTED(inode) || ceph_test_mount_opt(fsc, SPARSEREAD);
-	u64 off = subreq->start;
 	int extent_cnt;
 
-	if (ceph_inode_is_shutdown(inode)) {
-		err = -EIO;
-		goto out;
+	if (ceph_inode_is_shutdown(inode))
+		return -EIO;
+
+	if (ceph_has_inline_data(ci)) {
+		err = ceph_netfs_issue_op_inline(subreq, rctx);
+		if (err != 1)
+			return err;
 	}
 
-	if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq))
-		return;
+	/* Truncate the extent at the end of the current block */
+	ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len,
+				      &objno, &objoff, &maxlen);
+	maxlen = umin(maxlen, fsc->mount_options->rsize);
+	len = umin(subreq->len, maxlen);
+	subreq->len = len;
 
 	// TODO: This rounding here is slightly dodgy.  It *should* work, for
 	// now, as the cache only deals in blocks that are a multiple of
 	// PAGE_SIZE and fscrypt blocks are at most PAGE_SIZE.  What needs to
 	// happen is for the fscrypt driving to be moved into netfslib and the
 	// data in the cache also to be stored encrypted.
-	len = subreq->len;
 	ceph_fscrypt_adjust_off_and_len(inode, &off, &len);
 
 	req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino,
				    off, &len, 0, 1, sparse ? CEPH_OSD_OP_SPARSE_READ : CEPH_OSD_OP_READ,
				    CEPH_OSD_FLAG_READ, NULL, ci->i_truncate_seq,
				    ci->i_truncate_size, false);
-	if (IS_ERR(req)) {
-		err = PTR_ERR(req);
-		req = NULL;
-		goto out;
-	}
+	if (IS_ERR(req))
+		return PTR_ERR(req);
 
 	if (sparse) {
 		extent_cnt = __ceph_sparse_read_ext_count(inode, len);
 		err = ceph_alloc_sparse_ext_map(&req->r_ops[0], extent_cnt);
-		if (err)
-			goto out;
+		if (err) {
+			ceph_osdc_put_request(req);
+			return err;
+		}
 	}
 
 	doutc(cl, "%llx.%llx pos=%llu orig_len=%zu len=%llu\n",
 	      ceph_vinop(inode), subreq->start, subreq->len, len);
 
+	err = netfs_subreq_get_buffer(subreq, rctx, UINT_MAX);
+	if (err < 0) {
+		ceph_osdc_put_request(req);
+		return err;
+	}
+
 	/*
 	 * FIXME: For now, use CEPH_OSD_DATA_TYPE_PAGES instead of _ITER for
 	 * encrypted inodes.  We'd need infrastructure that handles an iov_iter
@@ -422,7 +428,8 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
 		if (err < 0) {
 			doutc(cl, "%llx.%llx failed to allocate pages, %d\n",
 			      ceph_vinop(inode), err);
-			goto out;
+			ceph_osdc_put_request(req);
+			return -EIO;
 		}
 
 		/* should always give us a page-aligned read */
@@ -436,23 +443,20 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
 		osd_req_op_extent_osd_iter(req, 0, &subreq->io_iter);
 	}
 	if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
-		err = -EIO;
-		goto out;
+		ceph_osdc_put_request(req);
+		return -EIO;
 	}
 	req->r_callback = finish_netfs_read;
 	req->r_priv = subreq;
 	req->r_inode = inode;
 	ihold(inode);
 
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
 	ceph_osdc_start_request(req->r_osdc, req);
-out:
 	ceph_osdc_put_request(req);
-	if (err) {
-		subreq->error = err;
-		netfs_read_subreq_terminated(subreq);
-	}
-	doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err);
+	doutc(cl, "%llx.%llx result -EIOCBQUEUED\n", ceph_vinop(inode));
+	return -EIOCBQUEUED;
 }
 
 static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
@@ -538,7 +542,6 @@ static void ceph_netfs_free_request(struct netfs_io_request *rreq)
 const struct netfs_request_ops ceph_netfs_ops = {
 	.init_request		= ceph_init_request,
 	.free_request		= ceph_netfs_free_request,
-	.prepare_read		= ceph_netfs_prepare_read,
 	.issue_read		= ceph_netfs_issue_read,
 	.expand_readahead	= ceph_netfs_expand_readahead,
 	.check_write_begin	= ceph_netfs_check_write_begin,
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 0621e6870cbd..421dd0be413b 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -12,13 +12,13 @@ netfs-y := \
 	misc.o \
 	objects.o \
 	read_collect.o \
-	read_pgpriv2.o \
 	read_retry.o \
 	read_single.o \
 	write_collect.o \
 	write_issue.o \
 	write_retry.o
 
+netfs-$(CONFIG_NETFS_PGPRIV2) += read_pgpriv2.o
 netfs-$(CONFIG_NETFS_STATS) += stats.o
 
 netfs-$(CONFIG_FSCACHE) += \
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index d5d5a7520cbe..32e27f8f420a 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -9,6 +9,15 @@
 #include 
 #include "internal.h"
 
+struct netfs_buffered_read_context {
+	struct netfs_read_context r;
+	struct fscache_occupancy cache;		/* List of cached extents */
+	unsigned long long	i_size;		/* Size of file */
+	size_t			buffered;	/* Amount in buffer */
+	struct readahead_control *ractl;	/* Readahead source buffer */
+	struct bvecq_pos	dispatch_cursor; /* Cursor from which we dispatch ops */
+};
+
 static void netfs_cache_expand_readahead(struct netfs_io_request *rreq,
 					 unsigned long long *_start,
 					 unsigned long long *_len,
@@ -54,15 +63,18 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
 	}
 }
 
+/*
+ * Clear any remaining pages in the readahead request.
+ */
 static void netfs_clear_to_ra_end(struct netfs_io_request *rreq,
-				  struct readahead_control *ractl)
+				  struct netfs_buffered_read_context *rctx)
 {
 	struct folio_batch batch;
 
 	folio_batch_init(&batch);
 
 	for (;;) {
-		batch.nr = __readahead_batch(ractl, (struct page **)batch.folios,
+		batch.nr = __readahead_batch(rctx->ractl, (struct page **)batch.folios,
 					     PAGEVEC_SIZE);
 		if (!batch.nr)
 			break;
@@ -86,32 +98,25 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in
 }
 
 /*
- * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O
- * @subreq: The subrequest to be set up
- *
- * Prepare the I/O iterator representing the read buffer on a subrequest for
- * the filesystem to use for I/O (it can be passed directly to a socket).  This
- * is intended to be called from the ->issue_read() method once the filesystem
- * has trimmed the request to the size it wants.
- *
- * Returns the limited size if successful and -ENOMEM if insufficient memory
- * available.
+ * Prepare the I/O buffer on a buffered read subrequest for the filesystem to
+ * use as a bvec queue.
  *
 * [!] NOTE: This must be run in the same thread as ->issue_read() was called
 * in as we access the readahead_control struct.
 */
-static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
-					   struct readahead_control *ractl)
+static int netfs_prepare_buffered_read_buffer(struct netfs_io_subrequest *subreq,
+					      struct netfs_read_context *base_rctx,
+					      unsigned int max_segs)
 {
+	struct netfs_buffered_read_context *rctx =
+		container_of(base_rctx, struct netfs_buffered_read_context, r);
 	struct netfs_io_request *rreq = subreq->rreq;
-	struct netfs_io_stream *stream = &rreq->io_streams[0];
 	ssize_t extracted;
-	size_t rsize = subreq->len;
 
-	if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)
-		rsize = umin(rsize, stream->sreq_max_len);
+	_enter("R=%08x[%x] l=%zx s=%u",
+	       rreq->debug_id, subreq->debug_index, subreq->len, max_segs);
 
-	if (ractl) {
+	if (rctx->ractl) {
 		/* If we don't have sufficient folios in the rolling buffer,
 		 * extract a bvecq's worth from the readahead region at a time
 		 * into the buffer.  Note that this acquires a ref on each page
@@ -120,67 +125,108 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
 		 */
 		struct folio_batch put_batch;
 
+		_debug("ractl %zx < %zx", rctx->buffered, subreq->len);
+
 		folio_batch_init(&put_batch);
-		while (rreq->submitted < subreq->start + rsize) {
+		while (rctx->buffered < subreq->len) {
 			ssize_t added;
 
-			added = bvecq_load_from_ra(&rreq->load_cursor, ractl,
+			added = bvecq_load_from_ra(&rreq->load_cursor, rctx->ractl,
 						   &put_batch);
 			if (added < 0)
 				return added;
-			rreq->submitted += added;
+			rctx->buffered += added;
 		}
 		folio_batch_release(&put_batch);
 	}
 
-	bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor);
-	extracted = bvecq_slice(&rreq->dispatch_cursor, subreq->len,
-				stream->sreq_max_segs, &subreq->nr_segs);
+	bvecq_pos_attach(&subreq->dispatch_pos, &rctx->dispatch_cursor);
+	bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
+	extracted = bvecq_slice(&rctx->dispatch_cursor, subreq->len,
+				max_segs, &subreq->nr_segs);
 	if (extracted < 0)
 		return extracted;
-	if (extracted < rsize) {
+
+	rctx->buffered -= extracted;
+	if (extracted < subreq->len) {
 		subreq->len = extracted;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
 	}
 
-	return subreq->len;
+	return 0;
 }
 
-static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rreq,
-						     struct netfs_io_subrequest *subreq,
-						     loff_t i_size)
+/**
+ * netfs_prepare_read_buffer - Get the buffer for a subrequest
+ * @subreq: The subrequest to get the buffer for
+ * @rctx: Read context
+ * @max_segs: Maximum number of segments in buffer (or INT_MAX)
+ *
+ * Extract a slice of buffer from the stream and attach it to the subrequest as
+ * a bio_vec queue.  The maximum amount of data attached is set by
+ * @subreq->len, but this may be shortened if @max_segs would be exceeded.
+ *
+ * [!] NOTE: This must be run in the same thread as ->issue_read() was called
+ * in as we access the readahead_control struct if there is one.
+ */
+int netfs_prepare_read_buffer(struct netfs_io_subrequest *subreq,
+			      struct netfs_read_context *rctx,
+			      unsigned int max_segs)
 {
-	struct netfs_cache_resources *cres = &rreq->cache_resources;
-	enum netfs_io_source source;
-
-	if (!cres->ops)
-		return NETFS_DOWNLOAD_FROM_SERVER;
-	source = cres->ops->prepare_read(subreq, i_size);
-	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-	return source;
-
+	switch (subreq->rreq->origin) {
+	case NETFS_READAHEAD:
+	case NETFS_READPAGE:
+	case NETFS_READ_FOR_WRITE:
+		if (rctx->retrying)
+			return netfs_prepare_buffered_read_retry_buffer(subreq, rctx, max_segs);
+		return netfs_prepare_buffered_read_buffer(subreq, rctx, max_segs);
+
+	case NETFS_UNBUFFERED_READ:
+	case NETFS_DIO_READ:
+	case NETFS_READ_GAPS:
+		return netfs_prepare_unbuffered_read_buffer(subreq, rctx, max_segs);
+	case NETFS_READ_SINGLE:
+		return netfs_prepare_read_single_buffer(subreq, rctx, max_segs);
+	default:
+		WARN_ON_ONCE(1);
+		return -EIO;
+	}
 }
+EXPORT_SYMBOL(netfs_prepare_read_buffer);
 
-/*
- * Issue a read against the cache.
- * - Eats the caller's ref on subreq.
- */
-static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq,
-					  struct netfs_io_subrequest *subreq)
+int netfs_read_query_cache(struct netfs_io_request *rreq,
+			   struct fscache_occupancy *occ)
 {
 	struct netfs_cache_resources *cres = &rreq->cache_resources;
 
-	netfs_stat(&netfs_n_rh_read);
-	cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_IGNORE,
-			netfs_cache_read_terminated, subreq);
+	occ->granularity = PAGE_SIZE;
+	occ->no_more_cache = true;
+	if (occ->query_from >= occ->query_to)
+		return 0;
+	if (!cres->ops)
+		return 0;
+	occ->query_from = round_up(occ->query_from, occ->granularity);
+	return cres->ops->query_occupancy(cres, occ);
 }
 
-static void netfs_queue_read(struct netfs_io_request *rreq,
-			     struct netfs_io_subrequest *subreq,
-			     bool last_subreq)
+/**
+ * netfs_mark_read_submission - Mark a read subrequest as being ready for submission
+ * @subreq: The subrequest to be marked
+ * @rctx: Read context supplied to ->issue_read()
+ *
+ * Calling this marks a read subrequest as being ready for submission and makes
+ * it available to the collection thread.  After calling this, the filesystem's
+ * ->issue_read() method must invoke netfs_read_subreq_terminated() to end the
+ * subrequest and must return -EIOCBQUEUED.
+ */
+void netfs_mark_read_submission(struct netfs_io_subrequest *subreq,
+				struct netfs_read_context *rctx)
 {
+	struct netfs_io_request *rreq = subreq->rreq;
 	struct netfs_io_stream *stream = &rreq->io_streams[0];
 
+	_enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
+
 	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
 
 	/* We add to the end of the list whilst the collector may be walking
 	 * remove entries off of the front.
*/ spin_lock(&rreq->lock); - list_add_tail(&subreq->rreq_link, &stream->subrequests); - if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { - stream->front =3D subreq; - if (!stream->active) { - stream->collected_to =3D stream->front->start; - /* Store list pointers before active flag */ - smp_store_release(&stream->active, true); + if (list_empty(&subreq->rreq_link)) { + list_add_tail(&subreq->rreq_link, &stream->subrequests); + if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { + stream->front =3D subreq; + if (!stream->active) { + stream->collected_to =3D stream->front->start; + /* Store list pointers before active flag */ + smp_store_release(&stream->active, true); + } } } =20 - if (last_subreq) { + rreq->submitted +=3D subreq->len; + rctx->start =3D subreq->start + subreq->len; + if (rctx->start >=3D rctx->stop) { smp_wmb(); /* Write lists before ALL_QUEUED. */ set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + trace_netfs_rreq(rreq, netfs_rreq_trace_all_queued); } =20 spin_unlock(&rreq->lock); + + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); } +EXPORT_SYMBOL(netfs_mark_read_submission); =20 -static void netfs_issue_read(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq, - struct readahead_control *ractl) +static int netfs_issue_read(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq, + struct netfs_buffered_read_context *rctx) { - bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); - iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq, - subreq->content.slot, subreq->content.offset, subreq->len); + _enter("R=3D%08x[%x]", rreq->debug_id, subreq->debug_index); =20 switch (subreq->source) { case NETFS_DOWNLOAD_FROM_SERVER: - rreq->netfs_ops->issue_read(subreq); - break; - case NETFS_READ_FROM_CACHE: - netfs_read_cache_to_pagecache(rreq, subreq); - break; + return rreq->netfs_ops->issue_read(subreq, &rctx->r); + case NETFS_READ_FROM_CACHE: { + struct netfs_cache_resources *cres 
=3D &rreq->cache_resources; + + netfs_stat(&netfs_n_rh_read); + cres->ops->issue_read(subreq, &rctx->r); + return -EIOCBQUEUED; + } default: - bvecq_zero(&rreq->dispatch_cursor, subreq->len); + netfs_mark_read_submission(subreq, &rctx->r); + bvecq_zero(&rctx->dispatch_cursor, subreq->len); subreq->transferred =3D subreq->len; subreq->error =3D 0; - iov_iter_zero(subreq->len, &subreq->io_iter); - subreq->transferred =3D subreq->len; netfs_read_subreq_terminated(subreq); - if (ractl) - netfs_clear_to_ra_end(rreq, ractl); - break; + if (rctx->ractl) + netfs_clear_to_ra_end(rreq, rctx); + return 0; } } =20 @@ -242,95 +296,134 @@ static void netfs_issue_read(struct netfs_io_request= *rreq, static void netfs_read_to_pagecache(struct netfs_io_request *rreq, struct readahead_control *ractl) { + struct netfs_buffered_read_context rctx =3D { + .cache.query_from =3D rreq->start, + .cache.query_to =3D rreq->start + rreq->len, + .cache.cached_from[0] =3D ULLONG_MAX, + .cache.cached_to[0] =3D ULLONG_MAX, + .r.start =3D rreq->start, + .r.stop =3D rreq->start + rreq->len, + .i_size =3D rreq->i_size, + .ractl =3D ractl, + }; struct netfs_inode *ictx =3D netfs_inode(rreq->inode); - unsigned long long start =3D rreq->start; - ssize_t size =3D rreq->len; int ret =3D 0; =20 _enter("R=3D%08x", rreq->debug_id); =20 - bvecq_pos_attach(&rreq->dispatch_cursor, &rreq->load_cursor); - bvecq_pos_attach(&rreq->collect_cursor, &rreq->dispatch_cursor); + bvecq_pos_attach(&rctx.dispatch_cursor, &rreq->load_cursor); + bvecq_pos_attach(&rreq->collect_cursor, &rctx.dispatch_cursor); + =20 do { struct netfs_io_subrequest *subreq; - enum netfs_io_source source =3D NETFS_SOURCE_UNKNOWN; - ssize_t slice; + struct fscache_occupancy *occ =3D &rctx.cache; + unsigned long long hole_to =3D ULLONG_MAX, cache_to =3D ULLONG_MAX; =20 - subreq =3D netfs_alloc_subrequest(rreq); - if (!subreq) { - ret =3D -ENOMEM; - break; - } - - subreq->start =3D start; - subreq->len =3D size; - - rreq->io_streams[0].sreq_max_len 
=3D MAX_RW_COUNT; - rreq->io_streams[0].sreq_max_segs =3D INT_MAX; - - source =3D netfs_cache_prepare_read(rreq, subreq, rreq->i_size); - subreq->source =3D source; - if (source =3D=3D NETFS_DOWNLOAD_FROM_SERVER) { - unsigned long long zp =3D umin(ictx->zero_point, rreq->i_size); - size_t len =3D subreq->len; - - if (unlikely(rreq->origin =3D=3D NETFS_READ_SINGLE)) - zp =3D rreq->i_size; - if (subreq->start >=3D zp) { - subreq->source =3D source =3D NETFS_FILL_WITH_ZEROES; - goto fill_with_zeroes; + /* If we don't have any, find out the next couple of data + * extents from the cache, containing or following the + * specified start offset. Holes have to be fetched from the + * server; data regions from the cache. + */ + if (!occ->no_more_cache) { + if (!occ->nr_extents) { + ret =3D netfs_read_query_cache(rreq, &rctx.cache); + if (ret < 0) + break; + if (occ->no_more_cache) { + occ->cached_from[0] =3D ULLONG_MAX; + occ->cached_to[0] =3D ULLONG_MAX; + occ->nr_extents =3D 0; + } } =3D20 - if (len > zp - subreq->start) - len =3D zp - subreq->start; - if (len =3D=3D 0) { - pr_err("ZERO-LEN READ: R=3D%08x[%x] l=3D%zx/%zx s=3D%llx z=3D%llx i=3D= %llx", - rreq->debug_id, subreq->debug_index, - subreq->len, size, - subreq->start, ictx->zero_point, rreq->i_size); - break; - } - subreq->len =3D len; + /* Shuffle down the extent list to evict used-up or + * useless extents. + */ + if (occ->nr_extents) { + hole_to =3D round_up(occ->cached_from[0], occ->granularity); + cache_to =3D round_down(occ->cached_to[0], occ->granularity); + if (hole_to > cache_to) { + occ->cached_to[0] =3D rctx.r.start; + } else { + occ->cached_from[0] =3D hole_to; + occ->cached_to[0] =3D cache_to; + } =3D20 - netfs_stat(&netfs_n_rh_download); - if (rreq->netfs_ops->prepare_read) { - ret =3D rreq->netfs_ops->prepare_read(subreq); - if (ret < 0) { - subreq->error =3D ret; - /* Not queued - release both refs.
*/ - netfs_put_subrequest(subreq, - netfs_sreq_trace_put_cancel); - netfs_put_subrequest(subreq, - netfs_sreq_trace_put_cancel); - break; + if (rctx.r.start >=3D occ->cached_to[0]) { + for (int i =3D 1; i < occ->nr_extents; i++) { + occ->cached_from[i - 1] =3D occ->cached_from[i]; + occ->cached_to[i - 1] =3D occ->cached_to[i]; + occ->cached_type[i - 1] =3D occ->cached_type[i]; + } + occ->nr_extents--; + continue; } - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); } - goto issue; } =20 - fill_with_zeroes: - if (source =3D=3D NETFS_FILL_WITH_ZEROES) { + subreq =3D netfs_alloc_subrequest(rreq); + if (!subreq) { + ret =3D -ENOMEM; + break; + } + + subreq->start =3D rctx.r.start; + + hole_to =3D occ->cached_from[0]; + cache_to =3D occ->cached_to[0]; + + _debug("rsub %llx %llx-%llx", subreq->start, hole_to, cache_to); + + if (occ->nr_extents && + rctx.r.start >=3D hole_to && rctx.r.start < cache_to) { + /* Overlap with a cached region, where the cache may + * record a block of zeroes. + */ + _debug("cached"); + subreq->len =3D cache_to - rctx.r.start; + if (occ->cached_type[0] =3D=3D FSCACHE_EXTENT_ZERO) { + subreq->source =3D NETFS_FILL_WITH_ZEROES; + netfs_stat(&netfs_n_rh_zero); + } else { + subreq->source =3D NETFS_READ_FROM_CACHE; + } + } else if (subreq->start >=3D ictx->zero_point && + subreq->start < rctx.r.stop) { + /* If this range lies beyond the zero-point, that part + * can just be cleared locally. + */ + _debug("zero %llx-%llx", rctx.r.start, rctx.r.stop); + subreq->len =3D rctx.r.stop - rctx.r.start; subreq->source =3D NETFS_FILL_WITH_ZEROES; - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); netfs_stat(&netfs_n_rh_zero); - goto issue; + } else { + /* Read a cache hole from the server. If any part of + * this range lies beyond the zero-point or the EOF, + * that part can just be cleared locally. 
+ */ + unsigned long long zlimit =3D umin(rctx.i_size, ictx->zero_point); + unsigned long long limit =3D min3(zlimit, rctx.r.stop, hole_to); + + _debug("limit %llx %llx", rctx.i_size, ictx->zero_point); + _debug("download %llx-%llx", rctx.r.start, rctx.r.stop); + subreq->len =3D umin(limit - subreq->start, ULONG_MAX); + subreq->source =3D NETFS_DOWNLOAD_FROM_SERVER; + if (rreq->cache_resources.ops) + __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); + netfs_stat(&netfs_n_rh_download); } =20 - if (source =3D=3D NETFS_READ_FROM_CACHE) { - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - goto issue; + if (subreq->len =3D=3D 0) { + pr_err("ZERO-LEN READ: R=3D%08x[%x] l=3D%zx/%llx s=3D%llx z=3D%llx i=3D= %llx", + rreq->debug_id, subreq->debug_index, + subreq->len, rctx.r.stop - subreq->start, + subreq->start, ictx->zero_point, rreq->i_size); + break; } =20 - pr_err("Unexpected read source %u\n", source); - WARN_ON_ONCE(1); - break; - - issue: - slice =3D netfs_prepare_read_iterator(subreq, ractl); - if (slice < 0) { - ret =3D slice; + ret =3D netfs_issue_read(rreq, subreq, &rctx); + if (ret !=3D 0 && ret !=3D -EIOCBQUEUED) { subreq->error =3D ret; trace_netfs_sreq(subreq, netfs_sreq_trace_cancel); /* Not queued - release both refs. */ @@ -338,15 +431,12 @@ static void netfs_read_to_pagecache(struct netfs_io_r= equest *rreq, netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel); break; } - size -=3D slice; - start +=3D slice; + ret =3D 0; =20 - netfs_queue_read(rreq, subreq, size <=3D 0); - netfs_issue_read(rreq, subreq, ractl); cond_resched(); - } while (size > 0); + } while (rctx.r.start < rctx.r.stop); =20 - if (unlikely(size > 0)) { + if (unlikely(rctx.r.start < rctx.r.stop)) { smp_wmb(); /* Write lists before ALL_QUEUED. 
*/ set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); netfs_wake_collector(rreq); @@ -356,7 +446,7 @@ static void netfs_read_to_pagecache(struct netfs_io_req= uest *rreq, cmpxchg(&rreq->error, 0, ret); =20 bvecq_pos_detach(&rreq->load_cursor); - bvecq_pos_detach(&rreq->dispatch_cursor); + bvecq_pos_detach(&rctx.dispatch_cursor); } =20 /** @@ -382,6 +472,8 @@ void netfs_readahead(struct readahead_control *ractl) size_t size =3D readahead_length(ractl); int ret; =20 + _enter(""); + rreq =3D netfs_alloc_request(ractl->mapping, ractl->file, start, size, NETFS_READAHEAD); if (IS_ERR(rreq)) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 22a4d61631c9..c3834a589a7d 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -267,7 +267,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct = iov_iter *iter, * a file that's open for reading as ->read_folio() then has to * be able to flush it. */ - if ((file->f_mode & FMODE_READ) || + if (//(file->f_mode & FMODE_READ) || netfs_is_cache_enabled(ctx)) { if (finfo) { netfs_stat(&netfs_n_wh_wstream_conflict); diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c index c8704c4a95a9..c435664b4f79 100644 --- a/fs/netfs/direct_read.c +++ b/fs/netfs/direct_read.c @@ -16,18 +16,41 @@ #include #include "internal.h" =20 +int netfs_prepare_unbuffered_read_buffer(struct netfs_io_subrequest *subre= q, + struct netfs_read_context *base_rctx, + unsigned int max_segs) +{ + struct netfs_unbuffered_read_context *rctx =3D + container_of(base_rctx, struct netfs_unbuffered_read_context, r); + size_t len; + + bvecq_pos_attach(&subreq->dispatch_pos, &rctx->dispatch_cursor); + bvecq_pos_attach(&subreq->content, &rctx->dispatch_cursor); + len =3D bvecq_slice(&rctx->dispatch_cursor, subreq->len, max_segs, + &subreq->nr_segs); + + if (len < subreq->len) { + subreq->len =3D len; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + + rctx->r.start +=3D subreq->len; + return 0; +} + /* * Perform a read 
to a buffer from the server, slicing up the region to be= read * according to the network rsize. */ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq) { - struct netfs_io_stream *stream =3D &rreq->io_streams[0]; - unsigned long long start =3D rreq->start; - ssize_t size =3D rreq->len; + struct netfs_unbuffered_read_context rctx =3D { + .r.start =3D rreq->start, + .r.stop =3D rreq->start + rreq->len, + }; int ret =3D 0; =20 - bvecq_pos_attach(&rreq->dispatch_cursor, &rreq->load_cursor); + bvecq_pos_transfer(&rctx.dispatch_cursor, &rreq->load_cursor); =20 do { struct netfs_io_subrequest *subreq; @@ -39,67 +62,36 @@ static int netfs_dispatch_unbuffered_reads(struct netfs= _io_request *rreq) } =20 subreq->source =3D NETFS_DOWNLOAD_FROM_SERVER; - subreq->start =3D start; - subreq->len =3D size; - - __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); - - spin_lock(&rreq->lock); - list_add_tail(&subreq->rreq_link, &stream->subrequests); - if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { - stream->front =3D subreq; - if (!stream->active) { - stream->collected_to =3D stream->front->start; - /* Store list pointers before active flag */ - smp_store_release(&stream->active, true); - } - } - trace_netfs_sreq(subreq, netfs_sreq_trace_added); - spin_unlock(&rreq->lock); + subreq->start =3D rctx.r.start; + subreq->len =3D rctx.r.stop - rctx.r.start; =20 netfs_stat(&netfs_n_rh_download); - if (rreq->netfs_ops->prepare_read) { - ret =3D rreq->netfs_ops->prepare_read(subreq); - if (ret < 0) { - netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel); - break; - } - } =20 - bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor); - bvecq_pos_attach(&subreq->content, &rreq->dispatch_cursor); - subreq->len =3D bvecq_slice(&rreq->dispatch_cursor, - umin(size, stream->sreq_max_len), - stream->sreq_max_segs, - &subreq->nr_segs); - - size -=3D subreq->len; - start +=3D subreq->len; - rreq->submitted +=3D subreq->len; - if (size <=3D 0) { - 
smp_wmb(); /* Write lists before ALL_QUEUED. */ - set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + ret =3D rreq->netfs_ops->issue_read(subreq, &rctx.r); + if (ret !=3D 0 && ret !=3D -EIOCBQUEUED) { + subreq->error =3D ret; + trace_netfs_sreq(subreq, netfs_sreq_trace_cancel); + /* Not queued - release both refs. */ + netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel); + netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel); + break; } =20 - iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq, - subreq->content.slot, subreq->content.offset, subreq->len); - - rreq->netfs_ops->issue_read(subreq); - + ret =3D 0; if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags)) netfs_wait_for_paused_read(rreq); if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) break; cond_resched(); - } while (size > 0); + } while (rctx.r.start < rctx.r.stop); =20 - if (unlikely(size > 0)) { + if (unlikely(rctx.r.start < rctx.r.stop)) { smp_wmb(); /* Write lists before ALL_QUEUED. */ set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); netfs_wake_collector(rreq); } =20 - bvecq_pos_detach(&rreq->dispatch_cursor); + bvecq_pos_detach(&rctx.dispatch_cursor); return ret; } =20 diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c index bb224d837b78..cf7d2798c50e 100644 --- a/fs/netfs/direct_write.c +++ b/fs/netfs/direct_write.c @@ -9,6 +9,39 @@ #include #include "internal.h" =20 +struct netfs_unbuf_write_context { + struct netfs_write_context wctx; + struct bvecq_pos dispatch_cursor; /* Dispatch position in buffer */ +}; + +/* + * Prepare the buffer for an unbuffered/DIO write. 
+ */ +int netfs_prepare_unbuffered_write_buffer(struct netfs_io_subrequest *subr= eq, + struct netfs_write_context *wctx, + unsigned int max_segs) +{ + struct netfs_unbuf_write_context *uctx =3D + container_of(wctx, struct netfs_unbuf_write_context, wctx); + size_t len; + + bvecq_pos_attach(&subreq->dispatch_pos, &uctx->dispatch_cursor); + bvecq_pos_attach(&subreq->content, &uctx->dispatch_cursor); + len =3D bvecq_slice(&uctx->dispatch_cursor, subreq->len, max_segs, + &subreq->nr_segs); + + if (len < subreq->len) { + subreq->len =3D len; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + + // TODO: Wait here for completion of prev subreq + + wctx->issue_from +=3D subreq->len; + wctx->buffered -=3D subreq->len; + return 0; +} + /* * Perform the cleanup rituals after an unbuffered write is complete. */ @@ -64,7 +97,8 @@ static void netfs_unbuffered_write_done(struct netfs_io_r= equest *wreq) */ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq, struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq) + struct netfs_io_subrequest *subreq, + struct netfs_unbuf_write_context *uctx) { trace_netfs_collect_sreq(wreq, subreq); =20 @@ -74,9 +108,9 @@ static void netfs_unbuffered_write_collect(struct netfs_= io_request *wreq, =20 wreq->transferred +=3D subreq->transferred; if (subreq->transferred < subreq->len) { - bvecq_pos_detach(&wreq->dispatch_cursor); - bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos); - bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred); + bvecq_pos_detach(&uctx->dispatch_cursor); + bvecq_pos_transfer(&uctx->dispatch_cursor, &subreq->dispatch_pos); + bvecq_pos_advance(&uctx->dispatch_cursor, subreq->transferred); } =20 stream->collected_to =3D subreq->start + subreq->transferred; @@ -85,6 +119,7 @@ static void netfs_unbuffered_write_collect(struct netfs_= io_request *wreq, =20 trace_netfs_collect_stream(wreq, stream); trace_netfs_collect_state(wreq, wreq->collected_to, 0); + /* 
TODO: Progressively clean up wreq->direct_bq */ } =20 /* @@ -98,68 +133,68 @@ static void netfs_unbuffered_write_collect(struct netf= s_io_request *wreq, static int netfs_unbuffered_write(struct netfs_io_request *wreq) { struct netfs_io_subrequest *subreq =3D NULL; + struct netfs_unbuf_write_context uctx =3D { + .wctx.issue_from =3D wreq->start, + .wctx.buffered =3D wreq->len, + }; + struct netfs_write_context *wctx =3D &uctx.wctx; struct netfs_io_stream *stream =3D &wreq->io_streams[0]; int ret; =20 _enter("%llx", wreq->len); =20 - bvecq_pos_attach(&wreq->dispatch_cursor, &wreq->load_cursor); - bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + bvecq_pos_attach(&uctx.dispatch_cursor, &wreq->load_cursor); + bvecq_pos_attach(&wreq->collect_cursor, &uctx.dispatch_cursor); =20 if (wreq->origin =3D=3D NETFS_DIO_WRITE) inode_dio_begin(wreq->inode); =20 - stream->collected_to =3D wreq->start; - for (;;) { bool retry =3D false; =20 if (!subreq) { - netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred); - subreq =3D stream->construct; - stream->construct =3D NULL; - stream->front =3D NULL; + subreq =3D netfs_alloc_write_subreq(wreq, stream, wctx); + if (!subreq) + return -ENOMEM; } =20 - /* Check if (re-)preparation failed. */ - if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) { - netfs_write_subrequest_terminated(subreq, subreq->error); - wreq->error =3D subreq->error; + ret =3D stream->issue_write(subreq, wctx); + switch (ret) { + case 0: + /* Already completed synchronously. 
*/ break; - } - - bvecq_pos_attach(&subreq->dispatch_pos, &wreq->dispatch_cursor); - subreq->len =3D bvecq_slice(&wreq->dispatch_cursor, stream->sreq_max_len, - stream->sreq_max_segs, &subreq->nr_segs); - bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); - - iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE, - subreq->content.bvecq, subreq->content.slot, - subreq->content.offset, - subreq->len); - - if (!iov_iter_count(&subreq->io_iter)) + case -EIOCBQUEUED: + /* Async, need to wait. */ + ret =3D netfs_wait_for_in_progress_subreq(wreq, subreq); + if (ret < 0) { + if (ret =3D=3D -EAGAIN) { + retry =3D true; + break; + } + ret =3D subreq->error; + netfs_put_subrequest(subreq, netfs_sreq_trace_put_failed); + subreq =3D NULL; + goto failed; + } break; - - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - stream->issue_write(subreq); - - /* Async, need to wait. */ - netfs_wait_for_in_progress_stream(wreq, stream); - - if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + case -EAGAIN: + /* Need to retry. */ + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); retry =3D true; - } else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) { - ret =3D subreq->error; + break; + default: + /* Probably failed before dispatch.
*/ + subreq->error =3D ret; wreq->error =3D ret; - netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed); + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); + trace_netfs_sreq(subreq, netfs_sreq_trace_cancel); + netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel); subreq =3D NULL; - break; + goto failed; } - ret =3D 0; =20 if (!retry) { - netfs_unbuffered_write_collect(wreq, stream, subreq); + netfs_unbuffered_write_collect(wreq, stream, subreq, &uctx); subreq =3D NULL; if (wreq->transferred >=3D wreq->len) break; @@ -171,20 +206,21 @@ static int netfs_unbuffered_write(struct netfs_io_req= uest *wreq) continue; } =20 - /* We need to retry the last subrequest, so first reset the - * iterator, taking into account what, if anything, we managed - * to transfer. + /* We need to retry the last subrequest, so first wind back the + * buffer position. */ subreq->error =3D -EAGAIN; trace_netfs_sreq(subreq, netfs_sreq_trace_retry); =20 bvecq_pos_detach(&subreq->content); - bvecq_pos_detach(&wreq->dispatch_cursor); - bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos); + bvecq_pos_detach(&uctx.dispatch_cursor); + bvecq_pos_transfer(&uctx.dispatch_cursor, &subreq->dispatch_pos); =20 if (subreq->transferred > 0) { wreq->transferred +=3D subreq->transferred; - bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred); + wctx->issue_from -=3D subreq->len - subreq->transferred; + wctx->buffered +=3D subreq->len - subreq->transferred; + bvecq_pos_advance(&uctx.dispatch_cursor, subreq->transferred); } =20 if (stream->source =3D=3D NETFS_UPLOAD_TO_SERVER && @@ -192,24 +228,21 @@ static int netfs_unbuffered_write(struct netfs_io_req= uest *wreq) wreq->netfs_ops->retry_request(wreq, stream); =20 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); - __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); - subreq->start =3D wreq->start + wreq->transferred; - subreq->len =3D wreq->len - wreq->transferred; + 
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); + subreq->start =3D wctx->issue_from; + subreq->len =3D wctx->buffered; subreq->transferred =3D 0; subreq->retry_count +=3D 1; - stream->sreq_max_len =3D UINT_MAX; - stream->sreq_max_segs =3D INT_MAX; =20 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - stream->prepare_write(subreq); =20 __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); netfs_stat(&netfs_n_wh_retry_write_subreq); } =20 - bvecq_pos_detach(&wreq->dispatch_cursor); - bvecq_pos_detach(&wreq->load_cursor); +failed: + bvecq_pos_detach(&uctx.dispatch_cursor); netfs_unbuffered_write_done(wreq); _leave(" =3D %d", ret); return ret; @@ -263,9 +296,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb= *iocb, struct iov_iter * * we have to save the source buffer as the iterator is only * good until we return. In such a case, extract an iterator * to represent as much of the the output buffer as we can - * manage. Note that the extraction might not be able to - * allocate a sufficiently large bvec array and may shorten the - * request. + * manage. Note that the extraction may shorten the request. */ ssize_t n =3D netfs_extract_iter(iter, len, INT_MAX, iocb->ki_pos, &wreq->load_cursor.bvecq, 0); @@ -280,8 +311,6 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb= *iocb, struct iov_iter * wreq->load_cursor.bvecq->max_segs); } =20 - __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags); - /* Copy the data into the bounce buffer and encrypt it. */ // TODO =20 diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c index 37f05b4d3469..70b10ac23a27 100644 --- a/fs/netfs/fscache_io.c +++ b/fs/netfs/fscache_io.c @@ -239,10 +239,6 @@ void __fscache_write_to_cache(struct fscache_cookie *c= ookie, fscache_access_io_write) < 0) goto abandon_free; =20 - ret =3D cres->ops->prepare_write(cres, &start, &len, len, i_size, false); - if (ret < 0) - goto abandon_end; - /* TODO: Consider clearing page bits now for space the write isn't * covering. 
This is more complicated than it appears when THPs are * taken into account. @@ -252,8 +248,6 @@ void __fscache_write_to_cache(struct fscache_cookie *co= okie, fscache_write(cres, start, &iter, fscache_wreq_done, wreq); return; =20 -abandon_end: - return fscache_wreq_done(wreq, ret); abandon_free: kfree(wreq); abandon: diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 19d1e31b840b..3a7b7d6f1e89 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -19,9 +19,16 @@ =20 #define pr_fmt(fmt) "netfs: " fmt =20 +struct netfs_unbuffered_read_context { + struct netfs_read_context r; + struct bvecq_pos dispatch_cursor; /* Dispatch position in buffer */ +}; + /* * buffered_read.c */ +int netfs_read_query_cache(struct netfs_io_request *rreq, + struct fscache_occupancy *occ); void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error); int netfs_prefetch_for_write(struct file *file, struct folio *folio, size_t offset, size_t len); @@ -118,6 +125,20 @@ static inline bool bvecq_is_full(const struct bvecq *b= vecq) return bvecq->nr_segs >=3D bvecq->max_segs; } =20 +/* + * direct_read.c + */ +int netfs_prepare_unbuffered_read_buffer(struct netfs_io_subrequest *subre= q, + struct netfs_read_context *rctx, + unsigned int max_segs); + +/* + * direct_write.c + */ +int netfs_prepare_unbuffered_write_buffer(struct netfs_io_subrequest *subr= eq, + struct netfs_write_context *wctx, + unsigned int max_segs); + /* * main.c */ @@ -154,6 +175,8 @@ struct bvecq *netfs_buffer_make_space(struct netfs_io_r= equest *rreq, enum netfs_bvecq_trace trace); void netfs_wake_collector(struct netfs_io_request *rreq); void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq); +int netfs_wait_for_in_progress_subreq(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq); void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq, struct netfs_io_stream *stream); ssize_t netfs_wait_for_read(struct netfs_io_request *rreq); @@ -197,16 
+220,48 @@ void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
 /*
  * read_pgpriv2.c
  */
+#ifdef CONFIG_NETFS_PGPRIV2
 void netfs_pgpriv2_copy_to_cache(struct netfs_io_request *rreq, struct folio *folio);
 void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq);
 bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq);
+static inline bool netfs_using_pgpriv2(const struct netfs_io_request *rreq)
+{
+	return test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);
+}
+#else
+static inline void netfs_pgpriv2_copy_to_cache(struct netfs_io_request *rreq, struct folio *folio)
+{
+}
+static inline void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
+{
+}
+static inline bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq)
+{
+	return true;
+}
+static inline bool netfs_using_pgpriv2(const struct netfs_io_request *rreq)
+{
+	return false;
+}
+#endif
 
 /*
  * read_retry.c
  */
+int netfs_prepare_buffered_read_retry_buffer(struct netfs_io_subrequest *subreq,
+					     struct netfs_read_context *base_rctx,
+					     unsigned int max_segs);
+int netfs_reset_for_read_retry(struct netfs_io_subrequest *subreq);
 void netfs_retry_reads(struct netfs_io_request *rreq);
 void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq);
 
+/*
+ * read_single.c
+ */
+int netfs_prepare_read_single_buffer(struct netfs_io_subrequest *subreq,
+				     struct netfs_read_context *rctx,
+				     unsigned int max_segs);
+
 /*
  * stats.c
  */
@@ -282,16 +337,9 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 						struct file *file,
 						loff_t start,
 						enum netfs_io_origin origin);
-void netfs_prepare_write(struct netfs_io_request *wreq,
-			 struct netfs_io_stream *stream,
-			 loff_t start);
-void netfs_reissue_write(struct netfs_io_stream *stream,
-			 struct netfs_io_subrequest *subreq);
-void netfs_issue_write(struct netfs_io_request *wreq,
-		       struct netfs_io_stream *stream);
-size_t netfs_advance_write(struct netfs_io_request *wreq,
-			   struct netfs_io_stream *stream,
-			   loff_t start, size_t len, bool to_eof);
+struct netfs_io_subrequest *netfs_alloc_write_subreq(struct netfs_io_request *wreq,
+						     struct netfs_io_stream *stream,
+						     struct netfs_write_context *wctx);
 struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len);
 int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
 			       struct folio *folio, size_t copied, bool to_page_end,
@@ -302,6 +350,9 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
 /*
  * write_retry.c
  */
+int netfs_prepare_write_retry_buffer(struct netfs_io_subrequest *subreq,
+				     struct netfs_write_context *wctx,
+				     unsigned int max_segs);
 void netfs_retry_writes(struct netfs_io_request *wreq);
 
 /*
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index eda6e2ca02e7..78cf98068e97 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -103,16 +103,24 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
 		got = iov_iter_extract_pages(orig, &pages, orig_len - extracted,
 					     bq->max_segs - bq->nr_segs,
 					     extraction_flags, &offset);
+
 		if (got < 0) {
 			pr_err("Couldn't get user pages (rc=%zd)\n", got);
 			ret = got;
-			break;
+			goto out;
+		}
+
+		if (got == 0) {
+			pr_err("extract_pages gave nothing from %zu/%zu\n",
+			       extracted, orig_len);
+			ret = -EIO;
+			goto out;
 		}
 
 		if (got > orig_len - extracted) {
 			pr_err("get_pages rc=%zd more than %zu\n",
 			       got, orig_len - extracted);
-			break;
+			goto out;
 		}
 
 		extracted += got;
@@ -131,6 +139,7 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
 		} while (extracted < orig_len && !bvecq_is_full(bq));
 	} while (extracted < orig_len && max_segs > 0);
 
+out:
 	return extracted ?: ret;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_iter);
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index a19724389147..b96be273a1fe 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -232,6 +232,37 @@ void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq)
 	netfs_wake_collector(rreq);
 }
 
+/*
+ * Wait for a subrequest to come to completion.
+ */
+int netfs_wait_for_in_progress_subreq(struct netfs_io_request *rreq,
+				      struct netfs_io_subrequest *subreq)
+{
+	if (netfs_check_subreq_in_progress(subreq)) {
+		DEFINE_WAIT(myself);
+
+		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_quiesce);
+		for (;;) {
+			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+
+			if (!netfs_check_subreq_in_progress(subreq))
+				break;
+
+			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
+			schedule();
+		}
+
+		trace_netfs_rreq(rreq, netfs_rreq_trace_waited_quiesce);
+		finish_wait(&rreq->waitq, &myself);
+	}
+
+	if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
+		return -EAGAIN;
+	if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+		return subreq->error;
+	return 0;
+}
+
 /*
  * Wait for all outstanding I/O in a stream to quiesce.
  */
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index c92cdbad04de..dfa68addba27 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -46,8 +46,6 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	rreq->i_size	= i_size_read(inode);
 	rreq->debug_id	= atomic_inc_return(&debug_ids);
 	rreq->wsize	= INT_MAX;
-	rreq->io_streams[0].sreq_max_len = ULONG_MAX;
-	rreq->io_streams[0].sreq_max_segs = 0;
 	spin_lock_init(&rreq->lock);
 	INIT_LIST_HEAD(&rreq->io_streams[0].subrequests);
 	INIT_LIST_HEAD(&rreq->io_streams[1].subrequests);
@@ -134,7 +132,6 @@ static void netfs_deinit_request(struct netfs_io_request *rreq)
 	if (rreq->cache_resources.ops)
 		rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
 	bvecq_pos_detach(&rreq->load_cursor);
-	bvecq_pos_detach(&rreq->dispatch_cursor);
 	bvecq_pos_detach(&rreq->collect_cursor);
 
 	if (atomic_dec_and_test(&ictx->io_count))
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 20c80df8914f..b80cd8b3674c 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -36,6 +36,7 @@ static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
 
 	if (subreq->start + subreq->transferred >= subreq->rreq->i_size)
 		__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+	trace_netfs_rreq(subreq->rreq, netfs_rreq_trace_zero_unread);
 }
 
 /*
@@ -58,7 +59,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
 		flush_dcache_folio(folio);
 		folio_mark_uptodate(folio);
 
-		if (!test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) {
+		if (!netfs_using_pgpriv2(rreq)) {
 			finfo = netfs_folio_info(folio);
 			if (finfo) {
 				trace_netfs_folio(folio, netfs_folio_trace_filled_gaps);
@@ -263,8 +264,7 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
 				transferred = front->len;
 				trace_netfs_rreq(rreq, netfs_rreq_trace_set_abandon);
 			}
-			if (front->start + transferred >= rreq->cleaned_to + fsize ||
-			    test_bit(NETFS_SREQ_HIT_EOF, &front->flags))
+			if (front->start + transferred >= rreq->cleaned_to + fsize)
 				netfs_read_unlock_folios(rreq, &notes);
 		} else {
 			stream->collected_to = front->start + transferred;
@@ -381,31 +381,6 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
 	inode_dio_end(rreq->inode);
 }
 
-/*
- * Do processing after reading a monolithic single object.
- */
-static void netfs_rreq_assess_single(struct netfs_io_request *rreq)
-{
-	struct netfs_io_stream *stream = &rreq->io_streams[0];
-
-	if (!rreq->error && stream->source == NETFS_DOWNLOAD_FROM_SERVER &&
-	    fscache_resources_valid(&rreq->cache_resources)) {
-		trace_netfs_rreq(rreq, netfs_rreq_trace_dirty);
-		netfs_single_mark_inode_dirty(rreq->inode);
-	}
-
-	if (rreq->iocb) {
-		rreq->iocb->ki_pos += rreq->transferred;
-		if (rreq->iocb->ki_complete) {
-			trace_netfs_rreq(rreq, netfs_rreq_trace_ki_complete);
-			rreq->iocb->ki_complete(
-				rreq->iocb, rreq->error ? rreq->error : rreq->transferred);
-		}
-	}
-	if (rreq->netfs_ops->done)
-		rreq->netfs_ops->done(rreq);
-}
-
 /*
  * Perform the collection of subrequests and folios.
  *
@@ -441,7 +416,7 @@ bool netfs_read_collection(struct netfs_io_request *rreq)
 		netfs_rreq_assess_dio(rreq);
 		break;
 	case NETFS_READ_SINGLE:
-		netfs_rreq_assess_single(rreq);
+		WARN_ON_ONCE(1);
 		break;
 	default:
 		break;
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 68a5fece9012..2cdfc40f3ee2 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -9,19 +9,61 @@
 #include
 #include "internal.h"
 
-static void netfs_reissue_read(struct netfs_io_request *rreq,
-			       struct netfs_io_subrequest *subreq)
+struct netfs_read_retry_context {
+	struct netfs_read_context r;
+	struct bvecq_pos	dispatch_cursor; /* Dispatch position in buffer */
+};
+
+/*
+ * Prepare the I/O buffer on a buffered read subrequest for the filesystem to
+ * use as a bvec queue.
+ */
+int netfs_prepare_buffered_read_retry_buffer(struct netfs_io_subrequest *subreq,
+					     struct netfs_read_context *base_rctx,
+					     unsigned int max_segs)
 {
+	struct netfs_read_retry_context *rctx =
+		container_of(base_rctx, struct netfs_read_retry_context, r);
+	size_t len;
+
+	bvecq_pos_attach(&subreq->dispatch_pos, &rctx->dispatch_cursor);
 	bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
-	iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
-			    subreq->content.slot, subreq->content.offset, subreq->len);
-	iov_iter_advance(&subreq->io_iter, subreq->transferred);
+	len = bvecq_slice(&rctx->dispatch_cursor, subreq->len, max_segs,
+			  &subreq->nr_segs);
+	if (len < subreq->len) {
+		subreq->len = len;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+	}
+	rctx->r.start += subreq->len;
+	return 0;
+}
 
-	subreq->error = 0;
+/*
+ * Reset the state of the subrequest and discard any buffering so that we can
+ * retry (where this may include sending it to the server instead of the
+ * cache).
+ */
+int netfs_reset_for_read_retry(struct netfs_io_subrequest *subreq)
+{
+	trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+	if (subreq->retry_count > 3) {
+		trace_netfs_sreq(subreq, netfs_sreq_trace_too_many_retries);
+		return subreq->error;
+	}
+
+	subreq->retry_count++;
 	__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+	__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+	__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
 	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
-	netfs_stat(&netfs_n_rh_retry_read_subreq);
-	subreq->rreq->netfs_ops->issue_read(subreq);
+	bvecq_pos_detach(&subreq->content);
+	bvecq_pos_detach(&subreq->dispatch_pos);
+	subreq->error = 0;
+	subreq->transferred = 0;
+	netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+	netfs_stat(&netfs_n_rh_retry_read_subreq);
+	return 0;
 }
 
 /*
@@ -30,10 +72,13 @@ static void netfs_reissue_read(struct netfs_io_request *rreq,
  */
 static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 {
+	struct netfs_read_retry_context rctx = {
+		.r.retrying = true,
+	};
 	struct netfs_io_subrequest *subreq;
 	struct netfs_io_stream *stream = &rreq->io_streams[0];
-	struct bvecq_pos dispatch_cursor = {};
 	struct list_head *next;
+	int ret;
 
 	_enter("R=%x", rreq->debug_id);
 
@@ -43,47 +88,17 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 	if (rreq->netfs_ops->retry_request)
 		rreq->netfs_ops->retry_request(rreq, NULL);
 
-	/* If there's no renegotiation to do, just resend each retryable subreq
-	 * up to the first permanently failed one.
-	 */
-	if (!rreq->netfs_ops->prepare_read &&
-	    !rreq->cache_resources.ops) {
-		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
-			if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
-				break;
-			if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
-				__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
-				subreq->retry_count++;
-				netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-				netfs_reissue_read(rreq, subreq);
-			}
-		}
-		return;
-	}
-
 	/* Okay, we need to renegotiate all the download requests and flip any
 	 * failed cache reads over to being download requests and negotiate
-	 * those also.  All fully successful subreqs have been removed from the
-	 * list and any spare data from those has been donated.
-	 *
-	 * What we do is decant the list and rebuild it one subreq at a time so
-	 * that we don't end up with donations jumping over a gap we're busy
-	 * populating with smaller subrequests.  In the event that the subreq
-	 * we just launched finishes before we insert the next subreq, it'll
-	 * fill in rreq->prev_donated instead.
-	 *
-	 * Note: Alternatively, we could split the tail subrequest right before
-	 * we reissue it and fix up the donations under lock.
+	 * those also.
 	 */
 	next = stream->subrequests.next;
 
 	do {
 		struct netfs_io_subrequest *from, *to, *tmp;
-		unsigned long long start, len;
-		size_t part;
-		bool boundary = false, subreq_superfluous = false;
+		bool subreq_superfluous = false;
 
-		bvecq_pos_detach(&dispatch_cursor);
+		bvecq_pos_detach(&rctx.dispatch_cursor);
 
 		/* Go through the subreqs and find the next span of contiguous
 		 * buffer that we then rejig (cifs, for example, needs the
@@ -91,82 +106,65 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 		 */
 		from = list_entry(next, struct netfs_io_subrequest, rreq_link);
 		to = from;
-		start = from->start + from->transferred;
-		len   = from->len   - from->transferred;
+		rctx.r.start = from->start + from->transferred;
+		rctx.r.stop  = from->start + from->len;
 
 		_debug("from R=%08x[%x] s=%llx ctl=%zx/%zx",
 		       rreq->debug_id, from->debug_index,
 		       from->start, from->transferred, from->len);
 
-		if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
-		    !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
+		if (!test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
 			goto abandon;
 
 		list_for_each_continue(next, &stream->subrequests) {
 			subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
-			if (subreq->start + subreq->transferred != start + len ||
-			    test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) ||
+			if (subreq->start + subreq->transferred != rctx.r.stop ||
 			    !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
 				break;
 			to = subreq;
-			len += to->len;
+			rctx.r.stop += to->len;
 		}
 
-		_debug(" - range: %llx-%llx %llx", start, start + len - 1, len);
+		_debug(" - range: %llx-%llx %llx",
+		       rctx.r.start, rctx.r.stop, rctx.r.stop - rctx.r.start);
 
 		/* Determine the set of buffers we're going to use.  Each
-		 * subreq gets a subset of a single overall contiguous buffer.
+		 * subreq takes a subset of a single overall contiguous buffer.
 		 */
-		bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
-		bvecq_pos_advance(&dispatch_cursor, from->transferred);
+		bvecq_pos_transfer(&rctx.dispatch_cursor, &from->dispatch_pos);
+		bvecq_pos_advance(&rctx.dispatch_cursor, from->transferred);
 
 		/* Work through the sublist. */
 		subreq = from;
 		list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
-			if (!len) {
+			if (rctx.r.start >= rctx.r.stop) {
 				subreq_superfluous = true;
 				break;
 			}
 			subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
-			subreq->start = start - subreq->transferred;
-			subreq->len = len + subreq->transferred;
-			__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
-			__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
-			subreq->retry_count++;
-
-			bvecq_pos_detach(&subreq->dispatch_pos);
-			bvecq_pos_detach(&subreq->content);
-
-			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+			subreq->start = rctx.r.start;
+			subreq->len = rctx.r.stop - rctx.r.start;
 
-			/* Renegotiate max_len (rsize) */
-			stream->sreq_max_len = subreq->len;
-			stream->sreq_max_segs = INT_MAX;
-			if (rreq->netfs_ops->prepare_read &&
-			    rreq->netfs_ops->prepare_read(subreq) < 0) {
-				trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
+			ret = netfs_reset_for_read_retry(subreq);
+			if (ret < 0) {
 				__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+				rreq->error = ret;
 				goto abandon;
 			}
 
-			bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor);
-			part = bvecq_slice(&dispatch_cursor,
-					   umin(len, stream->sreq_max_len),
-					   stream->sreq_max_segs,
-					   &subreq->nr_segs);
-			subreq->len = subreq->transferred + part;
-
-			len -= part;
-			start += part;
-			if (!len) {
-				if (boundary)
-					__set_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
-			} else {
-				__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
+			netfs_stat(&netfs_n_rh_download);
+			ret = rreq->netfs_ops->issue_read(subreq, &rctx.r);
+			if (ret < 0 && ret != -EIOCBQUEUED) {
+				if (ret == -ENOMEM)
+					goto abandon;
+				subreq->error = ret;
+				if (ret != -EAGAIN) {
+					__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+					goto abandon_after;
+				}
+				__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+				netfs_read_subreq_terminated(subreq);
 			}
-
-			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-			netfs_reissue_read(rreq, subreq);
 			if (subreq == to) {
 				subreq_superfluous = false;
 				break;
@@ -176,7 +174,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 		/* If we managed to use fewer subreqs, we can discard the
 		 * excess; if we used the same number, then we're done.
 		 */
-		if (!len) {
+		if (rctx.r.start >= rctx.r.stop) {
 			if (!subreq_superfluous)
 				continue;
 			list_for_each_entry_safe_from(subreq, tmp,
@@ -200,8 +198,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 				goto abandon_after;
 			}
 			subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
-			subreq->start	= start;
-			subreq->len	= len;
+			subreq->start	= rctx.r.start;
+			subreq->len	= rctx.r.stop - rctx.r.start;
 			subreq->stream_nr = stream->stream_nr;
 			subreq->retry_count = 1;
 
@@ -213,37 +211,26 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 			to = list_next_entry(to, rreq_link);
 			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
 
-			stream->sreq_max_len = umin(len, rreq->rsize);
-			stream->sreq_max_segs = INT_MAX;
-			netfs_stat(&netfs_n_rh_download);
-			if (rreq->netfs_ops->prepare_read(subreq) < 0) {
-				trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
-				__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
-				goto abandon;
-			}
-
-			bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor);
-			part = bvecq_slice(&dispatch_cursor,
					   umin(len, stream->sreq_max_len),
-					   stream->sreq_max_segs,
-					   &subreq->nr_segs);
-			subreq->len = subreq->transferred + part;
-
-			len -= part;
-			start += part;
-			if (!len && boundary) {
-				__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
-				boundary = false;
+			ret = rreq->netfs_ops->issue_read(subreq, &rctx.r);
+			if (ret < 0 && ret != -EIOCBQUEUED) {
+				if (ret == -ENOMEM)
+					goto abandon;
+				subreq->error = ret;
+				if (ret != -EAGAIN) {
+					__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+					goto abandon_after;
+				}
+				__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+				netfs_read_subreq_terminated(subreq);
 			}
 
-			netfs_reissue_read(rreq, subreq);
-		} while (len);
+		} while (rctx.r.start < rctx.r.stop);
 
 	} while (!list_is_head(next, &stream->subrequests));
 
out:
-	bvecq_pos_detach(&dispatch_cursor);
+	bvecq_pos_detach(&rctx.dispatch_cursor);
 	return;
 
 	/* If we hit an error, fail all remaining incomplete subrequests */
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index 0f49d6aab874..5b3a0b07be82 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -16,6 +16,25 @@
 #include
 #include "internal.h"
 
+struct netfs_read_single_context {
+	struct netfs_read_context r;
+	struct fscache_occupancy cache;	/* List of cached extents */
+};
+
+int netfs_prepare_read_single_buffer(struct netfs_io_subrequest *subreq,
+				     struct netfs_read_context *base_rctx,
+				     unsigned int max_segs)
+{
+	struct netfs_read_single_context *rctx =
+		container_of(base_rctx, struct netfs_read_single_context, r);
+
+	bvecq_pos_attach(&subreq->dispatch_pos, &subreq->rreq->load_cursor);
+	bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
+
+	rctx->r.start += subreq->len;
+	return 0;
+}
+
 /**
  * netfs_single_mark_inode_dirty - Mark a single, monolithic object inode dirty
  * @inode: The inode to mark
@@ -58,97 +77,95 @@ static int netfs_single_begin_cache_read(struct netfs_io_request *rreq, struct n
 	return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx));
 }
 
-static void netfs_single_cache_prepare_read(struct netfs_io_request *rreq,
-					    struct netfs_io_subrequest *subreq)
-{
-	struct netfs_cache_resources *cres = &rreq->cache_resources;
-
-	if (!cres->ops) {
-		subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
-		return;
-	}
-	subreq->source = cres->ops->prepare_read(subreq, rreq->i_size);
-	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-
-}
-
-static void netfs_single_read_cache(struct netfs_io_request *rreq,
-				    struct netfs_io_subrequest *subreq)
-{
-	struct netfs_cache_resources *cres = &rreq->cache_resources;
-
-	_enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
-	netfs_stat(&netfs_n_rh_read);
-	cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_FAIL,
-			netfs_cache_read_terminated, subreq);
-}
-
 /*
  * Perform a read to a buffer from the cache or the server.  Only a single
  * subreq is permitted as the object must be fetched in a single transaction.
  */
 static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
 {
-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+	struct netfs_read_single_context rctx = {
+		.cache.query_from	= rreq->start,
+		.cache.query_to		= rreq->start + rreq->len,
+		.cache.cached_from[0]	= ULLONG_MAX,
+		.cache.cached_to[0]	= ULLONG_MAX,
+		.r.start		= rreq->start,
+		.r.stop			= rreq->start + rreq->len,
+	};
 	struct netfs_io_subrequest *subreq;
-	int ret = 0;
+	int ret;
+
+	ret = netfs_read_query_cache(rreq, &rctx.cache);
+	if (ret < 0)
+		return ret;
 
 	subreq = netfs_alloc_subrequest(rreq);
 	if (!subreq)
 		return -ENOMEM;
 
-	subreq->source	= NETFS_SOURCE_UNKNOWN;
-	subreq->start	= 0;
-	subreq->len	= rreq->len;
-
-	bvecq_pos_attach(&subreq->dispatch_pos, &rreq->dispatch_cursor);
-	bvecq_pos_attach(&subreq->content, &rreq->dispatch_cursor);
+	subreq->start	= 0;
+	subreq->len	= rreq->len;
 
-	iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
-			    subreq->content.slot, subreq->content.offset, subreq->len);
+	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
 
-	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+	/* Try to use the cache if the cache content matches the size of the
+	 * remote file.
+	 */
+	if (rctx.cache.nr_extents == 1 &&
+	    rctx.cache.cached_from[0] == 0 &&
+	    rctx.cache.cached_to[0] == rreq->len) {
+		struct netfs_cache_resources *cres = &rreq->cache_resources;
+
+		subreq->source = NETFS_READ_FROM_CACHE;
+		netfs_stat(&netfs_n_rh_read);
+		ret = cres->ops->issue_read(subreq, &rctx.r);
+		if (ret == -EIOCBQUEUED)
+			ret = netfs_wait_for_in_progress_subreq(rreq, subreq);
+		if (ret == -ENOMEM)
+			goto cancel;
+		if (ret == 0)
+			goto success;
+
+		/* Didn't manage to retrieve from the cache, so toss it to the
+		 * server instead.
+		 */
+		if (netfs_reset_for_read_retry(subreq) < 0)
+			goto cancel;
+	}
 
-	spin_lock(&rreq->lock);
-	list_add_tail(&subreq->rreq_link, &stream->subrequests);
-	trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-	stream->front = subreq;
-	/* Store list pointers before active flag */
-	smp_store_release(&stream->active, true);
-	spin_unlock(&rreq->lock);
+	__set_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
 
-	netfs_single_cache_prepare_read(rreq, subreq);
-	switch (subreq->source) {
-	case NETFS_DOWNLOAD_FROM_SERVER:
+	/* Try to send it to the cache. */
+	for (;;) {
+		subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
 		netfs_stat(&netfs_n_rh_download);
-		if (rreq->netfs_ops->prepare_read) {
-			ret = rreq->netfs_ops->prepare_read(subreq);
-			if (ret < 0)
-				goto cancel;
-		}
-
-		rreq->netfs_ops->issue_read(subreq);
-		rreq->submitted += subreq->len;
-		break;
-	case NETFS_READ_FROM_CACHE:
-		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
-		netfs_single_read_cache(rreq, subreq);
-		rreq->submitted += subreq->len;
-		ret = 0;
-		break;
-	default:
-		pr_warn("Unexpected single-read source %u\n", subreq->source);
-		WARN_ON_ONCE(true);
-		ret = -EIO;
-		break;
+		ret = rreq->netfs_ops->issue_read(subreq, &rctx.r);
+		if (ret == -EIOCBQUEUED)
+			ret = netfs_wait_for_in_progress_subreq(rreq, subreq);
+		if (ret == 0)
+			goto success;
+		if (ret == -ENOMEM)
+			goto cancel;
+		if (ret != -EAGAIN)
+			goto failed;
+		if (netfs_reset_for_read_retry(subreq) < 0)
+			goto cancel;
 	}
 
-	smp_wmb(); /* Write lists before ALL_QUEUED. */
-	set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
-	return ret;
+success:
+	rreq->transferred = subreq->transferred;
+	list_del_init(&subreq->rreq_link);
+	netfs_put_subrequest(subreq, netfs_sreq_trace_put_consumed);
+	return 0;
 cancel:
+	rreq->error = ret;
+	list_del_init(&subreq->rreq_link);
 	netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
 	return ret;
+failed:
+	rreq->error = ret;
+	list_del_init(&subreq->rreq_link);
+	netfs_put_subrequest(subreq, netfs_sreq_trace_put_failed);
+	return ret;
 }
 
 /**
@@ -179,7 +196,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
 	if (IS_ERR(rreq))
 		return PTR_ERR(rreq);
 
-	ret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->dispatch_cursor.bvecq, 0);
+	ret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->load_cursor.bvecq, 0);
 	if (ret < 0)
 		goto cleanup_free;
 
@@ -190,9 +207,29 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
 	netfs_stat(&netfs_n_rh_read_single);
 	trace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single);
 
-	netfs_single_dispatch_read(rreq);
+	ret = netfs_single_dispatch_read(rreq);
+
+	trace_netfs_rreq(rreq, netfs_rreq_trace_complete);
+	if (ret == 0) {
+		task_io_account_read(rreq->transferred);
+
+		if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags) &&
+		    fscache_resources_valid(&rreq->cache_resources)) {
+			trace_netfs_rreq(rreq, netfs_rreq_trace_dirty);
+			netfs_single_mark_inode_dirty(rreq->inode);
+		}
+		ret = rreq->transferred;
+	}
+
+	if (rreq->netfs_ops->done)
+		rreq->netfs_ops->done(rreq);
+
+	netfs_wake_rreq_flag(rreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
+	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+	netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
+
+	trace_netfs_rreq(rreq, netfs_rreq_trace_done);
 
-	ret = netfs_wait_for_read(rreq);
 	netfs_put_request(rreq, netfs_rreq_trace_put_return);
 	return ret;
 
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index ed11086346b0..741b43a77db8 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -28,8 +28,8 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
 	       rreq->origin, rreq->error);
 	pr_err("  st=%llx tsl=%zx/%llx/%llx\n",
 	       rreq->start, rreq->transferred, rreq->submitted, rreq->len);
-	pr_err("  cci=%llx/%llx/%llx\n",
-	       rreq->cleaned_to, rreq->collected_to, atomic64_read(&rreq->issued_to));
+	pr_err("  cci=%llx/%llx\n",
+	       rreq->cleaned_to, rreq->collected_to);
 	pr_err("  iw=%pSR\n", rreq->netfs_ops->issue_write);
 	for (int i = 0; i < NR_IO_STREAMS; i++) {
 		const struct netfs_io_subrequest *sreq;
@@ -38,8 +38,9 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
 		pr_err("  str[%x] s=%x e=%d acnf=%u,%u,%u,%u\n",
 		       s->stream_nr, s->source, s->error,
 		       s->avail, s->active, s->need_retry, s->failed);
-		pr_err("  str[%x] ct=%llx t=%zx\n",
-		       s->stream_nr, s->collected_to, s->transferred);
+		pr_err("  str[%x] it=%llx ct=%llx t=%zx\n",
+		       s->stream_nr, atomic64_read(&s->issued_to),
+		       s->collected_to, s->transferred);
 		list_for_each_entry(sreq, &s->subrequests, rreq_link) {
 			pr_err("  sreq[%x:%x] sc=%u s=%llx t=%zx/%zx r=%d f=%lx\n",
 			       sreq->stream_nr, sreq->debug_index, sreq->source,
@@ -56,7 +57,7 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
  */
 int netfs_folio_written_back(struct folio *folio)
 {
-	enum netfs_folio_trace why = netfs_folio_trace_clear;
+	enum netfs_folio_trace why = netfs_folio_trace_endwb;
 	struct netfs_inode *ictx = netfs_inode(folio->mapping->host);
 	struct netfs_folio *finfo;
 	struct netfs_group *group = NULL;
@@ -76,13 +77,13 @@ int netfs_folio_written_back(struct folio *folio)
 		group = finfo->netfs_group;
 		gcount++;
 		kfree(finfo);
-		why = netfs_folio_trace_clear_s;
+		why = netfs_folio_trace_endwb_s;
 		goto end_wb;
 	}
 
 	if ((group = netfs_folio_group(folio))) {
 		if (group == NETFS_FOLIO_COPY_TO_CACHE) {
-			why = netfs_folio_trace_clear_cc;
+			why = netfs_folio_trace_endwb_cc;
 			folio_detach_private(folio);
 			goto end_wb;
 		}
@@ -95,7 +96,7 @@ int netfs_folio_written_back(struct folio *folio)
 		if (!folio_test_dirty(folio)) {
 			folio_detach_private(folio);
 			gcount++;
-			why = netfs_folio_trace_clear_g;
+			why = netfs_folio_trace_endwb_g;
 		}
 	}
 
@@ -212,9 +213,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 	trace_netfs_rreq(wreq, netfs_rreq_trace_collect);
 
reassess_streams:
-	/* Order reading the issued_to point before reading the queue it refers to. */
-	issued_to = atomic64_read_acquire(&wreq->issued_to);
-	smp_rmb();
+	issued_to = ULLONG_MAX;
 	collected_to = ULLONG_MAX;
 	if (wreq->origin == NETFS_WRITEBACK ||
 	    wreq->origin == NETFS_WRITETHROUGH ||
@@ -229,13 +228,25 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 	 * to the tail whilst we're doing this.
 	 */
 	for (s = 0; s < NR_IO_STREAMS; s++) {
+		unsigned long long s_issued_to;
+
 		stream = &wreq->io_streams[s];
-		/* Read active flag before list pointers */
+		/* Read active flag before issued_to */
 		if (!smp_load_acquire(&stream->active))
 			continue;
 
-		front = stream->front;
-		while (front) {
+		for (;;) {
+			/* Order reading the issued_to point before reading the
+			 * queue it refers to.
+			 */
+			s_issued_to = atomic64_read_acquire(&stream->issued_to);
+			if (s_issued_to < issued_to)
+				issued_to = s_issued_to;
+
+			front = stream->front;
+			if (!front)
+				break;
+
 			trace_netfs_collect_sreq(wreq, front);
 			//_debug("sreq [%x] %llx %zx/%zx",
 			//       front->debug_index, front->start, front->transferred, front->len);
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 5d4d8dbfe877..f8d308ccb574 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -36,6 +36,46 @@
 #include
 #include "internal.h"
 
+#define NOTE_UPLOAD_AVAIL	0x001	/* Upload is available */
+#define NOTE_CACHE_AVAIL	0x002	/* Local cache is available */
+#define NOTE_CACHE_COPY		0x004	/* Copy folio to cache */
+#define NOTE_UPLOAD		0x008	/* Upload folio to server */
+#define NOTE_UPLOAD_STARTED	0x010	/* Upload started */
+#define NOTE_STREAMW		0x020	/* Folio is from a streaming write */
+#define NOTE_DISCONTIG_BEFORE	0x040	/* Folio discontiguous with the previous folio */
+#define NOTE_DISCONTIG_AFTER	0x080	/* Folio discontiguous with the next folio */
+#define NOTE_TO_EOF		0x100	/* Data in folio ends at EOF */
+#define NOTE_FLUSH_ANYWAY	0x200	/* Flush data, even if not hit estimated limit */
+
+#define NOTES__KEEP_MASK	(NOTE_UPLOAD_AVAIL | NOTE_CACHE_AVAIL | NOTE_UPLOAD_STARTED)
+
+struct netfs_wb_context {
+	struct netfs_write_context wctx;
+	struct netfs_write_estimate estimate;
+	struct bvecq_pos	dispatch_cursor; /* Folio queue anchor for issue_at */
+	bool			buffering;	/* T if has data attached, needs issuing */
+};
+
+struct netfs_wb_params {
+	unsigned long long	last_end;	/* End file pos of previous folio */
+	unsigned long long	folio_start;	/* File pos of folio */
+	unsigned int		folio_len;	/* Length of folio */
+	unsigned int		dirty_offset;	/* Offset of dirty region in folio */
+	unsigned int		dirty_len;	/* Length of dirty region in folio */
+	unsigned int		notes;		/* Notes on applicability */
+	struct bvecq_pos	dispatch_cursor; /* Folio queue anchor for issue_at */
+	struct netfs_wb_context w[2];
+};
+
+struct netfs_write_single {
+	struct netfs_write_context wctx;
+	struct bvecq_pos	dispatch_cursor; /* Buffer */
+};
+
+static int netfs_prepare_write_single_buffer(struct netfs_io_subrequest *subreq,
+					     struct netfs_write_context *wctx,
+					     unsigned int max_segs);
+
 /*
  * Kill all dirty folios in the event of an unrecoverable error, starting with
  * a locked folio we've already obtained from writeback_iter().
@@ -113,65 +153,49 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 
 	wreq->io_streams[0].stream_nr = 0;
 	wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER;
-	wreq->io_streams[0].prepare_write = ictx->ops->prepare_write;
+	wreq->io_streams[0].applicable = NOTE_UPLOAD;
+	wreq->io_streams[0].estimate_write = ictx->ops->estimate_write;
 	wreq->io_streams[0].issue_write = ictx->ops->issue_write;
 	wreq->io_streams[0].collected_to = start;
 	wreq->io_streams[0].transferred = 0;
 
 	wreq->io_streams[1].stream_nr = 1;
 	wreq->io_streams[1].source = NETFS_WRITE_TO_CACHE;
+	wreq->io_streams[1].applicable = NOTE_CACHE_COPY;
 	wreq->io_streams[1].collected_to = start;
 	wreq->io_streams[1].transferred = 0;
 	if (fscache_resources_valid(&wreq->cache_resources)) {
 		wreq->io_streams[1].avail = true;
 		wreq->io_streams[1].active = true;
-		wreq->io_streams[1].prepare_write = wreq->cache_resources.ops->prepare_write_subreq;
+		wreq->io_streams[1].estimate_write = wreq->cache_resources.ops->estimate_write;
 		wreq->io_streams[1].issue_write = wreq->cache_resources.ops->issue_write;
 	}
 
 	return wreq;
 }
 
-/**
- * netfs_prepare_write_failed - Note write preparation failed
- * @subreq: The subrequest to mark
- *
- * Mark a subrequest to note that preparation for write failed.
- */
-void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq)
-{
-	__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
-	trace_netfs_sreq(subreq, netfs_sreq_trace_prep_failed);
-}
-EXPORT_SYMBOL(netfs_prepare_write_failed);
-
 /*
- * Prepare a write subrequest.  We need to allocate a new subrequest
- * if we don't have one.
+ * Allocate and prepare a write subrequest.
  */
-void netfs_prepare_write(struct netfs_io_request *wreq,
-			 struct netfs_io_stream *stream,
-			 loff_t start)
+struct netfs_io_subrequest *netfs_alloc_write_subreq(struct netfs_io_request *wreq,
+						     struct netfs_io_stream *stream,
+						     struct netfs_write_context *wctx)
 {
 	struct netfs_io_subrequest *subreq;
 
 	subreq = netfs_alloc_subrequest(wreq);
 	subreq->source = stream->source;
-	subreq->start = start;
+	subreq->start = wctx->issue_from;
+	subreq->len = wctx->buffered;
 	subreq->stream_nr = stream->stream_nr;
 
-	bvecq_pos_attach(&subreq->dispatch_pos, &wreq->dispatch_cursor);
-
 	_enter("R=%x[%x]", wreq->debug_id, subreq->debug_index);
 
 	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
 
-	stream->sreq_max_len = UINT_MAX;
-	stream->sreq_max_segs = INT_MAX;
 	switch (stream->source) {
 	case NETFS_UPLOAD_TO_SERVER:
 		netfs_stat(&netfs_n_wh_upload);
-		stream->sreq_max_len = wreq->wsize;
 		break;
 	case NETFS_WRITE_TO_CACHE:
 		netfs_stat(&netfs_n_wh_write);
@@ -181,9 +205,6 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
 		break;
 	}
 
-	if (stream->prepare_write)
-		stream->prepare_write(subreq);
-
 	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
 
 	/* We add to the end of the list whilst the collector may be walking
@@ -194,83 +215,47 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
 	list_add_tail(&subreq->rreq_link, &stream->subrequests);
 	if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
 		stream->front = subreq;
-		if (!stream->active) {
-			stream->collected_to = stream->front->start;
-			/* Write list pointers before active flag */
-			smp_store_release(&stream->active, true);
-		}
+		if (stream->collected_to == 0)
+			stream->collected_to = subreq->start;
 	}
 
 	spin_unlock(&wreq->lock);
-
-	stream->construct = subreq;
+	return subreq;
 }
 
 /*
- * Set the I/O iterator for the filesystem/cache to use and dispatch the I/O
- * operation.  The operation may be asynchronous and should call
- * netfs_write_subrequest_terminated() when complete.
+ * Prepare the buffer for a buffered write.
  */
-static void netfs_do_issue_write(struct netfs_io_stream *stream,
-				 struct netfs_io_subrequest *subreq)
-{
-	struct netfs_io_request *wreq = subreq->rreq;
-
-	_enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
-
-	if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
-		return netfs_write_subrequest_terminated(subreq, subreq->error);
-
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
-	stream->issue_write(subreq);
-}
-
-void netfs_reissue_write(struct netfs_io_stream *stream,
-			 struct netfs_io_subrequest *subreq)
+static int netfs_prepare_buffered_write_buffer(struct netfs_io_subrequest *subreq,
+					       struct netfs_write_context *wctx,
+					       unsigned int max_segs)
 {
-	// TODO: Use encrypted buffer
-	bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
-	iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
-			    subreq->content.bvecq, subreq->content.slot,
-			    subreq->content.offset,
-			    subreq->len);
-	iov_iter_advance(&subreq->io_iter, subreq->transferred);
-
-	subreq->retry_count++;
-	subreq->error = 0;
-	__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
-	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
-	netfs_stat(&netfs_n_wh_retry_write_subreq);
-	netfs_do_issue_write(stream, subreq);
-}
+	struct netfs_wb_context *wbctx =
+		container_of(wctx, struct netfs_wb_context, wctx);
+	struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr];
+	ssize_t len;
 
-void netfs_issue_write(struct netfs_io_request *wreq,
-		       struct netfs_io_stream *stream)
-{
-	struct netfs_io_subrequest *subreq = stream->construct;
+	_enter("%zx,{,%u,%u},%u",
+	       subreq->len, wbctx->dispatch_cursor.slot, wbctx->dispatch_cursor.offset, max_segs);
 
-	if (!subreq)
-		return;
+	bvecq_pos_attach(&subreq->dispatch_pos, &wbctx->dispatch_cursor);
 
 	/* If we have a write to the cache, we need to round out the first and
 	 * last entries (only those as the data will be on virtually contiguous
 	 * folios) to cache DIO boundaries.
 	 */
 	if (subreq->source == NETFS_WRITE_TO_CACHE) {
-		struct bvecq_pos tmp_pos;
 		struct bio_vec *bv;
 		struct bvecq *bq;
 		size_t dio_size = PAGE_SIZE;
-		size_t disp, len;
-		int ret;
+		size_t disp, dlen;
 
-		bvecq_pos_attach(&tmp_pos, &subreq->dispatch_pos);
-		ret = bvecq_extract(&tmp_pos, subreq->len, INT_MAX, &subreq->content.bvecq);
-		bvecq_pos_detach(&tmp_pos);
-		if (ret < 0) {
-			netfs_write_subrequest_terminated(subreq, -ENOMEM);
-			return;
-		}
+		len = bvecq_extract(&wbctx->dispatch_cursor, subreq->len, max_segs,
+				    &subreq->content.bvecq);
+		if (len < 0)
+			return -ENOMEM;
+
+		_debug("extract %zx/%zx", len, subreq->len);
 
 		/* Round the first entry down. */
 		bq = subreq->content.bvecq;
@@ -288,96 +273,276 @@ void netfs_issue_write(struct netfs_io_request *wreq,
 		while (bq->next)
 			bq = bq->next;
 		bv = &bq->bv[bq->nr_segs - 1];
-		len = round_up(bv->bv_len, dio_size - 1);
-		if (len > bv->bv_len) {
-			subreq->len += len - bv->bv_len;
-			bv->bv_len = len;
+		dlen = round_up(bv->bv_len, dio_size - 1);
+		if (dlen > bv->bv_len) {
+			subreq->len += dlen - bv->bv_len;
+			bv->bv_len = dlen;
 		}
 	} else {
-		bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos);
+		bvecq_pos_attach(&subreq->content, &wbctx->dispatch_cursor);
+		len = bvecq_slice(&wbctx->dispatch_cursor, subreq->len, max_segs,
+				  &subreq->nr_segs);
+
+		if (len < subreq->len) {
+			subreq->len = len;
+			trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+		}
 	}
 
-	iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
-			    subreq->content.bvecq, subreq->content.slot,
-			    subreq->content.offset,
-			    subreq->len);
+	wctx->issue_from += len;
+	wctx->buffered -= len;
+	if (wctx->buffered == 0) {
+		wbctx->buffering = false;
+		bvecq_pos_detach(&wbctx->dispatch_cursor);
+	}
+	/* Order loading the queue before updating the issue_to point */
+	atomic64_set_release(&stream->issued_to, wctx->issue_from);
+	return 0;
+}
+
+/**
+ * netfs_prepare_write_buffer - Get the buffer for a subrequest
+ * @subreq: The subrequest to get the buffer for
+ * @wctx: Write context
+ * @max_segs: Maximum number of segments in buffer (or INT_MAX)
+ *
+ * Extract a slice of buffer from the stream and attach it to the subrequest as
+ * a bio_vec queue.  The maximum amount of data attached is set by
+ * @subreq->len, but this may be shortened if @max_segs would be exceeded.
+ */ +int netfs_prepare_write_buffer(struct netfs_io_subrequest *subreq, + struct netfs_write_context *wctx, + unsigned int max_segs) +{ + struct netfs_io_request *rreq =3D subreq->rreq; + + switch (rreq->origin) { + case NETFS_WRITEBACK: + case NETFS_WRITETHROUGH: + if (test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) + return netfs_prepare_write_retry_buffer(subreq, wctx, max_segs); + return netfs_prepare_buffered_write_buffer(subreq, wctx, max_segs); + + case NETFS_UNBUFFERED_WRITE: + case NETFS_DIO_WRITE: + return netfs_prepare_unbuffered_write_buffer(subreq, wctx, max_segs); + + case NETFS_WRITEBACK_SINGLE: + return netfs_prepare_write_single_buffer(subreq, wctx, max_segs); + + case NETFS_PGPRIV2_COPY_TO_CACHE: +#if 0 + ret =3D netfs_extract_iter(&wctx->unbuff_iter, subreq->len, + max_segs, &subreq->content, 0); + if (ret < 0) + return ret; + if (ret < subreq->len) { + subreq->len =3D ret; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + + wctx->issue_from +=3D subreq->len; + wctx->buffered -=3D subreq->len; + return 0; +#endif + default: + WARN_ON_ONCE(1); + return -EIO; + } +} +EXPORT_SYMBOL(netfs_prepare_write_buffer); + +/* + * Issue writes for a stream. 
+ */ +static int netfs_issue_writes(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + struct netfs_wb_params *params) +{ + for (;;) { + struct netfs_io_subrequest *subreq; + struct netfs_wb_context *wbctx =3D ¶ms->w[stream->stream_nr]; + struct netfs_write_context *wctx =3D &wbctx->wctx; + int ret; + + subreq =3D netfs_alloc_write_subreq(wreq, stream, wctx); + if (!subreq) + return -ENOMEM; + + ret =3D stream->issue_write(subreq, wctx); + if (ret < 0 && ret !=3D -EIOCBQUEUED) + return ret; + + if (wctx->buffered =3D=3D 0) { + if (stream->stream_nr =3D=3D 0) + params->notes &=3D ~NOTE_UPLOAD_STARTED; + return 0; + } =20 - stream->construct =3D NULL; - netfs_do_issue_write(stream, subreq); + if (!(params->notes & NOTE_FLUSH_ANYWAY)) { + wbctx->estimate.issue_at =3D ULLONG_MAX; + wbctx->estimate.max_segs =3D INT_MAX; + stream->estimate_write(wreq, stream, wctx, &wbctx->estimate); + if (wctx->issue_from + wctx->buffered < wbctx->estimate.issue_at && + wbctx->estimate.max_segs > 0) + return 0; + } + } } =20 /* - * Add data to the write subrequest, dispatching each as we fill it up or = if it - * is discontiguous with the previous. We only fill one part at a time so= that - * we can avoid overrunning the credits obtained (cifs) and try to paralle= lise - * content-crypto preparation with network writes. + * See which streams need writes issuing and issue them. 
*/ -size_t netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof) +static int netfs_issue_streams(struct netfs_io_request *wreq, + struct netfs_wb_params *params) { - struct netfs_io_subrequest *subreq =3D stream->construct; - size_t part; + _enter("%x", params->notes); + + for (int s =3D 0; s < NR_IO_STREAMS; s++) { + struct netfs_wb_context *wbctx =3D ¶ms->w[s]; + struct netfs_write_context *wctx =3D &wbctx->wctx; + struct netfs_io_stream *stream =3D &wreq->io_streams[s]; + unsigned long long dirty_start; + bool discontig_before =3D params->notes & NOTE_DISCONTIG_BEFORE; + int ret; + + /* If the current folio doesn't contribute to this stream, see + * if we need to flush it. + */ + if (!(params->notes & stream->applicable)) { + if (!wbctx->buffering) { + atomic64_set_release(&stream->issued_to, + params->folio_start + params->folio_len); + continue; + } + discontig_before =3D true; + } + + /* Issue writes if we meet a discontiguity before the current + * folio. Even if the filesystem can do sparse/vectored + * writes, we still generate a subreq per contiguous region + * rather than generating separate extent lists. + */ + if (wbctx->buffering && discontig_before) { + params->notes |=3D NOTE_FLUSH_ANYWAY; + ret =3D netfs_issue_writes(wreq, stream, params); + if (ret < 0) + return ret; + wbctx->buffering =3D false; + params->notes &=3D ~NOTE_FLUSH_ANYWAY; + } + + if (!(params->notes & stream->applicable)) { + atomic64_set_release(&stream->issued_to, + params->folio_start + params->folio_len); + continue; + } =20 - if (!stream->avail) { - _leave("no write"); - return len; + /* If we're not currently buffering on this stream, we need to + * get an estimate of when we need to issue a write. It might + * be within the starting folio. 
+ */ + dirty_start =3D params->folio_start + params->dirty_offset; + if (!wbctx->buffering) { + wbctx->buffering =3D true; + wctx->issue_from =3D dirty_start; + bvecq_pos_attach(&wbctx->dispatch_cursor, ¶ms->dispatch_cursor); + wbctx->estimate.issue_at =3D ULLONG_MAX; + wbctx->estimate.max_segs =3D INT_MAX; + stream->estimate_write(wreq, stream, wctx, &wbctx->estimate); + } + + wctx->buffered +=3D params->dirty_len; + wbctx->estimate.max_segs--; + + /* Poke the filesystem to issue writes when we hit the limit it + * set or if the data ends before the end of the page. + */ + if (params->notes & NOTE_DISCONTIG_AFTER) + params->notes |=3D NOTE_FLUSH_ANYWAY; + _debug("[%u] %llx + %x >=3D %llx, %u %x", + s, dirty_start, params->dirty_len, wbctx->estimate.issue_at, + wbctx->estimate.max_segs, params->notes); + if (dirty_start + params->dirty_len >=3D wbctx->estimate.issue_at || + wbctx->estimate.max_segs <=3D 0 || + (params->notes & NOTE_FLUSH_ANYWAY)) { + ret =3D netfs_issue_writes(wreq, stream, params); + if (ret < 0) + return ret; + } } =20 - _enter("R=3D%x[%x]", wreq->debug_id, subreq ? subreq->debug_index : 0); + return 0; +} + +/* + * End the issuing of writes, let the collector know we're done. + */ +static void netfs_end_issue_write(struct netfs_io_request *wreq, + struct netfs_wb_params *params) +{ + bool needs_poke =3D true; + + params->notes |=3D NOTE_FLUSH_ANYWAY; =20 - if (subreq && start !=3D subreq->start + subreq->len) { - netfs_issue_write(wreq, stream); - subreq =3D NULL; + for (int s =3D 0; s < NR_IO_STREAMS; s++) { + struct netfs_wb_context *wbctx =3D ¶ms->w[s]; + struct netfs_io_stream *stream =3D &wreq->io_streams[s]; + int ret; + + if (wbctx->buffering) { + ret =3D netfs_issue_writes(wreq, stream, params); + if (ret < 0) { + /* Leave the error somewhere the completion + * path can pick it up if there isn't already + * another error logged. 
+ */ + cmpxchg(&wreq->error, 0, ret); + } + wbctx->buffering =3D false; + } } =20 - if (!stream->construct) - netfs_prepare_write(wreq, stream, start); - subreq =3D stream->construct; + smp_wmb(); /* Write subreq lists before ALL_QUEUED. */ + set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); =20 - part =3D umin(stream->sreq_max_len - subreq->len, len); - _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, l= en); - subreq->len +=3D part; - subreq->nr_segs++; + for (int s =3D 0; s < NR_IO_STREAMS; s++) { + struct netfs_io_stream *stream =3D &wreq->io_streams[s]; =20 - if (subreq->len >=3D stream->sreq_max_len || - subreq->nr_segs >=3D stream->sreq_max_segs || - to_eof) { - netfs_issue_write(wreq, stream); - subreq =3D NULL; + if (!stream->active) + continue; + if (!list_empty(&stream->subrequests)) + needs_poke =3D false; } =20 - return part; + if (needs_poke) + netfs_wake_collector(wreq); } =20 /* - * Write some of a pending folio data back to the server. + * Queue a folio for writeback. */ -static int netfs_write_folio(struct netfs_io_request *wreq, - struct writeback_control *wbc, - struct folio *folio) +static int netfs_queue_wb_folio(struct netfs_io_request *wreq, + struct writeback_control *wbc, + struct folio *folio, + struct netfs_wb_params *params) { - struct netfs_io_stream *upload =3D &wreq->io_streams[0]; - struct netfs_io_stream *cache =3D &wreq->io_streams[1]; - struct netfs_io_stream *stream; struct netfs_group *fgroup; /* TODO: Use this with ceph */ struct netfs_folio *finfo; struct bvecq *queue =3D wreq->load_cursor.bvecq; unsigned int slot; size_t fsize =3D folio_size(folio), flen =3D fsize, foff =3D 0; loff_t fpos =3D folio_pos(folio), i_size; - bool to_eof =3D false, streamw =3D false; - bool debug =3D false; int ret; =20 - _enter(""); + _enter("%x", params->notes); =20 /* Institute a new bvec queue segment if the current one is full or if * we encounter a discontiguity. 
The discontiguity break is important * when it comes to bulk unlocking folios by file range. */ if (bvecq_is_full(queue) || - (fpos !=3D wreq->last_end && wreq->last_end > 0)) { + (fpos !=3D params->last_end && params->last_end > 0)) { ret =3D bvecq_buffer_make_space(&wreq->load_cursor); if (ret < 0) { folio_unlock(folio); @@ -386,10 +551,10 @@ static int netfs_write_folio(struct netfs_io_request = *wreq, =20 queue =3D wreq->load_cursor.bvecq; queue->fpos =3D fpos; - if (fpos !=3D wreq->last_end) + if (fpos !=3D params->last_end) queue->discontig =3D true; - bvecq_pos_move(&wreq->dispatch_cursor, queue); - wreq->dispatch_cursor.slot =3D 0; + bvecq_pos_move(¶ms->dispatch_cursor, queue); + params->dispatch_cursor.slot =3D 0; } =20 /* netfs_perform_write() may shift i_size around the page or from out @@ -417,23 +582,36 @@ static int netfs_write_folio(struct netfs_io_request = *wreq, if (finfo) { foff =3D finfo->dirty_offset; flen =3D foff + finfo->dirty_len; - streamw =3D true; + params->notes |=3D NOTE_STREAMW; + if (foff > 0) + params->notes |=3D NOTE_DISCONTIG_BEFORE; + if (flen < fsize) + params->notes |=3D NOTE_DISCONTIG_AFTER; } =20 + if (params->last_end && fpos !=3D params->last_end) + params->notes |=3D NOTE_DISCONTIG_BEFORE; + params->last_end =3D fpos + fsize; + if (wreq->origin =3D=3D NETFS_WRITETHROUGH) { - to_eof =3D false; if (flen > i_size - fpos) flen =3D i_size - fpos; + /* EOF may be changing. */ } else if (flen > i_size - fpos) { flen =3D i_size - fpos; - if (!streamw) + if (!(params->notes & NOTE_STREAMW)) folio_zero_segment(folio, flen, fsize); - to_eof =3D true; + params->notes |=3D NOTE_TO_EOF; } else if (flen =3D=3D i_size - fpos) { - to_eof =3D true; + params->notes |=3D NOTE_TO_EOF; } flen -=3D foff; =20 + params->folio_start =3D fpos; + params->folio_len =3D fsize; + params->dirty_offset =3D foff; + params->dirty_len =3D flen; + _debug("folio %zx %zx %zx", foff, flen, fsize); =20 /* Deal with discontinuities in the stream of dirty pages. 
These can @@ -453,22 +631,31 @@ static int netfs_write_folio(struct netfs_io_request = *wreq, * write-back group. */ if (fgroup =3D=3D NETFS_FOLIO_COPY_TO_CACHE) { - netfs_issue_write(wreq, upload); + if (!(params->notes & NOTE_CACHE_AVAIL)) { + trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); + goto cancel_folio; + } + params->notes |=3D NOTE_CACHE_COPY; + trace_netfs_folio(folio, netfs_folio_trace_store_copy); } else if (fgroup !=3D wreq->group) { /* We can't write this page to the server yet. */ kdebug("wrong group"); - folio_redirty_for_writepage(wbc, folio); - folio_unlock(folio); - netfs_issue_write(wreq, upload); - netfs_issue_write(wreq, cache); - return 0; + goto skip_folio; + } else if (!(params->notes & (NOTE_UPLOAD_AVAIL | NOTE_CACHE_AVAIL))) { + trace_netfs_folio(folio, netfs_folio_trace_cancel_store); + goto cancel_folio_discard; + } else { + if (params->notes & NOTE_UPLOAD_STARTED) { + params->notes |=3D NOTE_UPLOAD; + trace_netfs_folio(folio, netfs_folio_trace_store_plus); + } else { + params->notes |=3D NOTE_UPLOAD | NOTE_UPLOAD_STARTED; + trace_netfs_folio(folio, netfs_folio_trace_store); + } + if (params->notes & NOTE_CACHE_AVAIL) + params->notes |=3D NOTE_CACHE_COPY; } =20 - if (foff > 0) - netfs_issue_write(wreq, upload); - if (streamw) - netfs_issue_write(wreq, cache); - /* Flip the page to the writeback state and unlock. If we're called * from write-through, then the page has already been put into the wb * state. 
@@ -477,24 +664,6 @@ static int netfs_write_folio(struct netfs_io_request *= wreq, folio_start_writeback(folio); folio_unlock(folio); =20 - if (fgroup =3D=3D NETFS_FOLIO_COPY_TO_CACHE) { - if (!cache->avail) { - trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); - netfs_issue_write(wreq, upload); - netfs_folio_written_back(folio); - return 0; - } - trace_netfs_folio(folio, netfs_folio_trace_store_copy); - } else if (!upload->avail && !cache->avail) { - trace_netfs_folio(folio, netfs_folio_trace_cancel_store); - netfs_folio_written_back(folio); - return 0; - } else if (!upload->construct) { - trace_netfs_folio(folio, netfs_folio_trace_store); - } else { - trace_netfs_folio(folio, netfs_folio_trace_store_plus); - } - /* Attach the folio to the rolling buffer. */ slot =3D queue->nr_segs; bvec_set_folio(&queue->bv[slot], folio, flen, foff); @@ -502,103 +671,28 @@ static int netfs_write_folio(struct netfs_io_request= *wreq, wreq->load_cursor.slot =3D slot + 1; wreq->load_cursor.offset =3D 0; trace_netfs_bv_slot(queue, slot); + trace_netfs_wback(wreq, folio, params->notes); =20 - /* Move the submission point forward to allow for write-streaming data - * not starting at the front of the page. We don't do write-streaming - * with the cache as the cache requires DIO alignment. - * - * Also skip uploading for data that's been read and just needs copying - * to the cache. - */ - for (int s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - stream->submit_off =3D foff; - stream->submit_len =3D flen; - if (!stream->avail || - (stream->source =3D=3D NETFS_WRITE_TO_CACHE && streamw) || - (stream->source =3D=3D NETFS_UPLOAD_TO_SERVER && - fgroup =3D=3D NETFS_FOLIO_COPY_TO_CACHE)) { - stream->submit_off =3D UINT_MAX; - stream->submit_len =3D 0; - } - } - - /* Attach the folio to one or more subrequests. 
For a big folio, we - * could end up with thousands of subrequests if the wsize is small - - * but we might need to wait during the creation of subrequests for - * network resources (eg. SMB credits). - */ - for (;;) { - ssize_t part; - size_t lowest_off =3D ULONG_MAX; - int choose_s =3D -1; - - /* Always add to the lowest-submitted stream first. */ - for (int s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - if (stream->submit_len > 0 && - stream->submit_off < lowest_off) { - lowest_off =3D stream->submit_off; - choose_s =3D s; - } - } - - if (choose_s < 0) - break; - stream =3D &wreq->io_streams[choose_s]; - - /* Advance the cursor. */ - wreq->dispatch_cursor.offset =3D stream->submit_off; - - atomic64_set(&wreq->issued_to, fpos + stream->submit_off); - part =3D netfs_advance_write(wreq, stream, fpos + stream->submit_off, - stream->submit_len, to_eof); - stream->submit_off +=3D part; - if (part > stream->submit_len) - stream->submit_len =3D 0; - else - stream->submit_len -=3D part; - if (part > 0) - debug =3D true; - } - - bvecq_buffer_step(&wreq->dispatch_cursor); - /* Order loading the queue before updating the issue_to point */ - atomic64_set_release(&wreq->issued_to, fpos + fsize); - - if (!debug) - kdebug("R=3D%x: No submit", wreq->debug_id); - - if (foff + flen < fsize) - for (int s =3D 0; s < NR_IO_STREAMS; s++) - netfs_issue_write(wreq, &wreq->io_streams[s]); - - _leave(" =3D 0"); +out: + _leave(" =3D %x", params->notes); return 0; -} - -/* - * End the issuing of writes, letting the collector know we're done. - */ -static void netfs_end_issue_write(struct netfs_io_request *wreq) -{ - bool needs_poke =3D true; - - smp_wmb(); /* Write subreq lists before ALL_QUEUED. 
*/ - set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); - - for (int s =3D 0; s < NR_IO_STREAMS; s++) { - struct netfs_io_stream *stream =3D &wreq->io_streams[s]; =20 - if (!stream->active) - continue; - if (!list_empty(&stream->subrequests)) - needs_poke =3D false; - netfs_issue_write(wreq, stream); - } - - if (needs_poke) - netfs_wake_collector(wreq); +skip_folio: + ret =3D folio_redirty_for_writepage(wbc, folio); + folio_unlock(folio); + if (ret < 0) + return ret; + params->notes |=3D NOTE_DISCONTIG_BEFORE; + goto out; +cancel_folio_discard: + netfs_put_group(fgroup); +cancel_folio: + folio_detach_private(folio); + kfree(finfo); + folio_unlock(folio); + folio_cancel_dirty(folio); + params->notes |=3D NOTE_DISCONTIG_BEFORE; + goto out; } =20 /* @@ -609,6 +703,7 @@ int netfs_writepages(struct address_space *mapping, { struct netfs_inode *ictx =3D netfs_inode(mapping->host); struct netfs_io_request *wreq =3D NULL; + struct netfs_wb_params params =3D {}; struct folio *folio; int error =3D 0; =20 @@ -634,35 +729,47 @@ int netfs_writepages(struct address_space *mapping, =20 if (bvecq_buffer_init(&wreq->load_cursor, wreq->debug_id) < 0) goto nomem; - bvecq_pos_attach(&wreq->dispatch_cursor, &wreq->load_cursor); - bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + bvecq_pos_attach(¶ms.dispatch_cursor, &wreq->load_cursor); + bvecq_pos_attach(&wreq->collect_cursor, &wreq->load_cursor); =20 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); trace_netfs_write(wreq, netfs_write_trace_writeback); netfs_stat(&netfs_n_wh_writepages); =20 - do { - _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to)); + if (wreq->io_streams[1].avail) + params.notes |=3D NOTE_CACHE_AVAIL; =20 - /* It appears we don't have to handle cyclic writeback wrapping. 
*/ - WARN_ON_ONCE(wreq && folio_pos(folio) < atomic64_read(&wreq->issued_to)); + do { + _debug("wbiter %lx", folio->index); =20 if (netfs_folio_group(folio) !=3D NETFS_FOLIO_COPY_TO_CACHE && unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) { set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); wreq->netfs_ops->begin_writeback(wreq); + if (wreq->io_streams[0].avail) { + params.notes |=3D NOTE_UPLOAD_AVAIL; + /* Order setting the active flag after other fields. */ + smp_store_release(&wreq->io_streams[0].active, true); + } } =20 - error =3D netfs_write_folio(wreq, wbc, folio); + params.notes &=3D NOTES__KEEP_MASK; + error =3D netfs_queue_wb_folio(wreq, wbc, folio, ¶ms); + if (error < 0) + break; + error =3D netfs_issue_streams(wreq, ¶ms); if (error < 0) break; + } while ((folio =3D writeback_iter(mapping, wbc, folio, &error))); =20 - netfs_end_issue_write(wreq); + netfs_end_issue_write(wreq, ¶ms); =20 mutex_unlock(&ictx->wb_lock); bvecq_pos_detach(&wreq->load_cursor); - bvecq_pos_detach(&wreq->dispatch_cursor); + bvecq_pos_detach(¶ms.dispatch_cursor); + bvecq_pos_detach(¶ms.w[0].dispatch_cursor); + bvecq_pos_detach(¶ms.w[1].dispatch_cursor); netfs_wake_collector(wreq); =20 netfs_put_request(wreq, netfs_rreq_trace_put_return); @@ -713,6 +820,9 @@ int netfs_advance_writethrough(struct netfs_io_request = *wreq, struct writeback_c struct folio *folio, size_t copied, bool to_page_end, struct folio **writethrough_cache) { + struct netfs_wb_params params =3D {}; + int ret; + _enter("R=3D%x ws=3D%u cp=3D%zu tp=3D%u", wreq->debug_id, wreq->wsize, copied, to_page_end); =20 @@ -735,7 +845,10 @@ int netfs_advance_writethrough(struct netfs_io_request= *wreq, struct writeback_c return 0; =20 *writethrough_cache =3D NULL; - return netfs_write_folio(wreq, wbc, folio); + ret =3D netfs_queue_wb_folio(wreq, wbc, folio, ¶ms); + if (ret < 0) + return ret; + return netfs_issue_streams(wreq, ¶ms); } =20 /* @@ -744,15 +857,19 @@ int netfs_advance_writethrough(struct 
netfs_io_reques= t *wreq, struct writeback_c ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct write= back_control *wbc, struct folio *writethrough_cache) { + struct netfs_wb_params params =3D {}; struct netfs_inode *ictx =3D netfs_inode(wreq->inode); ssize_t ret; =20 _enter("R=3D%x", wreq->debug_id); =20 - if (writethrough_cache) - netfs_write_folio(wreq, wbc, writethrough_cache); + if (writethrough_cache) { + ret =3D netfs_queue_wb_folio(wreq, wbc, writethrough_cache, ¶ms); + if (ret =3D=3D 0) + ret =3D netfs_issue_streams(wreq, ¶ms); + } =20 - netfs_end_issue_write(wreq); + netfs_end_issue_write(wreq, ¶ms); =20 mutex_unlock(&ictx->wb_lock); =20 @@ -764,23 +881,46 @@ ssize_t netfs_end_writethrough(struct netfs_io_reques= t *wreq, struct writeback_c return ret; } =20 +/* + * Prepare a buffer for a single monolithic write. + */ +static int netfs_prepare_write_single_buffer(struct netfs_io_subrequest *s= ubreq, + struct netfs_write_context *wctx, + unsigned int max_segs) +{ + struct netfs_write_single *wsctx =3D + container_of(wctx, struct netfs_write_single, wctx); + + bvecq_pos_attach(&subreq->dispatch_pos, &wsctx->dispatch_cursor); + bvecq_pos_attach(&subreq->content, &subreq->dispatch_pos); + + wctx->issue_from +=3D subreq->len; + wctx->buffered -=3D subreq->len; + subreq->rreq->submitted +=3D subreq->len; + return 0; +} + /** * netfs_writeback_single - Write back a monolithic payload * @mapping: The mapping to write from * @wbc: Hints from the VM - * @iter: Data to write. + * @iter: Data to write + * @len: Amount of data to write * * Write a monolithic, non-pagecache object back to the server and/or * the cache. There's a maximum of one subrequest per stream. 
*/ int netfs_writeback_single(struct address_space *mapping, struct writeback_control *wbc, - struct iov_iter *iter) + struct iov_iter *iter, + size_t len) { struct netfs_io_request *wreq; struct netfs_inode *ictx =3D netfs_inode(mapping->host); int ret; =20 + _enter("%zx,%zx", iov_iter_count(iter), len); + if (!mutex_trylock(&ictx->wb_lock)) { if (wbc->sync_mode =3D=3D WB_SYNC_NONE) { netfs_stat(&netfs_n_wb_lock_skip); @@ -795,9 +935,10 @@ int netfs_writeback_single(struct address_space *mappi= ng, ret =3D PTR_ERR(wreq); goto couldnt_start; } - wreq->len =3D iov_iter_count(iter); =20 - ret =3D netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_c= ursor.bvecq, 0); + wreq->len =3D len; + + ret =3D netfs_extract_iter(iter, len, INT_MAX, 0, &wreq->load_cursor.bvec= q, 0); if (ret < 0) goto cleanup_free; if (ret < wreq->len) { @@ -805,29 +946,39 @@ int netfs_writeback_single(struct address_space *mapp= ing, goto cleanup_free; } =20 - bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor); + bvecq_pos_attach(&wreq->collect_cursor, &wreq->load_cursor); =20 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags); trace_netfs_write(wreq, netfs_write_trace_writeback_single); netfs_stat(&netfs_n_wh_writepages); =20 - if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags)) + if (test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags)) wreq->netfs_ops->begin_writeback(wreq); =20 for (int s =3D 0; s < NR_IO_STREAMS; s++) { + struct netfs_write_single wsctx =3D { + .wctx.issue_from =3D 0, + .wctx.buffered =3D iov_iter_count(iter), + }; struct netfs_io_subrequest *subreq; struct netfs_io_stream *stream =3D &wreq->io_streams[s]; =20 if (!stream->avail) continue; =20 - netfs_prepare_write(wreq, stream, 0); + subreq =3D netfs_alloc_write_subreq(wreq, stream, &wsctx.wctx); + if (!subreq) { + ret =3D -ENOMEM; + break; + } + + bvecq_pos_attach(&wsctx.dispatch_cursor, &wreq->load_cursor); =20 - subreq =3D stream->construct; - subreq->len =3D wreq->len; - 
stream->submit_len =3D subreq->len; + ret =3D stream->issue_write(subreq, &wsctx.wctx); + if (ret < 0 && ret !=3D -EIOCBQUEUED) + netfs_write_subrequest_terminated(subreq, ret); =20 - netfs_issue_write(wreq, stream); + bvecq_pos_detach(&wsctx.dispatch_cursor); } =20 wreq->submitted =3D wreq->len; diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c index b9352bf45c4b..e43c7d4787b2 100644 --- a/fs/netfs/write_retry.c +++ b/fs/netfs/write_retry.c @@ -11,13 +11,52 @@ #include #include "internal.h" =20 +struct netfs_write_retry_context { + struct netfs_write_context wctx; + struct bvecq_pos dispatch_cursor; /* Dispatch position in buffer */ +}; + +/* + * Prepare the write buffer for a retry. We can't necessarily reuse the w= rite + * buffer from the previous run of a subrequest because the filesystem is + * permitted to modify it (add headers/trailers, encrypt it). Further, the + * subrequest may now be a different size (e.g. cifs has to negotiate for + * maximum transfer size). Also, we can't look at *stream as that may sti= ll + * refer to the source material being broken up into original subrequests. + */ +int netfs_prepare_write_retry_buffer(struct netfs_io_subrequest *subreq, + struct netfs_write_context *wctx, + unsigned int max_segs) +{ + struct netfs_write_retry_context *yctx =3D + container_of(wctx, struct netfs_write_retry_context, wctx); + size_t len; + + bvecq_pos_attach(&subreq->dispatch_pos, &yctx->dispatch_cursor); + bvecq_pos_attach(&subreq->content, &yctx->dispatch_cursor); + len =3D bvecq_slice(&yctx->dispatch_cursor, subreq->len, max_segs, + &subreq->nr_segs); + + if (len < subreq->len) { + subreq->len =3D len; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + + wctx->issue_from +=3D len; + wctx->buffered -=3D len; + if (wctx->buffered =3D=3D 0) + bvecq_pos_detach(&yctx->dispatch_cursor); + return 0; +} + /* - * Perform retries on the streams that need it. + * Perform retries on the streams that need it. 
This only has to deal with + * buffered writes; unbuffered write retry is handled in direct_write.c. */ static void netfs_retry_write_stream(struct netfs_io_request *wreq, struct netfs_io_stream *stream) { - struct bvecq_pos dispatch_cursor =3D {}; + struct netfs_write_retry_context yctx =3D {}; struct list_head *next; =20 _enter("R=3D%x[%x:]", wreq->debug_id, stream->stream_nr); @@ -32,30 +71,15 @@ static void netfs_retry_write_stream(struct netfs_io_re= quest *wreq, if (unlikely(stream->failed)) return; =20 - /* If there's no renegotiation to do, just resend each failed subreq. */ - if (!stream->prepare_write) { - struct netfs_io_subrequest *subreq; - - list_for_each_entry(subreq, &stream->subrequests, rreq_link) { - if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) - break; - if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq); - } - } - return; - } - next =3D stream->subrequests.next; =20 do { + struct netfs_write_context *wctx =3D &yctx.wctx; struct netfs_io_subrequest *subreq =3D NULL, *from, *to, *tmp; unsigned long long start, len; - size_t part; - bool boundary =3D false; + int ret; =20 - bvecq_pos_detach(&dispatch_cursor); + bvecq_pos_detach(&yctx.dispatch_cursor); =20 /* Go through the stream and find the next span of contiguous * data that we then rejig (cifs, for example, needs the wsize @@ -73,7 +97,6 @@ static void netfs_retry_write_stream(struct netfs_io_requ= est *wreq, list_for_each_continue(next, &stream->subrequests) { subreq =3D list_entry(next, struct netfs_io_subrequest, rreq_link); if (subreq->start + subreq->transferred !=3D start + len || - test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) || !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) break; to =3D subreq; @@ -83,43 +106,40 @@ static void netfs_retry_write_stream(struct netfs_io_r= equest *wreq, /* Determine the set of buffers we're going to use. 
Each
 * subreq gets a subset of a single overall contiguous buffer.
 */
-	bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
-	bvecq_pos_advance(&dispatch_cursor, from->transferred);
+	bvecq_pos_transfer(&yctx.dispatch_cursor, &from->dispatch_pos);
+	bvecq_pos_advance(&yctx.dispatch_cursor, from->transferred);
+	wctx->issue_from = start;
+	wctx->buffered = len;
 
 	/* Work through the sublist. */
 	subreq = from;
 	list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
-		if (!len)
+		if (!wctx->buffered)
 			break;
 
-		subreq->start = start;
-		subreq->len = len;
-		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
-		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-
 		bvecq_pos_detach(&subreq->dispatch_pos);
 		bvecq_pos_detach(&subreq->content);
+		subreq->content.bvecq = NULL;
+		subreq->content.slot = 0;
+		subreq->content.offset = 0;
 
-		/* Renegotiate max_len (wsize) */
-		stream->sreq_max_len = len;
-		stream->sreq_max_segs = INT_MAX;
-		stream->prepare_write(subreq);
-
-		bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor);
-		part = bvecq_slice(&dispatch_cursor,
-				   umin(len, stream->sreq_max_len),
-				   stream->sreq_max_segs,
-				   &subreq->nr_segs);
-		subreq->len = subreq->transferred + part;
-		subreq->transferred = 0;
-		len -= part;
-		start += part;
-		if (len && subreq == to &&
-		    __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags))
-			boundary = true;
-
+		__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+		__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+		subreq->start = wctx->issue_from;
+		subreq->len = wctx->buffered;
+		subreq->transferred = 0;
+		subreq->retry_count += 1;
+		subreq->error = 0;
+
+		netfs_stat(&netfs_n_wh_retry_write_subreq);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
 		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-		netfs_reissue_write(stream, subreq);
+		ret = stream->issue_write(subreq, wctx);
+		if (ret < 0 && ret != -EIOCBQUEUED)
+			netfs_write_subrequest_terminated(subreq, ret);
+
 		if (subreq == to)
 			break;
 	}
@@ -160,12 +180,9 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
 			to = list_next_entry(to, rreq_link);
 			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
 
-			stream->sreq_max_len = len;
-			stream->sreq_max_segs = INT_MAX;
 			switch (stream->source) {
 			case NETFS_UPLOAD_TO_SERVER:
 				netfs_stat(&netfs_n_wh_upload);
-				stream->sreq_max_len = umin(len, wreq->wsize);
 				break;
 			case NETFS_WRITE_TO_CACHE:
 				netfs_stat(&netfs_n_wh_write);
@@ -174,32 +191,16 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
 				WARN_ON_ONCE(1);
 			}
 
-			stream->prepare_write(subreq);
-
-			bvecq_pos_attach(&subreq->dispatch_pos, &dispatch_cursor);
-			part = bvecq_slice(&dispatch_cursor,
-					   umin(len, stream->sreq_max_len),
-					   stream->sreq_max_segs,
-					   &subreq->nr_segs);
-			subreq->len = subreq->transferred + part;
-
-			len -= part;
-			start += part;
-			if (!len && boundary) {
-				__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
-				boundary = false;
-			}
-
-			netfs_reissue_write(stream, subreq);
-			if (!len)
-				break;
+			ret = stream->issue_write(subreq, wctx);
+			if (ret < 0 && ret != -EIOCBQUEUED)
+				netfs_write_subrequest_terminated(subreq, ret);
 
 		} while (len);
 
 	} while (!list_is_head(next, &stream->subrequests));
 
out:
-	bvecq_pos_detach(&dispatch_cursor);
+	bvecq_pos_detach(&yctx.dispatch_cursor);
}
 
/*
@@ -237,4 +238,6 @@ void netfs_retry_writes(struct netfs_io_request *wreq)
 			netfs_retry_write_stream(wreq, stream);
 		}
 	}
+
+	pr_notice("Retrying\n");
}
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index 9b7fdad4a920..1f42fb5dc443 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -296,7 +296,8 @@ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sre
 	return netfs;
}
 
-static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
+static int nfs_netfs_issue_read(struct netfs_io_subrequest *sreq,
+				struct netfs_read_context *rctx)
{
 	struct nfs_netfs_io_data	*netfs;
 	struct nfs_pageio_descriptor	pgio;
@@ -314,10 +315,11 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
 			     &nfs_async_read_completion_ops);
 
 	netfs = nfs_netfs_alloc(sreq);
-	if (!netfs) {
-		sreq->error = -ENOMEM;
-		return netfs_read_subreq_terminated(sreq);
-	}
+	if (!netfs)
+		return -ENOMEM;
+
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(sreq, rctx);
 
 	pgio.pg_netfs = netfs; /* used in completion */
 
@@ -332,6 +334,7 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
out:
 	nfs_pageio_complete_read(&pgio);
 	nfs_netfs_put(netfs);
+	return -EIOCBQUEUED;
}
 
void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr)
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 3990a9012264..c09232ceba35 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1466,8 +1466,7 @@ cifs_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 	struct netfs_inode *ictx = netfs_inode(rdata->rreq->inode);
 	struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
 	struct smb_rqst rqst = { .rq_iov = rdata->iov,
-				 .rq_nvec = 1,
-				 .rq_iter = rdata->subreq.io_iter };
+				 .rq_nvec = 1};
 	struct cifs_credits credits = {
 		.value = 1,
 		.instance = 0,
@@ -1481,6 +1480,11 @@ cifs_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 		 __func__, mid->mid, mid->mid_state, rdata->result,
 		 rdata->subreq.len);
 
+	if (rdata->got_bytes)
+		iov_iter_bvec_queue(&rqst.rq_iter, ITER_DEST,
+				    rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+				    rdata->subreq.content.offset, rdata->subreq.len);
+
 	switch (mid->mid_state) {
 	case MID_RESPONSE_RECEIVED:
 		/* result already set, check signature */
@@ -2002,7 +2006,10 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
 
 	rqst.rq_iov = iov;
 	rqst.rq_nvec = 1;
-	rqst.rq_iter = wdata->subreq.io_iter;
+
+	iov_iter_bvec_queue(&rqst.rq_iter, ITER_DEST,
+			    wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+			    wdata->subreq.content.offset, wdata->subreq.len);
 
 	cifs_dbg(FYI, "async write at %llu %zu bytes\n",
 		 wdata->subreq.start, wdata->subreq.len);
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 18f31d4eb98d..aca299520968 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -44,18 +44,36 @@ static int cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush);
 * Prepare a subrequest to upload to the server.  We need to allocate credits
 * so that we know the maximum amount of data that we can include in it.
 */
-static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
+static int cifs_estimate_write(struct netfs_io_request *wreq,
+			       struct netfs_io_stream *stream,
+			       const struct netfs_write_context *wctx,
+			       struct netfs_write_estimate *estimate)
+{
+	struct cifs_sb_info *cifs_sb = CIFS_SB(wreq->inode->i_sb);
+
+	estimate->issue_at = wctx->issue_from + cifs_sb->ctx->wsize;
+	return 0;
+}
+
+/*
+ * Issue a subrequest to upload to the server.
+ */
+static int cifs_issue_write(struct netfs_io_subrequest *subreq,
+			    struct netfs_write_context *wctx)
{
 	struct cifs_io_subrequest *wdata =
 		container_of(subreq, struct cifs_io_subrequest, subreq);
 	struct cifs_io_request *req = wdata->req;
-	struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr];
 	struct TCP_Server_Info *server;
 	struct cifsFileInfo *open_file = req->cfile;
-	struct cifs_sb_info *cifs_sb = CIFS_SB(wdata->rreq->inode->i_sb);
-	size_t wsize = req->rreq.wsize;
+	struct cifs_sb_info *cifs_sb = CIFS_SB(subreq->rreq->inode->i_sb);
+	unsigned int max_segs = INT_MAX;
+	size_t len;
 	int rc;
 
+	if (cifs_forced_shutdown(cifs_sb))
+		return smb_EIO(smb_eio_trace_forced_shutdown);
+
 	if (!wdata->have_xid) {
 		wdata->xid = get_xid();
 		wdata->have_xid = true;
@@ -74,18 +92,16 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
 		if (rc < 0) {
 			if (rc == -EAGAIN)
 				goto retry;
-			subreq->error = rc;
-			return netfs_prepare_write_failed(subreq);
+			return rc;
 		}
 	}
 
-	rc = server->ops->wait_mtu_credits(server, wsize, &stream->sreq_max_len,
-					   &wdata->credits);
-	if (rc < 0) {
-		subreq->error = rc;
-		return netfs_prepare_write_failed(subreq);
-	}
+	len = umin(subreq->len, cifs_sb->ctx->wsize);
+	rc = server->ops->wait_mtu_credits(server, len, &len, &wdata->credits);
+	if (rc < 0)
+		return rc;
 
+	subreq->len = len;
 	wdata->credits.rreq_debug_id = subreq->rreq->debug_id;
 	wdata->credits.rreq_debug_index = subreq->debug_index;
 	wdata->credits.in_flight_check = 1;
@@ -101,39 +117,29 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
 		const struct smbdirect_socket_parameters *sp =
 			smbd_get_parameters(server->smbd_conn);
 
-		stream->sreq_max_segs = sp->max_frmr_depth;
+		max_segs = sp->max_frmr_depth;
 	}
#endif
-}
-
-/*
- * Issue a subrequest to upload to the server.
- */
-static void cifs_issue_write(struct netfs_io_subrequest *subreq)
-{
-	struct cifs_io_subrequest *wdata =
-		container_of(subreq, struct cifs_io_subrequest, subreq);
-	struct cifs_sb_info *sbi = CIFS_SB(subreq->rreq->inode->i_sb);
-	int rc;
 
-	if (cifs_forced_shutdown(sbi)) {
-		rc = smb_EIO(smb_eio_trace_forced_shutdown);
-		goto fail;
+	rc = netfs_prepare_write_buffer(subreq, wctx, max_segs);
+	if (rc < 0) {
+		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+		return rc;
 	}
 
-	rc = adjust_credits(wdata->server, wdata, cifs_trace_rw_credits_issue_write_adjust);
+	rc = adjust_credits(server, wdata, cifs_trace_rw_credits_issue_write_adjust);
 	if (rc)
-		goto fail;
+		goto fail_with_credits;
 
 	rc = -EAGAIN;
 	if (wdata->req->cfile->invalidHandle)
-		goto fail;
+		goto fail_with_credits;
 
 	wdata->server->ops->async_writev(wdata);
out:
-	return;
+	return -EIOCBQUEUED;
 
-fail:
+fail_with_credits:
 	if (rc == -EAGAIN)
 		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
 	else
@@ -149,17 +155,26 @@ static void cifs_netfs_invalidate_cache(struct netfs_io_request *wreq)
 }
 
/*
- * Negotiate the size of a read operation on behalf of the netfs library.
+ * Issue a read operation on behalf of the netfs helper functions.  We're asked
+ * to make a read of a certain size at a point in the file.  We are permitted
+ * to only read a portion of that, but as long as we read something, the netfs
+ * helper will call us again so that we can issue another read.
 */
-static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+static int cifs_issue_read(struct netfs_io_subrequest *subreq,
+			   struct netfs_read_context *rctx)
{
 	struct netfs_io_request *rreq = subreq->rreq;
 	struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
 	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
-	struct TCP_Server_Info *server;
+	struct TCP_Server_Info *server = rdata->server;
 	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
-	size_t size;
-	int rc = 0;
+	unsigned int max_segs = INT_MAX;
+	size_t len;
+	int rc;
+
+	cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+		 __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
+		 subreq->transferred, subreq->len);
 
 	if (!rdata->have_xid) {
 		rdata->xid = get_xid();
@@ -173,17 +188,15 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
 		cifs_negotiate_rsize(server, cifs_sb->ctx,
 				     tlink_tcon(req->cfile->tlink));
 
-	rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
-					   &size, &rdata->credits);
+	len = umin(subreq->len, cifs_sb->ctx->rsize);
+	rc = server->ops->wait_mtu_credits(server, len, &len, &rdata->credits);
 	if (rc)
 		return rc;
 
-	rreq->io_streams[0].sreq_max_len = size;
-
-	rdata->credits.in_flight_check = 1;
+	subreq->len = len;
 	rdata->credits.rreq_debug_id = rreq->debug_id;
 	rdata->credits.rreq_debug_index = subreq->debug_index;
-
+	rdata->credits.in_flight_check = 1;
 	trace_smb3_rw_credits(rdata->rreq->debug_id,
 			      rdata->subreq.debug_index,
 			      rdata->credits.value,
@@ -195,33 +208,17 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
 		const struct smbdirect_socket_parameters *sp =
 			smbd_get_parameters(server->smbd_conn);
 
-		rreq->io_streams[0].sreq_max_segs = sp->max_frmr_depth;
+		max_segs = sp->max_frmr_depth;
 	}
#endif
-	return 0;
-}
-
-/*
- * Issue a read operation on behalf of the netfs helper functions.  We're asked
- * to make a read of a certain size at a point in the file.  We are permitted
- * to only read a portion of that, but as long as we read something, the netfs
- * helper will call us again so that we can issue another read.
- */
-static void cifs_issue_read(struct netfs_io_subrequest *subreq)
-{
-	struct netfs_io_request *rreq = subreq->rreq;
-	struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
-	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
-	struct TCP_Server_Info *server = rdata->server;
-	int rc = 0;
 
-	cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
-		 __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
-		 subreq->transferred, subreq->len);
+	rc = netfs_prepare_read_buffer(subreq, rctx, max_segs);
+	if (rc < 0)
+		goto fail_with_credits;
 
 	rc = adjust_credits(server, rdata, cifs_trace_rw_credits_issue_read_adjust);
 	if (rc)
-		goto failed;
+		goto fail_with_credits;
 
 	if (req->cfile->invalidHandle) {
 		do {
@@ -235,15 +232,24 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
 	    subreq->rreq->origin != NETFS_DIO_READ)
 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
 
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+	/* After this point, we're not allowed to return an error. */
+	netfs_mark_read_submission(subreq, rctx);
+
 	rc = rdata->server->ops->async_readv(rdata);
-	if (rc)
-		goto failed;
-	return;
+	if (rc) {
+		subreq->error = rc;
+		netfs_read_subreq_terminated(subreq);
+	}
+	return -EIOCBQUEUED;
 
+fail_with_credits:
+	if (rc == -EAGAIN)
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+	else
+		trace_netfs_sreq(subreq, netfs_sreq_trace_fail);
+	add_credits_and_wake_if(rdata->server, &rdata->credits, 0);
failed:
-	subreq->error = rc;
-	netfs_read_subreq_terminated(subreq);
+	return rc;
}
 
/*
@@ -353,11 +359,10 @@ const struct netfs_request_ops cifs_req_ops = {
 	.init_request		= cifs_init_request,
 	.free_request		= cifs_free_request,
 	.free_subrequest	= cifs_free_subrequest,
-	.prepare_read		= cifs_prepare_read,
 	.issue_read		= cifs_issue_read,
 	.done			= cifs_rreq_done,
 	.begin_writeback	= cifs_begin_writeback,
-	.prepare_write		= cifs_prepare_write,
+	.estimate_write		= cifs_estimate_write,
 	.issue_write		= cifs_issue_write,
 	.invalidate_cache	= cifs_netfs_invalidate_cache,
};
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 7223a8deaa58..c4aa11a13cef 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4705,6 +4705,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 	unsigned int cur_page_idx;
 	unsigned int pad_len;
 	struct cifs_io_subrequest *rdata = mid->callback_data;
+	struct iov_iter iter;
 	struct smb2_hdr *shdr = (struct smb2_hdr *)buf;
 	size_t copied;
 	bool use_rdma_mr = false;
@@ -4777,6 +4778,10 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 
 	pad_len = data_offset - server->vals->read_rsp_size;
 
+	iov_iter_bvec_queue(&iter, ITER_DEST,
+			    rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+			    rdata->subreq.content.offset, rdata->subreq.len);
+
 	if (buf_len <= data_offset) {
 		/* read response payload is in pages */
 		cur_page_idx = pad_len / PAGE_SIZE;
@@ -4806,7 +4811,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 
 		/* Copy the data to the output I/O iterator. */
 		rdata->result = cifs_copy_bvecq_to_iter(buffer, buffer_len,
-							cur_off, &rdata->subreq.io_iter);
+							cur_off, &iter);
 		if (rdata->result != 0) {
 			if (is_offloaded)
 				mid->mid_state = MID_RESPONSE_MALFORMED;
@@ -4819,7 +4824,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 	} else if (buf_len >= data_offset + data_len) {
 		/* read response payload is in buf */
 		WARN_ONCE(buffer, "read data can be either in buf or in buffer");
-		copied = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter);
+		copied = copy_to_iter(buf + data_offset, data_len, &iter);
 		if (copied == 0)
 			return smb_EIO2(smb_eio_trace_rx_copy_to_iter, copied, data_len);
 		rdata->got_bytes = copied;
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index ef655acf673d..71961776c4ab 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4562,9 +4562,13 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
 	 */
 	if (rdata && smb3_use_rdma_offload(io_parms)) {
 		struct smbdirect_buffer_descriptor_v1 *v1;
+		struct iov_iter iter;
 		bool need_invalidate = server->dialect == SMB30_PROT_ID;
 
-		rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter,
+		iov_iter_bvec_queue(&iter, ITER_DEST,
+				    rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+				    rdata->subreq.content.offset, rdata->subreq.len);
+		rdata->mr = smbd_register_mr(server->smbd_conn, &iter,
 					     true, need_invalidate);
 		if (!rdata->mr)
 			return -EAGAIN;
@@ -4629,9 +4633,10 @@ smb2_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 	unsigned int rreq_debug_id = rdata->rreq->debug_id;
 	unsigned int subreq_debug_index = rdata->subreq.debug_index;
 
-	if (rdata->got_bytes) {
-		rqst.rq_iter = rdata->subreq.io_iter;
-	}
+	if (rdata->got_bytes)
+		iov_iter_bvec_queue(&rqst.rq_iter, ITER_DEST,
+				    rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+				    rdata->subreq.content.offset, rdata->subreq.len);
 
 	WARN_ONCE(rdata->server != server,
 		  "rdata server %p != mid server %p",
@@ -5119,7 +5124,9 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 		goto out;
 
 	rqst.rq_iov = iov;
-	rqst.rq_iter = wdata->subreq.io_iter;
+	iov_iter_bvec_queue(&rqst.rq_iter, ITER_SOURCE,
+			    wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+			    wdata->subreq.content.offset, wdata->subreq.len);
 
 	rqst.rq_iov[0].iov_len = total_len - 1;
 	rqst.rq_iov[0].iov_base = (char *)req;
@@ -5158,9 +5165,14 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 	 */
 	if (smb3_use_rdma_offload(io_parms)) {
 		struct smbdirect_buffer_descriptor_v1 *v1;
+		struct iov_iter iter;
 		bool need_invalidate = server->dialect == SMB30_PROT_ID;
 
-		wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter,
+		iov_iter_bvec_queue(&iter, ITER_SOURCE,
+				    wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+				    wdata->subreq.content.offset, wdata->subreq.len);
+
+		wdata->mr = smbd_register_mr(server->smbd_conn, &iter,
 					     false, need_invalidate);
 		if (!wdata->mr) {
 			rc = -EAGAIN;
@@ -5199,8 +5211,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 		smb2_set_replay(server, &rqst);
 	}
 
-	cifs_dbg(FYI, "async write at %llu %u bytes iter=%zx\n",
-		 io_parms->offset, io_parms->length, iov_iter_count(&wdata->subreq.io_iter));
+	cifs_dbg(FYI, "async write at %llu %u bytes len=%zx\n",
+		 io_parms->offset, io_parms->length, wdata->subreq.len);
 
 	if (wdata->credits.value > 0) {
 		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->subreq.len,
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 75697f6d2566..9daa98332d34 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -1265,12 +1265,19 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 	}
 
#ifdef CONFIG_CIFS_SMB_DIRECT
-	if (rdata->mr)
+	if (rdata->mr) {
 		length = data_len; /* An RDMA read is already done. */
-	else
+	} else {
+#endif
+		struct iov_iter iter;
+
+		iov_iter_bvec_queue(&iter, ITER_DEST, rdata->subreq.content.bvecq,
+				    rdata->subreq.content.slot, rdata->subreq.content.offset,
+				    data_len);
+		length = cifs_read_iter_from_socket(server, &iter, data_len);
+#ifdef CONFIG_CIFS_SMB_DIRECT
+	}
#endif
-		length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter,
-						    data_len);
 	if (length > 0)
 		rdata->got_bytes += length;
 	server->total_read += length;
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 58fdb9605425..637f46c68d84 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -147,6 +147,25 @@ struct fscache_cookie {
 	};
};
 
+enum fscache_extent_type {
+	FSCACHE_EXTENT_DATA,
+	FSCACHE_EXTENT_ZERO,
+} __mode(byte);
+
+/*
+ * Cache occupancy information.
+ */
+struct fscache_occupancy {
+	unsigned long long	query_from;	/* Point to query from */
+	unsigned long long	query_to;	/* Point to query to */
+	unsigned long long	cached_from[2];	/* Point at which cache extents start */
+	unsigned long long	cached_to[2];	/* Point at which cache extents end */
+	unsigned int		granularity;	/* Granularity desired */
+	u8			nr_extents;	/* Number of cache extents */
+	enum fscache_extent_type cached_type[2]; /* Type of cache extent */
+	bool			no_more_cache;	/* No more cached data */
+};
+
/*
 * slow-path functions for when there is actually caching available, and the
 * netfs does actually have a valid token
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 05abb3425962..57d57ed161d6 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -76,7 +76,7 @@ struct netfs_inode {
 #endif
 	struct mutex		wb_lock;	/* Writeback serialisation */
 	loff_t			remote_i_size;	/* Size of the remote file */
-	loff_t			zero_point;	/* Size after which we assume there's no data
+	unsigned long long	zero_point;	/* Size after which we assume there's no data
 						 * on the server */
 	atomic_t		io_count;	/* Number of outstanding reqs */
 	unsigned long		flags;
@@ -136,6 +136,39 @@ static inline struct netfs_group *netfs_folio_group(struct folio *folio)
 	return priv;
}
 
+/*
+ * The buffering context for netfslib reads.  The fields here are available for
+ * the filesystem to view, but it must not modify them.  The struct is provided
+ * to ->issue_read() and should be passed to the functions for buffer
+ * extraction and the marking of read submission.
+ */
+struct netfs_read_context {
+	unsigned long long	start;		/* Point to read from */
+	unsigned long long	stop;		/* Point to read to */
+	bool			retrying;	/* T if retrying a read */
+};
+
+/*
+ * The buffering context for netfslib writes.  The fields here are available
+ * for the filesystem to view, but it must not modify them.  The struct is
+ * provided to ->issue_write() and should be passed to the function for buffer
+ * extraction.
+ */
+struct netfs_write_context {
+	unsigned long long	issue_from;	/* Current issue point */
+	size_t			buffered;	/* Amount in buffer */
+};
+
+/*
+ * Estimate of maximum write subrequest for writeback.  The filesystem is
+ * responsible for filling this in when called from ->estimate_write(), though
+ * netfslib will preset infinite defaults.
+ */
+struct netfs_write_estimate {
+	unsigned long long	issue_at;	/* Point at which we must submit */
+	int			max_segs;	/* Max number of segments in a single RPC */
+};
+
/*
 * Stream of I/O subrequests going to a particular destination, such as the
 * server or the local cache.  This is mainly intended for writing where we may
@@ -143,13 +176,15 @@ static inline struct netfs_group *netfs_folio_group(struct folio *folio)
 */
struct netfs_io_stream {
 	/* Submission tracking */
-	struct netfs_io_subrequest *construct;	/* Op being constructed */
-	size_t			sreq_max_len;	/* Maximum size of a subrequest */
-	unsigned int		sreq_max_segs;	/* 0 or max number of segments in an iterator */
-	unsigned int		submit_off;	/* Folio offset we're submitting from */
-	unsigned int		submit_len;	/* Amount of data left to submit */
-	void (*prepare_write)(struct netfs_io_subrequest *subreq);
-	void (*issue_write)(struct netfs_io_subrequest *subreq);
+	u8			applicable;	/* What sources are applicable (NOTE_* mask) */
+	int (*estimate_write)(struct netfs_io_request *wreq,
+			      struct netfs_io_stream *stream,
+			      const struct netfs_write_context *wctx,
+			      struct netfs_write_estimate *estimate);
+	int (*issue_write)(struct netfs_io_subrequest *subreq,
+			   struct netfs_write_context *wctx);
+	atomic64_t		issued_to;	/* Point to which can be considered issued */
+
 	/* Collection tracking */
 	struct list_head	subrequests;	/* Contributory I/O operations */
 	struct netfs_io_subrequest *front;	/* Op being collected */
@@ -189,14 +224,13 @@ struct netfs_io_subrequest {
 	struct list_head	rreq_link;	/* Link in rreq->subrequests */
 	struct bvecq_pos	dispatch_pos;	/* Bookmark in the combined queue of the start */
 	struct bvecq_pos	content;	/* The (copied) content of the subrequest */
-	struct iov_iter		io_iter;	/* Iterator for this subrequest */
 	unsigned long long	start;		/* Where to start the I/O */
 	size_t			len;		/* Size of the I/O */
 	size_t			transferred;	/* Amount of data transferred */
+	unsigned int		nr_segs;	/* Number of segments in content */
 	refcount_t		ref;
 	short			error;		/* 0 or error that occurred */
 	unsigned short		debug_index;	/* Index in list (for debugging output) */
-	unsigned int		nr_segs;	/* Number of segs in io_iter */
 	u8			retry_count;	/* The number of retries (0 on initial pass) */
 	enum netfs_io_source	source;		/* Where to read from/write to */
 	unsigned char		stream_nr;	/* I/O stream this belongs to */
@@ -205,7 +239,6 @@ struct netfs_io_subrequest {
 #define NETFS_SREQ_CLEAR_TAIL		1	/* Set if the rest of the read should be cleared */
#define NETFS_SREQ_MADE_PROGRESS	4	/* Set if we transferred at least some data */
#define NETFS_SREQ_ONDEMAND		5	/* Set if it's from on-demand read mode */
-#define NETFS_SREQ_BOUNDARY		6	/* Set if ends on hard boundary (eg. ceph object) */
#define NETFS_SREQ_HIT_EOF		7	/* Set if short due to EOF */
#define NETFS_SREQ_IN_PROGRESS		8	/* Unlocked when the subrequest completes */
#define NETFS_SREQ_NEED_RETRY		9	/* Set if the filesystem requests a retry */
@@ -252,18 +285,16 @@ struct netfs_io_request {
 	struct netfs_group	*group;		/* Writeback group being written back */
 	struct bvecq_pos	collect_cursor;	/* Clear-up point of I/O buffer */
 	struct bvecq_pos	load_cursor;	/* Point at which new folios are loaded in */
-	struct bvecq_pos	dispatch_cursor; /* Point from which buffers are dispatched */
+	//struct bvecq_pos	dispatch_cursor; /* Point from which buffers are dispatched */
 	wait_queue_head_t	waitq;		/* Processor waiter */
 	void			*netfs_priv;	/* Private data for the netfs */
 	void			*netfs_priv2;	/* Private data for the netfs */
-	unsigned long long	last_end;	/* End pos of last folio submitted */
 	unsigned long long	submitted;	/* Amount submitted for I/O so far */
 	unsigned long long	len;		/* Length of the request */
 	size_t			transferred;	/* Amount to be indicated as transferred */
 	long			error;		/* 0 or error that occurred */
 	unsigned long long	i_size;		/* Size of the file */
 	unsigned long long	start;		/* Start position */
-	atomic64_t		issued_to;	/* Write issuer folio cursor */
 	unsigned long long	collected_to;	/* Point we've collected to */
 	unsigned long long	cleaned_to;	/* Position we've cleaned folios to */
 	unsigned long long	abandon_to;	/* Position to abandon folios to */
@@ -289,8 +320,10 @@ struct netfs_io_request {
 #define NETFS_RREQ_FOLIO_COPY_TO_CACHE	10	/* Copy current folio to cache from read */
#define NETFS_RREQ_UPLOAD_TO_SERVER	11	/* Need to write to the server */
#define NETFS_RREQ_USE_IO_ITER		12	/* Use ->io_iter rather than ->i_pages */
+#ifdef CONFIG_NETFS_PGPRIV2
#define NETFS_RREQ_USE_PGPRIV2		31	/* [DEPRECATED] Use PG_private_2 to mark
						 * write to cache on read */
+#endif
 	const struct netfs_request_ops *netfs_ops;
};
 
@@ -306,8 +339,7 @@ struct netfs_request_ops {
 
 	/* Read request handling */
 	void (*expand_readahead)(struct netfs_io_request *rreq);
-	int (*prepare_read)(struct netfs_io_subrequest *subreq);
-	void (*issue_read)(struct netfs_io_subrequest *subreq);
+	int (*issue_read)(struct netfs_io_subrequest *subreq, struct netfs_read_context *rctx);
 	bool (*is_still_valid)(struct netfs_io_request *rreq);
 	int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
 				 struct folio **foliop, void **_fsdata);
@@ -319,8 +351,12 @@ struct netfs_request_ops {
 
 	/* Write request handling */
 	void (*begin_writeback)(struct netfs_io_request *wreq);
-	void (*prepare_write)(struct netfs_io_subrequest *subreq);
-	void (*issue_write)(struct netfs_io_subrequest *subreq);
+	int (*estimate_write)(struct netfs_io_request *wreq,
+			      struct netfs_io_stream *stream,
+			      const struct netfs_write_context *wctx,
+			      struct netfs_write_estimate *estimate);
+	int (*issue_write)(struct netfs_io_subrequest *subreq,
+			   struct netfs_write_context *wctx);
 	void (*retry_request)(struct netfs_io_request *wreq, struct netfs_io_stream *stream);
 	void (*invalidate_cache)(struct netfs_io_request *wreq);
};
@@ -355,8 +391,19 @@ struct netfs_cache_ops {
 			   netfs_io_terminated_t term_func,
 			   void *term_func_priv);
 
+	/* Estimate the amount of data that can be written in an op. */
+	int (*estimate_write)(struct netfs_io_request *wreq,
+			      struct netfs_io_stream *stream,
+			      const struct netfs_write_context *wctx,
+			      struct netfs_write_estimate *estimate);
+
+	/* Read data from the cache for a netfs subrequest. */
+	int (*issue_read)(struct netfs_io_subrequest *subreq,
+			  struct netfs_read_context *rctx);
+
 	/* Write data to the cache from a netfs subrequest. */
-	void (*issue_write)(struct netfs_io_subrequest *subreq);
+	int (*issue_write)(struct netfs_io_subrequest *subreq,
+			   struct netfs_write_context *wctx);
 
 	/* Expand readahead request */
 	void (*expand_readahead)(struct netfs_cache_resources *cres,
@@ -364,26 +411,6 @@ struct netfs_cache_ops {
 				 unsigned long long *_len,
 				 unsigned long long i_size);
 
-	/* Prepare a read operation, shortening it to a cached/uncached
-	 * boundary as appropriate.
-	 */
-	enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
-					     unsigned long long i_size);
-
-	/* Prepare a write subrequest, working out if we're allowed to do it
-	 * and finding out the maximum amount of data to gather before
-	 * attempting to submit.  If we're not permitted to do it, the
-	 * subrequest should be marked failed.
-	 */
-	void (*prepare_write_subreq)(struct netfs_io_subrequest *subreq);
-
-	/* Prepare a write operation, working out what part of the write we can
-	 * actually do.
-	 */
-	int (*prepare_write)(struct netfs_cache_resources *cres,
-			     loff_t *_start, size_t *_len, size_t upper_len,
-			     loff_t i_size, bool no_space_allocated_yet);
-
 	/* Prepare an on-demand read operation, shortening it to a cached/uncached
 	 * boundary as appropriate.
 	 */
@@ -396,8 +423,7 @@ struct netfs_cache_ops {
 	 * next chunk of data starts and how long it is.
 	 */
 	int (*query_occupancy)(struct netfs_cache_resources *cres,
-			       loff_t start, size_t len, size_t granularity,
-			       loff_t *_data_start, size_t *_data_len);
+			       struct fscache_occupancy *occ);
};
 
/* High-level read API. */
@@ -421,10 +447,9 @@ void netfs_single_mark_inode_dirty(struct inode *inode);
 ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_iter *iter);
int netfs_writeback_single(struct address_space *mapping,
			   struct writeback_control *wbc,
-			   struct iov_iter *iter);
+			   struct iov_iter *iter, size_t len);
 
/* Address operations API */
-struct readahead_control;
void netfs_readahead(struct readahead_control *);
int netfs_read_folio(struct file *, struct folio *);
int netfs_write_begin(struct netfs_inode *, struct file *,
@@ -442,6 +467,8 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp);
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
 
/* (Sub)request management API. */
+void netfs_mark_read_submission(struct netfs_io_subrequest *subreq,
+				struct netfs_read_context *rctx);
void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq);
void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
@@ -451,9 +478,12 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
 ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
			   unsigned long long fpos, struct bvecq **_bvecq_head,
			   iov_iter_extraction_t extraction_flags);
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
-			size_t max_size, size_t max_segs);
-void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);
+int netfs_prepare_read_buffer(struct netfs_io_subrequest *subreq,
+			      struct netfs_read_context *rctx,
+			      unsigned int max_segs);
+int netfs_prepare_write_buffer(struct netfs_io_subrequest *subreq,
+			       struct netfs_write_context *wctx,
+			       unsigned int max_segs);
void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error);
 
int netfs_start_io_read(struct inode *inode);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 899b85d7ef92..6283e7d2ae5a 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -49,6 +49,7 @@
 	E_(NETFS_PGPRIV2_COPY_TO_CACHE,		"2C")
 
#define netfs_rreq_traces					\
+	EM(netfs_rreq_trace_all_queued,		"ALL-Q ")	\
	EM(netfs_rreq_trace_assess,		"ASSESS ")	\
	EM(netfs_rreq_trace_collect,		"COLLECT")	\
	EM(netfs_rreq_trace_complete,		"COMPLET")	\
@@ -76,7 +77,8 @@
 	EM(netfs_rreq_trace_waited_quiesce,	"DONE-QUIESCE")	\
	EM(netfs_rreq_trace_wake_ip,		"WAKE-IP")	\
	EM(netfs_rreq_trace_wake_queue,		"WAKE-Q ")	\
-	E_(netfs_rreq_trace_write_done,		"WR-DONE")
+	EM(netfs_rreq_trace_write_done,		"WR-DONE")	\
+	E_(netfs_rreq_trace_zero_unread,	"ZERO-UR")
 
#define netfs_sreq_sources					\
	EM(NETFS_SOURCE_UNKNOWN,		"----")	\
@@ -125,6 +127,7 @@
 	EM(netfs_sreq_trace_superfluous,	"SPRFL")	\
	EM(netfs_sreq_trace_terminated,		"TERM ")	\
	EM(netfs_sreq_trace_too_much,		"!TOOM")	\
+	EM(netfs_sreq_trace_too_many_retries,	"!RETR")	\
	EM(netfs_sreq_trace_wait_for,		"_WAIT")	\
	EM(netfs_sreq_trace_write,		"WRITE")	\
	EM(netfs_sreq_trace_write_skip,		"SKIP ")	\
@@ -188,12 +191,12 @@
 	EM(netfs_folio_trace_alloc_buffer,	"alloc-buf")	\
	EM(netfs_folio_trace_cancel_copy,	"cancel-copy")	\
	EM(netfs_folio_trace_cancel_store,	"cancel-store")	\
-	EM(netfs_folio_trace_clear,		"clear")	\
-	EM(netfs_folio_trace_clear_cc,		"clear-cc")	\
-	EM(netfs_folio_trace_clear_g,		"clear-g")	\
-	EM(netfs_folio_trace_clear_s,		"clear-s")	\
	EM(netfs_folio_trace_copy_to_cache,	"mark-copy")	\
	EM(netfs_folio_trace_end_copy,		"end-copy")	\
+	EM(netfs_folio_trace_endwb,		"endwb")	\
+	EM(netfs_folio_trace_endwb_cc,		"endwb-cc")	\
+	EM(netfs_folio_trace_endwb_g,		"endwb-g")	\
+	EM(netfs_folio_trace_endwb_s,		"endwb-s")	\
	EM(netfs_folio_trace_filled_gaps,	"filled-gaps")	\
	EM(netfs_folio_trace_kill,		"kill")	\
	EM(netfs_folio_trace_kill_cc,		"kill-cc")	\
@@ -491,6 +494,7 @@ TRACE_EVENT(netfs_folio,
 	    TP_STRUCT__entry(
		__field(ino_t,			ino)
		__field(pgoff_t,		index)
+		__field(unsigned long,		pfn)
		__field(unsigned int,		nr)
		__field(enum netfs_folio_trace,	why)
		),
@@ -501,13 +505,40 @@ TRACE_EVENT(netfs_folio,
 		__entry->why = why;
		__entry->index = folio->index;
		__entry->nr = folio_nr_pages(folio);
+		__entry->pfn = folio_pfn(folio);
		),
 
-	    TP_printk("i=%05lx ix=%05lx-%05lx %s",
+	    TP_printk("p=%lx i=%05lx ix=%05lx-%05lx %s",
+		      __entry->pfn,
		      __entry->ino, __entry->index, __entry->index + __entry->nr - 1,
		      __print_symbolic(__entry->why, netfs_folio_traces))
	    );
 
+TRACE_EVENT(netfs_wback,
+	    TP_PROTO(struct netfs_io_request *wreq, struct folio *folio, unsigned int notes),
+
+	    TP_ARGS(wreq, folio, notes),
+
+	    TP_STRUCT__entry(
+		__field(pgoff_t,		index)
+		__field(unsigned int,		wreq)
+		__field(unsigned int,		nr)
+		__field(unsigned int,		notes)
+		),
+
+	    TP_fast_assign(
+		__entry->wreq = wreq->debug_id;
+		__entry->notes = notes;
+		__entry->index = folio->index;
+		__entry->nr = folio_nr_pages(folio);
+		),
+
+	    TP_printk("R=%08x ix=%05lx-%05lx n=%02x",
+		      __entry->wreq,
+		      __entry->index, __entry->index + __entry->nr - 1,
+		      __entry->notes)
+	    );
+
TRACE_EVENT(netfs_write_iter,
	    TP_PROTO(const struct kiocb *iocb, const struct iov_iter *from),
 
diff --git a/net/9p/client.c b/net/9p/client.c
index f0dcf252af7e..8d365c000553 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -1561,6 +1561,7 @@ void
 p9_client_write_subreq(struct netfs_io_subrequest *subreq)
{
 	struct netfs_io_request *wreq = subreq->rreq;
+	struct iov_iter iter;
 	struct p9_fid *fid = wreq->netfs_priv;
 	struct p9_client *clnt = fid->clnt;
 	struct p9_req_t *req;
@@ -1571,14 +1572,17 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
 	p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu len %d\n",
 		 fid->fid, start, len);
 
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+			    subreq->content.slot, subreq->content.offset, subreq->len);
+
 	/* Don't bother zerocopy for small IO (< 1024) */
 	if (clnt->trans_mod->zc_request && len > 1024) {
-		req = p9_client_zc_rpc(clnt, P9_TWRITE, NULL, &subreq->io_iter,
+		req = p9_client_zc_rpc(clnt, P9_TWRITE, NULL, &iter,
 				       0, wreq->len, P9_ZC_HDR_SZ, "dqd",
 				       fid->fid, start, len);
 	} else {
 		req = p9_client_rpc(clnt, P9_TWRITE, "dqV", fid->fid,
-				    start, len, &subreq->io_iter);
+				    start, len, &iter);
 	}
 	if (IS_ERR(req)) {
 		netfs_write_subrequest_terminated(subreq, PTR_ERR(req));