From: David Howells
To: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Leon Romanovsky
Cc: David Howells, Christian Brauner, Paulo Alcantara, Marc Dionne,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 06/17] afs: Use a bvecq to hold dir content rather than folioq
Date: Wed, 4 Mar 2026 14:03:13 +0000
Message-ID: <20260304140328.112636-7-dhowells@redhat.com>
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>
References:
 <20260304140328.112636-1-dhowells@redhat.com>

Use a bvecq to hold the contents of a directory rather than a folioq so
that the latter can be phased out.

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/dir.c           |  37 +++++-----
 fs/afs/dir_edit.c      |  41 +++++------
 fs/afs/dir_search.c    |  33 ++++-----
 fs/afs/inode.c         |  18 ++---
 fs/afs/internal.h      |   6 +-
 fs/netfs/write_issue.c | 156 ++++++-----------------------------------
 include/linux/netfs.h  |   1 +
 7 files changed, 87 insertions(+), 205 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 78caef3f1338..1d1be7e5923f 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -136,9 +136,9 @@ static void afs_dir_dump(struct afs_vnode *dvnode)
 	pr_warn("DIR %llx:%llx is=%llx\n",
 		dvnode->fid.vid, dvnode->fid.vnode, i_size);
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
-	iterate_folioq(&iter, iov_iter_count(&iter), NULL, NULL,
-		       afs_dir_dump_step);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	iterate_bvecq(&iter, iov_iter_count(&iter), NULL, NULL,
+		      afs_dir_dump_step);
 }
 
 /*
@@ -199,9 +199,9 @@ static int afs_dir_check(struct afs_vnode *dvnode)
 	if (unlikely(!i_size))
 		return 0;
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
-	checked = iterate_folioq(&iter, iov_iter_count(&iter), dvnode, NULL,
-				 afs_dir_check_step);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	checked = iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, NULL,
+				afs_dir_check_step);
 	if (checked != i_size) {
 		afs_dir_dump(dvnode);
 		return -EIO;
@@ -255,15 +255,14 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
 	if (dvnode->directory_size < i_size) {
 		size_t cur_size = dvnode->directory_size;
 
-		ret = netfs_alloc_folioq_buffer(NULL,
-						&dvnode->directory, &cur_size, i_size,
+		ret = netfs_expand_bvecq_buffer(&dvnode->directory, &cur_size, i_size,
 						mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
 		dvnode->directory_size = cur_size;
 		if (ret < 0)
 			return ret;
 	}
 
-	iov_iter_folio_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
+	iov_iter_bvec_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
 
 	/* AFS requires us to perform the read of a directory synchronously as
 	 * a single unit to avoid issues with the directory contents being
@@ -282,9 +281,9 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
 
 		if (ret2 < 0)
 			ret = ret2;
-	} else if (i_size < folioq_folio_size(dvnode->directory, 0)) {
+	} else if (i_size < PAGE_SIZE) {
 		/* NUL-terminate a symlink. */
-		char *symlink = kmap_local_folio(folioq_folio(dvnode->directory, 0), 0);
+		char *symlink = kmap_local_bvec(&dvnode->directory->bv[0], 0);
 
 		symlink[i_size] = 0;
 		kunmap_local(symlink);
@@ -305,8 +304,8 @@ ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file)
 }
 
 /*
- * Read the directory into a folio_queue buffer in one go, scrubbing the
- * previous contents.  We return -ESTALE if the caller needs to call us again.
+ * Read the directory into the buffer in one go, scrubbing the previous
+ * contents.  We return -ESTALE if the caller needs to call us again.
  */
 ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
 	__acquires(&dvnode->validate_lock)
@@ -487,7 +486,7 @@ static size_t afs_dir_iterate_step(void *iter_base, size_t progress, size_t len,
 }
 
 /*
- * Iterate through the directory folios.
+ * Iterate through the directory content.
  */
 static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_ctx)
 {
@@ -502,11 +501,11 @@ static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_c
 	if (i_size <= 0 || dir_ctx->pos >= i_size)
 		return 0;
 
-	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
 	iov_iter_advance(&iter, round_down(dir_ctx->pos, AFS_DIR_BLOCK_SIZE));
 
-	iterate_folioq(&iter, iov_iter_count(&iter), dvnode, &ctx,
-		       afs_dir_iterate_step);
+	iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, &ctx,
+		      afs_dir_iterate_step);
 
 	if (ctx.error == -ESTALE)
 		afs_invalidate_dir(dvnode, afs_dir_invalid_iter_stale);
@@ -2211,8 +2210,8 @@ int afs_single_writepages(struct address_space *mapping,
 	if (is_dir ?
 	    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) :
 	    atomic64_read(&dvnode->cb_expires_at) != AFS_NO_CB_PROMISE) {
-		iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
-				     i_size_read(&dvnode->netfs.inode));
+		iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
+				    i_size_read(&dvnode->netfs.inode));
 		ret = netfs_writeback_single(mapping, wbc, &iter);
 	}
 
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index fd3aa9f97ce6..ef9066659438 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -110,9 +110,8 @@ static void afs_clear_contig_bits(union afs_xdr_dir_block *block,
  */
 static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, size_t block)
 {
-	struct folio_queue *fq;
 	struct afs_vnode *dvnode = iter->dvnode;
-	struct folio *folio;
+	struct bvecq *bq;
 	size_t blpos = block * AFS_DIR_BLOCK_SIZE;
 	size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
 	int ret;
@@ -120,41 +119,39 @@ static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, siz
 	if (dvnode->directory_size < blend) {
 		size_t cur_size = dvnode->directory_size;
 
-		ret = netfs_alloc_folioq_buffer(
-			NULL, &dvnode->directory, &cur_size, blend,
+		ret = netfs_expand_bvecq_buffer(
+			&dvnode->directory, &cur_size, blend,
 			mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
 		dvnode->directory_size = cur_size;
 		if (ret < 0)
 			goto fail;
 	}
 
-	fq = iter->fq;
-	if (!fq)
-		fq = dvnode->directory;
+	bq = iter->bq;
+	if (!bq)
+		bq = dvnode->directory;
 
-	/* Search the folio queue for the folio containing the block... */
-	for (; fq; fq = fq->next) {
-		for (int s = iter->fq_slot; s < folioq_count(fq); s++) {
-			size_t fsize = folioq_folio_size(fq, s);
+	/* Search the contents for the region containing the block... */
+	for (; bq; bq = bq->next) {
+		for (int s = iter->bq_slot; s < bq->nr_segs; s++) {
+			struct bio_vec *bv = &bq->bv[s];
+			size_t bsize = bv->bv_len;
 
-			if (blend <= fpos + fsize) {
+			if (blend <= fpos + bsize) {
 				/* ... and then return the mapped block. */
-				folio = folioq_folio(fq, s);
-				if (WARN_ON_ONCE(folio_pos(folio) != fpos))
-					goto fail;
-				iter->fq = fq;
-				iter->fq_slot = s;
+				iter->bq = bq;
+				iter->bq_slot = s;
 				iter->fpos = fpos;
-				return kmap_local_folio(folio, blpos - fpos);
+				return kmap_local_bvec(bv, blpos - fpos);
 			}
-			fpos += fsize;
+			fpos += bsize;
 		}
-		iter->fq_slot = 0;
+		iter->bq_slot = 0;
 	}
 
 fail:
-	iter->fq = NULL;
-	iter->fq_slot = 0;
+	iter->bq = NULL;
+	iter->bq_slot = 0;
 	afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
 	return NULL;
 }
diff --git a/fs/afs/dir_search.c b/fs/afs/dir_search.c
index d2516e55b5ed..1088b2c4db6e 100644
--- a/fs/afs/dir_search.c
+++ b/fs/afs/dir_search.c
@@ -66,12 +66,11 @@ bool afs_dir_init_iter(struct afs_dir_iter *iter, const struct qstr *name)
  */
 union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t block)
 {
-	struct folio_queue *fq = iter->fq;
 	struct afs_vnode *dvnode = iter->dvnode;
-	struct folio *folio;
+	struct bvecq *bq = iter->bq;
 	size_t blpos = block * AFS_DIR_BLOCK_SIZE;
 	size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
-	int slot = iter->fq_slot;
+	int slot = iter->bq_slot;
 
 	_enter("%zx,%d", block, slot);
 
@@ -83,36 +82,34 @@ union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t bl
 	if (dvnode->directory_size < blend)
 		goto fail;
 
-	if (!fq || blpos < fpos) {
-		fq = dvnode->directory;
+	if (!bq || blpos < fpos) {
+		bq = dvnode->directory;
 		slot = 0;
 		fpos = 0;
 	}
 
 	/* Search the folio queue for the folio containing the block... */
-	for (; fq; fq = fq->next) {
-		for (; slot < folioq_count(fq); slot++) {
-			size_t fsize = folioq_folio_size(fq, slot);
+	for (; bq; bq = bq->next) {
+		for (; slot < bq->max_segs; slot++) {
+			struct bio_vec *bv = &bq->bv[slot];
+			size_t bsize = bv->bv_len;
 
-			if (blend <= fpos + fsize) {
+			if (blend <= fpos + bsize) {
 				/* ... and then return the mapped block. */
-				folio = folioq_folio(fq, slot);
-				if (WARN_ON_ONCE(folio_pos(folio) != fpos))
-					goto fail;
-				iter->fq = fq;
-				iter->fq_slot = slot;
+				iter->bq = bq;
+				iter->bq_slot = slot;
 				iter->fpos = fpos;
-				iter->block = kmap_local_folio(folio, blpos - fpos);
+				iter->block = kmap_local_bvec(bv, blpos - fpos);
 				return iter->block;
 			}
-			fpos += fsize;
+			fpos += bsize;
 		}
 		slot = 0;
 	}
 
 fail:
-	iter->fq = NULL;
-	iter->fq_slot = 0;
+	iter->bq = NULL;
+	iter->bq_slot = 0;
 	afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
 	return NULL;
 }
diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index dde1857fcabb..1a4e90d7ed01 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -31,12 +31,12 @@ void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
 	size_t dsize = 0;
 	char *p;
 
-	if (netfs_alloc_folioq_buffer(NULL, &vnode->directory, &dsize, size,
+	if (netfs_expand_bvecq_buffer(&vnode->directory, &dsize, size,
 				      mapping_gfp_mask(vnode->netfs.inode.i_mapping)) < 0)
 		return;
 
 	vnode->directory_size = dsize;
-	p = kmap_local_folio(folioq_folio(vnode->directory, 0), 0);
+	p = kmap_local_bvec(&vnode->directory->bv[0], 0);
 	memcpy(p, op->create.symlink, size);
 	kunmap_local(p);
 	set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
@@ -45,17 +45,17 @@ void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
 
 static void afs_put_link(void *arg)
 {
-	struct folio *folio = virt_to_folio(arg);
+	struct page *page = virt_to_page(arg);
 
 	kunmap_local(arg);
-	folio_put(folio);
+	put_page(page);
 }
 
 const char *afs_get_link(struct dentry *dentry, struct inode *inode,
			 struct delayed_call *callback)
 {
 	struct afs_vnode *vnode = AFS_FS_I(inode);
-	struct folio *folio;
+	struct page *page;
 	char *content;
 	ssize_t ret;
 
@@ -84,9 +84,9 @@ const char *afs_get_link(struct dentry *dentry, struct inode *inode,
 	set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
 
 good:
-	folio = folioq_folio(vnode->directory, 0);
-	folio_get(folio);
-	content = kmap_local_folio(folio, 0);
+	page = vnode->directory->bv[0].bv_page;
+	get_page(page);
+	content = kmap_local_page(page);
 	set_delayed_call(callback, afs_put_link, content);
 	return content;
 }
@@ -761,7 +761,7 @@ void afs_evict_inode(struct inode *inode)
 
 	netfs_wait_for_outstanding_io(inode);
 	truncate_inode_pages_final(&inode->i_data);
-	netfs_free_folioq_buffer(vnode->directory);
+	netfs_free_bvecq_buffer(vnode->directory);
 
 	afs_set_cache_aux(vnode, &aux);
 	netfs_clear_inode_writeback(inode, &aux);
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 009064b8d661..9bf5d2f1dbc4 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -710,7 +710,7 @@ struct afs_vnode {
 #define AFS_VNODE_MODIFYING	10	/* Set if we're performing a modification op */
 #define AFS_VNODE_DIR_READ	11	/* Set if we've read a dir's contents */
 
-	struct folio_queue	*directory;	/* Directory contents */
+	struct bvecq		*directory;	/* Directory contents */
 	struct list_head	wb_keys;	/* List of keys available for writeback */
 	struct list_head	pending_locks;	/* locks waiting to be granted */
 	struct list_head	granted_locks;	/* locks granted on this file */
@@ -983,9 +983,9 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
 struct afs_dir_iter {
 	struct afs_vnode	*dvnode;
 	union afs_xdr_dir_block	*block;
-	struct folio_queue	*fq;
+	struct bvecq		*bq;
 	unsigned int		fpos;
-	int			fq_slot;
+	int			bq_slot;
 	unsigned int		loop_check;
 	u8			nr_slots;
 	u8			bucket;
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 437268f65640..fd4dc89d9d8d 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -698,124 +698,11 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
 	return ret;
 }
 
-/*
- * Write some of a pending folio data back to the server and/or the cache.
- */
-static int netfs_write_folio_single(struct netfs_io_request *wreq,
-				    struct folio *folio)
-{
-	struct netfs_io_stream *upload = &wreq->io_streams[0];
-	struct netfs_io_stream *cache = &wreq->io_streams[1];
-	struct netfs_io_stream *stream;
-	size_t iter_off = 0;
-	size_t fsize = folio_size(folio), flen;
-	loff_t fpos = folio_pos(folio);
-	bool to_eof = false;
-	bool no_debug = false;
-
-	_enter("");
-
-	flen = folio_size(folio);
-	if (flen > wreq->i_size - fpos) {
-		flen = wreq->i_size - fpos;
-		folio_zero_segment(folio, flen, fsize);
-		to_eof = true;
-	} else if (flen == wreq->i_size - fpos) {
-		to_eof = true;
-	}
-
-	_debug("folio %zx/%zx", flen, fsize);
-
-	if (!upload->avail && !cache->avail) {
-		trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
-		return 0;
-	}
-
-	if (!upload->construct)
-		trace_netfs_folio(folio, netfs_folio_trace_store);
-	else
-		trace_netfs_folio(folio, netfs_folio_trace_store_plus);
-
-	/* Attach the folio to the rolling buffer. */
-	folio_get(folio);
-	rolling_buffer_append(&wreq->buffer, folio, NETFS_ROLLBUF_PUT_MARK);
-
-	/* Move the submission point forward to allow for write-streaming data
-	 * not starting at the front of the page.  We don't do write-streaming
-	 * with the cache as the cache requires DIO alignment.
-	 *
-	 * Also skip uploading for data that's been read and just needs copying
-	 * to the cache.
-	 */
-	for (int s = 0; s < NR_IO_STREAMS; s++) {
-		stream = &wreq->io_streams[s];
-		stream->submit_off = 0;
-		stream->submit_len = flen;
-		if (!stream->avail) {
-			stream->submit_off = UINT_MAX;
-			stream->submit_len = 0;
-		}
-	}
-
-	/* Attach the folio to one or more subrequests.  For a big folio, we
-	 * could end up with thousands of subrequests if the wsize is small -
-	 * but we might need to wait during the creation of subrequests for
-	 * network resources (eg. SMB credits).
-	 */
-	for (;;) {
-		ssize_t part;
-		size_t lowest_off = ULONG_MAX;
-		int choose_s = -1;
-
-		/* Always add to the lowest-submitted stream first. */
-		for (int s = 0; s < NR_IO_STREAMS; s++) {
-			stream = &wreq->io_streams[s];
-			if (stream->submit_len > 0 &&
-			    stream->submit_off < lowest_off) {
-				lowest_off = stream->submit_off;
-				choose_s = s;
-			}
-		}
-
-		if (choose_s < 0)
-			break;
-		stream = &wreq->io_streams[choose_s];
-
-		/* Advance the iterator(s). */
-		if (stream->submit_off > iter_off) {
-			rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off);
-			iter_off = stream->submit_off;
-		}
-
-		atomic64_set(&wreq->issued_to, fpos + stream->submit_off);
-		stream->submit_extendable_to = fsize - stream->submit_off;
-		part = netfs_advance_write(wreq, stream, fpos + stream->submit_off,
-					   stream->submit_len, to_eof);
-		stream->submit_off += part;
-		if (part > stream->submit_len)
-			stream->submit_len = 0;
-		else
-			stream->submit_len -= part;
-		if (part > 0)
-			no_debug = true;
-	}
-
-	wreq->buffer.iter.iov_offset = 0;
-	if (fsize > iter_off)
-		rolling_buffer_advance(&wreq->buffer, fsize - iter_off);
-	atomic64_set(&wreq->issued_to, fpos + fsize);
-
-	if (!no_debug)
-		kdebug("R=%x: No submit", wreq->debug_id);
-	_leave(" = 0");
-	return 0;
-}
-
 /**
  * netfs_writeback_single - Write back a monolithic payload
  * @mapping: The mapping to write from
  * @wbc: Hints from the VM
- * @iter: Data to write, must be ITER_FOLIOQ.
+ * @iter: Data to write.
  *
  * Write a monolithic, non-pagecache object back to the server and/or
  * the cache.
@@ -826,13 +713,8 @@ int netfs_writeback_single(struct address_space *mapping,
 {
 	struct netfs_io_request *wreq;
 	struct netfs_inode *ictx = netfs_inode(mapping->host);
-	struct folio_queue *fq;
-	size_t size = iov_iter_count(iter);
 	int ret;
 
-	if (WARN_ON_ONCE(!iov_iter_is_folioq(iter)))
-		return -EIO;
-
 	if (!mutex_trylock(&ictx->wb_lock)) {
 		if (wbc->sync_mode == WB_SYNC_NONE) {
 			netfs_stat(&netfs_n_wb_lock_skip);
@@ -848,6 +730,9 @@ int netfs_writeback_single(struct address_space *mapping,
 		goto couldnt_start;
 	}
 
+	wreq->buffer.iter = *iter;
+	wreq->len = iov_iter_count(iter);
+
 	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
 	trace_netfs_write(wreq, netfs_write_trace_writeback_single);
 	netfs_stat(&netfs_n_wh_writepages);
@@ -855,31 +740,34 @@ int netfs_writeback_single(struct address_space *mapping,
 	if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
 		wreq->netfs_ops->begin_writeback(wreq);
 
-	for (fq = (struct folio_queue *)iter->folioq; fq; fq = fq->next) {
-		for (int slot = 0; slot < folioq_count(fq); slot++) {
-			struct folio *folio = folioq_folio(fq, slot);
-			size_t part = umin(folioq_folio_size(fq, slot), size);
+	for (int s = 0; s < NR_IO_STREAMS; s++) {
+		struct netfs_io_subrequest *subreq;
+		struct netfs_io_stream *stream = &wreq->io_streams[s];
+
+		if (!stream->avail)
+			continue;
 
-			_debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to));
+		netfs_prepare_write(wreq, stream, 0);
 
-			ret = netfs_write_folio_single(wreq, folio);
-			if (ret < 0)
-				goto stop;
-			size -= part;
-			if (size <= 0)
-				goto stop;
-		}
+		subreq = stream->construct;
+		subreq->len = wreq->len;
+		stream->submit_len = subreq->len;
+		stream->submit_extendable_to = round_up(wreq->len, PAGE_SIZE);
+
+		netfs_issue_write(wreq, stream);
 	}
 
-stop:
-	for (int s = 0; s < NR_IO_STREAMS; s++)
-		netfs_issue_write(wreq, &wreq->io_streams[s]);
 	smp_wmb(); /* Write lists before ALL_QUEUED. */
 	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
 
 	mutex_unlock(&ictx->wb_lock);
 	netfs_wake_collector(wreq);
 
+	/* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
+	 * wait before modifying.
+	 */
+	ret = netfs_wait_for_write(wreq);
+
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	_leave(" = %d", ret);
 	return ret;
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index f360b25ceb31..f9ad067a0a0c 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -477,6 +477,7 @@ void netfs_free_folioq_buffer(struct folio_queue *fq);
 void dump_bvecq(const struct bvecq *bq);
 struct bvecq *netfs_alloc_bvecq(size_t nr_slots, gfp_t gfp);
 struct bvecq *netfs_alloc_bvecq_buffer(size_t size, unsigned int pre_slots, gfp_t gfp);
+int netfs_expand_bvecq_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp);
void netfs_free_bvecq_buffer(struct bvecq *bq);
void netfs_put_bvecq(struct bvecq *bq);
int netfs_shorten_bvecq_buffer(struct bvecq *bq, unsigned int seg, size_t size);