From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
	Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
	Dominique Martinet, Ilya Dryomov, Trond Myklebust,
	netfs@lists.linux.dev, linux-afs@lists.infradead.org,
	linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
	linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 23/26] netfs: Remove folio_queue and rolling_buffer
Date: Thu, 26 Mar 2026 10:45:38 +0000
Message-ID:
<20260326104544.509518-24-dhowells@redhat.com>
In-Reply-To: <20260326104544.509518-1-dhowells@redhat.com>
References: <20260326104544.509518-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Remove folio_queue and rolling_buffer as they're no longer used.

Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: Christoph Hellwig
cc: Steve French
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 Documentation/core-api/folio_queue.rst | 209 -----------------
 Documentation/core-api/index.rst       |   1 -
 fs/netfs/iterator.c                    | 192 ----------------
 fs/netfs/rolling_buffer.c              | 297 -------------------------
 include/linux/folio_queue.h            | 282 -----------------------
 include/linux/rolling_buffer.h         |  64 ------
 6 files changed, 1045 deletions(-)
 delete mode 100644 Documentation/core-api/folio_queue.rst
 delete mode 100644 fs/netfs/rolling_buffer.c
 delete mode 100644 include/linux/folio_queue.h
 delete mode 100644 include/linux/rolling_buffer.h

diff --git a/Documentation/core-api/folio_queue.rst b/Documentation/core-api/folio_queue.rst
deleted file mode 100644
index b7628896d2b6..000000000000
--- a/Documentation/core-api/folio_queue.rst
+++ /dev/null
@@ -1,209 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0+
-
-===========
-Folio Queue
-===========
-
-:Author: David Howells
-
-.. Contents:
-
- * Overview
- * Initialisation
- * Adding and removing folios
- * Querying information about a folio
- * Querying information about a folio_queue
- * Folio queue iteration
- * Folio marks
- * Lockless simultaneous production/consumption issues
-
-
-Overview
-========
-
-The folio_queue struct forms a single segment in a segmented list of folios
-that can be used to form an I/O buffer.  As such, the list can be iterated over
-using the ITER_FOLIOQ iov_iter type.
-
-The publicly accessible members of the structure are::
-
-	struct folio_queue {
-		struct folio_queue *next;
-		struct folio_queue *prev;
-		...
-	};
-
-A pair of pointers are provided, ``next`` and ``prev``, that point to the
-segments on either side of the segment being accessed.  Whilst this is a
-doubly-linked list, it is intentionally not a circular list; the outward
-sibling pointers in terminal segments should be NULL.
-
-Each segment in the list also stores:
-
- * an ordered sequence of folio pointers,
- * the size of each folio and
- * three 1-bit marks per folio,
-
-but these should not be accessed directly as the underlying data structure may
-change, but rather the access functions outlined below should be used.
-
-The facility can be made accessible by::
-
-	#include <linux/folio_queue.h>
-
-and to use the iterator::
-
-	#include <linux/uio.h>
-
-
-Initialisation
-==============
-
-A segment should be initialised by calling::
-
-	void folioq_init(struct folio_queue *folioq);
-
-with a pointer to the segment to be initialised.  Note that this will not
-necessarily initialise all the folio pointers, so care must be taken to check
-the number of folios added.
-
-
-Adding and removing folios
-==========================
-
-Folios can be set in the next unused slot in a segment struct by calling one
-of::
-
-	unsigned int folioq_append(struct folio_queue *folioq,
-				   struct folio *folio);
-
-	unsigned int folioq_append_mark(struct folio_queue *folioq,
-					struct folio *folio);
-
-Both functions update the stored folio count, store the folio and note its
-size.  The second function also sets the first mark for the folio added.  Both
-functions return the number of the slot used.  [!] Note that no attempt is made
-to check that the capacity wasn't overrun and the list will not be extended
-automatically.
-
-A folio can be excised by calling::
-
-	void folioq_clear(struct folio_queue *folioq, unsigned int slot);
-
-This clears the slot in the array and also clears all the marks for that folio,
-but doesn't change the folio count - so future accesses of that slot must check
-if the slot is occupied.
-
-
-Querying information about a folio
-==================================
-
-Information about the folio in a particular slot may be queried by the
-following function::
-
-	struct folio *folioq_folio(const struct folio_queue *folioq,
-				   unsigned int slot);
-
-If a folio has not yet been set in that slot, this may yield an undefined
-pointer.  The size of the folio in a slot may be queried with either of::
-
-	unsigned int folioq_folio_order(const struct folio_queue *folioq,
-					unsigned int slot);
-
-	size_t folioq_folio_size(const struct folio_queue *folioq,
-				 unsigned int slot);
-
-The first function returns the size as an order and the second as a number of
-bytes.
-
-
-Querying information about a folio_queue
-========================================
-
-Information may be retrieved about a particular segment with the following
-functions::
-
-	unsigned int folioq_nr_slots(const struct folio_queue *folioq);
-
-	unsigned int folioq_count(struct folio_queue *folioq);
-
-	bool folioq_full(struct folio_queue *folioq);
-
-The first function returns the maximum capacity of a segment.  It must not be
-assumed that this won't vary between segments.  The second returns the number
-of folios added to a segment and the third is a shorthand to indicate if the
-segment has been filled to capacity.
-
-Note that the count and fullness are not affected by clearing folios from the
-segment.  These are more about indicating how many slots in the array have been
-initialised, and it is assumed that slots won't get reused, but rather the
-segment will get discarded as the queue is consumed.
-
-
-Folio marks
-===========
-
-Folios within a queue can also have marks assigned to them.  These marks can be
-used to note information such as if a folio needs folio_put() calling upon it.
-There are three marks available to be set for each folio.
-
-The marks can be set by::
-
-	void folioq_mark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_mark2(struct folio_queue *folioq, unsigned int slot);
-
-Cleared by::
-
-	void folioq_unmark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_unmark2(struct folio_queue *folioq, unsigned int slot);
-
-And the marks can be queried by::
-
-	bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot);
-	bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot);
-
-The marks can be used for any purpose and are not interpreted by this API.
-
-
-Folio queue iteration
-=====================
-
-A list of segments may be iterated over using the I/O iterator facility using
-an ``iov_iter`` iterator of ``ITER_FOLIOQ`` type.  The iterator may be
-initialised with::
-
-	void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
-				  const struct folio_queue *folioq,
-				  unsigned int first_slot, unsigned int offset,
-				  size_t count);
-
-This may be told to start at a particular segment, slot and offset within a
-queue.  The iov iterator functions will follow the next pointers when advancing
-and prev pointers when reverting when needed.
-
-
-Lockless simultaneous production/consumption issues
-===================================================
-
-If properly managed, the list can be extended by the producer at the head end
-and shortened by the consumer at the tail end simultaneously without the need
-to take locks.  The ITER_FOLIOQ iterator inserts appropriate barriers to aid
-with this.
-
-Care must be taken when simultaneously producing and consuming a list.  If the
-last segment is reached and the folios it refers to are entirely consumed by
-the IOV iterators, an iov_iter struct will be left pointing to the last segment
-with a slot number equal to the capacity of that segment.  The iterator will
-try to continue on from this if there's another segment available when it is
-used again, but care must be taken lest the segment got removed and freed by
-the consumer before the iterator was advanced.
-
-It is recommended that the queue always contain at least one segment, even if
-that segment has never been filled or is entirely spent.  This prevents the
-head and tail pointers from collapsing.
-
-
-API Function Reference
-======================
-
-.. kernel-doc:: include/linux/folio_queue.h
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 13769d5c40bf..16c529a33ac4 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -39,7 +39,6 @@ Library functionality that is used throughout the kernel.
    kref
    cleanup
    assoc_array
-   folio_queue
    xarray
    maple_tree
    idr
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 442f893a0d65..7969c0b1f9a9 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -135,195 +135,3 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
 	return extracted ?: ret;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_iter);
-
-#if 0
-/*
- * Select the span of a bvec iterator we're going to use.  Limit it by both maximum
- * size and maximum number of segments.  Returns the size of the span in bytes.
- */
-static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
-			       size_t max_size, size_t max_segs)
-{
-	const struct bio_vec *bvecs = iter->bvec;
-	unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
-	size_t len, span = 0, n = iter->count;
-	size_t skip = iter->iov_offset + start_offset;
-
-	if (WARN_ON(!iov_iter_is_bvec(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-
-	while (n && ix < nbv && skip) {
-		len = bvecs[ix].bv_len;
-		if (skip < len)
-			break;
-		skip -= len;
-		n -= len;
-		ix++;
-	}
-
-	while (n && ix < nbv) {
-		len = min3(n, bvecs[ix].bv_len - skip, max_size);
-		span += len;
-		nsegs++;
-		ix++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-		skip = 0;
-		n -= len;
-	}
-
-	return min(span, max_size);
-}
-
-/*
- * Select the span of a kvec iterator we're going to use.  Limit it by both
- * maximum size and maximum number of segments.  Returns the size of the span
- * in bytes.
- */
-static size_t netfs_limit_kvec(const struct iov_iter *iter, size_t start_offset,
-			       size_t max_size, size_t max_segs)
-{
-	const struct kvec *kvecs = iter->kvec;
-	unsigned int nkv = iter->nr_segs, ix = 0, nsegs = 0;
-	size_t len, span = 0, n = iter->count;
-	size_t skip = iter->iov_offset + start_offset;
-
-	if (WARN_ON(!iov_iter_is_kvec(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-
-	while (n && ix < nkv && skip) {
-		len = kvecs[ix].iov_len;
-		if (skip < len)
-			break;
-		skip -= len;
-		n -= len;
-		ix++;
-	}
-
-	while (n && ix < nkv) {
-		len = min3(n, kvecs[ix].iov_len - skip, max_size);
-		span += len;
-		nsegs++;
-		ix++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-		skip = 0;
-		n -= len;
-	}
-
-	return min(span, max_size);
-}
-
-/*
- * Select the span of an xarray iterator we're going to use.  Limit it by both
- * maximum size and maximum number of segments.  It is assumed that segments
- * can be larger than a page in size, provided they're physically contiguous.
- * Returns the size of the span in bytes.
- */
-static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	struct folio *folio;
-	unsigned int nsegs = 0;
-	loff_t pos = iter->xarray_start + iter->iov_offset;
-	pgoff_t index = pos / PAGE_SIZE;
-	size_t span = 0, n = iter->count;
-
-	XA_STATE(xas, iter->xarray, index);
-
-	if (WARN_ON(!iov_iter_is_xarray(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = min(max_size, n - start_offset);
-
-	rcu_read_lock();
-	xas_for_each(&xas, folio, ULONG_MAX) {
-		size_t offset, flen, len;
-		if (xas_retry(&xas, folio))
-			continue;
-		if (WARN_ON(xa_is_value(folio)))
-			break;
-		if (WARN_ON(folio_test_hugetlb(folio)))
-			break;
-
-		flen = folio_size(folio);
-		offset = offset_in_folio(folio, pos);
-		len = min(max_size, flen - offset);
-		span += len;
-		nsegs++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-	}
-
-	rcu_read_unlock();
-	return min(span, max_size);
-}
-
-/*
- * Select the span of a folio queue iterator we're going to use.  Limit it by
- * both maximum size and maximum number of segments.  Returns the size of the
- * span in bytes.
- */
-static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	unsigned int nsegs = 0;
-	unsigned int slot = iter->folioq_slot;
-	size_t span = 0, n = iter->count;
-
-	if (WARN_ON(!iov_iter_is_folioq(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = umin(max_size, n - start_offset);
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-	}
-
-	start_offset += iter->iov_offset;
-	do {
-		size_t flen = folioq_folio_size(folioq, slot);
-
-		if (start_offset < flen) {
-			span += flen - start_offset;
-			nsegs++;
-			start_offset = 0;
-		} else {
-			start_offset -= flen;
-		}
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-
-		slot++;
-		if (slot >= folioq_nr_slots(folioq)) {
-			folioq = folioq->next;
-			slot = 0;
-		}
-	} while (folioq);
-
-	return umin(span, max_size);
-}
-
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
-			size_t max_size, size_t max_segs)
-{
-	if (iov_iter_is_folioq(iter))
-		return netfs_limit_folioq(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_bvec(iter))
-		return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_xarray(iter))
-		return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_kvec(iter))
-		return netfs_limit_kvec(iter, start_offset, max_size, max_segs);
-	BUG();
-}
-EXPORT_SYMBOL(netfs_limit_iter);
-#endif
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
deleted file mode 100644
index 292011c1cacb..000000000000
--- a/fs/netfs/rolling_buffer.c
+++ /dev/null
@@ -1,297 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/* Rolling buffer helpers
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#include
-#include
-#include
-#include
-#include "internal.h"
-
-static atomic_t debug_ids;
-
-/**
- * netfs_folioq_alloc - Allocate a folio_queue struct
- * @rreq_id: Associated debugging ID for tracing purposes
- * @gfp: Allocation constraints
- * @trace: Trace tag to indicate the purpose of the allocation
- *
- * Allocate, initialise and account the folio_queue struct and log a trace line
- * to mark the allocation.
- */
-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
-				       unsigned int /*enum netfs_folioq_trace*/ trace)
-{
-	struct folio_queue *fq;
-
-	fq = kmalloc_obj(*fq, gfp);
-	if (fq) {
-		netfs_stat(&netfs_n_folioq);
-		folioq_init(fq, rreq_id);
-		fq->debug_id = atomic_inc_return(&debug_ids);
-		trace_netfs_folioq(fq, trace);
-	}
-	return fq;
-}
-EXPORT_SYMBOL(netfs_folioq_alloc);
-
-/**
- * netfs_folioq_free - Free a folio_queue struct
- * @folioq: The object to free
- * @trace: Trace tag to indicate which free
- *
- * Free and unaccount the folio_queue struct.
- */
-void netfs_folioq_free(struct folio_queue *folioq,
-		       unsigned int /*enum netfs_trace_folioq*/ trace)
-{
-	trace_netfs_folioq(folioq, trace);
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(folioq);
-}
-EXPORT_SYMBOL(netfs_folioq_free);
-
-/*
- * Initialise a rolling buffer.  We allocate an empty folio queue struct so
- * that the pointers can be independently driven by the producer and the
- * consumer.
- */
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction)
-{
-	struct folio_queue *fq;
-
-	fq = netfs_folioq_alloc(rreq_id, GFP_NOFS, netfs_trace_folioq_rollbuf_init);
-	if (!fq)
-		return -ENOMEM;
-
-	roll->head = fq;
-	roll->tail = fq;
-	iov_iter_folio_queue(&roll->iter, direction, fq, 0, 0, 0);
-	return 0;
-}
-
-/*
- * Add another folio_queue to a rolling buffer if there's no space left.
- */
-int rolling_buffer_make_space(struct rolling_buffer *roll)
-{
-	struct folio_queue *fq, *head = roll->head;
-
-	if (!folioq_full(head))
-		return 0;
-
-	fq = netfs_folioq_alloc(head->rreq_id, GFP_NOFS, netfs_trace_folioq_make_space);
-	if (!fq)
-		return -ENOMEM;
-	fq->prev = head;
-
-	roll->head = fq;
-	if (folioq_full(head)) {
-		/* Make sure we don't leave the master iterator pointing to a
-		 * block that might get immediately consumed.
-		 */
-		if (roll->iter.folioq == head &&
-		    roll->iter.folioq_slot == folioq_nr_slots(head)) {
-			roll->iter.folioq = fq;
-			roll->iter.folioq_slot = 0;
-		}
-	}
-
-	/* Make sure the initialisation is stored before the next pointer.
-	 *
-	 * [!] NOTE: After we set head->next, the consumer is at liberty to
-	 * immediately delete the old head.
-	 */
-	smp_store_release(&head->next, fq);
-	return 0;
-}
-
-/*
- * Decant the list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch)
-{
-	struct folio_queue *fq;
-	struct page **vec;
-	int nr, ix, to;
-	ssize_t size = 0;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	fq = roll->head;
-	vec = (struct page **)fq->vec.folios;
-	nr = __readahead_batch(ractl, vec + folio_batch_count(&fq->vec),
-			       folio_batch_space(&fq->vec));
-	ix = fq->vec.nr;
-	to = ix + nr;
-	fq->vec.nr = to;
-	for (; ix < to; ix++) {
-		struct folio *folio = folioq_folio(fq, ix);
-		unsigned int order = folio_order(folio);
-
-		fq->orders[ix] = order;
-		size += PAGE_SIZE << order;
-		trace_netfs_folio(folio, netfs_folio_trace_read);
-		if (!folio_batch_add(put_batch, folio))
-			folio_batch_release(put_batch);
-	}
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, to);
-	return size;
-}
-
-/*
- * Decant the entire list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
-					 struct readahead_control *ractl,
-					 unsigned int rreq_id)
-{
-	XA_STATE(xas, &ractl->mapping->i_pages, ractl->_index);
-	struct folio_queue *fq;
-	struct folio *folio;
-	ssize_t loaded = 0;
-	int nr, slot = 0, npages = 0;
-
-	/* First allocate all the folioqs we're going to need to avoid having
-	 * to deal with ENOMEM later.
-	 */
-	nr = ractl->_nr_folios;
-	do {
-		fq = netfs_folioq_alloc(rreq_id, GFP_KERNEL,
-					netfs_trace_folioq_make_space);
-		if (!fq) {
-			rolling_buffer_clear(roll);
-			return -ENOMEM;
-		}
-		fq->prev = roll->head;
-		if (!roll->tail)
-			roll->tail = fq;
-		else
-			roll->head->next = fq;
-		roll->head = fq;
-
-		nr -= folioq_nr_slots(fq);
-	} while (nr > 0);
-
-	rcu_read_lock();
-
-	fq = roll->tail;
-	xas_for_each(&xas, folio, ractl->_index + ractl->_nr_pages - 1) {
-		unsigned int order;
-
-		if (xas_retry(&xas, folio))
-			continue;
-		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-
-		order = folio_order(folio);
-		fq->orders[slot] = order;
-		fq->vec.folios[slot] = folio;
-		loaded += PAGE_SIZE << order;
-		npages += 1 << order;
-		trace_netfs_folio(folio, netfs_folio_trace_read);
-
-		slot++;
-		if (slot >= folioq_nr_slots(fq)) {
-			fq->vec.nr = slot;
-			fq = fq->next;
-			if (!fq) {
-				WARN_ON_ONCE(npages < readahead_count(ractl));
-				break;
-			}
-			slot = 0;
-		}
-	}
-
-	rcu_read_unlock();
-
-	if (fq)
-		fq->vec.nr = slot;
-
-	WRITE_ONCE(roll->iter.count, loaded);
-	iov_iter_folio_queue(&roll->iter, ITER_DEST, roll->tail, 0, 0, loaded);
-	ractl->_index += npages;
-	ractl->_nr_pages -= npages;
-	return loaded;
-}
-
-/*
- * Append a folio to the rolling buffer.
- */
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags)
-{
-	ssize_t size = folio_size(folio);
-	int slot;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	slot = folioq_append(roll->head, folio);
-	if (flags & ROLLBUF_MARK_1)
-		folioq_mark(roll->head, slot);
-	if (flags & ROLLBUF_MARK_2)
-		folioq_mark2(roll->head, slot);
-
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, slot);
-	return size;
-}
-
-/*
- * Delete a spent buffer from a rolling queue and return the next in line.  We
- * don't return the last buffer to keep the pointers independent, but return
- * NULL instead.
- */
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll)
-{
-	struct folio_queue *spent = roll->tail, *next = READ_ONCE(spent->next);
-
-	if (!next)
-		return NULL;
-	next->prev = NULL;
-	netfs_folioq_free(spent, netfs_trace_folioq_delete);
-	roll->tail = next;
-	return next;
-}
-
-/*
- * Clear out a rolling queue.  Folios that have mark 1 set are put.
- */
-void rolling_buffer_clear(struct rolling_buffer *roll)
-{
-	struct folio_batch fbatch;
-	struct folio_queue *p;
-
-	folio_batch_init(&fbatch);
-
-	while ((p = roll->tail)) {
-		roll->tail = p->next;
-		for (int slot = 0; slot < folioq_count(p); slot++) {
-			struct folio *folio = folioq_folio(p, slot);
-
-			if (!folio)
-				continue;
-			if (folioq_is_marked(p, slot)) {
-				trace_netfs_folio(folio, netfs_folio_trace_put);
-				if (!folio_batch_add(&fbatch, folio))
-					folio_batch_release(&fbatch);
-			}
-		}
-
-		netfs_folioq_free(p, netfs_trace_folioq_clear);
-	}
-
-	folio_batch_release(&fbatch);
-}
diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h
deleted file mode 100644
index adab609c972e..000000000000
--- a/include/linux/folio_queue.h
+++ /dev/null
@@ -1,282 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Queue of folios definitions
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- *
- * See:
- *
- *	Documentation/core-api/folio_queue.rst
- *
- * for a description of the API.
- */
-
-#ifndef _LINUX_FOLIO_QUEUE_H
-#define _LINUX_FOLIO_QUEUE_H
-
-#include
-#include
-
-/*
- * Segment in a queue of running buffers.  Each segment can hold a number of
- * folios and a portion of the queue can be referenced with the ITER_FOLIOQ
- * iterator.  The possibility exists of inserting non-folio elements into the
- * queue (such as gaps).
- *
- * Explicit prev and next pointers are used instead of a list_head to make it
- * easier to add segments to tail and remove them from the head without the
- * need for a lock.
- */
-struct folio_queue {
-	struct folio_batch	vec;		/* Folios in the queue segment */
-	u8			orders[PAGEVEC_SIZE]; /* Order of each folio */
-	struct folio_queue	*next;		/* Next queue segment or NULL */
-	struct folio_queue	*prev;		/* Previous queue segment or NULL */
-	unsigned long		marks;		/* 1-bit mark per folio */
-	unsigned long		marks2;		/* Second 1-bit mark per folio */
-#if PAGEVEC_SIZE > BITS_PER_LONG
-#error marks is not big enough
-#endif
-	unsigned int		rreq_id;
-	unsigned int		debug_id;
-};
-
-/**
- * folioq_init - Initialise a folio queue segment
- * @folioq: The segment to initialise
- * @rreq_id: The request identifier to use in tracelines.
- *
- * Initialise a folio queue segment and set an identifier to be used in traces.
- *
- * Note that the folio pointers are left uninitialised.
- */
-static inline void folioq_init(struct folio_queue *folioq, unsigned int rreq_id)
-{
-	folio_batch_init(&folioq->vec);
-	folioq->next = NULL;
-	folioq->prev = NULL;
-	folioq->marks = 0;
-	folioq->marks2 = 0;
-	folioq->rreq_id = rreq_id;
-	folioq->debug_id = 0;
-}
-
-/**
- * folioq_nr_slots: Query the capacity of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that a particular folio queue segment might hold.
- * [!] NOTE: This must not be assumed to be the same for every segment!
- */
-static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq)
-{
-	return PAGEVEC_SIZE;
-}
-
-/**
- * folioq_count: Query the occupancy of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that have been added to a folio queue segment.
- * Note that this is not decreased as folios are removed from a segment.
- */
-static inline unsigned int folioq_count(struct folio_queue *folioq)
-{
-	return folio_batch_count(&folioq->vec);
-}
-
-/**
- * folioq_full: Query if a folio queue segment is full
- * @folioq: The segment to query
- *
- * Query if a folio queue segment is fully occupied.  Note that this does not
- * change if folios are removed from a segment.
- */
-static inline bool folioq_full(struct folio_queue *folioq)
-{
-	//return !folio_batch_space(&folioq->vec);
-	return folioq_count(folioq) >= folioq_nr_slots(folioq);
-}
-
-/**
- * folioq_is_marked: Check first folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the first mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_mark: Set the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_unmark: Clear the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_is_marked2: Check second folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the second mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_mark2: Set the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_unmark2: Clear the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_append: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue and the marks are left
- * unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	return slot;
-}
-
-/**
- * folioq_append_mark: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue, the first mark is set
- * and and the second and third marks are left unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	folioq_mark(folioq, slot);
-	return slot;
-}
-
-/**
- * folioq_folio: Get a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the folio in the specified slot from a folio queue segment.  Note
- * that no bounds check is made and if the slot hasn't been added into yet, the
- * pointer will be undefined.  If the slot has been cleared, NULL will be
- * returned.
- */
-static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->vec.folios[slot];
-}
-
-/**
- * folioq_folio_order: Get the order of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the order of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the order returned will be 0.
- */
-static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->orders[slot];
-}
-
-/**
- * folioq_folio_size: Get the size of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the size of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the size returned will be PAGE_SIZE.
- */
-static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot)
-{
-	return PAGE_SIZE << folioq_folio_order(folioq, slot);
-}
-
-/**
- * folioq_clear: Clear a folio from a folio queue segment
- * @folioq: The segment to clear
- * @slot: The folio slot to clear
- *
- * Clear a folio from a sequence in a folio queue segment and clear its marks.
- * The occupancy count is left unchanged.
- */
-static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot)
-{
-	folioq->vec.folios[slot] = NULL;
-	folioq_unmark(folioq, slot);
-	folioq_unmark2(folioq, slot);
-}
-
-#endif /* _LINUX_FOLIO_QUEUE_H */
diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h
deleted file mode 100644
index b35ef43f325f..000000000000
--- a/include/linux/rolling_buffer.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Rolling buffer of folios
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#ifndef _ROLLING_BUFFER_H
-#define _ROLLING_BUFFER_H
-
-#include <linux/folio_queue.h>
-#include <linux/uio.h>
-
-/*
- * Rolling buffer.  Whilst the buffer is live and in use, folios and folio
- * queue segments can be added to one end by one thread and removed from the
- * other end by another thread.  The buffer isn't allowed to be empty; it must
- * always have at least one folio_queue in it so that neither side has to
- * modify both queue pointers.
- *
- * The iterator in the buffer is extended as buffers are inserted.  It can be
- * snapshotted to use a segment of the buffer.
- */
-struct rolling_buffer {
-	struct folio_queue	*head;		/* Producer's insertion point */
-	struct folio_queue	*tail;		/* Consumer's removal point */
-	struct iov_iter		iter;		/* Iterator tracking what's left in the buffer */
-	u8			next_head_slot;	/* Next slot in ->head */
-	u8			first_tail_slot; /* First slot in ->tail */
-};
-
-/*
- * Snapshot of a rolling buffer.
- */
-struct rolling_buffer_snapshot {
-	struct folio_queue	*curr_folioq;	/* Queue segment in which current folio resides */
-	unsigned char		curr_slot;	/* Folio currently being read */
-	unsigned char		curr_order;	/* Order of folio */
-};
-
-/* Marks to store per-folio in the internal folio_queue structs. */
-#define ROLLBUF_MARK_1	BIT(0)
-#define ROLLBUF_MARK_2	BIT(1)
-
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction);
-int rolling_buffer_make_space(struct rolling_buffer *roll);
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch);
-ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
-					 struct readahead_control *ractl,
-					 unsigned int rreq_id);
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags);
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll);
-void rolling_buffer_clear(struct rolling_buffer *roll);
-
-static inline void rolling_buffer_advance(struct rolling_buffer *roll, size_t amount)
-{
-	iov_iter_advance(&roll->iter, amount);
-}
-
-#endif /* _ROLLING_BUFFER_H */