[PATCH v5 RESEND] block: plug attempts to batch allocate tags multiple times

Posted by Xue He 2 months, 3 weeks ago
When IOs are submitted in a batch through the plug mechanism, a single
call to blk_mq_get_tags() may return fewer tags than requested. Retry
the batch allocation until all requested tags are allocated or no more
tags are available, instead of falling back to allocating each of the
remaining requests individually.
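
For illustration only (not part of the patch): a standalone sketch of
the retry-and-walk pattern, with a made-up get_tags() stand-in that
grants at most a few tags per call; names and values are hypothetical,
not the kernel API:

#include <stdio.h>

/*
 * Illustration only: a made-up stand-in for blk_mq_get_tags() that
 * grants at most 3 tags per call, returned as set bits in a mask.
 */
static unsigned long get_tags(int want, int *offset)
{
	static int next_tag;
	int granted = want < 3 ? want : 3;

	*offset = next_tag;
	next_tag += granted;
	return (1UL << granted) - 1;	/* 'granted' low bits set */
}

int main(void)
{
	int nr_tags = 8;	/* batch size requested by the plug */
	int nr = 0, offset, i;
	unsigned long mask;

	do {
		mask = get_tags(nr_tags, &offset);
		if (!mask)
			break;		/* tags exhausted, stop early */
		for (i = 0; mask; i++) {
			if (!(mask & (1UL << i)))
				continue;
			mask &= ~(1UL << i);
			printf("init request for tag %d\n", offset + i);
			nr_tags--;
			nr++;
		}
	} while (nr_tags);

	printf("allocated %d requests\n", nr);	/* prints 8 */
	return 0;
}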

------------------------------------------------------------
Perf:
base code: __blk_mq_alloc_requests() 1.31%
patch: __blk_mq_alloc_requests() 0.7%
------------------------------------------------------------

---
changes since v1:
- Fold the repeated batch allocations into a single loop that retries
  until the requested batch count is reached

changes since v2:
- Move the call site of the remainder handling
- Rework when the sbitmap cleanup runs

changes since v3:
- Add the tag handling inside the loop
- Add helper sbitmap_find_bits_in_word

changes since v4:
- Split the blk-mq.c changes from the sbitmap changes

Signed-off-by: hexue <xue01.he@samsung.com>
---
 block/blk-mq.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 09f579414161..64cd0a3c7cbf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -467,26 +467,31 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
 	unsigned long tag_mask;
 	int i, nr = 0;
 
-	tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
-	if (unlikely(!tag_mask))
-		return NULL;
+	do {
+		tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
+		if (unlikely(!tag_mask)) {
+			if (nr == 0)
+				return NULL;
+			break;
+		}
+		tags = blk_mq_tags_from_data(data);
+		for (i = 0; tag_mask; i++) {
+			if (!(tag_mask & (1UL << i)))
+				continue;
+			tag = tag_offset + i;
+			prefetch(tags->static_rqs[tag]);
+			tag_mask &= ~(1UL << i);
+			rq = blk_mq_rq_ctx_init(data, tags, tag);
+			rq_list_add_head(data->cached_rqs, rq);
+			data->nr_tags--;
+			nr++;
+		}
+		if (!(data->rq_flags & RQF_SCHED_TAGS))
+			blk_mq_add_active_requests(data->hctx, nr);
+	} while (data->nr_tags);
 
-	tags = blk_mq_tags_from_data(data);
-	for (i = 0; tag_mask; i++) {
-		if (!(tag_mask & (1UL << i)))
-			continue;
-		tag = tag_offset + i;
-		prefetch(tags->static_rqs[tag]);
-		tag_mask &= ~(1UL << i);
-		rq = blk_mq_rq_ctx_init(data, tags, tag);
-		rq_list_add_head(data->cached_rqs, rq);
-		nr++;
-	}
-	if (!(data->rq_flags & RQF_SCHED_TAGS))
-		blk_mq_add_active_requests(data->hctx, nr);
 	/* caller already holds a reference, add for remainder */
 	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
-	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rqs);
 }
-- 
2.34.1
Re: [PATCH v5 RESEND] block: plug attempts to batch allocate tags multiple times
Posted by Ming Lei 2 months, 3 weeks ago
On Thu, Nov 13, 2025 at 08:02:02AM +0000, Xue He wrote:
> When IOs are submitted in a batch through the plug mechanism, a single
> call to blk_mq_get_tags() may return fewer tags than requested. Retry
> the batch allocation until all requested tags are allocated or no more
> tags are available, instead of falling back to allocating each of the
> remaining requests individually.
> 
> ------------------------------------------------------------
> Perf:
> base code: __blk_mq_alloc_requests() 1.31%
> patch: __blk_mq_alloc_requests() 0.7%
> ------------------------------------------------------------

Can you include the workload together with the perf numbers?

> 
> ---
> changes since v1:
> - Fold the repeated batch allocations into a single loop that retries
>   until the requested batch count is reached
> 
> changes since v2:
> - Move the call site of the remainder handling
> - Rework when the sbitmap cleanup runs
> 
> changes since v3:
> - Add the tag handling inside the loop
> - Add helper sbitmap_find_bits_in_word
> 
> changes since v4:
> - Split the blk-mq.c changes from the sbitmap changes
> 
> Signed-off-by: hexue <xue01.he@samsung.com>
> ---
>  block/blk-mq.c | 39 ++++++++++++++++++++++-----------------
>  1 file changed, 22 insertions(+), 17 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 09f579414161..64cd0a3c7cbf 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -467,26 +467,31 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
>  	unsigned long tag_mask;
>  	int i, nr = 0;
>  
> -	tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
> -	if (unlikely(!tag_mask))
> -		return NULL;
> +	do {
> +		tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
> +		if (unlikely(!tag_mask)) {
> +			if (nr == 0)
> +				return NULL;
> +			break;
> +		}
> +		tags = blk_mq_tags_from_data(data);
> +		for (i = 0; tag_mask; i++) {
> +			if (!(tag_mask & (1UL << i)))
> +				continue;
> +			tag = tag_offset + i;
> +			prefetch(tags->static_rqs[tag]);
> +			tag_mask &= ~(1UL << i);
> +			rq = blk_mq_rq_ctx_init(data, tags, tag);
> +			rq_list_add_head(data->cached_rqs, rq);
> +			data->nr_tags--;
> +			nr++;
> +		}
> +		if (!(data->rq_flags & RQF_SCHED_TAGS))
> +			blk_mq_add_active_requests(data->hctx, nr);

Calling this inside the loop is not only less efficient, it is also an
over-counting bug: `nr` accumulates across iterations, so every pass
re-adds the requests already counted by the previous passes (two
iterations allocating 3 and then 2 requests would add 3 + 5 = 8 active
requests instead of 5). Please move the above two lines after
`percpu_ref_get_many`.
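
For clarity, a sketch of the resulting function tail (an untested
rearrangement of the v5 hunk, not a posted patch): the two lines are
dropped from the loop body and run once after percpu_ref_get_many:

	} while (data->nr_tags);

	/* caller already holds a reference, add for remainder */
	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);

	/*
	 * nr is now the final total across all iterations, so the
	 * active-request accounting runs exactly once.
	 */
	if (!(data->rq_flags & RQF_SCHED_TAGS))
		blk_mq_add_active_requests(data->hctx, nr);

	return rq_list_pop(data->cached_rqs);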


Thanks, 
Ming