From: Yu Kuai <yukuai3@huawei.com>
Currently, both mq-deadline and bfq have a global spin lock that is
grabbed inside elevator methods like dispatch_request, insert_requests,
and bio_merge. This global lock is the main reason mq-deadline and bfq
can't scale very well.
While dispatching a request, blk_mq_get_dispatch_budget() and
blk_mq_get_driver_tag() must be called, and they are not ready to be
called inside elevator methods, hence introducing a new method like
dispatch_requests is not possible.
Hence introduce a new high-level elevator lock; currently it protects
dispatch_request only. Following patches will convert mq-deadline and
bfq to use this lock and finally support request batch dispatching by
calling the method multiple times while holding the lock.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-mq-sched.c | 6 ++++++
block/elevator.c | 1 +
block/elevator.h | 4 ++--
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 55a0fd105147..7911fae75ce4 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -98,6 +98,7 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
max_dispatch = hctx->queue->nr_requests;
do {
+ bool sq_sched = blk_queue_sq_sched(q);
struct request *rq;
int budget_token;
@@ -113,7 +114,12 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
if (budget_token < 0)
break;
+ if (sq_sched)
+ spin_lock(&e->lock);
rq = e->type->ops.dispatch_request(hctx);
+ if (sq_sched)
+ spin_unlock(&e->lock);
+
if (!rq) {
blk_mq_put_dispatch_budget(q, budget_token);
/*
diff --git a/block/elevator.c b/block/elevator.c
index 88f8f36bed98..45303af0ca73 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
eq->type = e;
kobject_init(&eq->kobj, &elv_ktype);
mutex_init(&eq->sysfs_lock);
+ spin_lock_init(&eq->lock);
hash_init(eq->hash);
return eq;
diff --git a/block/elevator.h b/block/elevator.h
index a07ce773a38f..cbbac4f7825c 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
/*
* each queue has an elevator_queue associated with it
*/
-struct elevator_queue
-{
+struct elevator_queue {
struct elevator_type *type;
void *elevator_data;
struct kobject kobj;
struct mutex sysfs_lock;
+ spinlock_t lock;
unsigned long flags;
DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
};
--
2.39.2
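The batch dispatching that the commit message plans for follow-up patches is easiest to see in a small sketch. Below is a minimal userspace analogue in plain C with pthreads; every name in it (sched_queue, dispatch_one, dispatch_batch, and so on) is a hypothetical illustration rather than a kernel API, and it only shows the shape of "call the dispatch method multiple times while holding the lock":

/*
 * Userspace sketch: one mutex round-trip covers a whole batch of
 * dispatches instead of one lock/unlock cycle per request. All names
 * are hypothetical, not from the kernel patch above.
 */
#include <pthread.h>
#include <stddef.h>

struct request { struct request *next; };

struct sched_queue {
	pthread_mutex_t lock;	/* stand-in for elevator_queue->lock */
	struct request *head;	/* pending requests, FIFO */
};

/* stand-in for e->type->ops.dispatch_request(); caller holds q->lock */
static struct request *dispatch_one(struct sched_queue *q)
{
	struct request *rq = q->head;

	if (rq)
		q->head = rq->next;
	return rq;
}

/* batched variant: the lock is taken once for up to @max requests */
static size_t dispatch_batch(struct sched_queue *q, struct request **out,
			     size_t max)
{
	size_t n = 0;

	pthread_mutex_lock(&q->lock);
	while (n < max) {
		struct request *rq = dispatch_one(q);

		if (!rq)
			break;
		out[n++] = rq;
	}
	pthread_mutex_unlock(&q->lock);
	return n;
}

int main(void)
{
	struct request rqs[3] = { { &rqs[1] }, { &rqs[2] }, { NULL } };
	struct sched_queue q = { PTHREAD_MUTEX_INITIALIZER, &rqs[0] };
	struct request *out[3];

	return dispatch_batch(&q, out, 3) == 3 ? 0 : 1;
}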
On 7/30/25 10:22, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Currently, both mq-deadline and bfq have a global spin lock that is
> grabbed inside elevator methods like dispatch_request, insert_requests,
> and bio_merge. This global lock is the main reason mq-deadline and bfq
> can't scale very well.
>
> While dispatching a request, blk_mq_get_dispatch_budget() and
> blk_mq_get_driver_tag() must be called, and they are not ready to be
> called inside elevator methods, hence introducing a new method like
> dispatch_requests is not possible.
>
> Hence introduce a new high-level elevator lock; currently it protects
> dispatch_request only. Following patches will convert mq-deadline and
> bfq to use this lock and finally support request batch dispatching by
> calling the method multiple times while holding the lock.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  block/blk-mq-sched.c | 6 ++++++
>  block/elevator.c     | 1 +
>  block/elevator.h     | 4 ++--
>  3 files changed, 9 insertions(+), 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
On 7/30/25 1:22 AM, Yu Kuai wrote:
> +	if (sq_sched)
> +		spin_lock(&e->lock);
> 	rq = e->type->ops.dispatch_request(hctx);
> +	if (sq_sched)
> +		spin_unlock(&e->lock);

The above will confuse static analyzers. Please change it into something
like the following:

	if (blk_queue_sq_sched(q)) {
		spin_lock(&e->lock);
		rq = e->type->ops.dispatch_request(hctx);
		spin_unlock(&e->lock);
	} else {
		rq = e->type->ops.dispatch_request(hctx);
	}

Otherwise this patch looks good to me.

Thanks,

Bart.
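Bart's point about static analyzers generalizes: with a conditionally taken lock, a checker (sparse's lock-context tracking, for instance, which reports this kind of pattern as a "context imbalance") cannot prove on every path that each unlock matches a lock, whereas lock/unlock pairs kept inside one branch are trivially balanced. A minimal userspace illustration in plain C with pthreads; the function names here are hypothetical, not from the patch:

/*
 * Why conditional locking confuses checkers: in work_conditional() the
 * lock state at do_work() depends on a runtime value, so a static
 * analyzer cannot pair the unlock with the lock on every path. In
 * work_branched() each branch acquires and releases symmetrically.
 * Hypothetical names; userspace analogue of the pattern in the patch.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int do_work(void)
{
	return 42;
}

/* analyzer-unfriendly: whether 'lock' is held at do_work() is data-dependent */
static int work_conditional(bool locked)
{
	int ret;

	if (locked)
		pthread_mutex_lock(&lock);
	ret = do_work();
	if (locked)
		pthread_mutex_unlock(&lock);
	return ret;
}

/* analyzer-friendly: every control-flow path is self-balanced */
static int work_branched(bool locked)
{
	int ret;

	if (locked) {
		pthread_mutex_lock(&lock);
		ret = do_work();
		pthread_mutex_unlock(&lock);
	} else {
		ret = do_work();
	}
	return ret;
}

int main(void)
{
	return work_conditional(true) == work_branched(true) ? 0 : 1;
}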
Hi,

On 2025/7/31 1:19, Bart Van Assche wrote:
> On 7/30/25 1:22 AM, Yu Kuai wrote:
>> +	if (sq_sched)
>> +		spin_lock(&e->lock);
>> 	rq = e->type->ops.dispatch_request(hctx);
>> +	if (sq_sched)
>> +		spin_unlock(&e->lock);
>
> The above will confuse static analyzers. Please change it into something
> like the following:
>
> 	if (blk_queue_sq_sched(q)) {
> 		spin_lock(&e->lock);
> 		rq = e->type->ops.dispatch_request(hctx);
> 		spin_unlock(&e->lock);
> 	} else {
> 		rq = e->type->ops.dispatch_request(hctx);
> 	}
>
> Otherwise this patch looks good to me.

Ok, thanks for the review, will change this in the next version.

Kuai

> Thanks,
>
> Bart.