From: Yu Kuai
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [PATCH RFC 1/4] elevator: introduce global lock for sq_shared elevator
Date: Fri, 30 May 2025 16:03:52 +0800
Message-Id: <20250530080355.1138759-2-yukuai1@huaweicloud.com>
In-Reply-To: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
References: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
From: Yu Kuai

Currently, both mq-deadline and bfq have an internal global lock. Prepare
to convert them to use this higher-level lock, and to support batch
request dispatching.

Signed-off-by: Yu Kuai
---
 block/blk-mq-sched.c |  4 +--
 block/blk-mq.c       |  5 ++--
 block/elevator.c     |  1 +
 block/elevator.h     | 61 ++++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 64 insertions(+), 7 deletions(-)
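As an aside for readers new to this pattern: the idea behind the
elevator_*() wrappers added below (take the elevator-wide lock only when
the queue is flagged as using one shared scheduler instance) can be
sketched as a small standalone program. The pthread mutex stands in for
the kernel spinlock and every name here (sched_queue, sq_shared,
dispatch_locked, ...) is invented for the illustration; this is not
kernel code.

/*
 * Standalone sketch of the conditional-lock wrapper pattern.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct sched_queue {
	pthread_mutex_t lock;	/* plays the role of elevator_queue->lock */
	bool sq_shared;		/* plays the role of blk_queue_sq_sched() */
	int pending;		/* toy "scheduler state" protected by lock */
};

/* Toy stand-in for ops.dispatch_request(): pop one pending item. */
static int dispatch_op(struct sched_queue *q)
{
	return q->pending > 0 ? q->pending-- : -1;
}

/* Wrapper: only sq_shared queues pay for the lock, others stay lockless. */
static int dispatch_locked(struct sched_queue *q)
{
	int ret;

	if (q->sq_shared)
		pthread_mutex_lock(&q->lock);

	ret = dispatch_op(q);

	if (q->sq_shared)
		pthread_mutex_unlock(&q->lock);

	return ret;
}

int main(void)
{
	struct sched_queue q = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.sq_shared = true,
		.pending = 3,
	};

	while (dispatch_locked(&q) >= 0)
		printf("dispatched one request\n");
	return 0;
}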
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 55a0fd105147..c1390d3e6381 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -113,7 +113,7 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		if (budget_token < 0)
 			break;
 
-		rq = e->type->ops.dispatch_request(hctx);
+		rq = elevator_dispatch_request(hctx);
 		if (!rq) {
 			blk_mq_put_dispatch_budget(q, budget_token);
 			/*
@@ -342,7 +342,7 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	enum hctx_type type;
 
 	if (e && e->type->ops.bio_merge) {
-		ret = e->type->ops.bio_merge(q, bio, nr_segs);
+		ret = elevator_bio_merge(q, bio, nr_segs);
 		goto out_put;
 	}
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4806b867e37d..2650b7b28d1e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2637,7 +2637,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
 		WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
 
 		list_add(&rq->queuelist, &list);
-		q->elevator->type->ops.insert_requests(hctx, &list, flags);
+		elevator_insert_requests(hctx, &list, flags);
 	} else {
 		trace_block_rq_insert(rq);
 
@@ -2912,8 +2912,7 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
 		spin_unlock(&this_hctx->lock);
 		blk_mq_run_hw_queue(this_hctx, from_sched);
 	} else if (this_hctx->queue->elevator) {
-		this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
-				&list, 0);
+		elevator_insert_requests(this_hctx, &list, 0);
 		blk_mq_run_hw_queue(this_hctx, from_sched);
 	} else {
 		blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
diff --git a/block/elevator.c b/block/elevator.c
index ab22542e6cf0..91df270d9d91 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
 	eq->type = e;
 	kobject_init(&eq->kobj, &elv_ktype);
 	mutex_init(&eq->sysfs_lock);
+	spin_lock_init(&eq->lock);
 	hash_init(eq->hash);
 
 	return eq;
diff --git a/block/elevator.h b/block/elevator.h
index a07ce773a38f..8399dfe5c3b6 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
 /*
  * each queue has an elevator_queue associated with it
  */
-struct elevator_queue
-{
+struct elevator_queue {
 	struct elevator_type *type;
 	void *elevator_data;
 	struct kobject kobj;
 	struct mutex sysfs_lock;
+	spinlock_t lock;
 	unsigned long flags;
 	DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
 };
@@ -186,4 +186,61 @@ extern struct request *elv_rb_find(struct rb_root *, sector_t);
 void blk_mq_sched_reg_debugfs(struct request_queue *q);
 void blk_mq_sched_unreg_debugfs(struct request_queue *q);
 
+#define elevator_lock(e)	spin_lock_irq(&(e)->lock)
+#define elevator_unlock(e)	spin_unlock_irq(&(e)->lock)
+
+static inline struct request *elevator_dispatch_request(
+		struct blk_mq_hw_ctx *hctx)
+{
+	struct request_queue *q = hctx->queue;
+	struct elevator_queue *e = q->elevator;
+	bool sq_shared = blk_queue_sq_sched(q);
+	struct request *rq;
+
+	if (sq_shared)
+		elevator_lock(e);
+
+	rq = e->type->ops.dispatch_request(hctx);
+
+	if (sq_shared)
+		elevator_unlock(e);
+
+	return rq;
+}
+
+static inline void elevator_insert_requests(struct blk_mq_hw_ctx *hctx,
+					    struct list_head *list,
+					    blk_insert_t flags)
+{
+	struct request_queue *q = hctx->queue;
+	struct elevator_queue *e = q->elevator;
+	bool sq_shared = blk_queue_sq_sched(q);
+
+	if (sq_shared)
+		elevator_lock(e);
+
+	e->type->ops.insert_requests(hctx, list, flags);
+
+	if (sq_shared)
+		elevator_unlock(e);
+}
+
+static inline bool elevator_bio_merge(struct request_queue *q, struct bio *bio,
+				      unsigned int nr_segs)
+{
+	struct elevator_queue *e = q->elevator;
+	bool sq_shared = blk_queue_sq_sched(q);
+	bool ret;
+
+	if (sq_shared)
+		elevator_lock(e);
+
+	ret = e->type->ops.bio_merge(q, bio, nr_segs);
+
+	if (sq_shared)
+		elevator_unlock(e);
+
+	return ret;
+}
+
 #endif /* _ELEVATOR_H */
-- 
2.39.2

From: Yu Kuai
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [PATCH RFC 2/4] mq-deadline: switch to use elevator lock
Date: Fri, 30 May 2025 16:03:53 +0800
Message-Id: <20250530080355.1138759-3-yukuai1@huaweicloud.com>
In-Reply-To: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
References: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
From: Yu Kuai

There are no functional changes; this prepares for batch request
dispatching.

Signed-off-by: Yu Kuai
---
 block/elevator.h    |  1 +
 block/mq-deadline.c | 60 +++++++++++++++++----------------------------
 2 files changed, 23 insertions(+), 38 deletions(-)

diff --git a/block/elevator.h b/block/elevator.h
index 8399dfe5c3b6..1b325d131c51 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -188,6 +188,7 @@ void blk_mq_sched_unreg_debugfs(struct request_queue *q);
 
 #define elevator_lock(e)	spin_lock_irq(&(e)->lock)
 #define elevator_unlock(e)	spin_unlock_irq(&(e)->lock)
+#define elevator_assert_lock(e)	lockdep_assert_held(&(e)->lock)
 
 static inline struct request *elevator_dispatch_request(
 		struct blk_mq_hw_ctx *hctx)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 2edf1cac06d5..a68a50da6320 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -101,7 +101,7 @@ struct deadline_data {
 	u32 async_depth;
 	int prio_aging_expire;
 
-	spinlock_t lock;
+	struct elevator_queue *eq;
 };
 
 /* Maps an I/O priority class to a deadline scheduler priority. */
@@ -213,7 +213,7 @@ static void dd_merged_requests(struct request_queue *q, struct request *req,
 	const u8 ioprio_class = dd_rq_ioclass(next);
 	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
 
-	lockdep_assert_held(&dd->lock);
+	elevator_assert_lock(q->elevator);
 
 	dd->per_prio[prio].stats.merged++;
 
@@ -253,7 +253,7 @@ static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
 {
 	const struct io_stats_per_prio *stats = &dd->per_prio[prio].stats;
 
-	lockdep_assert_held(&dd->lock);
+	elevator_assert_lock(dd->eq);
 
 	return stats->inserted - atomic_read(&stats->completed);
 }
@@ -323,7 +323,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	enum dd_prio prio;
 	u8 ioprio_class;
 
-	lockdep_assert_held(&dd->lock);
+	elevator_assert_lock(dd->eq);
 
 	if (!list_empty(&per_prio->dispatch)) {
 		rq = list_first_entry(&per_prio->dispatch, struct request,
@@ -434,7 +434,7 @@ static struct request *dd_dispatch_prio_aged_requests(struct deadline_data *dd,
 	enum dd_prio prio;
 	int prio_cnt;
 
-	lockdep_assert_held(&dd->lock);
+	elevator_assert_lock(dd->eq);
 
 	prio_cnt = !!dd_queued(dd, DD_RT_PRIO) + !!dd_queued(dd, DD_BE_PRIO) +
 		   !!dd_queued(dd, DD_IDLE_PRIO);
@@ -466,10 +466,9 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	struct request *rq;
 	enum dd_prio prio;
 
-	spin_lock(&dd->lock);
 	rq = dd_dispatch_prio_aged_requests(dd, now);
 	if (rq)
-		goto unlock;
+		return rq;
 
 	/*
 	 * Next, dispatch requests in priority order. Ignore lower priority
@@ -481,9 +480,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 			break;
 	}
 
-unlock:
-	spin_unlock(&dd->lock);
-
 	return rq;
 }
 
@@ -552,9 +548,9 @@ static void dd_exit_sched(struct elevator_queue *e)
 		WARN_ON_ONCE(!list_empty(&per_prio->fifo_list[DD_READ]));
 		WARN_ON_ONCE(!list_empty(&per_prio->fifo_list[DD_WRITE]));
 
-		spin_lock(&dd->lock);
+		elevator_lock(e);
 		queued = dd_queued(dd, prio);
-		spin_unlock(&dd->lock);
+		elevator_unlock(e);
 
 		WARN_ONCE(queued != 0,
 			  "statistics for priority %d: i %u m %u d %u c %u\n",
@@ -601,7 +597,7 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
 	dd->last_dir = DD_WRITE;
 	dd->fifo_batch = fifo_batch;
 	dd->prio_aging_expire = prio_aging_expire;
-	spin_lock_init(&dd->lock);
+	dd->eq = eq;
 
 	/* We dispatch from request queue wide instead of hw queue */
 	blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
@@ -653,14 +649,10 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
 static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
 	struct request *free = NULL;
 	bool ret;
 
-	spin_lock(&dd->lock);
 	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
-	spin_unlock(&dd->lock);
-
 	if (free)
 		blk_mq_free_request(free);
 
@@ -681,8 +673,6 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	struct dd_per_prio *per_prio;
 	enum dd_prio prio;
 
-	lockdep_assert_held(&dd->lock);
-
 	prio = ioprio_class_to_prio[ioprio_class];
 	per_prio = &dd->per_prio[prio];
 	if (!rq->elv.priv[0])
@@ -721,11 +711,8 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
 			       struct list_head *list,
 			       blk_insert_t flags)
 {
-	struct request_queue *q = hctx->queue;
-	struct deadline_data *dd = q->elevator->elevator_data;
 	LIST_HEAD(free);
 
-	spin_lock(&dd->lock);
 	while (!list_empty(list)) {
 		struct request *rq;
 
@@ -733,7 +720,6 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
 		list_del_init(&rq->queuelist);
 		dd_insert_request(hctx, rq, flags, &free);
 	}
-	spin_unlock(&dd->lock);
 
 	blk_mq_free_requests(&free);
 }
@@ -849,13 +835,13 @@ static const struct elv_fs_entry deadline_attrs[] = {
 #define DEADLINE_DEBUGFS_DDIR_ATTRS(prio, data_dir, name)		\
 static void *deadline_##name##_fifo_start(struct seq_file *m,		\
 					  loff_t *pos)			\
-	__acquires(&dd->lock)						\
+	__acquires(&q->elevator->lock)					\
 {									\
 	struct request_queue *q = m->private;				\
 	struct deadline_data *dd = q->elevator->elevator_data;		\
 	struct dd_per_prio *per_prio = &dd->per_prio[prio];		\
 									\
-	spin_lock(&dd->lock);						\
+	elevator_lock(q->elevator);					\
 	return seq_list_start(&per_prio->fifo_list[data_dir], *pos);	\
 }									\
 									\
@@ -870,12 +856,11 @@ static void *deadline_##name##_fifo_next(struct seq_file *m, void *v,	\
 }									\
 									\
 static void deadline_##name##_fifo_stop(struct seq_file *m, void *v)	\
-	__releases(&dd->lock)						\
+	__releases(&q->elevator->lock)					\
 {									\
 	struct request_queue *q = m->private;				\
-	struct deadline_data *dd = q->elevator->elevator_data;		\
 									\
-	spin_unlock(&dd->lock);						\
+	elevator_unlock(q->elevator);					\
 }									\
 									\
 static const struct seq_operations deadline_##name##_fifo_seq_ops = {	\
@@ -941,11 +926,11 @@ static int dd_queued_show(void *data, struct seq_file *m)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	u32 rt, be, idle;
 
-	spin_lock(&dd->lock);
+	elevator_lock(q->elevator);
 	rt = dd_queued(dd, DD_RT_PRIO);
 	be = dd_queued(dd, DD_BE_PRIO);
 	idle = dd_queued(dd, DD_IDLE_PRIO);
-	spin_unlock(&dd->lock);
+	elevator_unlock(q->elevator);
 
 	seq_printf(m, "%u %u %u\n", rt, be, idle);
 
@@ -957,7 +942,7 @@ static u32 dd_owned_by_driver(struct deadline_data *dd, enum dd_prio prio)
 {
 	const struct io_stats_per_prio *stats = &dd->per_prio[prio].stats;
 
-	lockdep_assert_held(&dd->lock);
+	elevator_assert_lock(dd->eq);
 
 	return stats->dispatched + stats->merged -
 		atomic_read(&stats->completed);
@@ -969,11 +954,11 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	u32 rt, be, idle;
 
-	spin_lock(&dd->lock);
+	elevator_lock(q->elevator);
 	rt = dd_owned_by_driver(dd, DD_RT_PRIO);
 	be = dd_owned_by_driver(dd, DD_BE_PRIO);
 	idle = dd_owned_by_driver(dd, DD_IDLE_PRIO);
-	spin_unlock(&dd->lock);
+	elevator_unlock(q->elevator);
 
 	seq_printf(m, "%u %u %u\n", rt, be, idle);
 
@@ -983,13 +968,13 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
 #define DEADLINE_DISPATCH_ATTR(prio)					\
 static void *deadline_dispatch##prio##_start(struct seq_file *m,	\
 					     loff_t *pos)		\
-	__acquires(&dd->lock)						\
+	__acquires(&q->elevator->lock)					\
 {									\
 	struct request_queue *q = m->private;				\
 	struct deadline_data *dd = q->elevator->elevator_data;		\
 	struct dd_per_prio *per_prio = &dd->per_prio[prio];		\
 									\
-	spin_lock(&dd->lock);						\
+	elevator_lock(q->elevator);					\
 	return seq_list_start(&per_prio->dispatch, *pos);		\
 }									\
 									\
@@ -1004,12 +989,11 @@ static void *deadline_dispatch##prio##_next(struct seq_file *m,	\
 }									\
 									\
 static void deadline_dispatch##prio##_stop(struct seq_file *m, void *v)	\
-	__releases(&dd->lock)						\
+	__releases(&q->elevator->lock)					\
 {									\
 	struct request_queue *q = m->private;				\
-	struct deadline_data *dd = q->elevator->elevator_data;		\
 									\
-	spin_unlock(&dd->lock);						\
+	elevator_unlock(q->elevator);					\
 }									\
 									\
 static const struct seq_operations deadline_dispatch##prio##_seq_ops = {	\
-- 
2.39.2
From: Yu Kuai
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [PATCH RFC 3/4] blk-mq-sched: refactor __blk_mq_do_dispatch_sched()
Date: Fri, 30 May 2025 16:03:54 +0800
Message-Id: <20250530080355.1138759-4-yukuai1@huaweicloud.com>
In-Reply-To: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
References: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
From: Yu Kuai

Introduce struct sched_dispatch_ctx, and split __blk_mq_do_dispatch_sched()
into elevator_dispatch_one_request() and elevator_finish_dispatch(). This
makes the code cleaner and prepares for batch request dispatching.

Signed-off-by: Yu Kuai
---
 block/blk-mq-sched.c | 181 ++++++++++++++++++++++++++-----------------
 1 file changed, 109 insertions(+), 72 deletions(-)
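The shape of the refactor, reduced to a standalone toy program for
readers skimming the diff: loop state that used to live in local
variables of __blk_mq_do_dispatch_sched() moves into a context struct,
so the "dispatch one" step and the "finish" step become separate helpers
that a later patch can drive differently. All names below are invented
for the illustration and only mirror the ones introduced by this patch;
this is not kernel code.

/* Toy model of the sched_dispatch_ctx split. Build with: cc toy.c */
#include <stdbool.h>
#include <stdio.h>

struct dispatch_ctx {
	int queued;		/* items still owned by the "scheduler" */
	int count;		/* items moved to the local list so far */
	bool busy;
};

static bool dispatch_one(struct dispatch_ctx *ctx)
{
	if (ctx->queued == 0)
		return false;	/* nothing to hand out, stop the loop */
	ctx->queued--;
	ctx->count++;
	return true;
}

static int finish_dispatch(struct dispatch_ctx *ctx)
{
	if (ctx->busy)
		return -1;	/* caller must re-run, mirrors -EAGAIN */
	return ctx->count ? 1 : 0;
}

int main(void)
{
	struct dispatch_ctx ctx = { .queued = 4 };
	int max_dispatch = 8;

	do {
		if (!dispatch_one(&ctx))
			break;
	} while (ctx.count < max_dispatch);

	printf("dispatched %d, finish() = %d\n", ctx.count,
	       finish_dispatch(&ctx));
	return 0;
}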
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index c1390d3e6381..990d0f19594a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -74,85 +74,88 @@ static bool blk_mq_dispatch_hctx_list(struct list_head *rq_list)
 
 #define BLK_MQ_BUDGET_DELAY	3		/* ms units */
 
-/*
- * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
- * its queue by itself in its completion handler, so we don't need to
- * restart queue if .get_budget() fails to get the budget.
- *
- * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
- * be run again. This is necessary to avoid starving flushes.
- */
-static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
-{
-	struct request_queue *q = hctx->queue;
-	struct elevator_queue *e = q->elevator;
-	bool multi_hctxs = false, run_queue = false;
-	bool dispatched = false, busy = false;
-	unsigned int max_dispatch;
-	LIST_HEAD(rq_list);
-	int count = 0;
+struct sched_dispatch_ctx {
+	struct blk_mq_hw_ctx *hctx;
+	struct elevator_queue *e;
+	struct request_queue *q;
 
-	if (hctx->dispatch_busy)
-		max_dispatch = 1;
-	else
-		max_dispatch = hctx->queue->nr_requests;
+	struct list_head rq_list;
+	int count;
 
-	do {
-		struct request *rq;
-		int budget_token;
+	bool multi_hctxs;
+	bool run_queue;
+	bool busy;
+};
 
-		if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
-			break;
+static bool elevator_can_dispatch(struct sched_dispatch_ctx *ctx)
+{
+	if (ctx->e->type->ops.has_work &&
+	    !ctx->e->type->ops.has_work(ctx->hctx))
+		return false;
 
-		if (!list_empty_careful(&hctx->dispatch)) {
-			busy = true;
-			break;
-		}
+	if (!list_empty_careful(&ctx->hctx->dispatch)) {
+		ctx->busy = true;
+		return false;
+	}
 
-		budget_token = blk_mq_get_dispatch_budget(q);
-		if (budget_token < 0)
-			break;
+	return true;
+}
 
-		rq = elevator_dispatch_request(hctx);
-		if (!rq) {
-			blk_mq_put_dispatch_budget(q, budget_token);
-			/*
-			 * We're releasing without dispatching. Holding the
-			 * budget could have blocked any "hctx"s with the
-			 * same queue and if we didn't dispatch then there's
-			 * no guarantee anyone will kick the queue. Kick it
-			 * ourselves.
-			 */
-			run_queue = true;
-			break;
-		}
+static bool elevator_dispatch_one_request(struct sched_dispatch_ctx *ctx)
+{
+	struct request *rq;
+	int budget_token;
 
-		blk_mq_set_rq_budget_token(rq, budget_token);
+	if (!elevator_can_dispatch(ctx))
+		return false;
 
-		/*
-		 * Now this rq owns the budget which has to be released
-		 * if this rq won't be queued to driver via .queue_rq()
-		 * in blk_mq_dispatch_rq_list().
-		 */
-		list_add_tail(&rq->queuelist, &rq_list);
-		count++;
-		if (rq->mq_hctx != hctx)
-			multi_hctxs = true;
+	budget_token = blk_mq_get_dispatch_budget(ctx->q);
+	if (budget_token < 0)
+		return false;
 
+	rq = elevator_dispatch_request(ctx->hctx);
+	if (!rq) {
+		blk_mq_put_dispatch_budget(ctx->q, budget_token);
 		/*
-		 * If we cannot get tag for the request, stop dequeueing
-		 * requests from the IO scheduler. We are unlikely to be able
-		 * to submit them anyway and it creates false impression for
-		 * scheduling heuristics that the device can take more IO.
+		 * We're releasing without dispatching. Holding the
+		 * budget could have blocked any "hctx"s with the
+		 * same queue and if we didn't dispatch then there's
+		 * no guarantee anyone will kick the queue. Kick it
+		 * ourselves.
 		 */
-		if (!blk_mq_get_driver_tag(rq))
-			break;
-	} while (count < max_dispatch);
+		ctx->run_queue = true;
+		return false;
+	}
 
-	if (!count) {
-		if (run_queue)
-			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
-	} else if (multi_hctxs) {
+	blk_mq_set_rq_budget_token(rq, budget_token);
+
+	/*
+	 * Now this rq owns the budget which has to be released
+	 * if this rq won't be queued to driver via .queue_rq()
+	 * in blk_mq_dispatch_rq_list().
+	 */
+	list_add_tail(&rq->queuelist, &ctx->rq_list);
+	ctx->count++;
+	if (rq->mq_hctx != ctx->hctx)
+		ctx->multi_hctxs = true;
+
+	/*
+	 * If we cannot get tag for the request, stop dequeueing
+	 * requests from the IO scheduler. We are unlikely to be able
+	 * to submit them anyway and it creates false impression for
+	 * scheduling heuristics that the device can take more IO.
+	 */
+	return blk_mq_get_driver_tag(rq);
+}
+
+static int elevator_finish_dispatch(struct sched_dispatch_ctx *ctx)
+{
+	bool dispatched = false;
+
+	if (!ctx->count) {
+		if (ctx->run_queue)
+			blk_mq_delay_run_hw_queues(ctx->q, BLK_MQ_BUDGET_DELAY);
+	} else if (ctx->multi_hctxs) {
 		/*
 		 * Requests from different hctx may be dequeued from some
 		 * schedulers, such as bfq and deadline.
@@ -160,19 +163,53 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		 * Sort the requests in the list according to their hctx,
 		 * dispatch batching requests from same hctx at a time.
 		 */
-		list_sort(NULL, &rq_list, sched_rq_cmp);
+		list_sort(NULL, &ctx->rq_list, sched_rq_cmp);
 		do {
-			dispatched |= blk_mq_dispatch_hctx_list(&rq_list);
-		} while (!list_empty(&rq_list));
+			dispatched |= blk_mq_dispatch_hctx_list(&ctx->rq_list);
+		} while (!list_empty(&ctx->rq_list));
 	} else {
-		dispatched = blk_mq_dispatch_rq_list(hctx, &rq_list, false);
+		dispatched = blk_mq_dispatch_rq_list(ctx->hctx, &ctx->rq_list,
+						     false);
 	}
 
-	if (busy)
+	if (ctx->busy)
 		return -EAGAIN;
+
 	return !!dispatched;
 }
 
+/*
+ * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+ * its queue by itself in its completion handler, so we don't need to
+ * restart queue if .get_budget() fails to get the budget.
+ *
+ * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+ * be run again. This is necessary to avoid starving flushes.
+ */
+static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+{
+	unsigned int max_dispatch;
+	struct sched_dispatch_ctx ctx = {
+		.hctx	= hctx,
+		.q	= hctx->queue,
+		.e	= hctx->queue->elevator,
+	};
+
+	INIT_LIST_HEAD(&ctx.rq_list);
+
+	if (hctx->dispatch_busy)
+		max_dispatch = 1;
+	else
+		max_dispatch = hctx->queue->nr_requests;
+
+	do {
+		if (!elevator_dispatch_one_request(&ctx))
+			break;
+	} while (ctx.count < max_dispatch);
+
+	return elevator_finish_dispatch(&ctx);
+}
+
 static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 {
 	unsigned long end = jiffies + HZ;
-- 
2.39.2

From: Yu Kuai
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [PATCH RFC 4/4] blk-mq-sched: support request batch dispatching for sq elevator
Date: Fri, 30 May 2025 16:03:55 +0800
Message-Id: <20250530080355.1138759-5-yukuai1@huaweicloud.com>
In-Reply-To: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
References: <20250530080355.1138759-1-yukuai1@huaweicloud.com>
From: Yu Kuai

Before this patch, each context holds the global lock to dispatch one
request at a time, which introduces intense lock contention:

	lock
	ops.dispatch_request
	unlock

Hence, dispatch a batch of requests while holding the lock, to reduce
lock contention.

Signed-off-by: Yu Kuai
---
 block/blk-mq-sched.c | 53 ++++++++++++++++++++++++++++++++++++++++----
 block/blk-mq.h       | 21 ++++++++++++++++++
 2 files changed, 70 insertions(+), 4 deletions(-)
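The contention argument above can be seen in a standalone sketch:
taking the shared lock once per batch of up to 8 items amortizes the
acquisition that the single-request path pays on every iteration, which
is what elevator_dispatch_requests() below does with BUDGET_TOKEN_BATCH.
A pthread mutex stands in for the elevator spinlock and all names are
invented for the illustration; this is not kernel code.

/* Standalone sketch of batching under one lock acquisition. cc -pthread */
#include <pthread.h>
#include <stdio.h>

#define BATCH 8

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pending = 1000;

/* One item per lock acquisition: the pattern this patch moves away from. */
static int dispatch_one_at_a_time(void)
{
	int done = 0;

	pthread_mutex_lock(&lock);
	if (pending > 0) {
		pending--;
		done = 1;
	}
	pthread_mutex_unlock(&lock);
	return done;
}

/* Up to BATCH items per lock acquisition: the pattern this patch adds. */
static int dispatch_batch(void)
{
	int done = 0;

	pthread_mutex_lock(&lock);
	while (done < BATCH && pending > 0) {
		pending--;
		done++;
	}
	pthread_mutex_unlock(&lock);
	return done;
}

int main(void)
{
	int locks_taken = 0;

	while (dispatch_batch() > 0)
		locks_taken++;
	printf("drained with %d lock acquisitions instead of 1000\n",
	       locks_taken);
	return dispatch_one_at_a_time();	/* 0: nothing left */
}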
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 990d0f19594a..d255c3e6c2a8 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -101,6 +101,47 @@ static bool elevator_can_dispatch(struct sched_dispatch_ctx *ctx)
 	return true;
 }
 
+static void elevator_dispatch_requests(struct sched_dispatch_ctx *ctx)
+{
+	struct request *rq;
+	int budget_token[BUDGET_TOKEN_BATCH];
+	int count;
+	int i;
+
+	while (true) {
+		if (!elevator_can_dispatch(ctx))
+			return;
+
+		count = blk_mq_get_dispatch_budgets(ctx->q, budget_token);
+		if (count <= 0)
+			return;
+
+		elevator_lock(ctx->e);
+		for (i = 0; i < count; ++i) {
+			rq = ctx->e->type->ops.dispatch_request(ctx->hctx);
+			if (!rq) {
+				ctx->run_queue = true;
+				goto err_free_budgets;
+			}
+
+			blk_mq_set_rq_budget_token(rq, budget_token[i]);
+			list_add_tail(&rq->queuelist, &ctx->rq_list);
+			ctx->count++;
+			if (rq->mq_hctx != ctx->hctx)
+				ctx->multi_hctxs = true;
+
+			if (!blk_mq_get_driver_tag(rq))
+				goto err_free_budgets;
+		}
+		elevator_unlock(ctx->e);
+	}
+
+err_free_budgets:
+	elevator_unlock(ctx->e);
+	for (; i < count; ++i)
+		blk_mq_put_dispatch_budget(ctx->q, budget_token[i]);
+}
+
 static bool elevator_dispatch_one_request(struct sched_dispatch_ctx *ctx)
 {
 	struct request *rq;
@@ -202,10 +243,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 	else
 		max_dispatch = hctx->queue->nr_requests;
 
-	do {
-		if (!elevator_dispatch_one_request(&ctx))
-			break;
-	} while (ctx.count < max_dispatch);
+	if (blk_queue_sq_sched(ctx.q))
+		elevator_dispatch_requests(&ctx);
+	else {
+		do {
+			if (!elevator_dispatch_one_request(&ctx))
+				break;
+		} while (ctx.count < max_dispatch);
+	}
 
 	return elevator_finish_dispatch(&ctx);
 }
diff --git a/block/blk-mq.h b/block/blk-mq.h
index affb2e14b56e..450c16a07841 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -37,6 +37,7 @@ enum {
 };
 
 #define BLK_MQ_CPU_WORK_BATCH	(8)
+#define BUDGET_TOKEN_BATCH	(8)
 
 typedef unsigned int __bitwise blk_insert_t;
 #define BLK_MQ_INSERT_AT_HEAD		((__force blk_insert_t)0x01)
@@ -262,6 +263,26 @@ static inline int blk_mq_get_dispatch_budget(struct request_queue *q)
 	return 0;
 }
 
+static inline int blk_mq_get_dispatch_budgets(struct request_queue *q,
+					      int *budget_token)
+{
+	int count = 0;
+
+	while (count < BUDGET_TOKEN_BATCH) {
+		int token = 0;
+
+		if (q->mq_ops->get_budget)
+			token = q->mq_ops->get_budget(q);
+
+		if (token < 0)
+			return count;
+
+		budget_token[count++] = token;
+	}
+
+	return count;
+}
+
 static inline void blk_mq_set_rq_budget_token(struct request *rq, int token)
 {
 	if (token < 0)
-- 
2.39.2