From nobody Tue Apr 7 17:35:16 2026
From: Kemeng Shi
Subject: [PATCH v2 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change
Date: Tue, 18 Oct 2022 19:12:38 +0800
Message-ID: <20221018111240.22612-2-shikemeng@huawei.com>
In-Reply-To: <20221018111240.22612-1-shikemeng@huawei.com>
References: <20221018111240.22612-1-shikemeng@huawei.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Function
blkcg_iolatency_throttle() already makes sure blkg->parent is not NULL
before calling check_scale_change(), and check_scale_change() is only
called from blkcg_iolatency_throttle(). The parent NULL check inside
check_scale_change() is therefore redundant; remove it.

Signed-off-by: Kemeng Shi
Reviewed-by: Josef Bacik
---
 block/blk-iolatency.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 571fa95aafe9..b24d7b788ba3 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -403,9 +403,6 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	u64 scale_lat;
 	int direction = 0;
 
-	if (lat_to_blkg(iolat)->parent == NULL)
-		return;
-
 	parent = blkg_to_lat(lat_to_blkg(iolat)->parent);
 	if (!parent)
 		return;
-- 
2.30.0

From nobody Tue Apr 7 17:35:16 2026
From: Kemeng Shi
Subject: [PATCH v2 2/3] block: Correct comment for scale_cookie_change
Date: Tue, 18 Oct 2022 19:12:39 +0800
Message-ID: <20221018111240.22612-3-shikemeng@huawei.com>
In-Reply-To: <20221018111240.22612-1-shikemeng@huawei.com>
References: <20221018111240.22612-1-shikemeng@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The default queue depth of an iolatency_grp is unlimited, so we scale
down quickly (halving each time) in scale_cookie_change. Remove the
"subtract 1/16th" part of the comment, which does not match the code,
and describe the actual way we scale down.

Signed-off-by: Kemeng Shi
Reviewed-by: Josef Bacik
---
 block/blk-iolatency.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index b24d7b788ba3..2c574f98c8d1 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 }
 
 /*
- * Change the queue depth of the iolatency_grp. We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp. We add 1/16th of the
  * queue depth at a time so we don't get wild swings and hopefully dial into
- * fairer distribution of the overall queue depth.
+ * fairer distribution of the overall queue depth. We halve the queue depth
+ * at a time so we can scale down queue depth quickly from default unlimited
+ * to target.
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
-- 
2.30.0

From nobody Tue Apr 7 17:35:16 2026
From: Kemeng Shi
Subject: [PATCH v2 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
Date: Tue, 18 Oct 2022 19:12:40 +0800
Message-ID: <20221018111240.22612-4-shikemeng@huawei.com>
In-Reply-To: <20221018111240.22612-1-shikemeng@huawei.com>
References: <20221018111240.22612-1-shikemeng@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

We only need a max queue depth for each iolatency_grp to limit the
number of inflight IOs. Replace struct rq_depth with an unsigned int to
simplify struct iolatency_grp and save memory.

Signed-off-by: Kemeng Shi
Reviewed-by: Josef Bacik
---
 block/blk-iolatency.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 2c574f98c8d1..778a0057193e 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -141,7 +141,7 @@ struct iolatency_grp {
 	struct latency_stat __percpu *stats;
 	struct latency_stat cur_stat;
 	struct blk_iolatency *blkiolat;
-	struct rq_depth rq_depth;
+	unsigned int max_depth;
 	struct rq_wait rq_wait;
 	atomic64_t window_start;
 	atomic_t scale_cookie;
@@ -280,7 +280,7 @@ static void iolat_cleanup_cb(struct rq_wait *rqw, void *private_data)
 static bool iolat_acquire_inflight(struct rq_wait *rqw, void *private_data)
 {
 	struct iolatency_grp *iolat = private_data;
-	return rq_wait_inc_below(rqw, iolat->rq_depth.max_depth);
+	return rq_wait_inc_below(rqw, iolat->max_depth);
 }
 
 static void __blkcg_iolatency_throttle(struct rq_qos *rqos,
@@ -374,7 +374,7 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 {
 	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
-	unsigned long old = iolat->rq_depth.max_depth;
+	unsigned long old = iolat->max_depth;
 
 	if (old > qd)
 		old = qd;
@@ -386,12 +386,12 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 		if (old < qd) {
 			old += scale;
 			old = min(old, qd);
-			iolat->rq_depth.max_depth = old;
+			iolat->max_depth = old;
 			wake_up_all(&iolat->rq_wait.wait);
 		}
 	} else {
 		old >>= 1;
-		iolat->rq_depth.max_depth = max(old, 1UL);
+		iolat->max_depth = max(old, 1UL);
 	}
 }
 
@@ -444,7 +444,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	}
 
 	/* We're as low as we can go. */
-	if (iolat->rq_depth.max_depth == 1 && direction < 0) {
+	if (iolat->max_depth == 1 && direction < 0) {
 		blkcg_use_delay(lat_to_blkg(iolat));
 		return;
 	}
@@ -452,7 +452,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	/* We're back to the default cookie, unthrottle all the things. */
 	if (cur_cookie == DEFAULT_SCALE_COOKIE) {
 		blkcg_clear_delay(lat_to_blkg(iolat));
-		iolat->rq_depth.max_depth = UINT_MAX;
+		iolat->max_depth = UINT_MAX;
 		wake_up_all(&iolat->rq_wait.wait);
 		return;
 	}
@@ -507,7 +507,7 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
 	 * We don't want to count issue_as_root bio's in the cgroups latency
 	 * statistics as it could skew the numbers downwards.
 	 */
-	if (unlikely(issue_as_root && iolat->rq_depth.max_depth != UINT_MAX)) {
+	if (unlikely(issue_as_root && iolat->max_depth != UINT_MAX)) {
 		u64 sub = iolat->min_lat_nsec;
 		if (req_time < sub)
 			blkcg_add_delay(lat_to_blkg(iolat), now, sub - req_time);
@@ -919,7 +919,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 	}
 	preempt_enable();
 
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " missed=%llu total=%llu depth=max",
 			   (unsigned long long)stat.ps.missed,
 			   (unsigned long long)stat.ps.total);
@@ -927,7 +927,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 		seq_printf(s, " missed=%llu total=%llu depth=%u",
 			   (unsigned long long)stat.ps.missed,
 			   (unsigned long long)stat.ps.total,
-			   iolat->rq_depth.max_depth);
+			   iolat->max_depth);
 }
 
 static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
@@ -944,12 +944,12 @@ static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
 
 	avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
 	cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " depth=max avg_lat=%llu win=%llu",
 			   avg_lat, cur_win);
 	else
 		seq_printf(s, " depth=%u avg_lat=%llu win=%llu",
-			   iolat->rq_depth.max_depth, avg_lat, cur_win);
+			   iolat->max_depth, avg_lat, cur_win);
 }
 
 static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
@@ -993,9 +993,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	latency_stat_init(iolat, &iolat->cur_stat);
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
-	iolat->rq_depth.max_depth = UINT_MAX;
-	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
+	iolat->max_depth = UINT_MAX;
 	iolat->blkiolat = blkiolat;
 	iolat->cur_win_nsec = 100 * NSEC_PER_MSEC;
 	atomic64_set(&iolat->window_start, now);
-- 
2.30.0