From: Jinke Han
To: axboe@kernel.dk, tj@kernel.org
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, hanjinke
Subject: [PATCH] block: don't allow the same type rq_qos add more than once
Date: Mon, 18 Jul 2022 16:36:46 +0800
Message-Id: <20220718083646.67601-1-hanjinke.666@bytedance.com>

From: hanjinke

In our test of iocost, we encountered some list add/del corruptions of
the inner_walk list in ioc_timer_fn. The reason can be described as
follows:

cpu 0					cpu 1
ioc_qos_write				ioc_qos_write
ioc = q_to_ioc(bdev_get_queue(bdev));
if (!ioc) {
	ioc = kzalloc();
					ioc = q_to_ioc(bdev_get_queue(bdev));
					if (!ioc) {
						ioc = kzalloc();
						...
						rq_qos_add(q, rqos);
					}
	...
	rq_qos_add(q, rqos);
	...
}

When the io.cost.qos file is written by two CPUs concurrently, rq_qos
may be added to one disk twice.
In that case, there will be two iocs enabled and running on one disk.
They own different iocgs on their active lists. In the ioc_timer_fn
function, because the iocgs from the two iocs share the same root iocg,
the root iocg's walk_list may be overwritten by each other, and this
leads to list add/del corruptions when building or destroying the
inner_walk list.

So far, the blk-rq-qos framework assumes at most one instance of each
rq_qos type per queue. This patch makes that assumption explicit and
also fixes the crash above.

Signed-off-by: hanjinke
Acked-by: Tejun Heo
Reviewed-by: Muchun Song
---
 block/blk-iocost.c    |  7 ++++++-
 block/blk-iolatency.c |  6 +++++-
 block/blk-ioprio.c    |  7 ++++++-
 block/blk-rq-qos.h    | 11 ++++++++++-
 block/blk-wbt.c       |  8 +++++++-
 5 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 33a11ba971ea..2c41edc3fd70 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -2886,7 +2886,12 @@ static int blk_iocost_init(struct request_queue *q)
 	 * called before policy activation completion, can't assume that the
 	 * target bio has an iocg associated and need to test for NULL iocg.
 	 */
-	rq_qos_add(q, rqos);
+	ret = rq_qos_add(q, rqos);
+	if (ret) {
+		free_percpu(ioc->pcpu_stat);
+		kfree(ioc);
+		return ret;
+	}
 	ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
 	if (ret) {
 		rq_qos_del(q, rqos);
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 9568bf8dfe82..9a572439f326 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -773,7 +773,11 @@ int blk_iolatency_init(struct request_queue *q)
 	rqos->ops = &blkcg_iolatency_ops;
 	rqos->q = q;
 
-	rq_qos_add(q, rqos);
+	ret = rq_qos_add(q, rqos);
+	if (ret) {
+		kfree(blkiolat);
+		return ret;
+	}
 
 	ret = blkcg_activate_policy(q, &blkcg_policy_iolatency);
 	if (ret) {
diff --git a/block/blk-ioprio.c b/block/blk-ioprio.c
index 79e797f5d194..931bffdf0cab 100644
--- a/block/blk-ioprio.c
+++ b/block/blk-ioprio.c
@@ -251,7 +251,12 @@ int blk_ioprio_init(struct request_queue *q)
 	 * rq-qos callbacks.
 	 */
-	rq_qos_add(q, rqos);
+	ret = rq_qos_add(q, rqos);
+	if (ret) {
+		blkcg_deactivate_policy(q, &ioprio_policy);
+		kfree(blkioprio_blkg);
+		return ret;
+	}
 
 	return 0;
 }
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 0e46052b018a..bbc70cf21c9c 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -86,7 +86,7 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
 	init_waitqueue_head(&rq_wait->wait);
 }
 
-static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+static inline int rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
 {
 	/*
 	 * No IO can be in-flight when adding rqos, so freeze queue, which
@@ -98,6 +98,8 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
 	blk_mq_freeze_queue(q);
 
 	spin_lock_irq(&q->queue_lock);
+	if (rq_qos_id(q, rqos->id))
+		goto out;
 	rqos->next = q->rq_qos;
 	q->rq_qos = rqos;
 	spin_unlock_irq(&q->queue_lock);
@@ -109,6 +111,13 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
 		blk_mq_debugfs_register_rqos(rqos);
 		mutex_unlock(&q->debugfs_mutex);
 	}
+
+	return 0;
+out:
+	spin_unlock_irq(&q->queue_lock);
+	blk_mq_unfreeze_queue(q);
+	return -EBUSY;
+
 }
 
 static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 0c119be0e813..cc8f45929b31 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -820,6 +820,7 @@ int wbt_init(struct request_queue *q)
 {
 	struct rq_wb *rwb;
 	int i;
+	int ret;
 
 	rwb = kzalloc(sizeof(*rwb), GFP_KERNEL);
 	if (!rwb)
@@ -846,7 +847,12 @@ int wbt_init(struct request_queue *q)
 	/*
 	 * Assign rwb and add the stats callback.
 	 */
-	rq_qos_add(q, &rwb->rqos);
+	ret = rq_qos_add(q, &rwb->rqos);
+	if (ret) {
+		blk_stat_free_callback(rwb->cb);
+		kfree(rwb);
+		return ret;
+	}
 	blk_stat_add_callback(q, rwb->cb);
 
 	rwb->min_lat_nsec = wbt_default_latency_nsec(q);
-- 
2.20.1