From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 1/7] md: use separate work_struct for md_start_sync()
Date: Tue, 15 Aug 2023 11:09:51 +0800
Message-Id: <20230815030957.509535-2-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

It's a little weird to borrow 'del_work' for md_start_sync(), so declare
a new work_struct 'sync_work' for md_start_sync() instead.
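[Editor's note] The change above is the usual workqueue idiom: give each job
its own work_struct, initialize it once, and only ever queue it afterwards.
Below is a minimal, self-contained module sketch of that idiom; the demo_*
names and the use of the system workqueue are illustrative assumptions, not
part of the md code.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only: one work_struct per job, initialized once. */
#include <linux/module.h>
#include <linux/workqueue.h>

struct demo_dev {
	struct work_struct del_work;	/* delayed teardown */
	struct work_struct sync_work;	/* start a sync */
};

static struct demo_dev demo;

static void demo_delete(struct work_struct *ws)
{
	/* container_of() must name the member this callback was queued on */
	struct demo_dev *dev = container_of(ws, struct demo_dev, del_work);

	pr_info("demo: delete work ran for %p\n", dev);
}

static void demo_start_sync(struct work_struct *ws)
{
	struct demo_dev *dev = container_of(ws, struct demo_dev, sync_work);

	pr_info("demo: sync work ran for %p\n", dev);
}

static int __init demo_init(void)
{
	/* initialize each work item exactly once, with its own handler ... */
	INIT_WORK(&demo.del_work, demo_delete);
	INIT_WORK(&demo.sync_work, demo_start_sync);

	/* ... so queueing one job never has to repurpose another job's work */
	schedule_work(&demo.sync_work);
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo.sync_work);
	flush_work(&demo.del_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

With a dedicated member per job, container_of() in each handler always
resolves to the right field, which is what the md_start_sync() hunk below
relies on.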
Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 10 ++++++----
 drivers/md/md.h |  5 ++++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5c3c19b8d509..90815be1e80f 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -631,13 +631,13 @@ void mddev_put(struct mddev *mddev)
 		 * flush_workqueue() after mddev_find will succeed in waiting
 		 * for the work to be done.
 		 */
-		INIT_WORK(&mddev->del_work, mddev_delayed_delete);
 		queue_work(md_misc_wq, &mddev->del_work);
 	}
 	spin_unlock(&all_mddevs_lock);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
+static void md_start_sync(struct work_struct *ws);
 
 void mddev_init(struct mddev *mddev)
 {
@@ -662,6 +662,9 @@ void mddev_init(struct mddev *mddev)
 	mddev->resync_min = 0;
 	mddev->resync_max = MaxSector;
 	mddev->level = LEVEL_NONE;
+
+	INIT_WORK(&mddev->sync_work, md_start_sync);
+	INIT_WORK(&mddev->del_work, mddev_delayed_delete);
 }
 EXPORT_SYMBOL_GPL(mddev_init);
 
@@ -9245,7 +9248,7 @@ static int remove_and_add_spares(struct mddev *mddev,
 
 static void md_start_sync(struct work_struct *ws)
 {
-	struct mddev *mddev = container_of(ws, struct mddev, del_work);
+	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
 
 	rcu_assign_pointer(mddev->sync_thread,
 			   md_register_thread(md_do_sync, mddev, "resync"));
@@ -9458,8 +9461,7 @@ void md_check_recovery(struct mddev *mddev)
 			 */
 			md_bitmap_write_all(mddev->bitmap);
 		}
-		INIT_WORK(&mddev->del_work, md_start_sync);
-		queue_work(md_misc_wq, &mddev->del_work);
+		queue_work(md_misc_wq, &mddev->sync_work);
 		goto unlock;
 	}
 not_running:
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 9bcb77bca963..64d05cb65287 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -450,7 +450,10 @@ struct mddev {
 	struct kernfs_node		*sysfs_degraded; /*handle for 'degraded' */
 	struct kernfs_node		*sysfs_level;	/*handle for 'level' */
 
-	struct work_struct del_work;	/* used for delayed sysfs removal */
+	/* used for delayed sysfs removal */
+	struct work_struct del_work;
+	/* used for register new sync thread */
+	struct work_struct sync_work;
 
 	/* "lock" protects:
 	 * flush_bio transition from NULL to !NULL
-- 
2.39.2
From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 2/7] md: factor out a helper to choose sync direction from md_check_recovery()
Date: Tue, 15 Aug 2023 11:09:52 +0800
Message-Id: <20230815030957.509535-3-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

There are no functional changes. On the one hand this makes the code
cleaner; on the other hand it avoids the following checkpatch error in
the next patch, which delays choosing the sync direction to
md_start_sync():

ERROR: do not use assignment in if condition
+	} else if ((spares = remove_and_add_spares(mddev, NULL))) {
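[Editor's note] A minimal, stand-alone sketch of the refactoring shape used
here: a boolean helper reports the spare count through an out-parameter, so
the caller no longer needs an assignment inside the if condition. All names
below (choose_action, count_spares) are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for remove_and_add_spares(): returns how many spares exist */
static int count_spares(void)
{
	return 2;
}

/* true when some action was chosen; the spare count goes out-of-band */
static bool choose_action(int *spares)
{
	*spares = count_spares();
	return *spares > 0;
}

int main(void)
{
	int spares = 0;

	/* the condition no longer hides an assignment */
	if (choose_action(&spares))
		printf("start recovery with %d spare(s)\n", spares);
	else
		printf("nothing to do\n");
	return 0;
}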
Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 68 +++++++++++++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 25 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 90815be1e80f..4846ff6d25b0 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9246,6 +9246,48 @@ static int remove_and_add_spares(struct mddev *mddev,
 	return spares;
 }
 
+static bool md_choose_sync_direction(struct mddev *mddev, int *spares)
+{
+	/* check reshape first */
+	if (mddev->reshape_position != MaxSector) {
+		if (mddev->pers->check_reshape == NULL ||
+		    mddev->pers->check_reshape(mddev) != 0)
+			return false;
+
+		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+		return true;
+	}
+
+	/*
+	 * remove any failed drives, then add spares if possible. Spares are
+	 * also removed and re-added, to allow the personality to fail the
+	 * re-add.
+	 */
+	*spares = remove_and_add_spares(mddev, NULL);
+	if (*spares) {
+		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+		clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+		return true;
+	}
+
+	/* check recovery */
+	if (mddev->recovery_cp < MaxSector) {
+		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+		return true;
+	}
+
+	/* check resync */
+	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
+		return true;
+
+	/* nothing to be done */
+	return false;
+}
+
 static void md_start_sync(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
@@ -9427,32 +9469,8 @@ void md_check_recovery(struct mddev *mddev)
 		if (!test_and_clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
 		    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
 			goto not_running;
-		/* no recovery is running.
-		 * remove any failed drives, then
-		 * add spares if possible.
-		 * Spares are also removed and re-added, to allow
-		 * the personality to fail the re-add.
-		 */
-
-		if (mddev->reshape_position != MaxSector) {
-			if (mddev->pers->check_reshape == NULL ||
-			    mddev->pers->check_reshape(mddev) != 0)
-				/* Cannot proceed */
-				goto not_running;
-			set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-			clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
-		} else if ((spares = remove_and_add_spares(mddev, NULL))) {
-			clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
-			clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
-			clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
-			set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
-		} else if (mddev->recovery_cp < MaxSector) {
-			set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
-			clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
-		} else if (!test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
-			/* nothing to be done ... */
+		if (!md_choose_sync_direction(mddev, &spares))
 			goto not_running;
-
 		if (mddev->pers->sync_request) {
 			if (spares) {
 				/* We are adding a device or devices to an array
-- 
2.39.2
From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 3/7] md: delay choosing sync direction to md_start_sync()
Date: Tue, 15 Aug 2023 11:09:53 +0800
Message-Id: <20230815030957.509535-4-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

Before this patch, for a read-write array:

1) md_check_recovery() finds that something needs to be done, and tries
   to grab 'reconfig_mutex'. The cases where md_check_recovery() needs
   to do something:
   - the array is not suspended;
   - the super_block needs to be updated;
   - 'MD_RECOVERY_NEEDED' or 'MD_RECOVERY_DONE' is set;
   - unusual cases related to safemode;
2) if 'MD_RECOVERY_RUNNING' is not set and 'MD_RECOVERY_NEEDED' is set,
   md_check_recovery() tries to choose a sync direction and then queues
   the work md_start_sync();
3) md_start_sync() registers sync_thread.

After this patch:

1) is the same;
2) if 'MD_RECOVERY_RUNNING' is not set and 'MD_RECOVERY_NEEDED' is set,
   the work md_start_sync() is queued directly;
3) md_start_sync() tries to choose a sync direction and then registers
   sync_thread.

Because 'MD_RECOVERY_RUNNING' is cleared when sync_thread is done, 2)
and 3) always run in series and can never be concurrent, so this change
should not introduce any behavior change for now.

Also fix a problem where md_start_sync() can clear 'MD_RECOVERY_RUNNING'
without protection in the error path, which might affect the logic in
md_check_recovery().

The advantage of this change is that array reconfiguration is now
independent from the daemon thread, and it will be much easier to
synchronize it with IO, considering that IO may rely on the daemon
thread to finish its work.
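[Editor's note] A compilable toy module sketching the post-patch shape of
this flow: the daemon side only marks the work as needed and queues it,
while the worker takes the lock, picks a direction, and on failure rolls the
RUNNING bit back before dropping the lock. All demo_* names and the single
DEMO_RUNNING bit are invented for illustration and are not the md code.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only; not the md implementation. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/bitops.h>

#define DEMO_RUNNING	0

struct demo_array {
	struct mutex		reconfig_mutex;
	unsigned long		recovery;
	struct work_struct	sync_work;
};

static struct demo_array demo;

static bool demo_choose_direction(struct demo_array *a)
{
	return true;	/* pretend there is always work to do */
}

static bool demo_register_thread(struct demo_array *a)
{
	return false;	/* pretend registration failed, to show the rollback */
}

static void demo_start_sync(struct work_struct *ws)
{
	struct demo_array *a = container_of(ws, struct demo_array, sync_work);

	mutex_lock(&a->reconfig_mutex);
	if (!demo_choose_direction(a) || !demo_register_thread(a)) {
		/* error path: clear RUNNING while still holding the lock */
		clear_bit(DEMO_RUNNING, &a->recovery);
		mutex_unlock(&a->reconfig_mutex);
		pr_info("demo: nothing started, RUNNING cleared under lock\n");
		return;
	}
	mutex_unlock(&a->reconfig_mutex);
}

static void demo_check_recovery(struct demo_array *a)
{
	/* daemon side: mark RUNNING and hand everything else to the worker */
	if (!test_and_set_bit(DEMO_RUNNING, &a->recovery))
		schedule_work(&a->sync_work);
}

static int __init demo_init(void)
{
	mutex_init(&demo.reconfig_mutex);
	INIT_WORK(&demo.sync_work, demo_start_sync);
	demo_check_recovery(&demo);
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo.sync_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");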
Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 70 ++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 33 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4846ff6d25b0..03615b0e9fe1 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9291,6 +9291,22 @@ static bool md_choose_sync_direction(struct mddev *mddev, int *spares)
 static void md_start_sync(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
+	int spares = 0;
+
+	mddev_lock_nointr(mddev);
+
+	if (!md_choose_sync_direction(mddev, &spares))
+		goto not_running;
+
+	if (!mddev->pers->sync_request)
+		goto not_running;
+
+	/*
+	 * We are adding a device or devices to an array which has the bitmap
+	 * stored on all devices. So make sure all bitmap pages get written.
+	 */
+	if (spares)
+		md_bitmap_write_all(mddev->bitmap);
 
 	rcu_assign_pointer(mddev->sync_thread,
 			   md_register_thread(md_do_sync, mddev, "resync"));
@@ -9298,20 +9314,27 @@ static void md_start_sync(struct work_struct *ws)
 		pr_warn("%s: could not start resync thread...\n",
 			mdname(mddev));
 		/* leave the spares where they are, it shouldn't hurt */
-		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
-		clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-		clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
-		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
-		clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
-		wake_up(&resync_wait);
-		if (test_and_clear_bit(MD_RECOVERY_RECOVER,
-				       &mddev->recovery))
-			if (mddev->sysfs_action)
-				sysfs_notify_dirent_safe(mddev->sysfs_action);
-	} else
-		md_wakeup_thread(mddev->sync_thread);
+		goto not_running;
+	}
+
+	mddev_unlock(mddev);
+	md_wakeup_thread(mddev->sync_thread);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
 	md_new_event();
+	return;
+
+not_running:
+	clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+	clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+	clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+	mddev_unlock(mddev);
+
+	wake_up(&resync_wait);
+	if (test_and_clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery) &&
+	    mddev->sysfs_action)
+		sysfs_notify_dirent_safe(mddev->sysfs_action);
 }
 
 /*
@@ -9379,7 +9402,6 @@ void md_check_recovery(struct mddev *mddev)
 		return;
 
 	if (mddev_trylock(mddev)) {
-		int spares = 0;
 		bool try_set_sync = mddev->safemode != 0;
 
 		if (!mddev->external && mddev->safemode == 1)
@@ -9467,29 +9489,11 @@ void md_check_recovery(struct mddev *mddev)
 		clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
 
 		if (!test_and_clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
-		    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
-			goto not_running;
-		if (!md_choose_sync_direction(mddev, &spares))
-			goto not_running;
-		if (mddev->pers->sync_request) {
-			if (spares) {
-				/* We are adding a device or devices to an array
-				 * which has the bitmap stored on all devices.
-				 * So make sure all bitmap pages get written
-				 */
-				md_bitmap_write_all(mddev->bitmap);
-			}
+		    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
 			queue_work(md_misc_wq, &mddev->sync_work);
-			goto unlock;
-		}
-	not_running:
-		if (!mddev->sync_thread) {
+		} else {
 			clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
 			wake_up(&resync_wait);
-			if (test_and_clear_bit(MD_RECOVERY_RECOVER,
-					       &mddev->recovery))
-				if (mddev->sysfs_action)
-					sysfs_notify_dirent_safe(mddev->sysfs_action);
 		}
 	unlock:
 		wake_up(&mddev->sb_wait);
-- 
2.39.2

From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 4/7] md: factor out a helper rdev_removeable() from remove_and_add_spares()
Date: Tue, 15 Aug 2023 11:09:54 +0800
Message-Id: <20230815030957.509535-5-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>
From: Yu Kuai

There are no functional changes; this just makes the code simpler and
prepares for delaying remove_and_add_spares() to md_start_sync().

Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 03615b0e9fe1..ea091eef23d1 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9153,6 +9153,22 @@ void md_do_sync(struct md_thread *thread)
 }
 EXPORT_SYMBOL_GPL(md_do_sync);
 
+static bool rdev_removeable(struct md_rdev *rdev)
+{
+	if (rdev->raid_disk < 0 || test_bit(Blocked, &rdev->flags) ||
+	    atomic_read(&rdev->nr_pending))
+		return false;
+
+	if (test_bit(RemoveSynchronized, &rdev->flags))
+		return true;
+
+	if (test_bit(In_sync, &rdev->flags) ||
+	    test_bit(Journal, &rdev->flags))
+		return false;
+
+	return true;
+}
+
 static int remove_and_add_spares(struct mddev *mddev,
 				 struct md_rdev *this)
 {
@@ -9166,11 +9182,7 @@ static int remove_and_add_spares(struct mddev *mddev,
 		return 0;
 
 	rdev_for_each(rdev, mddev) {
-		if ((this == NULL || rdev == this) &&
-		    rdev->raid_disk >= 0 &&
-		    !test_bit(Blocked, &rdev->flags) &&
-		    test_bit(Faulty, &rdev->flags) &&
-		    atomic_read(&rdev->nr_pending)==0) {
+		if ((this == NULL || rdev == this) && rdev_removeable(rdev)) {
 			/* Faulty non-Blocked devices with nr_pending == 0
 			 * never get nr_pending incremented,
 			 * never get Faulty cleared, and never get Blocked set.
@@ -9185,19 +9197,12 @@ static int remove_and_add_spares(struct mddev *mddev,
 		synchronize_rcu();
 	rdev_for_each(rdev, mddev) {
 		if ((this == NULL || rdev == this) &&
-		    rdev->raid_disk >= 0 &&
-		    !test_bit(Blocked, &rdev->flags) &&
-		    ((test_bit(RemoveSynchronized, &rdev->flags) ||
-		     (!test_bit(In_sync, &rdev->flags) &&
-		      !test_bit(Journal, &rdev->flags))) &&
-		     atomic_read(&rdev->nr_pending)==0)) {
-			if (mddev->pers->hot_remove_disk(
-				    mddev, rdev) == 0) {
+		    rdev_removeable(rdev) &&
+		    mddev->pers->hot_remove_disk(mddev, rdev) == 0) {
 			sysfs_unlink_rdev(mddev, rdev);
 			rdev->saved_raid_disk = rdev->raid_disk;
 			rdev->raid_disk = -1;
 			removed++;
-			}
 		}
 		if (remove_some && test_bit(RemoveSynchronized, &rdev->flags))
 			clear_bit(RemoveSynchronized, &rdev->flags);
-- 
2.39.2
From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 5/7] md: factor out a helper rdev_is_spare() from remove_and_add_spares()
Date: Tue, 15 Aug 2023 11:09:55 +0800
Message-Id: <20230815030957.509535-6-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

There are no functional changes; this just makes the code simpler and
prepares for delaying remove_and_add_spares() to md_start_sync().
Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ea091eef23d1..6baaa4d314b3 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9169,6 +9169,14 @@ static bool rdev_removeable(struct md_rdev *rdev)
 	return true;
 }
 
+static bool rdev_is_spare(struct md_rdev *rdev)
+{
+	return !test_bit(Candidate, &rdev->flags) && rdev->raid_disk >= 0 &&
+	       !test_bit(In_sync, &rdev->flags) &&
+	       !test_bit(Journal, &rdev->flags) &&
+	       !test_bit(Faulty, &rdev->flags);
+}
+
 static int remove_and_add_spares(struct mddev *mddev,
 				 struct md_rdev *this)
 {
@@ -9217,13 +9225,10 @@ static int remove_and_add_spares(struct mddev *mddev,
 	rdev_for_each(rdev, mddev) {
 		if (this && this != rdev)
 			continue;
+		if (rdev_is_spare(rdev))
+			spares++;
 		if (test_bit(Candidate, &rdev->flags))
 			continue;
-		if (rdev->raid_disk >= 0 &&
-		    !test_bit(In_sync, &rdev->flags) &&
-		    !test_bit(Journal, &rdev->flags) &&
-		    !test_bit(Faulty, &rdev->flags))
-			spares++;
 		if (rdev->raid_disk >= 0)
 			continue;
 		if (test_bit(Faulty, &rdev->flags))
-- 
2.39.2
From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 6/7] md: factor out a helper rdev_addable() from remove_and_add_spares()
Date: Tue, 15 Aug 2023 11:09:56 +0800
Message-Id: <20230815030957.509535-7-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

There are no functional changes; this just makes the code simpler and
prepares for delaying remove_and_add_spares() to md_start_sync().

Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 6baaa4d314b3..d26d2c35f9af 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9177,6 +9177,20 @@ static bool rdev_is_spare(struct md_rdev *rdev)
 	       !test_bit(Faulty, &rdev->flags);
 }
 
+static bool rdev_addable(struct md_rdev *rdev)
+{
+	if (test_bit(Candidate, &rdev->flags) || rdev->raid_disk >= 0 ||
+	    test_bit(Faulty, &rdev->flags))
+		return false;
+
+	if (!test_bit(Journal, &rdev->flags) && !md_is_rdwr(rdev->mddev) &&
+	    !(rdev->saved_raid_disk >= 0 &&
+	      !test_bit(Bitmap_sync, &rdev->flags)))
+		return false;
+
+	return true;
+}
+
 static int remove_and_add_spares(struct mddev *mddev,
 				 struct md_rdev *this)
 {
@@ -9227,20 +9241,10 @@ static int remove_and_add_spares(struct mddev *mddev,
 			continue;
 		if (rdev_is_spare(rdev))
 			spares++;
-		if (test_bit(Candidate, &rdev->flags))
+		if (!rdev_addable(rdev))
 			continue;
-		if (rdev->raid_disk >= 0)
-			continue;
-		if (test_bit(Faulty, &rdev->flags))
-			continue;
-		if (!test_bit(Journal, &rdev->flags)) {
-			if (!md_is_rdwr(mddev) &&
-			    !(rdev->saved_raid_disk >= 0 &&
-			      !test_bit(Bitmap_sync, &rdev->flags)))
-				continue;
-
+		if (!test_bit(Journal, &rdev->flags))
 			rdev->recovery_offset = 0;
-		}
 		if (mddev->pers->hot_add_disk(mddev, rdev) == 0) {
 			/* failure here is OK */
 			sysfs_link_rdev(mddev, rdev);
-- 
2.39.2
From nobody Thu Dec 18 07:54:22 2025
From: Yu Kuai
To: xni@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH -next v2 7/7] md: delay remove_and_add_spares() for read only array to md_start_sync()
Date: Tue, 15 Aug 2023 11:09:57 +0800
Message-Id: <20230815030957.509535-8-yukuai1@huaweicloud.com>
In-Reply-To: <20230815030957.509535-1-yukuai1@huaweicloud.com>
References: <20230815030957.509535-1-yukuai1@huaweicloud.com>

From: Yu Kuai

Before this patch, for a read-only array, md_check_recovery() checks
that 'MD_RECOVERY_NEEDED' is set and then calls remove_and_add_spares()
directly to try to remove and add rdevs to the array.

After this patch:

1) md_check_recovery() checks that 'MD_RECOVERY_NEEDED' is set, that the
   worker 'sync_work' is not pending, and that there are rdevs that can
   be added or removed; only then does it queue the work
   md_start_sync();
2) md_start_sync() calls remove_and_add_spares() and exits.

This change makes sure that array reconfiguration is independent from
the daemon thread, and it will be much easier to synchronize it with IO,
considering that IO may rely on the daemon thread to finish its work.
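[Editor's note] A small, self-contained sketch of the queueing guard
described in 1) above: skip queueing when the work is already pending or
when a scan finds nothing to add or remove. work_pending() is the real
workqueue helper; everything prefixed demo_ is invented for illustration.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only; not the md implementation. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct demo_disk {
	bool removeable;
	bool addable;
};

static struct demo_disk demo_disks[4];
static struct work_struct demo_sync_work;

static bool demo_spares_need_change(void)
{
	int i;

	/* mirrors the "is there anything to add or remove?" scan */
	for (i = 0; i < ARRAY_SIZE(demo_disks); i++)
		if (demo_disks[i].removeable || demo_disks[i].addable)
			return true;
	return false;
}

static void demo_start_sync(struct work_struct *ws)
{
	pr_info("demo: reconfiguring spares outside the daemon\n");
}

static void demo_check_recovery(void)
{
	/* don't re-queue while a previous request is still pending */
	if (work_pending(&demo_sync_work))
		return;

	if (demo_spares_need_change())
		queue_work(system_wq, &demo_sync_work);
}

static int __init demo_init(void)
{
	INIT_WORK(&demo_sync_work, demo_start_sync);
	demo_disks[1].addable = true;	/* pretend one spare can be added */
	demo_check_recovery();
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_sync_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");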
Signed-off-by: Yu Kuai
---
 drivers/md/md.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index d26d2c35f9af..74d529479fcf 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9191,6 +9191,16 @@ static bool rdev_addable(struct md_rdev *rdev)
 	return true;
 }
 
+static bool md_spares_need_change(struct mddev *mddev)
+{
+	struct md_rdev *rdev;
+
+	rdev_for_each(rdev, mddev)
+		if (rdev_removeable(rdev) || rdev_addable(rdev))
+			return true;
+	return false;
+}
+
 static int remove_and_add_spares(struct mddev *mddev,
 				 struct md_rdev *this)
 {
@@ -9309,6 +9319,12 @@ static void md_start_sync(struct work_struct *ws)
 
 	mddev_lock_nointr(mddev);
 
+	if (!md_is_rdwr(mddev)) {
+		remove_and_add_spares(mddev, NULL);
+		mddev_unlock(mddev);
+		return;
+	}
+
 	if (!md_choose_sync_direction(mddev, &spares))
 		goto not_running;
 
@@ -9403,7 +9419,8 @@ void md_check_recovery(struct mddev *mddev)
 	}
 
 	if (!md_is_rdwr(mddev) &&
-	    !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
+	    (!test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
+	     work_pending(&mddev->sync_work)))
 		return;
 	if ( ! ( (mddev->sb_flags & ~ (1<flags);
-			/* On a read-only array we can:
-			 * - remove failed devices
-			 * - add already-in_sync devices if the array itself
-			 *   is in-sync.
-			 * As we only add devices that are already in-sync,
-			 * we can activate the spares immediately.
-			 */
-			remove_and_add_spares(mddev, NULL);
-			/* There is no thread, but we need to call
+			/*
+			 * There is no thread, but we need to call
 			 * ->spare_active and clear saved_raid_disk
 			 */
 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
@@ -9447,6 +9457,13 @@ void md_check_recovery(struct mddev *mddev)
 			clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 			clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 			clear_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
+
+			/*
+			 * Let md_start_sync() to remove and add rdevs to the
+			 * array.
+			 */
+			if (md_spares_need_change(mddev))
+				queue_work(md_misc_wq, &mddev->sync_work);
 			goto unlock;
 		}
 
-- 
2.39.2