From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1F8632D8792; Mon, 24 Nov 2025 06:32:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965930; cv=none; b=W1OlhE83hR6RnbJ+RlJoQFxs1g26ZNhcmsDnd1UtXiEh60AtWfHCzlNQtNP/JDSBXKBpO5dGfzc7c5OVIrOIb0qapYgyPxgBbSGJosj/+QOi0g1RXWo/wp5HBNdjvWuRHcY1fz4wL3ItFfnEFLzrY9ftxzi23TieAW4Yk4fzz2o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965930; c=relaxed/simple; bh=D0nbx1YulrTDvoWDzouAe7I7e0G5PEu7+nROD+Qkydc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sd8Mh3Qd/PwjIbJQCO2cNYVgr7Ovhmv2VUbC7B8iB9mh61wry9v3M6SmztJOaleVxSbSpN4xcdPFLAQxI5iq8YveeBttV1RM1tT+duIG5l0NuC/t5Urx9d0sqs/EEsETdKiSGzQsbU9hTQuj6d724Rrs/AygFKkXDZuTMm+tcYU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1588FC16AAE; Mon, 24 Nov 2025 06:32:07 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 01/11] md: merge mddev has_superblock into mddev_flags Date: Mon, 24 Nov 2025 14:31:53 +0800 Message-ID: <20251124063203.1692144-2-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" There is not need to use a separate field in struct mddev, there are no functional changes. Signed-off-by: Yu Kuai --- drivers/md/md.c | 6 +++--- drivers/md/md.h | 3 ++- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index 7b5c5967568f..b49fdee11a03 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -6462,7 +6462,7 @@ int md_run(struct mddev *mddev) * the only valid external interface is through the md * device. */ - mddev->has_superblocks =3D false; + clear_bit(MD_HAS_SUPERBLOCK, &mddev->flags); rdev_for_each(rdev, mddev) { if (test_bit(Faulty, &rdev->flags)) continue; @@ -6475,7 +6475,7 @@ int md_run(struct mddev *mddev) } =20 if (rdev->sb_page) - mddev->has_superblocks =3D true; + set_bit(MD_HAS_SUPERBLOCK, &mddev->flags); =20 /* perform some consistency tests on the device. * We don't want the data to overlap the metadata, @@ -9085,7 +9085,7 @@ void md_write_start(struct mddev *mddev, struct bio *= bi) rcu_read_unlock(); if (did_change) sysfs_notify_dirent_safe(mddev->sysfs_state); - if (!mddev->has_superblocks) + if (!test_bit(MD_HAS_SUPERBLOCK, &mddev->flags)) return; wait_event(mddev->sb_wait, !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)); diff --git a/drivers/md/md.h b/drivers/md/md.h index 6985f2829bbd..b4c9aa600edd 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -340,6 +340,7 @@ struct md_cluster_operations; * array is ready yet. * @MD_BROKEN: This is used to stop writes and mark array as failed. 
* @MD_DELETED: This device is being deleted + * @MD_HAS_SUPERBLOCK: There is persistence sb in member disks. * * change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */ @@ -356,6 +357,7 @@ enum mddev_flags { MD_BROKEN, MD_DO_DELETE, MD_DELETED, + MD_HAS_SUPERBLOCK, }; =20 enum mddev_sb_flags { @@ -623,7 +625,6 @@ struct mddev { /* The sequence number for sync thread */ atomic_t sync_seq; =20 - bool has_superblocks:1; bool fail_last_dev:1; bool serialize_policy:1; }; --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1EE872D8792; Mon, 24 Nov 2025 06:32:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965932; cv=none; b=U2loaJHhFRL2jyFHA8/re1NqUnYWFBgt9Q9r//s3LS2tD6wZBN1vVtU6H3wHiFdqZgFRm1PelF5Uat9ZauHz3ZYMTuoGFeGWkES9r2w3VSBIdHVwFPk3Y9RXovL/y2mejtyubkY/v4K0lheuEzIArgUarDTzIFNpt+s8bzYKAME= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965932; c=relaxed/simple; bh=IFXq/xbL3NhZp2EYpZqx8LIbTGlkGApWGpHbmu2WmtA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=SfkPFa8xv1vNeG/rqaNhdPwgF7egz64/p3gA48H0xwsWKINz2hvpYz/QfqUIUE+n4ix/kUO2A3RdcXNLwad4Z3XoCOWF2mx8xtkoPm8IOAXp7J7lDDPSUFKga84Hg3O8q1biHLVl5rtHdhgisnC8TTJAPMdBTbOy6y3PRcXnCl4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 27A44C4CEF1; Mon, 24 Nov 2025 06:32:09 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 02/11] md: merge mddev faillast_dev into mddev_flags Date: Mon, 24 Nov 2025 14:31:54 +0800 Message-ID: <20251124063203.1692144-3-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" There is not need to use a separate field in struct mddev, there are no functional changes. Signed-off-by: Yu Kuai --- drivers/md/md.c | 10 ++++++---- drivers/md/md.h | 3 ++- drivers/md/raid0.c | 3 ++- drivers/md/raid1.c | 4 ++-- drivers/md/raid10.c | 4 ++-- drivers/md/raid5.c | 5 ++++- 6 files changed, 18 insertions(+), 11 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index b49fdee11a03..5dcfd0371090 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -5864,11 +5864,11 @@ __ATTR(consistency_policy, S_IRUGO | S_IWUSR, consi= stency_policy_show, =20 static ssize_t fail_last_dev_show(struct mddev *mddev, char *page) { - return sprintf(page, "%d\n", mddev->fail_last_dev); + return sprintf(page, "%d\n", test_bit(MD_FAILLAST_DEV, &mddev->flags)); } =20 /* - * Setting fail_last_dev to true to allow last device to be forcibly remov= ed + * Setting MD_FAILLAST_DEV to allow last device to be forcibly removed * from RAID1/RAID10. 
*/ static ssize_t @@ -5881,8 +5881,10 @@ fail_last_dev_store(struct mddev *mddev, const char = *buf, size_t len) if (ret) return ret; =20 - if (value !=3D mddev->fail_last_dev) - mddev->fail_last_dev =3D value; + if (value) + set_bit(MD_FAILLAST_DEV, &mddev->flags); + else + clear_bit(MD_FAILLAST_DEV, &mddev->flags); =20 return len; } diff --git a/drivers/md/md.h b/drivers/md/md.h index b4c9aa600edd..297a104fba88 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -341,6 +341,7 @@ struct md_cluster_operations; * @MD_BROKEN: This is used to stop writes and mark array as failed. * @MD_DELETED: This device is being deleted * @MD_HAS_SUPERBLOCK: There is persistence sb in member disks. + * @MD_FAILLAST_DEV: Allow last rdev to be removed. * * change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */ @@ -358,6 +359,7 @@ enum mddev_flags { MD_DO_DELETE, MD_DELETED, MD_HAS_SUPERBLOCK, + MD_FAILLAST_DEV, }; =20 enum mddev_sb_flags { @@ -625,7 +627,6 @@ struct mddev { /* The sequence number for sync thread */ atomic_t sync_seq; =20 - bool fail_last_dev:1; bool serialize_policy:1; }; =20 diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 47aee1b1d4d1..012d8402af28 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -27,7 +27,8 @@ module_param(default_layout, int, 0644); (1L << MD_JOURNAL_CLEAN) | \ (1L << MD_FAILFAST_SUPPORTED) |\ (1L << MD_HAS_PPL) | \ - (1L << MD_HAS_MULTIPLE_PPLS)) + (1L << MD_HAS_MULTIPLE_PPLS) | \ + (1L << MD_FAILLAST_DEV)) =20 /* * inform the user of the raid configuration diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index 57d50465eed1..98b5c93810bb 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -1746,7 +1746,7 @@ static void raid1_status(struct seq_file *seq, struct= mddev *mddev) * - &mddev->degraded is bumped. * * @rdev is marked as &Faulty excluding case when array is failed and - * &mddev->fail_last_dev is off. + * MD_FAILLAST_DEV is not set. */ static void raid1_error(struct mddev *mddev, struct md_rdev *rdev) { @@ -1759,7 +1759,7 @@ static void raid1_error(struct mddev *mddev, struct m= d_rdev *rdev) (conf->raid_disks - mddev->degraded) =3D=3D 1) { set_bit(MD_BROKEN, &mddev->flags); =20 - if (!mddev->fail_last_dev) { + if (!test_bit(MD_FAILLAST_DEV, &mddev->flags)) { conf->recovery_disabled =3D mddev->recovery_disabled; spin_unlock_irqrestore(&conf->device_lock, flags); return; diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index 84be4cc7e873..09328e032f14 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -1990,7 +1990,7 @@ static int enough(struct r10conf *conf, int ignore) * - &mddev->degraded is bumped. * * @rdev is marked as &Faulty excluding case when array is failed and - * &mddev->fail_last_dev is off. + * MD_FAILLAST_DEV is not set. 
*/ static void raid10_error(struct mddev *mddev, struct md_rdev *rdev) { @@ -2002,7 +2002,7 @@ static void raid10_error(struct mddev *mddev, struct = md_rdev *rdev) if (test_bit(In_sync, &rdev->flags) && !enough(conf, rdev->raid_disk)) { set_bit(MD_BROKEN, &mddev->flags); =20 - if (!mddev->fail_last_dev) { + if (!test_bit(MD_FAILLAST_DEV, &mddev->flags)) { spin_unlock_irqrestore(&conf->device_lock, flags); return; } diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index cdbc7eba5c54..74f6729864fa 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -56,7 +56,10 @@ #include "md-bitmap.h" #include "raid5-log.h" =20 -#define UNSUPPORTED_MDDEV_FLAGS (1L << MD_FAILFAST_SUPPORTED) +#define UNSUPPORTED_MDDEV_FLAGS \ + ((1L << MD_FAILFAST_SUPPORTED) | \ + (1L << MD_FAILLAST_DEV)) + =20 #define cpu_to_group(cpu) cpu_to_node(cpu) #define ANY_GROUP NUMA_NO_NODE --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2B7CD2D8792; Mon, 24 Nov 2025 06:32:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965934; cv=none; b=PJ+bPc+sy8Ig1vFkxgjhO2n+mOm3/Qg8HrgyX/ZFPHxIo53Clw24ZvF7rxNP9nUGQsNGRujijTaMc2rhaEEUI2urRtIdHs20ydf/QU78sIEGRbwAGUz6vb3qNtosMgRpk62qpy3ckl233a5IBexy5Kn+8mrI1juLW/0KxoyW4PI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965934; c=relaxed/simple; bh=YCEhcbzzKskmy/tIRthlfoQQbVMSSnqzu3amjU6jMto=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=beh8pc3Mn1xV1sb0PERo5ox9vJxG6F0OAKq6YXgniOHPVeVJA9Xh7ptkkd7s0cx3Km5Ou4oZKS7snEsfZXkggWvK3SAxlQC1PE3aZyp1YtVnjFmQuYb7A91zToHqFAHFYJkL76o9b2mi+MtqcWPMcAGn01n6Vj2h6xK8pE6D7Cs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 39CEFC116C6; Mon, 24 Nov 2025 06:32:12 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 03/11] md: merge mddev serialize_policy into mddev_flags Date: Mon, 24 Nov 2025 14:31:55 +0800 Message-ID: <20251124063203.1692144-4-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" There is not need to use a separate field in struct mddev, there are no functional changes. 
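The conversion pattern is the same as in the previous two patches. For readers unfamiliar with it, the minimal user-space sketch below illustrates the idea; the demo_* helpers are stand-ins for the kernel's atomic set_bit()/clear_bit()/test_bit() from <linux/bitops.h>, and all names here are illustrative only, not md code:

#include <stdio.h>

/* stand-ins for the kernel's atomic bitops on an unsigned long word */
static void demo_set_bit(int nr, unsigned long *addr)   { *addr |= 1UL << nr; }
static void demo_clear_bit(int nr, unsigned long *addr) { *addr &= ~(1UL << nr); }
static int demo_test_bit(int nr, const unsigned long *addr) { return !!(*addr & (1UL << nr)); }

/* mirrors appending new bits to enum mddev_flags instead of keeping
 * separate one-bit bool fields in struct mddev */
enum demo_flags { DEMO_HAS_SUPERBLOCK, DEMO_FAILLAST_DEV, DEMO_SERIALIZE_POLICY };

struct demo_mddev { unsigned long flags; };

int main(void)
{
	struct demo_mddev mddev = { 0 };

	/* was: mddev->serialize_policy = true; */
	demo_set_bit(DEMO_SERIALIZE_POLICY, &mddev.flags);
	/* was: if (mddev->serialize_policy) ... */
	printf("serialize_policy=%d\n", demo_test_bit(DEMO_SERIALIZE_POLICY, &mddev.flags));
	/* was: mddev->serialize_policy = false; */
	demo_clear_bit(DEMO_SERIALIZE_POLICY, &mddev.flags);
	printf("serialize_policy=%d\n", demo_test_bit(DEMO_SERIALIZE_POLICY, &mddev.flags));
	return 0;
}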
Signed-off-by: Yu Kuai --- drivers/md/md-bitmap.c | 4 ++-- drivers/md/md.c | 20 ++++++++++++-------- drivers/md/md.h | 4 ++-- drivers/md/raid0.c | 3 ++- drivers/md/raid1.c | 4 ++-- drivers/md/raid5.c | 3 ++- 6 files changed, 22 insertions(+), 16 deletions(-) diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c index 84b7e2af6dba..dbe4c4b9a1da 100644 --- a/drivers/md/md-bitmap.c +++ b/drivers/md/md-bitmap.c @@ -2085,7 +2085,7 @@ static void bitmap_destroy(struct mddev *mddev) return; =20 bitmap_wait_behind_writes(mddev); - if (!mddev->serialize_policy) + if (!test_bit(MD_SERIALIZE_POLICY, &mddev->flags)) mddev_destroy_serial_pool(mddev, NULL); =20 mutex_lock(&mddev->bitmap_info.mutex); @@ -2809,7 +2809,7 @@ backlog_store(struct mddev *mddev, const char *buf, s= ize_t len) mddev->bitmap_info.max_write_behind =3D backlog; if (!backlog && mddev->serial_info_pool) { /* serial_info_pool is not needed if backlog is zero */ - if (!mddev->serialize_policy) + if (!test_bit(MD_SERIALIZE_POLICY, &mddev->flags)) mddev_destroy_serial_pool(mddev, NULL); } else if (backlog && !mddev->serial_info_pool) { /* serial_info_pool is needed since backlog is not zero */ diff --git a/drivers/md/md.c b/drivers/md/md.c index 5dcfd0371090..5833cbff4acf 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -279,7 +279,8 @@ void mddev_destroy_serial_pool(struct mddev *mddev, str= uct md_rdev *rdev) =20 rdev_for_each(temp, mddev) { if (!rdev) { - if (!mddev->serialize_policy || + if (!test_bit(MD_SERIALIZE_POLICY, + &mddev->flags) || !rdev_need_serial(temp)) rdev_uninit_serial(temp); else @@ -5897,11 +5898,12 @@ static ssize_t serialize_policy_show(struct mddev *= mddev, char *page) if (mddev->pers =3D=3D NULL || (mddev->pers->head.id !=3D ID_RAID1)) return sprintf(page, "n/a\n"); else - return sprintf(page, "%d\n", mddev->serialize_policy); + return sprintf(page, "%d\n", + test_bit(MD_SERIALIZE_POLICY, &mddev->flags)); } =20 /* - * Setting serialize_policy to true to enforce write IO is not reordered + * Setting MD_SERIALIZE_POLICY enforce write IO is not reordered * for raid1. */ static ssize_t @@ -5914,7 +5916,7 @@ serialize_policy_store(struct mddev *mddev, const cha= r *buf, size_t len) if (err) return err; =20 - if (value =3D=3D mddev->serialize_policy) + if (value =3D=3D test_bit(MD_SERIALIZE_POLICY, &mddev->flags)) return len; =20 err =3D mddev_suspend_and_lock(mddev); @@ -5926,11 +5928,13 @@ serialize_policy_store(struct mddev *mddev, const c= har *buf, size_t len) goto unlock; } =20 - if (value) + if (value) { mddev_create_serial_pool(mddev, NULL); - else + set_bit(MD_SERIALIZE_POLICY, &mddev->flags); + } else { mddev_destroy_serial_pool(mddev, NULL); - mddev->serialize_policy =3D value; + clear_bit(MD_SERIALIZE_POLICY, &mddev->flags); + } unlock: mddev_unlock_and_resume(mddev); return err ?: len; @@ -6827,7 +6831,7 @@ static void __md_stop_writes(struct mddev *mddev) md_update_sb(mddev, 1); } /* disable policy to guarantee rdevs free resources for serialization */ - mddev->serialize_policy =3D 0; + clear_bit(MD_SERIALIZE_POLICY, &mddev->flags); mddev_destroy_serial_pool(mddev, NULL); } =20 diff --git a/drivers/md/md.h b/drivers/md/md.h index 297a104fba88..6ee18045f41c 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -342,6 +342,7 @@ struct md_cluster_operations; * @MD_DELETED: This device is being deleted * @MD_HAS_SUPERBLOCK: There is persistence sb in member disks. * @MD_FAILLAST_DEV: Allow last rdev to be removed. 
+ * @MD_SERIALIZE_POLICY: Enforce write IO is not reordered, just used by r= aid1. * * change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */ @@ -360,6 +361,7 @@ enum mddev_flags { MD_DELETED, MD_HAS_SUPERBLOCK, MD_FAILLAST_DEV, + MD_SERIALIZE_POLICY, }; =20 enum mddev_sb_flags { @@ -626,8 +628,6 @@ struct mddev { =20 /* The sequence number for sync thread */ atomic_t sync_seq; - - bool serialize_policy:1; }; =20 enum recovery_flags { diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 012d8402af28..bf1f3ab59c83 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -28,7 +28,8 @@ module_param(default_layout, int, 0644); (1L << MD_FAILFAST_SUPPORTED) |\ (1L << MD_HAS_PPL) | \ (1L << MD_HAS_MULTIPLE_PPLS) | \ - (1L << MD_FAILLAST_DEV)) + (1L << MD_FAILLAST_DEV) | \ + (1L << MD_SERIALIZE_POLICY)) =20 /* * inform the user of the raid configuration diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index 98b5c93810bb..f4c7004888af 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -542,7 +542,7 @@ static void raid1_end_write_request(struct bio *bio) call_bio_endio(r1_bio); } } - } else if (rdev->mddev->serialize_policy) + } else if (test_bit(MD_SERIALIZE_POLICY, &rdev->mddev->flags)) remove_serial(rdev, lo, hi); if (r1_bio->bios[mirror] =3D=3D NULL) rdev_dec_pending(rdev, conf->mddev); @@ -1644,7 +1644,7 @@ static void raid1_write_request(struct mddev *mddev, = struct bio *bio, mbio =3D bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, &mddev->bio_set); =20 - if (mddev->serialize_policy) + if (test_bit(MD_SERIALIZE_POLICY, &mddev->flags)) wait_for_serialization(rdev, r1_bio); } =20 diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 74f6729864fa..f405ba7b99a7 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -58,7 +58,8 @@ =20 #define UNSUPPORTED_MDDEV_FLAGS \ ((1L << MD_FAILFAST_SUPPORTED) | \ - (1L << MD_FAILLAST_DEV)) + (1L << MD_FAILLAST_DEV) | \ + (1L << MD_SERIALIZE_POLICY)) =20 =20 #define cpu_to_group(cpu) cpu_to_node(cpu) --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4883E2DEA68; Mon, 24 Nov 2025 06:32:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965936; cv=none; b=TveaJyihNGpY98cAckh6MyTjZ6XiArQlUFEPbgEP3bFdVPebn/7ZGBcnQjoglD6nVgbydAwugp31DJzxQ3s/ZZ8OI2erupZ7OcM2W1FKxyyRa5rupchYhZjGy+G3UE2b5gtY7oda5YI8ADXNHEChsKDX+lp/k9HMA7N0ym6OS+A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965936; c=relaxed/simple; bh=IePfBPIxr+E6Ux/GVNLZOkeaVAPNGv0frUzzEzz0u6A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=W/7jkEUhvtiE355qb7HE+qNjYaPRvj5XVIqqMEjftMN6bkwcVIIeRNt70dsGtKXrARr3Hp9lvhSenfQf5VNCeJwz6DO8Mk62Je7D6tlbEHuzXsAeqmfjwrLSVbr10+mynTxqg3Wiwv+ysbiCO5zcANHIAKmxfXxfk4Yc8/AboRE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C251C116C6; Mon, 24 Nov 2025 06:32:14 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 04/11] md/raid5: use mempool to allocate 
stripe_request_ctx Date: Mon, 24 Nov 2025 14:31:56 +0800 Message-ID: <20251124063203.1692144-5-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" On the one hand, stripe_request_ctx is 72 bytes, which is a bit large for a stack variable. On the other hand, the sectors_to_do bitmap has a fixed size, so max_hw_sectors_kb of a raid5 array is at most 256 * 4k =3D 1MB, and this makes full stripe IO impossible for arrays whose chunk_size * data_disks is bigger. Allocating the ctx at runtime makes it possible to get rid of this limit. Signed-off-by: Yu Kuai --- drivers/md/md.h | 4 +++ drivers/md/raid1-10.c | 5 ---- drivers/md/raid5.c | 61 +++++++++++++++++++++++++++---------------- drivers/md/raid5.h | 2 ++ 4 files changed, 45 insertions(+), 27 deletions(-) diff --git a/drivers/md/md.h b/drivers/md/md.h index 6ee18045f41c..b8c5dec12b62 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -22,6 +22,10 @@ #include =20 #define MaxSector (~(sector_t)0) +/* + * Number of guaranteed raid bios in case of extreme VM load: + */ +#define NR_RAID_BIOS 256 =20 enum md_submodule_type { MD_PERSONALITY =3D 0, diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c index 521625756128..c33099925f23 100644 --- a/drivers/md/raid1-10.c +++ b/drivers/md/raid1-10.c @@ -3,11 +3,6 @@ #define RESYNC_BLOCK_SIZE (64*1024) #define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE) =20 -/* - * Number of guaranteed raid bios in case of extreme VM load: - */ -#define NR_RAID_BIOS 256 - /* when we get a read error on a read-only array, we redirect to another * device without failing the first device, or trying to over-write to * correct the read error. 
To keep track of bad blocks on a per-bio diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index f405ba7b99a7..0080dec4a6ef 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -6083,13 +6083,13 @@ static sector_t raid5_bio_lowest_chunk_sector(struc= t r5conf *conf, static bool raid5_make_request(struct mddev *mddev, struct bio * bi) { DEFINE_WAIT_FUNC(wait, woken_wake_function); - bool on_wq; struct r5conf *conf =3D mddev->private; - sector_t logical_sector; - struct stripe_request_ctx ctx =3D {}; const int rw =3D bio_data_dir(bi); + struct stripe_request_ctx *ctx; + sector_t logical_sector; enum stripe_result res; int s, stripe_cnt; + bool on_wq; =20 if (unlikely(bi->bi_opf & REQ_PREFLUSH)) { int ret =3D log_handle_flush_request(conf, bi); @@ -6101,11 +6101,6 @@ static bool raid5_make_request(struct mddev *mddev, = struct bio * bi) return true; } /* ret =3D=3D -EAGAIN, fallback */ - /* - * if r5l_handle_flush_request() didn't clear REQ_PREFLUSH, - * we need to flush journal device - */ - ctx.do_flush =3D bi->bi_opf & REQ_PREFLUSH; } =20 md_write_start(mddev, bi); @@ -6128,16 +6123,24 @@ static bool raid5_make_request(struct mddev *mddev,= struct bio * bi) } =20 logical_sector =3D bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTO= RS(conf)-1); - ctx.first_sector =3D logical_sector; - ctx.last_sector =3D bio_end_sector(bi); bi->bi_next =3D NULL; =20 - stripe_cnt =3D DIV_ROUND_UP_SECTOR_T(ctx.last_sector - logical_sector, + ctx =3D mempool_alloc(conf->ctx_pool, GFP_NOIO | __GFP_ZERO); + ctx->first_sector =3D logical_sector; + ctx->last_sector =3D bio_end_sector(bi); + /* + * if r5l_handle_flush_request() didn't clear REQ_PREFLUSH, + * we need to flush journal device + */ + if (unlikely(bi->bi_opf & REQ_PREFLUSH)) + ctx->do_flush =3D true; + + stripe_cnt =3D DIV_ROUND_UP_SECTOR_T(ctx->last_sector - logical_sector, RAID5_STRIPE_SECTORS(conf)); - bitmap_set(ctx.sectors_to_do, 0, stripe_cnt); + bitmap_set(ctx->sectors_to_do, 0, stripe_cnt); =20 pr_debug("raid456: %s, logical %llu to %llu\n", __func__, - bi->bi_iter.bi_sector, ctx.last_sector); + bi->bi_iter.bi_sector, ctx->last_sector); =20 /* Bail out if conflicts with reshape and REQ_NOWAIT is set */ if ((bi->bi_opf & REQ_NOWAIT) && @@ -6145,6 +6148,7 @@ static bool raid5_make_request(struct mddev *mddev, s= truct bio * bi) bio_wouldblock_error(bi); if (rw =3D=3D WRITE) md_write_end(mddev); + mempool_free(ctx, conf->ctx_pool); return true; } md_account_bio(mddev, &bi); @@ -6163,10 +6167,10 @@ static bool raid5_make_request(struct mddev *mddev,= struct bio * bi) add_wait_queue(&conf->wait_for_reshape, &wait); on_wq =3D true; } - s =3D (logical_sector - ctx.first_sector) >> RAID5_STRIPE_SHIFT(conf); + s =3D (logical_sector - ctx->first_sector) >> RAID5_STRIPE_SHIFT(conf); =20 while (1) { - res =3D make_stripe_request(mddev, conf, &ctx, logical_sector, + res =3D make_stripe_request(mddev, conf, ctx, logical_sector, bi); if (res =3D=3D STRIPE_FAIL || res =3D=3D STRIPE_WAIT_RESHAPE) break; @@ -6183,9 +6187,9 @@ static bool raid5_make_request(struct mddev *mddev, s= truct bio * bi) * raid5_activate_delayed() from making progress * and thus deadlocking. 
*/ - if (ctx.batch_last) { - raid5_release_stripe(ctx.batch_last); - ctx.batch_last =3D NULL; + if (ctx->batch_last) { + raid5_release_stripe(ctx->batch_last); + ctx->batch_last =3D NULL; } =20 wait_woken(&wait, TASK_UNINTERRUPTIBLE, @@ -6193,21 +6197,23 @@ static bool raid5_make_request(struct mddev *mddev,= struct bio * bi) continue; } =20 - s =3D find_next_bit_wrap(ctx.sectors_to_do, stripe_cnt, s); + s =3D find_next_bit_wrap(ctx->sectors_to_do, stripe_cnt, s); if (s =3D=3D stripe_cnt) break; =20 - logical_sector =3D ctx.first_sector + + logical_sector =3D ctx->first_sector + (s << RAID5_STRIPE_SHIFT(conf)); } if (unlikely(on_wq)) remove_wait_queue(&conf->wait_for_reshape, &wait); =20 - if (ctx.batch_last) - raid5_release_stripe(ctx.batch_last); + if (ctx->batch_last) + raid5_release_stripe(ctx->batch_last); =20 if (rw =3D=3D WRITE) md_write_end(mddev); + + mempool_free(ctx, conf->ctx_pool); if (res =3D=3D STRIPE_WAIT_RESHAPE) { md_free_cloned_bio(bi); return false; @@ -7374,6 +7380,10 @@ static void free_conf(struct r5conf *conf) bioset_exit(&conf->bio_split); kfree(conf->stripe_hashtbl); kfree(conf->pending_data); + + if (conf->ctx_pool) + mempool_destroy(conf->ctx_pool); + kfree(conf); } =20 @@ -8057,6 +8067,13 @@ static int raid5_run(struct mddev *mddev) goto abort; } =20 + conf->ctx_pool =3D mempool_create_kmalloc_pool(NR_RAID_BIOS, + sizeof(struct stripe_request_ctx)); + if (!conf->ctx_pool) { + ret =3D -ENOMEM; + goto abort; + } + if (log_init(conf, journal_dev, raid5_has_ppl(conf))) goto abort; =20 diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h index eafc6e9ed6ee..6e3f07119fa4 100644 --- a/drivers/md/raid5.h +++ b/drivers/md/raid5.h @@ -690,6 +690,8 @@ struct r5conf { struct list_head pending_list; int pending_data_cnt; struct r5pending_data *next_pending_data; + + mempool_t *ctx_pool; }; =20 #if PAGE_SIZE =3D=3D DEFAULT_STRIPE_SIZE --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 512F82DECB1; Mon, 24 Nov 2025 06:32:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965938; cv=none; b=DI2cn9Iar6XhMqVt3ycd0WFN3XNs5P+K2ljm0rm73f0Op+6ekSYe6Hu9n9S/JTJHgwzqzhzFxmBU03A3qcUHlKHXrW8Icuv9FOj6KTqyhTsnlgWRCF12C3C1m0HsIqEotQ9xROcYYL1JxIVWK3Vmvn7dGlQPIiTUo09aFDegvz8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965938; c=relaxed/simple; bh=jbhviUuKac27G7+Ew8grQNZm1HrGV6Hg6s+Gd+8qtOs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BQnMJXcLAbhLB9MtydX2epKCvWvSiAJAxLS3Zr0lZoPiLZfuiMsnG30YesbuJ0GB9l2VR/c8vjdRnHdyFC1aVX1XJie4U7FqzoTCg92rgcU+tjwgBM5IjqhmbePhtbc8xLExN4kqr6S8LdJ7PkxW6pRV//+fAacSqv73VaL8Ofo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5EB61C16AAE; Mon, 24 Nov 2025 06:32:16 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 05/11] md/raid5: make sure max_sectors is not less than io_opt Date: Mon, 24 Nov 2025 14:31:57 +0800 Message-ID: <20251124063203.1692144-6-yukuai@fnnas.com> X-Mailer: git-send-email 
2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Otherwise, even if users issue IO of io_opt size, such IO will be split by max_sectors before it is submitted to raid5, and as a consequence full stripe IO is impossible. Note that dm-raid5 is not affected by this change and still has this problem. Signed-off-by: Yu Kuai --- drivers/md/raid5.c | 35 ++++++++++++++++++++++++++--------- 1 file changed, 26 insertions(+), 9 deletions(-) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 0080dec4a6ef..cd0eff2f69b4 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -777,14 +777,14 @@ struct stripe_request_ctx { /* last sector in the request */ sector_t last_sector; =20 + /* the request had REQ_PREFLUSH, cleared after the first stripe_head */ + bool do_flush; + /* * bitmap to track stripe sectors that have been added to stripes * add one to account for unaligned requests */ - DECLARE_BITMAP(sectors_to_do, RAID5_MAX_REQ_STRIPES + 1); - - /* the request had REQ_PREFLUSH, cleared after the first stripe_head */ - bool do_flush; + unsigned long sectors_to_do[]; }; =20 /* @@ -7739,6 +7739,24 @@ static int only_parity(int raid_disk, int algo, int = raid_disks, int max_degraded return 0; } =20 +static int raid5_create_ctx_pool(struct r5conf *conf) +{ + struct stripe_request_ctx *ctx; + int size; + + if (mddev_is_dm(conf->mddev)) + size =3D BITS_TO_LONGS(RAID5_MAX_REQ_STRIPES); + else + size =3D BITS_TO_LONGS( + queue_max_hw_sectors(conf->mddev->gendisk->queue) >> + RAID5_STRIPE_SHIFT(conf)); + + conf->ctx_pool =3D mempool_create_kmalloc_pool(NR_RAID_BIOS, + struct_size(ctx, sectors_to_do, size)); + + return conf->ctx_pool ? 0 : -ENOMEM; +} + static int raid5_set_limits(struct mddev *mddev) { struct r5conf *conf =3D mddev->private; @@ -7795,6 +7813,8 @@ static int raid5_set_limits(struct mddev *mddev) * Limit the max sectors based on this. 
*/ lim.max_hw_sectors =3D RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf); + if ((lim.max_hw_sectors << 9) < lim.io_opt) + lim.max_hw_sectors =3D lim.io_opt >> 9; =20 /* No restrictions on the number of segments in the request */ lim.max_segments =3D USHRT_MAX; @@ -8067,12 +8087,9 @@ static int raid5_run(struct mddev *mddev) goto abort; } =20 - conf->ctx_pool =3D mempool_create_kmalloc_pool(NR_RAID_BIOS, - sizeof(struct stripe_request_ctx)); - if (!conf->ctx_pool) { - ret =3D -ENOMEM; + ret =3D raid5_create_ctx_pool(conf); + if (ret) goto abort; - } =20 if (log_init(conf, journal_dev, raid5_has_ppl(conf))) goto abort; --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2693D2DECB1; Mon, 24 Nov 2025 06:32:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965940; cv=none; b=PWPy81apM652PVDpxg3aEAR3OGZDbfSfsVUcmwhgmIaVBD6AoD8iA11k4ohGlOMcBDmtjj6xUqnNqZ/TqDLs9eh4Hbjb6Yav61CuBrYwXSbjZmWafdpNL1vs8OKlbLBrRFfvNBOM38b3B7vYzEluDtxYx1I4++PM12clyC/lU8U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965940; c=relaxed/simple; bh=fY0BzEVm9sfZcUQikujZgp1iImD0UEOOT90ro6ySvr0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=aDQf08t72h/CahCgq0NZdfH4IlsbOXPpXYC8ne2ptgA9ybpEcmihvG0hRFHUOIDokkkEWIIWYudHjc3ZVCOWxE61z2ZecX1VAYs/7+aodMzWjXuI2GQumJfbiq9IiS25eKvC+5vKwr/K2ya3qHpN5eY1s1QF04W9aGdKfUvQ2zI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 70755C116C6; Mon, 24 Nov 2025 06:32:18 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 06/11] md: support to align bio to limits Date: Mon, 24 Nov 2025 14:31:58 +0800 Message-ID: <20251124063203.1692144-7-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" For personalities that report optimal IO size, it's indicate that users can get the best IO bandwidth if they issue IO with this size. However there is also an implicit condition that IO should also be aligned to the optimal IO size. Currently, bio will only be split by limits, if bio offset is not aligned to limits, then all split bio will not be aligned. This patch add a new feature to align bio to limits first, and following patches will support this for each personality if necessary. 
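To make the intended behaviour easier to follow, here is a stand-alone user-space sketch of the front-split arithmetic that __md_bio_align_to_limits() in this patch performs; it is an illustration only, not the kernel code (the kernel side uses roundup()/rounddown() on sectors and bio_submit_split_bioset() for the actual split):

#include <stdint.h>
#include <stdio.h>

static uint64_t front_split_sectors(uint64_t start, uint64_t nr_sectors,
				    uint64_t max_sectors)
{
	uint64_t align_start = ((start + max_sectors - 1) / max_sectors) * max_sectors;
	uint64_t end = start + nr_sectors;
	uint64_t align_end = (end / max_sectors) * max_sectors;

	if (align_start == start)	/* already aligned */
		return 0;
	if (align_end <= align_start)	/* bio too small to be worth splitting */
		return 0;
	return align_start - start;	/* sectors to split off the front */
}

int main(void)
{
	/* a 2048-sector write starting at sector 100 with max_sectors 1024:
	 * split off 924 sectors so the remainder starts at sector 1024 */
	printf("%llu\n", (unsigned long long)front_split_sectors(100, 2048, 1024));
	return 0;
}

Once the leading part has been split off this way, the regular split by max_sectors keeps the remaining fragments starting on aligned boundaries.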
Signed-off-by: Yu Kuai --- drivers/md/md.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++ drivers/md/md.h | 2 ++ 2 files changed, 48 insertions(+) diff --git a/drivers/md/md.c b/drivers/md/md.c index 5833cbff4acf..db2d950a1449 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -428,6 +428,48 @@ bool md_handle_request(struct mddev *mddev, struct bio= *bio) } EXPORT_SYMBOL(md_handle_request); =20 +static struct bio *__md_bio_align_to_limits(struct mddev *mddev, + struct bio *bio) +{ + unsigned int max_sectors =3D mddev->gendisk->queue->limits.max_sectors; + sector_t start =3D bio->bi_iter.bi_sector; + sector_t align_start =3D roundup(start, max_sectors); + sector_t end; + sector_t align_end; + + /* already aligned */ + if (align_start =3D=3D start) + return bio; + + end =3D start + bio_sectors(bio); + align_end =3D rounddown(end, max_sectors); + + /* bio is too small to split */ + if (align_end <=3D align_start) + return bio; + + return bio_submit_split_bioset(bio, align_start - start, + &mddev->gendisk->bio_split); +} + +static struct bio *md_bio_align_to_limits(struct mddev *mddev, struct bio = *bio) +{ + if (!test_bit(MD_BIO_ALIGN, &mddev->flags)) + return bio; + + /* atomic write can't split */ + if (bio->bi_opf & REQ_ATOMIC) + return bio; + + switch (bio_op(bio)) { + case REQ_OP_READ: + case REQ_OP_WRITE: + return __md_bio_align_to_limits(mddev, bio); + default: + return bio; + } +} + static void md_submit_bio(struct bio *bio) { const int rw =3D bio_data_dir(bio); @@ -443,6 +485,10 @@ static void md_submit_bio(struct bio *bio) return; } =20 + bio =3D md_bio_align_to_limits(mddev, bio); + if (!bio) + return; + bio =3D bio_split_to_limits(bio); if (!bio) return; diff --git a/drivers/md/md.h b/drivers/md/md.h index b8c5dec12b62..e7aba83b708b 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -347,6 +347,7 @@ struct md_cluster_operations; * @MD_HAS_SUPERBLOCK: There is persistence sb in member disks. * @MD_FAILLAST_DEV: Allow last rdev to be removed. * @MD_SERIALIZE_POLICY: Enforce write IO is not reordered, just used by r= aid1. + * @MD_BIO_ALIGN: Bio issued to the array will align to io_opt before spli= t. 
* * change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */ @@ -366,6 +367,7 @@ enum mddev_flags { MD_HAS_SUPERBLOCK, MD_FAILLAST_DEV, MD_SERIALIZE_POLICY, + MD_BIO_ALIGN, }; =20 enum mddev_sb_flags { --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7D1BE2D9492; Mon, 24 Nov 2025 06:32:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965942; cv=none; b=O2NC11wGTeuQOrz/WroBXFmodITsXNnAPIzWaOZd9+gQ1ByqpShblbUJ+Xg18fV+PoFsTul80/M5OMjbPr7Hj+Lrt0eVXukCBedBfgVSvw7nzLFMpAH1FJ0GHpjJhe9T9vKNqCWShrsimY9eehiT1N5KZvf9X1kpGZLJhkSPnBs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965942; c=relaxed/simple; bh=XqqQSI5xgpGNLdeTJz2ljmD1xJP37tkjVNb/x31AvmE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uRmJHY9gV2U/7BrUNoStMVRv3Glsm4gm6YAt2sAcac09FGtYmP48iJlKrrD9O+gMqJZi1NwjQC9P2kImHFWMciOukziGiPlpwP/TtpRR8wmoSHV2oN4sihUuRsbiU2RXkLfgcxajsgZBNpha4u/Xft5PDcN54OBwe+cn2NCeprk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 82C53C4CEF1; Mon, 24 Nov 2025 06:32:20 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 07/11] md: add a helper md_config_align_limits() Date: Mon, 24 Nov 2025 14:31:59 +0800 Message-ID: <20251124063203.1692144-8-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This helper will be used by personalities that want to align bio to io_opt to get best IO bandwidth. Also add the new flag to UNSUPPORTED_MDDEV_FLAGS for now, following patches will enable this for personalities. Signed-off-by: Yu Kuai --- drivers/md/md.h | 11 +++++++++++ drivers/md/raid0.c | 3 ++- drivers/md/raid1.c | 3 ++- drivers/md/raid5.c | 3 ++- 4 files changed, 17 insertions(+), 3 deletions(-) diff --git a/drivers/md/md.h b/drivers/md/md.h index e7aba83b708b..ddf989f2a139 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -1091,6 +1091,17 @@ static inline bool rdev_blocked(struct md_rdev *rdev) return false; } =20 +static inline void md_config_align_limits(struct mddev *mddev, + struct queue_limits *lim) +{ + if ((lim->max_hw_sectors << 9) < lim->io_opt) + lim->max_hw_sectors =3D lim->io_opt >> 9; + else + lim->max_hw_sectors =3D rounddown(lim->max_hw_sectors, + lim->io_opt >> 9); + set_bit(MD_BIO_ALIGN, &mddev->flags); +} + #define mddev_add_trace_msg(mddev, fmt, args...) 
\ do { \ if (!mddev_is_dm(mddev)) \ diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index bf1f3ab59c83..01cce0c3eab7 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -29,7 +29,8 @@ module_param(default_layout, int, 0644); (1L << MD_HAS_PPL) | \ (1L << MD_HAS_MULTIPLE_PPLS) | \ (1L << MD_FAILLAST_DEV) | \ - (1L << MD_SERIALIZE_POLICY)) + (1L << MD_SERIALIZE_POLICY) | \ + (1L << MD_BIO_ALIGN)) =20 /* * inform the user of the raid configuration diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index f4c7004888af..1a957dba2640 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -42,7 +42,8 @@ ((1L << MD_HAS_JOURNAL) | \ (1L << MD_JOURNAL_CLEAN) | \ (1L << MD_HAS_PPL) | \ - (1L << MD_HAS_MULTIPLE_PPLS)) + (1L << MD_HAS_MULTIPLE_PPLS) | \ + (1L << MD_BIO_ALIGN)) =20 static void allow_barrier(struct r1conf *conf, sector_t sector_nr); static void lower_barrier(struct r1conf *conf, sector_t sector_nr); diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index cd0eff2f69b4..0b607aa5963e 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -59,7 +59,8 @@ #define UNSUPPORTED_MDDEV_FLAGS \ ((1L << MD_FAILFAST_SUPPORTED) | \ (1L << MD_FAILLAST_DEV) | \ - (1L << MD_SERIALIZE_POLICY)) + (1L << MD_SERIALIZE_POLICY) | \ + (1L << MD_BIO_ALIGN)) =20 =20 #define cpu_to_group(cpu) cpu_to_node(cpu) --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9A5522D9492; Mon, 24 Nov 2025 06:32:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965944; cv=none; b=F4L0m/KVrH7HgtjlIKdlZgLSnWq/oXSfu5pZpnRC3J6+djO5lZpGA6Jq4wJlycFh6sy2+LpHEb3pgCSvk3dwR/hJa2FVC2keNZYvFnG5Ik+uNiHRREN7tNTqq7yr7QRnyM6VEk/W9WA+xEIpSNl3aSShzsa+bEMtMpLqQce4VNY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965944; c=relaxed/simple; bh=ZW4pAi1LZApvGkKihBfrAqHldizMyv9Uj3UUEC8DFuw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=FCYF8CA0z2kOh9n0xXqZ/kXl1QT67ntGBsIqvnSnHSXByAUSUjbIFG36giHObPYrligPEwqnhTO7p2qVT5kM/Ar1JPWKUJN/OJjr0ISBngMqpdZNu8YjOzwQJAqOij6Dn/GGCwQiqdkeathALfFSFkXIK6GveYbr9TqzXO4Dpfw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 94F5EC116C6; Mon, 24 Nov 2025 06:32:22 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 08/11] md/raid5: align bio to io_opt Date: Mon, 24 Nov 2025 14:32:00 +0800 Message-ID: <20251124063203.1692144-9-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" raid5 internal implementaion indicates that if write bio is aligned to io_opt, then full stripe write will be used, which will be best for bandwidth because there is no need to read extra data to build new xor data. 
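For reference, md raid5 reports io_opt as chunk_size * data_disks, so for the 32-disk, 64kb-chunk array in the test below a full stripe write spans 31 * 64kb, i.e. 1984kb; only writes covering such an aligned window can skip reading old data and parity.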
Simple test in my VM, 32 disks raid5 with 64kb chunksize: dd if=3D/dev/zero of=3D/dev/md0 bs=3D100M oflag=3Ddirect Before this patch: 782 MB/s With this patch: 1.1 GB/s BTW, there are still other bottleneck related to stripe handler, and require further optimization. Signed-off-by: Yu Kuai --- drivers/md/raid5.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 0b607aa5963e..bbcb1c06951c 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -59,8 +59,7 @@ #define UNSUPPORTED_MDDEV_FLAGS \ ((1L << MD_FAILFAST_SUPPORTED) | \ (1L << MD_FAILLAST_DEV) | \ - (1L << MD_SERIALIZE_POLICY) | \ - (1L << MD_BIO_ALIGN)) + (1L << MD_SERIALIZE_POLICY)) =20 =20 #define cpu_to_group(cpu) cpu_to_node(cpu) @@ -7814,8 +7813,7 @@ static int raid5_set_limits(struct mddev *mddev) * Limit the max sectors based on this. */ lim.max_hw_sectors =3D RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf); - if ((lim.max_hw_sectors << 9) < lim.io_opt) - lim.max_hw_sectors =3D lim.io_opt >> 9; + md_config_align_limits(mddev, &lim); =20 /* No restrictions on the number of segments in the request */ lim.max_segments =3D USHRT_MAX; --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 670672E093B; Mon, 24 Nov 2025 06:32:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965946; cv=none; b=TSw0qM7a+uMckZz86JHOE8NTYIB6qktLWtFC9j3gy+KRn2VD+gocnANVSxf+mjDoO92+zX0NklutjUyrhwfO9f4tesW62/TlE8JXs7kdedBNVm8vShZ62r5eZjq+VwD2PW17a6H/uhd7piYxmqPGGM0R8GmGbF0tOH0FWfqPHtM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965946; c=relaxed/simple; bh=WAc2Acq73SBDFe9cL4p3D9sMxC+yEhbbN+qNSuKZSjU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=iMequpk8fLsRFP1MWMpgR+vYiBACXLWvsRfK5PUPjDpkKTCLWdNjav0iB9oRvnauMUsvTwLmi9PRLYutLrd0cRM6ttlrpJgii1tW6bWyTwCdWe3mkqCsB46hu7Q8XHWCNzCWBU0aPyC7m4P19NMr0U9LgWzOjOlunh9l8lBnzHw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id A6EC4C16AAE; Mon, 24 Nov 2025 06:32:24 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 09/11] md/raid10: align bio to io_opt Date: Mon, 24 Nov 2025 14:32:01 +0800 Message-ID: <20251124063203.1692144-10-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The impact is not so significant for raid10 compared to raid5, however it's still more appropriate to issue IOs evenly to underlying disks. 
Signed-off-by: Yu Kuai --- drivers/md/raid10.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index 09328e032f14..2c6b65b83724 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -4008,6 +4008,8 @@ static int raid10_set_queue_limits(struct mddev *mdde= v) err =3D mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); if (err) return err; + + md_config_align_limits(mddev, &lim); return queue_limits_set(mddev->gendisk->queue, &lim); } =20 --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C6DE322A4D8; Mon, 24 Nov 2025 06:32:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965948; cv=none; b=ik5yFT7UJOYA7sYWlh3CYojSoYgBtNC8NetaHAzLUqd1Xzxa62IVP93N//gqsj2jpiHGU/qo2OAkXvrFwP/A57fEPH9Zp4AGiIpgsDLVTOtxOxhyaxOXs6FZ1tCrqSEYTMI4uw0exSoBihEjnCw3f8qqoXiB+HDXSJbuf6ef00Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965948; c=relaxed/simple; bh=b91I3Xtz94sY70J4NWdlUmmQNLfTHM3GSNW92lDHvpQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=t60dqbyaS70yBBK3hTFlJIjIBvZFlKpdWuPKSGx9y5Ia4LJVTZxedClWf9Z/zDTuX66HZgnOtzsajp1BE/gMxM4O9LXQfU65nXC5+cw7baKNDRAwMLYzM34wUvQEuRK3tK1YvHEYRP5wvc+u3PMHPOkLHL9EVc2qx4EqzhpZPVo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id C2393C19422; Mon, 24 Nov 2025 06:32:26 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 10/11] md/raid0: align bio to io_opt Date: Mon, 24 Nov 2025 14:32:02 +0800 Message-ID: <20251124063203.1692144-11-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The impact is not so significant for raid0 compared to raid5, however it's still more appropriate to issue IOs evenly to underlying disks. 
Signed-off-by: Yu Kuai --- drivers/md/raid0.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 01cce0c3eab7..c94c6f78767f 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -29,8 +29,7 @@ module_param(default_layout, int, 0644); (1L << MD_HAS_PPL) | \ (1L << MD_HAS_MULTIPLE_PPLS) | \ (1L << MD_FAILLAST_DEV) | \ - (1L << MD_SERIALIZE_POLICY) | \ - (1L << MD_BIO_ALIGN)) + (1L << MD_SERIALIZE_POLICY)) =20 /* * inform the user of the raid configuration @@ -391,6 +390,8 @@ static int raid0_set_limits(struct mddev *mddev) err =3D mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); if (err) return err; + + md_config_align_limits(mddev, &lim); return queue_limits_set(mddev->gendisk->queue, &lim); } =20 --=20 2.51.0 From nobody Tue Dec 2 00:47:56 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C938422A4D8; Mon, 24 Nov 2025 06:32:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965950; cv=none; b=gwYoJUfPnpdJL4RF5P7moE6vbkINn7LX+c5qUijrwq4aT4gH/nIGhGwVjaKy5qN+z4T99/QzDtloCPm3VWi1lbgLYBPJH8iJINCSy95wqJ5uZGlIe5a6xDu9R3R4qYcMVccAXg3+RaY7L9gFh9LgKcfAkLrNx4xXRQrf9bWgEWs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763965950; c=relaxed/simple; bh=oPULvgQ53yOxncZCQRS8kdJk4RkXsdAM2BQmiTAPCf8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jAfFTEcw19XDoDSy9PNucq/O0QRzsKZweTmXnB1bGJYKGyn33W6tgbRlOQwL3jfQ5Yr9kmlfopCUUvTbvetSq8izo6FsZ/9qu4sgbvMW1Vo85EsobUqfGWgpOWpw0YsI/+5nXHlvJdhXfczCApHLlzrNIQqDIu3ajDBUKr+9K5o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id D5089C4CEF1; Mon, 24 Nov 2025 06:32:28 +0000 (UTC) From: Yu Kuai To: song@kernel.org, linux-raid@vger.kernel.org Cc: linux-kernel@vger.kernel.org, filippo@debian.org, colyli@fnnas.com, yukuai@fnnas.com Subject: [PATCH v2 11/11] md: fix abnormal io_opt from member disks Date: Mon, 24 Nov 2025 14:32:03 +0800 Message-ID: <20251124063203.1692144-12-yukuai@fnnas.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251124063203.1692144-1-yukuai@fnnas.com> References: <20251124063203.1692144-1-yukuai@fnnas.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" It's reported that mtp3sas can report abnormal io_opt, for consequence, md array will end up with abnormal io_opt as well, due to the lcm_not_zero() from blk_stack_limits(). Some personalities will configure optimal IO size, and it's indicate that users can get the best IO bandwidth if they issue IO with this size, and we don't want io_opt to be covered by member disks with abnormal io_opt. Fix this problem by adding a new mddev flags MD_STACK_IO_OPT to indicate that io_opt configured by personalities is preferred over member disks or not. 
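To make the failure mode concrete (with purely hypothetical numbers, not taken from the reports): if raid5 configures io_opt as 1MB but one member disk advertises an odd io_opt such as 65520 bytes, blk_stack_limits() computes lcm_not_zero(1048576, 65520) =3D 4095MB, which is useless as an optimal IO size. With this change the personality's 1MB value is kept, unless a member disk is itself an md array, in which case MD_STACK_IO_OPT is set and the stacked value is used as before.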
Reported-by: Filippo Giunchedi Closes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=3D1121006 Reported-by: Coly Li Closes: https://lore.kernel.org/all/20250817152645.7115-1-colyli@kernel.org/ Signed-off-by: Yu Kuai --- drivers/md/md.c | 35 ++++++++++++++++++++++++++++++++++- drivers/md/md.h | 5 ++++- drivers/md/raid1.c | 2 +- drivers/md/raid10.c | 4 ++-- 4 files changed, 41 insertions(+), 5 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index db2d950a1449..7714f367765f 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -6191,11 +6191,17 @@ static const struct kobj_type md_ktype =3D { =20 int mdp_major =3D 0; =20 +static bool rdev_is_mddev(struct md_rdev *rdev) +{ + return rdev->bdev->bd_disk->fops =3D=3D &md_fops; +} + /* stack the limit for all rdevs into lim */ int mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim, unsigned int flags) { struct md_rdev *rdev; + unsigned int io_opt =3D lim->io_opt; =20 rdev_for_each(rdev, mddev) { queue_limits_stack_bdev(lim, rdev->bdev, rdev->data_offset, @@ -6203,6 +6209,9 @@ int mddev_stack_rdev_limits(struct mddev *mddev, stru= ct queue_limits *lim, if ((flags & MDDEV_STACK_INTEGRITY) && !queue_limits_stack_integrity_bdev(lim, rdev->bdev)) return -EINVAL; + + if (rdev_is_mddev(rdev)) + set_bit(MD_STACK_IO_OPT, &mddev->flags); } =20 /* @@ -6216,14 +6225,24 @@ int mddev_stack_rdev_limits(struct mddev *mddev, st= ruct queue_limits *lim, } mddev->logical_block_size =3D lim->logical_block_size; =20 + /* + * If all member disks are not mdraid array, and the personality + * already configures io_opt, keep this io_opt and ignore io_opt from + * member disks. + */ + if (!test_bit(MD_STACK_IO_OPT, &mddev->flags) && io_opt) + lim->io_opt =3D io_opt; + return 0; } EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits); =20 /* apply the extra stacking limits from a new rdev into mddev */ -int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev) +int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev, + bool io_opt_configured) { struct queue_limits lim; + unsigned int io_opt =3D 0; =20 if (mddev_is_dm(mddev)) return 0; @@ -6236,6 +6255,18 @@ int mddev_stack_new_rdev(struct mddev *mddev, struct= md_rdev *rdev) } =20 lim =3D queue_limits_start_update(mddev->gendisk->queue); + + /* + * Keep the old io_opt if no member disks are from md array, and + * the personality configure it's own io_opt. + */ + if (!test_bit(MD_STACK_IO_OPT, &mddev->flags)) { + if (rdev_is_mddev(rdev)) + set_bit(MD_STACK_IO_OPT, &mddev->flags); + else if (io_opt_configured) + io_opt =3D lim.io_opt; + } + queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset, mddev->gendisk->disk_name); =20 @@ -6246,6 +6277,8 @@ int mddev_stack_new_rdev(struct mddev *mddev, struct = md_rdev *rdev) return -ENXIO; } =20 + if (io_opt) + lim.io_opt =3D io_opt; return queue_limits_commit_update(mddev->gendisk->queue, &lim); } EXPORT_SYMBOL_GPL(mddev_stack_new_rdev); diff --git a/drivers/md/md.h b/drivers/md/md.h index ddf989f2a139..d37076593403 100644 --- a/drivers/md/md.h +++ b/drivers/md/md.h @@ -348,6 +348,7 @@ struct md_cluster_operations; * @MD_FAILLAST_DEV: Allow last rdev to be removed. * @MD_SERIALIZE_POLICY: Enforce write IO is not reordered, just used by r= aid1. * @MD_BIO_ALIGN: Bio issued to the array will align to io_opt before spli= t. + * @MD_STACK_IO_OPT: Stack io_opt by member disks. 
* * change UNSUPPORTED_MDDEV_FLAGS for each array type if new flag is added */ @@ -368,6 +369,7 @@ enum mddev_flags { MD_FAILLAST_DEV, MD_SERIALIZE_POLICY, MD_BIO_ALIGN, + MD_STACK_IO_OPT, }; =20 enum mddev_sb_flags { @@ -1041,7 +1043,8 @@ int do_md_run(struct mddev *mddev); #define MDDEV_STACK_INTEGRITY (1u << 0) int mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim, unsigned int flags); -int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev); +int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev, + bool io_opt_configured); void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes); =20 extern const struct block_device_operations md_fops; diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index 1a957dba2640..f3f3086f27fa 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -1944,7 +1944,7 @@ static int raid1_add_disk(struct mddev *mddev, struct= md_rdev *rdev) for (mirror =3D first; mirror <=3D last; mirror++) { p =3D conf->mirrors + mirror; if (!p->rdev) { - err =3D mddev_stack_new_rdev(mddev, rdev); + err =3D mddev_stack_new_rdev(mddev, rdev, false); if (err) return err; =20 diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index 2c6b65b83724..a6edc91e7a9a 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -2139,7 +2139,7 @@ static int raid10_add_disk(struct mddev *mddev, struc= t md_rdev *rdev) continue; } =20 - err =3D mddev_stack_new_rdev(mddev, rdev); + err =3D mddev_stack_new_rdev(mddev, rdev, true); if (err) return err; p->head_position =3D 0; @@ -2157,7 +2157,7 @@ static int raid10_add_disk(struct mddev *mddev, struc= t md_rdev *rdev) clear_bit(In_sync, &rdev->flags); set_bit(Replacement, &rdev->flags); rdev->raid_disk =3D repl_slot; - err =3D mddev_stack_new_rdev(mddev, rdev); + err =3D mddev_stack_new_rdev(mddev, rdev, true); if (err) return err; conf->fullsync =3D 1; --=20 2.51.0