After commit 9631abdbf406 ("md: Set MD_BROKEN for RAID1 and RAID10"),
if the error handler is called on the last rdev in RAID1 or RAID10,
the MD_BROKEN flag will be set on that mddev.
When MD_BROKEN is set, write bios to the md will result in an I/O error.

This causes a problem when using FailFast.
The current implementation of FailFast expects the array to continue
functioning without issues even after md_error has been called for the
last rdev. Furthermore, by design, FailFast may call md_error on every
rdev of the md: even if retrying the I/O on an rdev would succeed,
md_error is called first and the I/O is retried afterwards.
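
For reference, the FailFast write-error path in raid1 behaves roughly
as sketched below (paraphrased from raid1_end_write_request(); a
simplified sketch, not verbatim kernel code): md_error is called first,
and the write is still retried if the rdev did not actually become
Faulty.

        if (bio->bi_status) {
                /* A failed MD_FAILFAST write calls md_error() right away... */
                if (test_bit(FailFast, &rdev->flags) &&
                    (bio->bi_opf & MD_FAILFAST) &&
                    !test_bit(WriteMostly, &rdev->flags))
                        md_error(r1_bio->mddev, rdev);

                /*
                 * ...but if the rdev did not become Faulty, the write is
                 * still queued for a retry (without the FailFast hint).
                 */
                if (!test_bit(Faulty, &rdev->flags))
                        set_bit(R1BIO_WriteError, &r1_bio->state);
        }
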
To fix this issue, this commit ensures that for RAID1 and RAID10, if the
last In_sync rdev has the FailFast flag set and the mddev's fail_last_dev
is off, the MD_BROKEN flag will not be set on that mddev.

This change impacts userspace. After this commit, if the last rdev has
the FailFast flag, the mddev is never marked broken, even when the
failing bio is not a FailFast bio. However, it is unlikely that any
setup using FailFast expects the array to halt when md_error is called
on the last rdev.
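
The md.c hunks below keep the sysfs/ioctl result consistent: even
though MD_BROKEN is no longer set in this case, forcing the last
FailFast rdev faulty still reports EBUSY, because the rdev was not
marked Faulty. A minimal userspace illustration (a sketch only; the
device path is an example):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* Example path; it depends on the actual array layout. */
        const char *state = "/sys/block/md0/md/dev-sda/state";
        int fd = open(state, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Writing "faulty" triggers md_error(); if the last FailFast
         * rdev is not marked Faulty and MD_BROKEN is not set, the
         * write is expected to fail with EBUSY. */
        if (write(fd, "faulty", strlen("faulty")) < 0)
                fprintf(stderr, "state write failed: %s\n", strerror(errno));
        close(fd);
        return 0;
}
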
Since FailFast is only implemented for RAID1 and RAID10, no changes are
needed for other personalities.
Fixes: 9631abdbf406 ("md: Set MD_BROKEN for RAID1 and RAID10")
Suggested-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Kenta Akagi <k@mgml.me>
---
drivers/md/md.c | 6 ++++--
drivers/md/raid1.c | 8 +++++++-
drivers/md/raid10.c | 8 +++++++-
3 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 6062e0deb616..f1745f8921fc 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -3050,7 +3050,8 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
if (cmd_match(buf, "faulty") && rdev->mddev->pers) {
md_error(rdev->mddev, rdev);
- if (test_bit(MD_BROKEN, &rdev->mddev->flags))
+ if (test_bit(MD_BROKEN, &rdev->mddev->flags) ||
+ !test_bit(Faulty, &rdev->flags))
err = -EBUSY;
else
err = 0;
@@ -7915,7 +7916,8 @@ static int set_disk_faulty(struct mddev *mddev, dev_t dev)
err = -ENODEV;
else {
md_error(mddev, rdev);
- if (test_bit(MD_BROKEN, &mddev->flags))
+ if (test_bit(MD_BROKEN, &mddev->flags) ||
+ !test_bit(Faulty, &rdev->flags))
err = -EBUSY;
}
rcu_read_unlock();
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 592a40233004..459b34cd358b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1745,6 +1745,10 @@ static void raid1_status(struct seq_file *seq, struct mddev *mddev)
* - recovery is interrupted.
* - &mddev->degraded is bumped.
*
+ * If the following conditions are met, @mddev never fails:
+ * - The last In_sync @rdev has the &FailFast flag set.
+ * - &mddev->fail_last_dev is off.
+ *
* @rdev is marked as &Faulty excluding case when array is failed and
* &mddev->fail_last_dev is off.
*/
@@ -1757,7 +1761,9 @@ static void raid1_error(struct mddev *mddev, struct md_rdev *rdev)
if (test_bit(In_sync, &rdev->flags) &&
(conf->raid_disks - mddev->degraded) == 1) {
- set_bit(MD_BROKEN, &mddev->flags);
+ if (!test_bit(FailFast, &rdev->flags) ||
+ mddev->fail_last_dev)
+ set_bit(MD_BROKEN, &mddev->flags);
if (!mddev->fail_last_dev) {
conf->recovery_disabled = mddev->recovery_disabled;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 14dcd5142eb4..b33149aa5b29 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1989,6 +1989,10 @@ static int enough(struct r10conf *conf, int ignore)
* - recovery is interrupted.
* - &mddev->degraded is bumped.
*
+ * If the following conditions are met, @mddev never fails:
+ * - The last In_sync @rdev has the &FailFast flag set.
+ * - &mddev->fail_last_dev is off.
+ *
* @rdev is marked as &Faulty excluding case when array is failed and
* &mddev->fail_last_dev is off.
*/
@@ -2000,7 +2004,9 @@ static void raid10_error(struct mddev *mddev, struct md_rdev *rdev)
spin_lock_irqsave(&conf->device_lock, flags);
if (test_bit(In_sync, &rdev->flags) && !enough(conf, rdev->raid_disk)) {
- set_bit(MD_BROKEN, &mddev->flags);
+ if (!test_bit(FailFast, &rdev->flags) ||
+ mddev->fail_last_dev)
+ set_bit(MD_BROKEN, &mddev->flags);
if (!mddev->fail_last_dev) {
spin_unlock_irqrestore(&conf->device_lock, flags);
--
2.50.1
On 2026/1/5 22:40, Kenta Akagi wrote:
> [...]
>
In the current RAID design, when an IO error occurs, RAID ensures faulty
data is not read via the following actions:
1. Mark the badblocks (no FailFast flag); if this fails,
2. Mark the disk as Faulty.
If neither action is taken, and BROKEN is not set to prevent continued RAID
use, errors on the last remaining disk will be ignored. Subsequent reads
may return incorrect data. This seems like a more serious issue in my opinion.
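
Roughly, this ordering corresponds to the raid1 write-error path
sketched below (a paraphrase of handle_write_finished()/
narrow_write_error(); names as in raid1.c, details omitted):

        if (r1_bio->bios[m] != NULL) {  /* this mirror saw a write error */
                if (!narrow_write_error(r1_bio, m))
                        /*
                         * Badblocks could not be recorded, so the only way
                         * left to keep stale data from being read back is
                         * to fail the device.
                         */
                        md_error(conf->mddev, conf->mirrors[m].rdev);
        }
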
In scenarios with a large number of transient IO errors, is FailFast not a
suitable configuration? As you mentioned: "retrying I/O on an rdev would
succeed".
--
Thanks,
Nan
Hi,
Thank you for reviewing.
On 2026/01/06 11:57, Li Nan wrote:
>
>
> On 2026/1/5 22:40, Kenta Akagi wrote:
>> [...]
>
> In the current RAID design, when an IO error occurs, RAID ensures faulty
> data is not read via the following actions:
> 1. Mark the badblocks (no FailFast flag); if this fails,
> 2. Mark the disk as Faulty.
>
> If neither action is taken, and BROKEN is not set to prevent continued RAID
> use, errors on the last remaining disk will be ignored. Subsequent reads
> may return incorrect data. This seems like a more serious issue in my opinion.
I agree that data inconsistency can certainly occur in this scenario.
However, a RAID1 with only one remaining rdev can be considered the same as a plain
disk. From that perspective, I do not believe it is the mandatory responsibility
of md raid to block subsequent writes or to prevent data inconsistency in this situation.
Commit 9631abdbf406 ("md: Set MD_BROKEN for RAID1 and RAID10"), which introduced
BROKEN for RAID1/10, also does not appear to have been made with that responsibility in mind.
>
> In scenarios with a large number of transient IO errors, is FailFast not a
> suitable configuration? As you mentioned: "retrying I/O on an rdev would
You seem to be right about that; using FailFast on an unstable underlying layer is not a good idea.
However, since md raid is the issuer of the FailFast bios,
I believe it is incorrect to shut down the array because a FailFast bio failed.
Thanks,
Akagi
> succeed".
>
> --
> Thanks,
> Nan
>
>
On Tue, Jan 6, 2026 at 8:30 PM Kenta Akagi <k@mgml.me> wrote:
>
> Hi,
> Thank you for reviewing.
>
> On 2026/01/06 11:57, Li Nan wrote:
> > On 2026/1/5 22:40, Kenta Akagi wrote:
> >> [...]
> >
> > In the current RAID design, when an IO error occurs, RAID ensures faulty
> > data is not read via the following actions:
> > 1. Mark the badblocks (no FailFast flag); if this fails,
> > 2. Mark the disk as Faulty.
> >
> > If neither action is taken, and BROKEN is not set to prevent continued RAID
> > use, errors on the last remaining disk will be ignored. Subsequent reads
> > may return incorrect data. This seems like a more serious issue in my opinion.
>
> [...]
>
Hi all
I understand @Li Nan 's point now. The badblock can't be recorded in
this situation and the last working device is not set to faulty. To be
frank, I think consistency of data is more important. Users don't
think it's a single disk, they must think raid1 should guarantee the
consistency. But the write request should return an error when calling
raid1_error for the last working device, right? So there is no
consistency problem?
hi, Kenta. I have a question too. What will you do in your environment
after the network connection works again? Add those disks one by one
to do recovery?
Best Regards
Xiao
>
> Thanks,
> Akagi
>
> > succeed".
> >
> > --
> > Thanks,
> > Nan
> >
> >
>
On 2026/01/07 12:35, Xiao Ni wrote:
> On Tue, Jan 6, 2026 at 8:30 PM Kenta Akagi <k@mgml.me> wrote:
>> [...]
>
> Hi all
>
> I understand @Li Nan 's point now. The badblock can't be recorded in
> this situation and the last working device is not set to faulty. To be
> frank, I think consistency of data is more important. Users don't
> think it's a single disk, they must think raid1 should guarantee the
> consistency. But the write request should return an error when calling
> raid1_error for the last working device, right? So there is no
> consistency problem?
Hi all,
I understand that when md_error is issued for the last remaining rdev,
the array should be stopped except in the FailFast case; also, it is
no longer appropriate to treat a RAID1 array that has lost redundancy
as "just a normal single drive" [1].
I will post a PATCH v7 based on v5.
[1] commit 9631abdbf406 ("md: Set MD_BROKEN for RAID1 and RAID10")
Thanks.
>
> hi, Kenta. I have a question too. What will you do in your environment
> after the network connection works again? Add those disks one by one
> to do recovery?
>
> Best Regards
> Xiao
>
>>
>> Thanks,
>> Akagi
>>
>>> succeed".
>>>
>>> --
>>> Thanks,
>>> Nan
>>>
>>>
>>
>
>
On 2026/1/6 20:30, Kenta Akagi wrote:
> [...]
>
I get your point. Kuai, what's your take on this?
--
Thanks,
Nan
On Tue, Jan 6, 2026 at 10:57 AM Li Nan <linan666@huaweicloud.com> wrote:
>
>
>
> On 2026/1/5 22:40, Kenta Akagi wrote:
> > [...]
>
> In the current RAID design, when an IO error occurs, RAID ensures faulty
> data is not read via the following actions:
> 1. Mark the badblocks (no FailFast flag); if this fails,
> 2. Mark the disk as Faulty.
>
> If neither action is taken, and BROKEN is not set to prevent continued RAID
> use, errors on the last remaining disk will be ignored. Subsequent reads
> may return incorrect data. This seems like a more serious issue in my opinion.
>
> In scenarios with a large number of transient IO errors, is FailFast not a
> suitable configuration? As you mentioned: "retrying I/O on an rdev would
> succeed".
Hi Nan
According to my understanding, the policy here is to try to keep raid
work if io error happens on the last device. It doesn't set faulty on
the last in_sync device. It only sets MD_BROKEN to forbid write
requests. But it still can read data from the last device.
static void raid1_error(struct mddev *mddev, struct md_rdev *rdev)
{
        if (test_bit(In_sync, &rdev->flags) &&
            (conf->raid_disks - mddev->degraded) == 1) {
                set_bit(MD_BROKEN, &mddev->flags);

                if (!mddev->fail_last_dev) {
                        return; // return directly here
                }

static void md_submit_bio(struct bio *bio)
{
        if (unlikely(test_bit(MD_BROKEN, &mddev->flags)) && (rw == WRITE)) {
                bio_io_error(bio);
                return;
        }
Read requests can still be submitted to the last working device, right?
Best Regards
Xiao
>
> --
> Thanks,
> Nan
>
On 2026/1/6 15:59, Xiao Ni wrote:
> On Tue, Jan 6, 2026 at 10:57 AM Li Nan <linan666@huaweicloud.com> wrote:
>> [...]
>
> Hi Nan
>
> According to my understanding, the policy here is to try to keep raid
> work if io error happens on the last device. It doesn't set faulty on
> the last in_sync device. It only sets MD_BROKEN to forbid write
> requests. But it still can read data from the last device.
>
> [...]
>
> Read requests can still be submitted to the last working device, right?
>
> Best Regards
> Xiao
>
Yeah, after MD_BROKEN is set, read are forbidden but writes remain allowed.
IMO we preserve the RAID array in this state to enable users to retrieve
stored data, not to continue using it. However, continued writes to the
array will cause subsequent errors to fail to be logged, either due to
failfast or the badblocks being full. Read errors have no impact as they do
not damage the original data.
--
Thanks,
Nan
On Tue, Jan 6, 2026 at 5:11 PM Li Nan <linan666@huaweicloud.com> wrote:
>
> [...]
>
> Yeah, after MD_BROKEN is set, read are forbidden but writes remain allowed.
Hmm, reverse way? Write requests are forbidden and read requests are
allowed now. If MD_BROKEN is set, write requests return directly after
bio_io_error.
Regards
Xiao
> IMO we preserve the RAID array in this state to enable users to retrieve
> stored data, not to continue using it. However, continued writes to the
> array will cause subsequent errors to fail to be logged, either due to
> failfast or the badblocks being full. Read errors have no impact as they do
> not damage the original data.
>
> --
> Thanks,
> Nan
>
On 2026/1/6 17:25, Xiao Ni wrote:
> [...]
>
>> Yeah, after MD_BROKEN is set, read are forbidden but writes remain allowed.
>
> Hmm, reverse way? Write requests are forbidden and read requests are
> allowed now. If MD_BROKEN is set, write requests return directly after
> bio_io_error.
>
> Regards
> Xiao
>
Apologies for the typo... The rest of the content was written with this
exact meaning in mind.
>> IMO we preserve the RAID array in this state to enable users to retrieve
>> stored data, not to continue using it. However, continued writes to the
>> array will cause subsequent errors to fail to be logged, either due to
>> failfast or the badblocks being full. Read errors have no impact as they do
>> not damage the original data.
>>
--
Thanks,
Nan