When using device-mapper's dm-raid target, stopping a RAID array can cause the
system to hang under specific conditions.
This occurs when:
- A dm-raid-managed device tree is suspended from top to bottom
(the top-level RAID device is suspended first, followed by its
underlying metadata and data devices)
- The top-level RAID device is then removed
Removing the top-level device then triggers the hang: the dm-raid
destructor calls md_stop(), which tries to flush the write-intent bitmap by
writing to the metadata sub-devices. Because those devices are already
suspended, the bitmap writes can never complete and md_stop() blocks
indefinitely.
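
For reference, the call chain that ends up blocking looks roughly like this
(abridged sketch of drivers/md/dm-raid.c and drivers/md/md.c; locking details,
error handling and unrelated teardown are omitted and may differ between
kernel versions):

    /* dm-raid destructor, drivers/md/dm-raid.c (abridged) */
    static void raid_dtr(struct dm_target *ti)
    {
            struct raid_set *rs = ti->private;

            mddev_lock_nointr(&rs->md);
            md_stop(&rs->md);       /* teardown while sub-devices are suspended */
            mddev_unlock(&rs->md);

            /* remaining raid_set teardown omitted */
    }

    /* drivers/md/md.c (abridged) */
    void md_stop(struct mddev *mddev)
    {
            /*
             * __md_stop_writes() used to quiesce and flush the write-intent
             * bitmap unconditionally; the bitmap writes target the suspended
             * metadata sub-devices and therefore never complete.
             */
            __md_stop_writes(mddev);
            __md_stop(mddev);
    }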
Fix:
- Skip the write-intent bitmap flush when md_stop() is called from the
  dm-raid destructor context, and also avoid the quiesce/unquiesce cycle
  there, which could likewise issue I/O
- Still flush the write-intent bitmap when called from the dm-raid suspend
  context
This ensures that RAID array teardown can complete successfully even when the
underlying devices are in a suspended state.
This second patch uses md_is_rdwr() to distinguish the suspend path from
the destructor path, as described above.
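
For context, the two predicates used by the new condition are (paraphrased
from drivers/md/md.h; see the header for the exact definitions):

    static inline bool md_is_rdwr(struct mddev *mddev)
    {
            return (mddev->ro == MD_RDWR);
    }

    static inline bool mddev_is_dm(struct mddev *mddev)
    {
            return !mddev->gendisk;
    }

The intent is that on the dm-raid suspend path the array is still read-write,
so md_is_rdwr() is true and the quiesce/flush still runs, while by the time
the destructor calls md_stop() the array is no longer MD_RDWR and the flush
is skipped. Arrays not managed through device-mapper keep the previous
behaviour via !mddev_is_dm().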
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
---
drivers/md/md.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4e033c26fdd4..78408d2f78fc 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6541,12 +6541,14 @@ static void __md_stop_writes(struct mddev *mddev)
 {
         timer_delete_sync(&mddev->safemode_timer);
 
-        if (mddev->pers && mddev->pers->quiesce) {
-                mddev->pers->quiesce(mddev, 1);
-                mddev->pers->quiesce(mddev, 0);
-        }
+        if (md_is_rdwr(mddev) || !mddev_is_dm(mddev)) {
+                if (mddev->pers && mddev->pers->quiesce) {
+                        mddev->pers->quiesce(mddev, 1);
+                        mddev->pers->quiesce(mddev, 0);
+                }
 
-        mddev->bitmap_ops->flush(mddev);
+                mddev->bitmap_ops->flush(mddev);
+        }
 
         if (md_is_rdwr(mddev) &&
             ((!mddev->in_sync && !mddev_is_clustered(mddev)) ||
--
2.51.0