[PATCH] nvme: fix deadlock between reset and scan

Posted by Bitao Hu 2 years ago
If a controller reset occurs while a namespace is being allocated, both
nvme_reset_work and nvme_scan_work will hang, as shown below.

Test Scripts:

    for ((t=1;t<=128;t++))
    do
    nsid=`nvme create-ns /dev/nvme1 -s 14537724 -c 14537724 -f 0 -m 0 \
    -d 0 | awk -F: '{print($NF);}'`
    nvme attach-ns /dev/nvme1 -n $nsid -c 0
    done
    nvme reset /dev/nvme1

We find that both nvme_reset_work and nvme_scan_work are hung:

    INFO: task kworker/u249:4:17848 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:kworker/u249:4  state:D stack:    0 pid:17848 ppid:     2
    flags:0x00000028
    Workqueue: nvme-reset-wq nvme_reset_work [nvme]
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    blk_mq_freeze_queue_wait+0x84/0xc0
    nvme_wait_freeze+0x40/0x64 [nvme_core]
    nvme_reset_work+0x1c0/0x5cc [nvme]
    process_one_work+0x1d8/0x4b0
    worker_thread+0x230/0x440
    kthread+0x114/0x120
    INFO: task kworker/u249:3:22404 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:kworker/u249:3  state:D stack:    0 pid:22404 ppid:     2
    flags:0x00000028
    Workqueue: nvme-wq nvme_scan_work [nvme_core]
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    rwsem_down_write_slowpath+0x32c/0x98c
    down_write+0x70/0x80
    nvme_alloc_ns+0x1ac/0x38c [nvme_core]
    nvme_validate_or_alloc_ns+0xbc/0x150 [nvme_core]
    nvme_scan_ns_list+0xe8/0x2e4 [nvme_core]
    nvme_scan_work+0x60/0x500 [nvme_core]
    process_one_work+0x1d8/0x4b0
    worker_thread+0x260/0x440
    kthread+0x114/0x120
    INFO: task nvme:28428 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:nvme            state:D stack:    0 pid:28428 ppid: 27119
    flags:0x00000000
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    schedule_timeout+0x160/0x194
    do_wait_for_common+0xac/0x1d0
    __wait_for_common+0x78/0x100
    wait_for_completion+0x24/0x30
    __flush_work.isra.0+0x74/0x90
    flush_work+0x14/0x20
    nvme_reset_ctrl_sync+0x50/0x74 [nvme_core]
    nvme_dev_ioctl+0x1b0/0x250 [nvme_core]
    __arm64_sys_ioctl+0xa8/0xf0
    el0_svc_common+0x88/0x234
    do_el0_svc+0x7c/0x90
    el0_svc+0x1c/0x30
    el0_sync_handler+0xa8/0xb0
    el0_sync+0x148/0x180

The hang occurs because nvme_reset_work runs while nvme_scan_work is
still running. nvme_scan_work may add a new ns to the ctrl->namespaces
list after nvme_reset_work has frozen every ns->q already on that list.
The newly added ns is not frozen, so nvme_wait_freeze waits forever.
Meanwhile, ctrl->namespaces_rwsem is held by nvme_reset_work, so
nvme_scan_work also waits forever. Now we are deadlocked!

PROCESS1                         PROCESS2
==============                   ==============
nvme_scan_work
  ...                            nvme_reset_work
  nvme_validate_or_alloc_ns        nvme_dev_disable
    nvme_alloc_ns                    nvme_start_freeze
     down_write                      ...
     nvme_ns_add_to_ctrl_list        ...
     up_write                        nvme_wait_freeze
    ...                                down_read
    nvme_alloc_ns                      blk_mq_freeze_queue_wait
     down_write

Fix this by checking whether ctrl->state is NVME_CTRL_LIVE before adding
a new ns to ctrl->namespaces.
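The deadlock above can also be reproduced in a userspace model. The sketch below is hedged Python, not kernel code: toy_reset_work and toy_scan_work are illustrative names, a plain lock approximates ctrl->namespaces_rwsem, and the never-set event stands in for the freeze completion that cannot arrive because one ns was added unfrozen.

```python
import threading
import time

namespaces_lock = threading.Lock()        # stands in for ctrl->namespaces_rwsem
all_queues_frozen = threading.Event()     # never set: one ns->q was never frozen

def toy_reset_work():
    # nvme_wait_freeze(): holds the rwsem while waiting for every queue to
    # finish freezing, but the ns added after nvme_start_freeze never does.
    with namespaces_lock:
        all_queues_frozen.wait()          # blocks forever

def toy_scan_work():
    # nvme_alloc_ns(): needs the rwsem (down_write) to add the next ns.
    with namespaces_lock:
        pass                              # never reached

reset = threading.Thread(target=toy_reset_work, daemon=True)
scan = threading.Thread(target=toy_scan_work, daemon=True)
reset.start()
time.sleep(0.2)                           # let reset take the lock first
scan.start()
reset.join(timeout=0.5)
scan.join(timeout=0.5)
print(reset.is_alive(), scan.is_alive())  # True True: both tasks are stuck
```

Both joins time out, mirroring the two hung kworkers in the traces above.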

Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
---
 drivers/nvme/host/core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 62612f8..7551b55 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3631,6 +3631,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 		goto out_unlink_ns;
 
 	down_write(&ctrl->namespaces_rwsem);
+	/* preventing adding ns during resetting */
+	if (unlikely(ctrl->state != NVME_CTRL_LIVE)) {
+		up_write(&ctrl->namespaces_rwsem);
+		goto out_unlink_ns;
+	}
 	nvme_ns_add_to_ctrl_list(ns);
 	up_write(&ctrl->namespaces_rwsem);
 	nvme_get_ctrl(ctrl);
-- 
1.8.3.1
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by Keith Busch 2 years ago
On Thu, Nov 23, 2023 at 07:00:13PM +0800, Bitao Hu wrote:
> @@ -3631,6 +3631,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
>  		goto out_unlink_ns;
>  
>  	down_write(&ctrl->namespaces_rwsem);
> +	/* preventing adding ns during resetting */
> +	if (unlikely(ctrl->state != NVME_CTRL_LIVE)) {

We can't rely on ctrl->state for preventing deadlocks. Reading unlocked
ctrl->state is often used, but should be considered advisory-only since
the state could change immediately after reading it.

> +		up_write(&ctrl->namespaces_rwsem);
> +		goto out_unlink_ns;
> +	}
>  	nvme_ns_add_to_ctrl_list(ns);
>  	up_write(&ctrl->namespaces_rwsem);
>  	nvme_get_ctrl(ctrl);
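Keith's point, that an unlocked read of ctrl->state is advisory only, is a classic time-of-check-to-time-of-use hazard. The hedged Python sketch below is purely illustrative (hypothetical names; the two events exist only to force the racy interleaving deterministically):

```python
import threading

state = "LIVE"                    # models ctrl->state, read without a lock
checked = threading.Event()
flipped = threading.Event()
result = {}

def scanner():
    ok = (state == "LIVE")        # time of check: unlocked, advisory only
    checked.set()
    flipped.wait()                # meanwhile another task resets the ctrl
    result["acted_on_live"] = ok  # time of use: the decision is already stale

t = threading.Thread(target=scanner)
t.start()
checked.wait()
state = "RESETTING"               # state flips right after the check
flipped.set()
t.join()
print(result["acted_on_live"], state)  # True RESETTING: the check did not help
```

The check passes, yet by the time it is acted upon the controller is no longer live; only a decision point with a well-defined ordering against the freeze closes the window.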
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by yaoma 2 years ago
Hi Keith Busch

Thanks for your reply.

The idea to avoid such a deadlock between nvme_reset and nvme_scan is to 
ensure that no namespace can be added to ctrl->namespaces after 
nvme_start_freeze has been called. We can achieve this by checking 
ctrl->state after we have acquired the ctrl->namespaces_rwsem lock, to 
decide whether or not to add the namespace to the list.
1. After we determine that ctrl->state is LIVE, it may immediately 
change to another state. However, since we have already acquired the 
lock, other tasks cannot access ctrl->namespaces, so we can still safely 
add the namespace to the list. Once nvme_start_freeze acquires the lock, 
it will freeze every ns->q on the list, including any newly added 
namespaces.
2. Before nvme_reset completes, ctrl->state will not be changed back to 
LIVE, so we will not add any more namespaces to the list. Every ns->q on 
the list is frozen, so nvme_wait_freeze can exit normally.


On 2023/11/28 02:07, Keith Busch wrote:
> On Thu, Nov 23, 2023 at 07:00:13PM +0800, Bitao Hu wrote:
>> @@ -3631,6 +3631,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
>>   		goto out_unlink_ns;
>>   
>>   	down_write(&ctrl->namespaces_rwsem);
>> +	/* preventing adding ns during resetting */
>> +	if (unlikely(ctrl->state != NVME_CTRL_LIVE)) {
> 
> We can't rely on ctrl->state for preventing deadlocks. Reading unlocked
> ctrl->state is often used, but should be considered advisory-only since
> the state could change immediately after reading it.
> 
>> +		up_write(&ctrl->namespaces_rwsem);
>> +		goto out_unlink_ns;
>> +	}
>>   	nvme_ns_add_to_ctrl_list(ns);
>>   	up_write(&ctrl->namespaces_rwsem);
>>   	nvme_get_ctrl(ctrl);
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by Sagi Grimberg 2 years ago

On 11/28/23 08:22, yaoma wrote:
> Hi Keith Busch
> 
> Thanks for your reply.
> 
> The idea to avoid such a deadlock between nvme_reset and nvme_scan is to 
> ensure that no namespace can be added to ctrl->namespaces after 
> nvme_start_freeze has already been called. We can achieve this goal by 
> assessing the ctrl->state after we have already acquired the 
> ctrl->namespaces_rwsem lock, to decide whether to add the namespace to 
> the list or not.
> 1. After we determine that ctrl->state is LIVE, it may be immediately 
> changed to another state. However, since we have already acquired the 
> lock, other tasks cannot access ctrl->namespace, so we can still safely 
> add the namespace to the list. After acquiring the lock, 
> nvme_start_freeze will freeze all ns->q in the list, including any newly 
> added namespaces.
> 2. Before the completion of nvme_reset, ctrl->state will not be changed 
> to LIVE, so we will not add any more namespaces to the list. All ns->q 
> in the list is frozen, so nvme_wait_freeze can exit normally.

I agree with the analysis: there is a window between start_freeze and
freeze_wait in which a scan may add an ns to the ctrl ns list.

However, the fix should be to mark the ctrl with, say, an NVME_CTRL_FROZEN
flag, set in nvme_start_freeze and cleared in nvme_unfreeze (similar
to what we did with quiesce). The scan can then check it before adding
the new namespace (under the namespaces_rwsem).
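Sagi's proposed protocol can be sketched as a userspace model. This is a hedged illustration, not the kernel implementation: ToyCtrl is hypothetical, a plain lock stands in for namespaces_rwsem, and only the ordering matters: set the flag before freezing, clear it on unfreeze, test it under the lock.

```python
import threading

class ToyCtrl:
    def __init__(self):
        self.lock = threading.Lock()  # stands in for ctrl->namespaces_rwsem
        self.frozen = False           # models the NVME_CTRL_FROZEN bit
        self.namespaces = []          # models ctrl->namespaces

    def start_freeze(self):
        self.frozen = True            # set *before* touching the queues
        with self.lock:
            pass                      # freeze every queue already on the list

    def unfreeze(self):
        self.frozen = False           # scanning may add namespaces again

    def alloc_ns(self, nsid):
        with self.lock:
            if self.frozen:           # checked under the lock: no race window
                return False          # bail out instead of deadlocking reset
            self.namespaces.append(nsid)
            return True

ctrl = ToyCtrl()
assert ctrl.alloc_ns(1)       # live controller: ns is added
ctrl.start_freeze()
assert not ctrl.alloc_ns(2)   # frozen: rejected, picked up by a later rescan
ctrl.unfreeze()
assert ctrl.alloc_ns(3)       # live again: adding resumes
print(ctrl.namespaces)        # [1, 3]
```

Because the flag is set before the freeze walks the list and tested under the same lock used for insertion, no ns can slip in between start_freeze and wait_freeze.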
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by yaoma 2 years ago
Hi, Sagi Grimberg

I revised my code following your advice and carried out tests.

Test Scripts:
    for ((t=1;t<=128;t++))
    do
    nsid=`nvme create-ns /dev/nvme0 -s 1453772 -c 1453772 -f 0 \
    -m 0 -d 0 | awk -F: '{print($NF);}'`
    nvme attach-ns /dev/nvme0 -n $nsid -c 0
    done

    echo "resetting"
    nvme reset /dev/nvme0
    lsblk | grep nvme0 | wc -l
    sleep 2
    lsblk | grep nvme0 | wc -l

Results:
	...
	attach-ns: Success, nsid:128
	resetting
	23
	128

After the fix, we will not be deadlocked.

I found a minor issue: in the resetting state, the scan may not recognize 
all ns. But since scan work is queued at the end of reset, the impact is 
not significant; after the reset completes, all ns are eventually 
recognized.

---
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 21783aa2e..e361aba39 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3630,6 +3630,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
                 goto out_unlink_ns;

         down_write(&ctrl->namespaces_rwsem);
+       if (test_bit(NVME_CTRL_FROZEN, &ctrl->flags)) {
+               up_write(&ctrl->namespaces_rwsem);
+               goto out_unlink_ns;
+       }
         nvme_ns_add_to_ctrl_list(ns);
         up_write(&ctrl->namespaces_rwsem);
         nvme_get_ctrl(ctrl);
@@ -4539,6 +4543,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
         list_for_each_entry(ns, &ctrl->namespaces, list)
                 blk_mq_unfreeze_queue(ns->queue);
         up_read(&ctrl->namespaces_rwsem);
+       clear_bit(NVME_CTRL_FROZEN, &ctrl->flags);
  }
  EXPORT_SYMBOL_GPL(nvme_unfreeze);

@@ -4572,6 +4577,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl)
  {
         struct nvme_ns *ns;

+       set_bit(NVME_CTRL_FROZEN, &ctrl->flags);
         down_read(&ctrl->namespaces_rwsem);
         list_for_each_entry(ns, &ctrl->namespaces, list)
                 blk_freeze_queue_start(ns->queue);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f35647c47..755319b0d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -251,6 +251,7 @@ enum nvme_ctrl_flags {
         NVME_CTRL_STOPPED               = 3,
         NVME_CTRL_SKIP_ID_CNS_CS        = 4,
         NVME_CTRL_DIRTY_CAPABILITY      = 5,
+       NVME_CTRL_FROZEN                = 6,
  };

  struct nvme_ctrl {
--

On 2023/11/28 18:13, Sagi Grimberg wrote:
> 
> 
> On 11/28/23 08:22, yaoma wrote:
>> Hi Keith Busch
>>
>> Thanks for your reply.
>>
>> The idea to avoid such a deadlock between nvme_reset and nvme_scan is 
>> to ensure that no namespace can be added to ctrl->namespaces after 
>> nvme_start_freeze has already been called. We can achieve this goal by 
>> assessing the ctrl->state after we have already acquired the 
>> ctrl->namespaces_rwsem lock, to decide whether to add the namespace to 
>> the list or not.
>> 1. After we determine that ctrl->state is LIVE, it may be immediately 
>> changed to another state. However, since we have already acquired the 
>> lock, other tasks cannot access ctrl->namespace, so we can still 
>> safely add the namespace to the list. After acquiring the lock, 
>> nvme_start_freeze will freeze all ns->q in the list, including any 
>> newly added namespaces.
>> 2. Before the completion of nvme_reset, ctrl->state will not be 
>> changed to LIVE, so we will not add any more namespaces to the list. 
>> All ns->q in the list is frozen, so nvme_wait_freeze can exit normally.
> 
> I agree with the analysis, there is a hole between start_freeze and
> freeze_wait that a scan may add a ns to the ctrl ns list.
> 
> However the fix should be to mark the ctrl with say NVME_CTRL_FROZEN
> flag set in nvme_freeze_start and cleared in nvme_unfreeze (similar
> to what we did with quiesce). Then the scan can check it before adding
> the new namespace (under the namespaces_rwsem).
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by yaoma 2 years ago

On 2023/11/28 18:13, Sagi Grimberg wrote:
> 
> 
> On 11/28/23 08:22, yaoma wrote:
>> Hi Keith Busch
>>
>> Thanks for your reply.
>>
>> The idea to avoid such a deadlock between nvme_reset and nvme_scan is 
>> to ensure that no namespace can be added to ctrl->namespaces after 
>> nvme_start_freeze has already been called. We can achieve this goal by 
>> assessing the ctrl->state after we have already acquired the 
>> ctrl->namespaces_rwsem lock, to decide whether to add the namespace to 
>> the list or not.
>> 1. After we determine that ctrl->state is LIVE, it may be immediately 
>> changed to another state. However, since we have already acquired the 
>> lock, other tasks cannot access ctrl->namespace, so we can still 
>> safely add the namespace to the list. After acquiring the lock, 
>> nvme_start_freeze will freeze all ns->q in the list, including any 
>> newly added namespaces.
>> 2. Before the completion of nvme_reset, ctrl->state will not be 
>> changed to LIVE, so we will not add any more namespaces to the list. 
>> All ns->q in the list is frozen, so nvme_wait_freeze can exit normally.
> 
> I agree with the analysis, there is a hole between start_freeze and
> freeze_wait that a scan may add a ns to the ctrl ns list.
> 
I think your proposal is nice, and I will test it.
> However the fix should be to mark the ctrl with say NVME_CTRL_FROZEN
> flag set in nvme_freeze_start and cleared in nvme_unfreeze (similar
> to what we did with quiesce). Then the scan can check it before adding
> the new namespace (under the namespaces_rwsem).
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by Keith Busch 2 years ago
On Tue, Nov 28, 2023 at 12:13:59PM +0200, Sagi Grimberg wrote:
> 
> 
> On 11/28/23 08:22, yaoma wrote:
> > Hi Keith Busch
> > 
> > Thanks for your reply.
> > 
> > The idea to avoid such a deadlock between nvme_reset and nvme_scan is to
> > ensure that no namespace can be added to ctrl->namespaces after
> > nvme_start_freeze has already been called. We can achieve this goal by
> > assessing the ctrl->state after we have already acquired the
> > ctrl->namespaces_rwsem lock, to decide whether to add the namespace to
> > the list or not.
> > 1. After we determine that ctrl->state is LIVE, it may be immediately
> > changed to another state. However, since we have already acquired the
> > lock, other tasks cannot access ctrl->namespace, so we can still safely
> > add the namespace to the list. After acquiring the lock,
> > nvme_start_freeze will freeze all ns->q in the list, including any newly
> > added namespaces.
> > 2. Before the completion of nvme_reset, ctrl->state will not be changed
> > to LIVE, so we will not add any more namespaces to the list. All ns->q
> > in the list is frozen, so nvme_wait_freeze can exit normally.
> 
> I agree with the analysis, there is a hole between start_freeze and
> freeze_wait that a scan may add a ns to the ctrl ns list.
> 
> However the fix should be to mark the ctrl with say NVME_CTRL_FROZEN
> flag set in nvme_freeze_start and cleared in nvme_unfreeze (similar
> to what we did with quiesce). Then the scan can check it before adding
> the new namespace (under the namespaces_rwsem).

Could we just make sure that scan_work isn't running? If we reset a live
controller, then we're not depending on reset_work to unblock scan_work,
and can let scan_work end gracefully. The scan_work can't be rescheduled
again while in the resetting state.

---
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fad4cccce745c..5d6305475bad5 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2701,8 +2701,10 @@ static void nvme_reset_work(struct work_struct *work)
         * If we're called to reset a live controller first shut it down before
         * moving on.
         */
-       if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
+       if (dev->ctrl.ctrl_config & NVME_CC_ENABLE) {
+               flush_work(&dev->ctrl.scan_work);
                nvme_dev_disable(dev, false);
+       }
        nvme_sync_queues(&dev->ctrl);

        mutex_lock(&dev->shutdown_lock);
--
Re: [PATCH] nvme: fix deadlock between reset and scan
Posted by yaoma 2 years ago
I have previously tried the method you proposed, and it does solve the 
deadlock issue. My worry is that if an I/O timeout occurs during the 
scan, it will trigger a reset; the reset would then wait for the scan to 
end, which could introduce a new risk of deadlock. I prefer the 
suggestion made by Sagi Grimberg, as that approach does not introduce 
new problems.
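The worry can be made concrete with another hedged userspace model (illustrative names; joining the scan thread stands in for flush_work): if the scan is blocked on an I/O that only the reset can recover, a reset that first flushes scan_work enters a circular wait.

```python
import threading

io_recovered = threading.Event()   # set only once reset disables the device

def toy_scan_work():
    io_recovered.wait()            # scan stuck on a timed-out I/O

def toy_reset_work(scan_thread):
    scan_thread.join()             # models flush_work(&ctrl->scan_work)
    io_recovered.set()             # recovery would happen here: never reached

scan = threading.Thread(target=toy_scan_work, daemon=True)
reset = threading.Thread(target=toy_reset_work, args=(scan,), daemon=True)
scan.start()
reset.start()
scan.join(timeout=0.5)
reset.join(timeout=0.5)
print(scan.is_alive(), reset.is_alive())  # True True: circular wait
```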

On 2023/11/29 02:00, Keith Busch wrote:
> On Tue, Nov 28, 2023 at 12:13:59PM +0200, Sagi Grimberg wrote:
>>
>>
>> On 11/28/23 08:22, yaoma wrote:
>>> Hi Keith Busch
>>>
>>> Thanks for your reply.
>>>
>>> The idea to avoid such a deadlock between nvme_reset and nvme_scan is to
>>> ensure that no namespace can be added to ctrl->namespaces after
>>> nvme_start_freeze has already been called. We can achieve this goal by
>>> assessing the ctrl->state after we have already acquired the
>>> ctrl->namespaces_rwsem lock, to decide whether to add the namespace to
>>> the list or not.
>>> 1. After we determine that ctrl->state is LIVE, it may be immediately
>>> changed to another state. However, since we have already acquired the
>>> lock, other tasks cannot access ctrl->namespace, so we can still safely
>>> add the namespace to the list. After acquiring the lock,
>>> nvme_start_freeze will freeze all ns->q in the list, including any newly
>>> added namespaces.
>>> 2. Before the completion of nvme_reset, ctrl->state will not be changed
>>> to LIVE, so we will not add any more namespaces to the list. All ns->q
>>> in the list is frozen, so nvme_wait_freeze can exit normally.
>>
>> I agree with the analysis, there is a hole between start_freeze and
>> freeze_wait that a scan may add a ns to the ctrl ns list.
>>
>> However the fix should be to mark the ctrl with say NVME_CTRL_FROZEN
>> flag set in nvme_freeze_start and cleared in nvme_unfreeze (similar
>> to what we did with quiesce). Then the scan can check it before adding
>> the new namespace (under the namespaces_rwsem).
> 
> Could we just make sure that scan_work isn't running? If we reset a live
> controller, then we're not depending on reset_work to unblock scan_work,
> and can let scan_work end gracefully. The scan_work can't be rescheduled
> again while in the resetting state.
> 
> ---
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index fad4cccce745c..5d6305475bad5 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2701,8 +2701,10 @@ static void nvme_reset_work(struct work_struct *work)
>           * If we're called to reset a live controller first shut it down before
>           * moving on.
>           */
> -       if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
> +       if (dev->ctrl.ctrl_config & NVME_CC_ENABLE) {
> +               flush_work(&dev->ctrl.scan_work);
>                  nvme_dev_disable(dev, false);
> +       }
>          nvme_sync_queues(&dev->ctrl);
> 
>          mutex_lock(&dev->shutdown_lock);
> --
[PATCH v2] nvme: fix deadlock between reset and scan
Posted by Bitao Hu 2 years ago
If a controller reset occurs while a namespace is being allocated, both
nvme_reset_work and nvme_scan_work will hang, as shown below.

Test Scripts:

    for ((t=1;t<=128;t++))
    do
    nsid=`nvme create-ns /dev/nvme1 -s 14537724 -c 14537724 -f 0 -m 0 \
    -d 0 | awk -F: '{print($NF);}'`
    nvme attach-ns /dev/nvme1 -n $nsid -c 0
    done
    nvme reset /dev/nvme1

We find that both nvme_reset_work and nvme_scan_work are hung:

    INFO: task kworker/u249:4:17848 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:kworker/u249:4  state:D stack:    0 pid:17848 ppid:     2
    flags:0x00000028
    Workqueue: nvme-reset-wq nvme_reset_work [nvme]
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    blk_mq_freeze_queue_wait+0x84/0xc0
    nvme_wait_freeze+0x40/0x64 [nvme_core]
    nvme_reset_work+0x1c0/0x5cc [nvme]
    process_one_work+0x1d8/0x4b0
    worker_thread+0x230/0x440
    kthread+0x114/0x120
    INFO: task kworker/u249:3:22404 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:kworker/u249:3  state:D stack:    0 pid:22404 ppid:     2
    flags:0x00000028
    Workqueue: nvme-wq nvme_scan_work [nvme_core]
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    rwsem_down_write_slowpath+0x32c/0x98c
    down_write+0x70/0x80
    nvme_alloc_ns+0x1ac/0x38c [nvme_core]
    nvme_validate_or_alloc_ns+0xbc/0x150 [nvme_core]
    nvme_scan_ns_list+0xe8/0x2e4 [nvme_core]
    nvme_scan_work+0x60/0x500 [nvme_core]
    process_one_work+0x1d8/0x4b0
    worker_thread+0x260/0x440
    kthread+0x114/0x120
    INFO: task nvme:28428 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
    message.
    task:nvme            state:D stack:    0 pid:28428 ppid: 27119
    flags:0x00000000
    Call trace:
    __switch_to+0xb4/0xfc
    __schedule+0x22c/0x670
    schedule+0x4c/0xd0
    schedule_timeout+0x160/0x194
    do_wait_for_common+0xac/0x1d0
    __wait_for_common+0x78/0x100
    wait_for_completion+0x24/0x30
    __flush_work.isra.0+0x74/0x90
    flush_work+0x14/0x20
    nvme_reset_ctrl_sync+0x50/0x74 [nvme_core]
    nvme_dev_ioctl+0x1b0/0x250 [nvme_core]
    __arm64_sys_ioctl+0xa8/0xf0
    el0_svc_common+0x88/0x234
    do_el0_svc+0x7c/0x90
    el0_svc+0x1c/0x30
    el0_sync_handler+0xa8/0xb0
    el0_sync+0x148/0x180

The hang occurs because nvme_reset_work runs while nvme_scan_work is
still running. nvme_scan_work may add a new ns to the ctrl->namespaces
list after nvme_reset_work has frozen every ns->q already on that list.
The newly added ns is not frozen, so nvme_wait_freeze waits forever.
Meanwhile, ctrl->namespaces_rwsem is held by nvme_reset_work, so
nvme_scan_work also waits forever. Now we are deadlocked!

PROCESS1                         PROCESS2
==============                   ==============
nvme_scan_work
  ...                            nvme_reset_work
  nvme_validate_or_alloc_ns        nvme_dev_disable
    nvme_alloc_ns                    nvme_start_freeze
     down_write                      ...
     nvme_ns_add_to_ctrl_list        ...
     up_write                      nvme_wait_freeze
    ...                              down_read
    nvme_alloc_ns                    blk_mq_freeze_queue_wait
     down_write

Fix this by marking the ctrl with an NVME_CTRL_FROZEN flag, set in
nvme_start_freeze and cleared in nvme_unfreeze. The scan then checks it
before adding a new namespace (under the namespaces_rwsem).

Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
---
v1 -> v2:
As per the review comments given by Sagi Grimberg and Keith Busch,
did below changes in v2,
- Add NVME_CTRL_FROZEN nvme_ctrl_flags
- Check ctrl->flags before adding the new namespace (under the namespaces_rwsem), rather than relying on ctrl->state
---
 drivers/nvme/host/core.c | 10 ++++++++++
 drivers/nvme/host/nvme.h |  1 +
 2 files changed, 11 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 62612f8..89181c7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3631,6 +3631,14 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 		goto out_unlink_ns;
 
 	down_write(&ctrl->namespaces_rwsem);
+	/*
+	 * Ensure that no namespaces are added to the ctrl list after the queues
+	 * are frozen, thereby avoiding a deadlock between scan and reset.
+	 */
+	if (test_bit(NVME_CTRL_FROZEN, &ctrl->flags)) {
+		up_write(&ctrl->namespaces_rwsem);
+		goto out_unlink_ns;
+	}
 	nvme_ns_add_to_ctrl_list(ns);
 	up_write(&ctrl->namespaces_rwsem);
 	nvme_get_ctrl(ctrl);
@@ -4540,6 +4548,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
 	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_mq_unfreeze_queue(ns->queue);
 	up_read(&ctrl->namespaces_rwsem);
+	clear_bit(NVME_CTRL_FROZEN, &ctrl->flags);
 }
 EXPORT_SYMBOL_GPL(nvme_unfreeze);
 
@@ -4573,6 +4582,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;
 
+	set_bit(NVME_CTRL_FROZEN, &ctrl->flags);
 	down_read(&ctrl->namespaces_rwsem);
 	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_freeze_queue_start(ns->queue);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 39a90b7..07b57df 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -251,6 +251,7 @@ enum nvme_ctrl_flags {
 	NVME_CTRL_STOPPED		= 3,
 	NVME_CTRL_SKIP_ID_CNS_CS	= 4,
 	NVME_CTRL_DIRTY_CAPABILITY	= 5,
+	NVME_CTRL_FROZEN		= 6,
 };
 
 struct nvme_ctrl {
-- 
1.8.3.1
Re: [PATCH v2] nvme: fix deadlock between reset and scan
Posted by Keith Busch 2 years ago
Thanks, applied to nvme-6.7.
Re: [PATCH v2] nvme: fix deadlock between reset and scan
Posted by Guixin Liu 2 years ago
Looks good to me.

Reviewed-by: Guixin Liu <kanie@linux.alibaba.com>

My thanks for the advice Sagi has given.

On 2023/11/30 10:13, Bitao Hu wrote:
> If controller reset occurs when allocating namespace, both
> nvme_reset_work and nvme_scan_work will hang, as shown below.
>
> Test Scripts:
>
>      for ((t=1;t<=128;t++))
>      do
>      nsid=`nvme create-ns /dev/nvme1 -s 14537724 -c 14537724 -f 0 -m 0 \
>      -d 0 | awk -F: '{print($NF);}'`
>      nvme attach-ns /dev/nvme1 -n $nsid -c 0
>      done
>      nvme reset /dev/nvme1
>
> We will find that both nvme_reset_work and nvme_scan_work hung:
>
>      INFO: task kworker/u249:4:17848 blocked for more than 120 seconds.
>      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>      message.
>      task:kworker/u249:4  state:D stack:    0 pid:17848 ppid:     2
>      flags:0x00000028
>      Workqueue: nvme-reset-wq nvme_reset_work [nvme]
>      Call trace:
>      __switch_to+0xb4/0xfc
>      __schedule+0x22c/0x670
>      schedule+0x4c/0xd0
>      blk_mq_freeze_queue_wait+0x84/0xc0
>      nvme_wait_freeze+0x40/0x64 [nvme_core]
>      nvme_reset_work+0x1c0/0x5cc [nvme]
>      process_one_work+0x1d8/0x4b0
>      worker_thread+0x230/0x440
>      kthread+0x114/0x120
>      INFO: task kworker/u249:3:22404 blocked for more than 120 seconds.
>      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>      message.
>      task:kworker/u249:3  state:D stack:    0 pid:22404 ppid:     2
>      flags:0x00000028
>      Workqueue: nvme-wq nvme_scan_work [nvme_core]
>      Call trace:
>      __switch_to+0xb4/0xfc
>      __schedule+0x22c/0x670
>      schedule+0x4c/0xd0
>      rwsem_down_write_slowpath+0x32c/0x98c
>      down_write+0x70/0x80
>      nvme_alloc_ns+0x1ac/0x38c [nvme_core]
>      nvme_validate_or_alloc_ns+0xbc/0x150 [nvme_core]
>      nvme_scan_ns_list+0xe8/0x2e4 [nvme_core]
>      nvme_scan_work+0x60/0x500 [nvme_core]
>      process_one_work+0x1d8/0x4b0
>      worker_thread+0x260/0x440
>      kthread+0x114/0x120
>      INFO: task nvme:28428 blocked for more than 120 seconds.
>      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>      message.
>      task:nvme            state:D stack:    0 pid:28428 ppid: 27119
>      flags:0x00000000
>      Call trace:
>      __switch_to+0xb4/0xfc
>      __schedule+0x22c/0x670
>      schedule+0x4c/0xd0
>      schedule_timeout+0x160/0x194
>      do_wait_for_common+0xac/0x1d0
>      __wait_for_common+0x78/0x100
>      wait_for_completion+0x24/0x30
>      __flush_work.isra.0+0x74/0x90
>      flush_work+0x14/0x20
>      nvme_reset_ctrl_sync+0x50/0x74 [nvme_core]
>      nvme_dev_ioctl+0x1b0/0x250 [nvme_core]
>      __arm64_sys_ioctl+0xa8/0xf0
>      el0_svc_common+0x88/0x234
>      do_el0_svc+0x7c/0x90
>      el0_svc+0x1c/0x30
>      el0_sync_handler+0xa8/0xb0
>      el0_sync+0x148/0x180
>
> The reason for the hang is that nvme_reset_work occurs while nvme_scan_work
> is still running. nvme_scan_work may add new ns into ctrl->namespaces
> list after nvme_reset_work frozen all ns->q in ctrl->namespaces list.
> The newly added ns is not frozen, so nvme_wait_freeze will wait forever.
> Unfortunately, ctrl->namespaces_rwsem is held by nvme_reset_work, so
> nvme_scan_work will also wait forever. Now we are deadlocked!
>
> PROCESS1                         PROCESS2
> ==============                   ==============
> nvme_scan_work
>    ...                            nvme_reset_work
>    nvme_validate_or_alloc_ns        nvme_dev_disable
>      nvme_alloc_ns                    nvme_start_freeze
>       down_write                      ...
>       nvme_ns_add_to_ctrl_list        ...
>       up_write                      nvme_wait_freeze
>      ...                              down_read
>      nvme_alloc_ns                    blk_mq_freeze_queue_wait
>       down_write
>
> Fix by marking the ctrl with say NVME_CTRL_FROZEN flag set in
> nvme_start_freeze and cleared in nvme_unfreeze. Then the scan can check
> it before adding the new namespace (under the namespaces_rwsem).
>
> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
> ---
> v1 -> v2:
> As per the review comments given by Sagi Grimberg and Keith Busch,
> did below changes in v2,
> - Add NVME_CTRL_FROZEN nvme_ctrl_flags
> - Check ctrl->flags before adding the new namespace (under the namespaces_rwsem), rather than rely on ctrl->state
> ---
>   drivers/nvme/host/core.c | 10 ++++++++++
>   drivers/nvme/host/nvme.h |  1 +
>   2 files changed, 11 insertions(+)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 62612f8..89181c7 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3631,6 +3631,14 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
>   		goto out_unlink_ns;
>   
>   	down_write(&ctrl->namespaces_rwsem);
> +	/*
> +	 * Ensure that no namespaces are added to the ctrl list after the queues
> +	 * are frozen, thereby avoiding a deadlock between scan and reset.
> +	 */
> +	if (test_bit(NVME_CTRL_FROZEN, &ctrl->flags)) {
> +		up_write(&ctrl->namespaces_rwsem);
> +		goto out_unlink_ns;
> +	}
>   	nvme_ns_add_to_ctrl_list(ns);
>   	up_write(&ctrl->namespaces_rwsem);
>   	nvme_get_ctrl(ctrl);
> @@ -4540,6 +4548,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
>   	list_for_each_entry(ns, &ctrl->namespaces, list)
>   		blk_mq_unfreeze_queue(ns->queue);
>   	up_read(&ctrl->namespaces_rwsem);
> +	clear_bit(NVME_CTRL_FROZEN, &ctrl->flags);
>   }
>   EXPORT_SYMBOL_GPL(nvme_unfreeze);
>   
> @@ -4573,6 +4582,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl)
>   {
>   	struct nvme_ns *ns;
>   
> +	set_bit(NVME_CTRL_FROZEN, &ctrl->flags);
>   	down_read(&ctrl->namespaces_rwsem);
>   	list_for_each_entry(ns, &ctrl->namespaces, list)
>   		blk_freeze_queue_start(ns->queue);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 39a90b7..07b57df 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -251,6 +251,7 @@ enum nvme_ctrl_flags {
>   	NVME_CTRL_STOPPED		= 3,
>   	NVME_CTRL_SKIP_ID_CNS_CS	= 4,
>   	NVME_CTRL_DIRTY_CAPABILITY	= 5,
> +	NVME_CTRL_FROZEN		= 6,
>   };
>   
>   struct nvme_ctrl {
Re: [PATCH v2] nvme: fix deadlock between reset and scan
Posted by Sagi Grimberg 2 years ago
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Re: [PATCH v2] nvme: fix deadlock between reset and scan
Posted by Christoph Hellwig 2 years ago
Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Re: [PATCH v2] nvme: fix deadlock between reset and scan
Posted by Keith Busch 2 years ago
On Thu, Nov 30, 2023 at 10:13:37AM +0800, Bitao Hu wrote:
> Fix by marking the ctrl with say NVME_CTRL_FROZEN flag set in
> nvme_start_freeze and cleared in nvme_unfreeze. Then the scan can check
> it before adding the new namespace (under the namespaces_rwsem).

Thanks for the detailed explanation. This looks good to me.

Reviewed-by: Keith Busch <kbusch@kernel.org>