I've picked up Hannes' DNR patches. In short, they make the transports behave the
same way when the DNR bit is set on a re-connect attempt. We had a discussion on this
topic in the past, and if I got this right we all agreed that the host should
honor the DNR bit on a connect attempt [1].
The nvme/045 test case (authentication tests) in blktests is a good test case
for this after extending it slightly. TCP and RDMA try to reconnect with an
invalid key over and over again, while loop and FC stop after the first fail.
[1] https://lore.kernel.org/linux-nvme/20220927143157.3659-1-dwagner@suse.de/
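For illustration, the gist of the change can be sketched as a check like the
following (the helper name is made up for this sketch; the actual patches modify
the reconnect handlers in tcp.c and rdma.c):

        /*
         * Sketch only: stop retrying once the connect attempt returned an
         * NVMe status with the DNR ("Do Not Retry") bit set, instead of
         * retrying until max_reconnects is exhausted.
         */
        static bool nvme_should_retry_reconnect(struct nvme_ctrl *ctrl, int status)
        {
                /* positive values are NVMe status codes, negative values are errnos */
                if (status > 0 && (status & NVME_SC_DNR))
                        return false;

                return ctrl->opts->max_reconnects == -1 ||
                       ctrl->nr_reconnects < ctrl->opts->max_reconnects;
        }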
changes:
v3:
- added my SOB tag
- fixed indentation
v2:
- refresh/rebase on current head
- extended blktests (nvme/045) to cover this case
(see separate post)
- https://lore.kernel.org/linux-nvme/20240304161006.19328-1-dwagner@suse.de/
v1:
- initial version
- https://lore.kernel.org/linux-nvme/20210623143250.82445-1-hare@suse.de/
Hannes Reinecke (2):
nvme-tcp: short-circuit reconnect retries
nvme-rdma: short-circuit reconnect retries
drivers/nvme/host/rdma.c | 22 +++++++++++++++-------
drivers/nvme/host/tcp.c | 23 +++++++++++++++--------
2 files changed, 30 insertions(+), 15 deletions(-)
--
2.44.0
On 05/03/2024 10:00, Daniel Wagner wrote:
> I've picked up Hannes' DNR patches. In short the make the transports behave the
> same way when the DNR bit set on a re-connect attempt. We had a discussion this
> topic in the past and if I got this right we all agreed is that the host should
> honor the DNR bit on a connect attempt [1]

Umm, I don't recall this being conclusive though. The spec ought to be
clearer here I think.

> The nvme/045 test case (authentication tests) in blktests is a good test case
> for this after extending it slightly. TCP and RDMA try to reconnect with an
> invalid key over and over again, while loop and FC stop after the first fail.

Who says that invalid key is a permanent failure though?
On 3/7/24 09:00, Sagi Grimberg wrote:
>
> On 05/03/2024 10:00, Daniel Wagner wrote:
>> I've picked up Hannes' DNR patches. In short the make the transports
>> behave the same way when the DNR bit set on a re-connect attempt. We
>> had a discussion this
>> topic in the past and if I got this right we all agreed is that the
>> host should honor the DNR bit on a connect attempt [1]
> Umm, I don't recall this being conclusive though. The spec ought to be
> clearer here I think.

I've asked the NVMexpress fmds group, and the response was pretty
unanimous that the DNR bit on connect should be evaluated.

>>
>> The nvme/045 test case (authentication tests) in blktests is a good
>> test case for this after extending it slightly. TCP and RDMA try to
>> reconnect with an
>> invalid key over and over again, while loop and FC stop after the
>> first fail.
>
> Who says that invalid key is a permanent failure though?
>
See the response to the other patchset.
'Invalid key' in this context means that the _client_ evaluated the
key as invalid, ie the key is unusable for the client.
As the key is passed in via the commandline there is no way the client
can ever change the value here, and no amount of retry will change
things here. That's what we try to fix.

The controller surely can return an invalid key, but that's part of the
authentication protocol and will be evaluated differently.

Cheers,

Hannes
On 07/03/2024 12:37, Hannes Reinecke wrote:
> On 3/7/24 09:00, Sagi Grimberg wrote:
>>
>> On 05/03/2024 10:00, Daniel Wagner wrote:
>>> I've picked up Hannes' DNR patches. In short the make the transports
>>> behave the same way when the DNR bit set on a re-connect attempt. We
>>> had a discussion this
>>> topic in the past and if I got this right we all agreed is that the
>>> host should honor the DNR bit on a connect attempt [1]
>> Umm, I don't recall this being conclusive though. The spec ought to
>> be clearer here I think.
>
> I've asked the NVMexpress fmds group, and the response was pretty
> unanimous that the DNR bit on connect should be evaluated.

OK.

>
>>>
>>> The nvme/045 test case (authentication tests) in blktests is a good
>>> test case for this after extending it slightly. TCP and RDMA try to
>>> reconnect with an
>>> invalid key over and over again, while loop and FC stop after the
>>> first fail.
>>
>> Who says that invalid key is a permanent failure though?
>>
> See the response to the other patchset.
> 'Invalid key' in this context means that the _client_ evaluated the
> key as invalid, ie the key is unusable for the client.
> As the key is passed in via the commandline there is no way the client
> can ever change the value here, and no amount of retry will change
> things here. That's what we try to fix.

Where is this retried today? I don't see where connect failure is
retried, outside of a periodic reconnect.
Maybe I'm missing what the actual failure here is.
On 3/7/24 12:30, Sagi Grimberg wrote:
>
>
> On 07/03/2024 12:37, Hannes Reinecke wrote:
>> On 3/7/24 09:00, Sagi Grimberg wrote:
>>>
>>> On 05/03/2024 10:00, Daniel Wagner wrote:
>>>> I've picked up Hannes' DNR patches. In short the make the transports
>>>> behave the same way when the DNR bit set on a re-connect attempt. We
>>>> had a discussion this
>>>> topic in the past and if I got this right we all agreed is that the
>>>> host should honor the DNR bit on a connect attempt [1]
>>> Umm, I don't recall this being conclusive though. The spec ought to
>>> be clearer here I think.
>>
>> I've asked the NVMexpress fmds group, and the response was pretty
>> unanimous that the DNR bit on connect should be evaluated.
>
> OK.
>
>>
>>>>
>>>> The nvme/045 test case (authentication tests) in blktests is a good
>>>> test case for this after extending it slightly. TCP and RDMA try to
>>>> reconnect with an
>>>> invalid key over and over again, while loop and FC stop after the
>>>> first fail.
>>>
>>> Who says that invalid key is a permanent failure though?
>>>
>> See the response to the other patchset.
>> 'Invalid key' in this context means that the _client_ evaluated the
>> key as invalid, ie the key is unusable for the client.
>> As the key is passed in via the commandline there is no way the client
>> can ever change the value here, and no amount of retry will change
>> things here. That's what we try to fix.
>
> Where is this retried today, I don't see where connect failure is
> retried, outside of a periodic reconnect.
> Maybe I'm missing where what is the actual failure here.
static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
{
        struct nvme_tcp_ctrl *tcp_ctrl = container_of(to_delayed_work(work),
                        struct nvme_tcp_ctrl, connect_work);
        struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;

        ++ctrl->nr_reconnects;

        if (nvme_tcp_setup_ctrl(ctrl, false))
                goto requeue;

        dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
                        ctrl->nr_reconnects);

        ctrl->nr_reconnects = 0;

        return;

requeue:
        dev_info(ctrl->device, "Failed reconnect attempt %d\n",
and nvme_tcp_setup_ctrl() returns either a negative errno or an NVMe
status code (which might include the DNR bit).
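A rough sketch of how the requeue path could make use of that status
(illustrative only; the exact shape may differ from the posted patches):

        ret = nvme_tcp_setup_ctrl(ctrl, false);
        if (ret)
                goto requeue;
        ...
requeue:
        dev_info(ctrl->device, "Failed reconnect attempt %d\n",
                        ctrl->nr_reconnects);
        /*
         * Sketch: pass the status along (a signature change) so the retry
         * decision can honor the DNR bit instead of always requeueing.
         */
        nvme_tcp_reconnect_or_remove(ctrl, ret);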
Cheers,
Hannes
On 07/03/2024 13:45, Hannes Reinecke wrote:
> On 3/7/24 12:30, Sagi Grimberg wrote:
>>
>>
>> On 07/03/2024 12:37, Hannes Reinecke wrote:
>>> On 3/7/24 09:00, Sagi Grimberg wrote:
>>>>
>>>> On 05/03/2024 10:00, Daniel Wagner wrote:
>>>>> I've picked up Hannes' DNR patches. In short the make the
>>>>> transports behave the same way when the DNR bit set on a
>>>>> re-connect attempt. We
>>>>> had a discussion this
>>>>> topic in the past and if I got this right we all agreed is that
>>>>> the host should honor the DNR bit on a connect attempt [1]
>>>> Umm, I don't recall this being conclusive though. The spec ought to
>>>> be clearer here I think.
>>>
>>> I've asked the NVMexpress fmds group, and the response was pretty
>>> unanimous that the DNR bit on connect should be evaluated.
>>
>> OK.
>>
>>>
>>>>>
>>>>> The nvme/045 test case (authentication tests) in blktests is a
>>>>> good test case for this after extending it slightly. TCP and RDMA
>>>>> try to
>>>>> reconnect with an
>>>>> invalid key over and over again, while loop and FC stop after the
>>>>> first fail.
>>>>
>>>> Who says that invalid key is a permanent failure though?
>>>>
>>> See the response to the other patchset.
>>> 'Invalid key' in this context means that the _client_ evaluated the
>>> key as invalid, ie the key is unusable for the client.
>>> As the key is passed in via the commandline there is no way the client
>>> can ever change the value here, and no amount of retry will change
>>> things here. That's what we try to fix.
>>
>> Where is this retried today, I don't see where connect failure is
>> retried, outside of a periodic reconnect.
>> Maybe I'm missing where what is the actual failure here.
>
> static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
> {
> struct nvme_tcp_ctrl *tcp_ctrl =
> container_of(to_delayed_work(work),
> struct nvme_tcp_ctrl, connect_work);
> struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
>
> ++ctrl->nr_reconnects;
>
> if (nvme_tcp_setup_ctrl(ctrl, false))
> goto requeue;
>
> dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
> ctrl->nr_reconnects);
>
> ctrl->nr_reconnects = 0;
>
> return;
>
> requeue:
> dev_info(ctrl->device, "Failed reconnect attempt %d\n",
>
> and nvme_tcp_setup_ctrl() returns either a negative errno or an NVMe
> status code (which might include the DNR bit).
I thought this was about the initialization. Yes, today we ignore the
status in re-connection, assuming that whatever happened may (or may not)
resolve itself. The basis for this assumption is that if we managed to
connect the first time, there is no reason to assume that connecting
again should fail persistently.

If there is a consensus that we should not assume this, that's a valid
argument. I didn't see where this happens with respect to authentication
though.
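For reference, the status-agnostic retry decision described above is roughly
what nvmf_should_reconnect() in drivers/nvme/host/fabrics.c does today: only
the retry counter is consulted, never the status of the failed attempt.

        bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
        {
                /* -1 means "retry forever"; otherwise bound by max_reconnects */
                if (ctrl->opts->max_reconnects == -1 ||
                    ctrl->nr_reconnects < ctrl->opts->max_reconnects)
                        return true;

                return false;
        }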
On 3/7/24 13:14, Sagi Grimberg wrote:
>
>
> On 07/03/2024 13:45, Hannes Reinecke wrote:
>> On 3/7/24 12:30, Sagi Grimberg wrote:
>>>
[ .. ]
>>>
>>> Where is this retried today, I don't see where connect failure is
>>> retried, outside of a periodic reconnect.
>>> Maybe I'm missing where what is the actual failure here.
>>
>> static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
>> {
>> struct nvme_tcp_ctrl *tcp_ctrl =
>> container_of(to_delayed_work(work),
>> struct nvme_tcp_ctrl, connect_work);
>> struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
>>
>> ++ctrl->nr_reconnects;
>>
>> if (nvme_tcp_setup_ctrl(ctrl, false))
>> goto requeue;
>>
>> dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
>> ctrl->nr_reconnects);
>>
>> ctrl->nr_reconnects = 0;
>>
>> return;
>>
>> requeue:
>> dev_info(ctrl->device, "Failed reconnect attempt %d\n",
>>
>> and nvme_tcp_setup_ctrl() returns either a negative errno or an NVMe
>> status code (which might include the DNR bit).
>
> I thought this is about the initialization. yes today we ignore the
> status in re-connection assuming that whatever
> happened, may (or may not) resolve itself. The basis for this assumption
> is that if we managed to connect the first
> time there is no reason to assume that connecting again should fail
> persistently.
>
And that is another issue I'm not really comfortable with.
While it would make sense to have the connect functionality be
one-shot, and let userspace retry if needed, the problem is that we
don't have a means of transporting that information to userspace.
The only thing we can transport is an error number, which could be
anything and mean anything.
If we had a defined way of stating 'this is a retryable error, retry
with the same options' vs 'this is a retryable error, retry with
modified options' vs 'this is a non-retryable error, don't bother',
I'd be fine with delegating retries to userspace. But currently we
don't.
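Purely as a hypothetical sketch of what such a 'defined way' could look like
(nothing like this exists in the kernel today; all names below are made up):

        /* hypothetical: the three verdicts userspace would need to distinguish */
        enum nvme_connect_verdict {
                NVME_CONNECT_RETRY,             /* transient, retry with the same options */
                NVME_CONNECT_RETRY_MODIFIED,    /* retry only with changed options */
                NVME_CONNECT_FATAL,             /* non-retryable, don't bother */
        };

        static enum nvme_connect_verdict nvme_connect_classify(int status)
        {
                if (status < 0)
                        return NVME_CONNECT_RETRY;              /* transport errno, e.g. -ECONNREFUSED */
                if (status & NVME_SC_DNR)
                        return NVME_CONNECT_FATAL;
                if (status == NVME_SC_AUTH_REQUIRED)
                        return NVME_CONNECT_RETRY_MODIFIED;     /* e.g. wrong key */
                return NVME_CONNECT_RETRY;
        }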
> If there is a consensus that we should not assume it, its a valid
> argument. I didn't see where this happens with respect
> to authentication though.
nvmf_connect_admin_queue():

        /* Authentication required */
        ret = nvme_auth_negotiate(ctrl, 0);
        if (ret) {
                dev_warn(ctrl->device,
                         "qid 0: authentication setup failed\n");
                ret = NVME_SC_AUTH_REQUIRED;
                goto out_free_data;
        }
        ret = nvme_auth_wait(ctrl, 0);
        if (ret)
                dev_warn(ctrl->device,
                         "qid 0: authentication failed\n");
        else
                dev_info(ctrl->device,
                         "qid 0: authenticated\n");
The first call to 'nvme_auth_negotiate()' is just for setting up
the negotiation context and starting the protocol. So if we get
an error here it's pretty much non-retryable, as it's completely
controlled by the fabrics options.
nvme_auth_wait(), OTOH, contains the actual result of the negotiation,
so there we might or might not retry, depending on the value of 'ret'.
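One way that distinction could be expressed (a sketch only, assuming it is
acceptable to set DNR on the returned status at this point; this is not
necessarily what the posted patches do):

        ret = nvme_auth_negotiate(ctrl, 0);
        if (ret) {
                dev_warn(ctrl->device,
                         "qid 0: authentication setup failed\n");
                /* depends only on the fabrics options, so never retryable */
                ret = NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
                goto out_free_data;
        }
        ret = nvme_auth_wait(ctrl, 0);
        /* the negotiation result may or may not warrant a retry */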
Cheers,
Hannes
On 07/03/2024 14:52, Hannes Reinecke wrote:
> On 3/7/24 13:14, Sagi Grimberg wrote:
>>
>>
>> On 07/03/2024 13:45, Hannes Reinecke wrote:
>>> On 3/7/24 12:30, Sagi Grimberg wrote:
>>>>
> [ .. ]
>>>>
>>>> Where is this retried today, I don't see where connect failure is
>>>> retried, outside of a periodic reconnect.
>>>> Maybe I'm missing where what is the actual failure here.
>>>
>>> static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
>>> {
>>> struct nvme_tcp_ctrl *tcp_ctrl =
>>> container_of(to_delayed_work(work),
>>> struct nvme_tcp_ctrl, connect_work);
>>> struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
>>>
>>> ++ctrl->nr_reconnects;
>>>
>>> if (nvme_tcp_setup_ctrl(ctrl, false))
>>> goto requeue;
>>>
>>> dev_info(ctrl->device, "Successfully reconnected (%d
>>> attempt)\n",
>>> ctrl->nr_reconnects);
>>>
>>> ctrl->nr_reconnects = 0;
>>>
>>> return;
>>>
>>> requeue:
>>> dev_info(ctrl->device, "Failed reconnect attempt %d\n",
>>>
>>> and nvme_tcp_setup_ctrl() returns either a negative errno or an NVMe
>>> status code (which might include the DNR bit).
>>
>> I thought this is about the initialization. yes today we ignore the
>> status in re-connection assuming that whatever
>> happened, may (or may not) resolve itself. The basis for this
>> assumption is that if we managed to connect the first
>> time there is no reason to assume that connecting again should fail
>> persistently.
>>
> And that is another issue where I'm not really comfortable with.
> While it would make sense to have the connect functionality to be
> one-shot, and let userspace retry if needed, the problem is that we
> don't have a means of transporting that information to userspace.
> The only thing which we can transport is an error number, which
> could be anything and mean anything.
Not necessarily. Error code semantics exist for a reason.

I just really don't think that doing reconnects on a user-driven
initialization is a good idea at all. The case where the controller was
connected and then got disrupted is different: that is not user driven,
and hence retrying makes sense there.
> If we had a defined way stating: 'This is a retryable, retry with the
> same options.' vs 'This is retryable error, retry with modified
> options.' vs 'This a non-retryable error, don't bother.' I'd be
> fine with delegating retries to userspace.
> But currently we don't.
Well, TBH I don't know if userspace even needs it. Most likely what a
user would want is to define a number of retries and give up once they
are exhausted. Adding the intelligence for which connect failures are
retryable and which are not does not seem all that useful to me.
>
>> If there is a consensus that we should not assume it, its a valid
>> argument. I didn't see where this happens with respect
>> to authentication though.
>
> nvmf_connect_admin_queue():
>
> /* Authentication required */
> ret = nvme_auth_negotiate(ctrl, 0);
> if (ret) {
> dev_warn(ctrl->device,
> "qid 0: authentication setup failed\n");
> ret = NVME_SC_AUTH_REQUIRED;
> goto out_free_data;
> }
> ret = nvme_auth_wait(ctrl, 0);
> if (ret)
> dev_warn(ctrl->device,
> "qid 0: authentication failed\n");
> else
> dev_info(ctrl->device,
> "qid 0: authenticated\n");
>
> The first call to 'nvme_auth_negotiate()' is just for setting up
> the negotiation context and start the protocol. So if we get
> an error here it's pretty much non-retryable as it's completely
> controlled by the fabrics options.
> nvme_auth_wait(), OTOH, contains the actual result from the negotiation,
> so there we might or might not retry, depending on the value of 'ret'.
>
> Cheers,
>
> Hannes
>