[PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP

Posted by Ping Gan 1 year, 5 months ago
When running nvmf on an SMP platform, the current nvme target's RDMA and
TCP transports use a bound workqueue to handle I/O. When there is other
heavy workload on the system (e.g. kubernetes), the competition between
the bound kworkers and that workload is fierce. To reduce this resource
contention, this patchset enables an unbound workqueue for nvmet-rdma
and nvmet-tcp; besides that, it also yields some performance
improvement. This patchset is based on the previous discussion in the
thread below.

https://lore.kernel.org/lkml/20240717005318.109027-1-jacky_gam_2001@163.com/


Ping Gan (2):
  nvmet-tcp: add unbound_wq support for nvmet-tcp
  nvmet-rdma:  add unbound_wq support for nvmet-rdma

 drivers/nvme/target/rdma.c | 10 +++++++++-
 drivers/nvme/target/tcp.c  | 12 ++++++++++--
 2 files changed, 19 insertions(+), 3 deletions(-)

-- 
2.26.2
Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Christoph Hellwig 1 year, 5 months ago
On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
> When running nvmf on an SMP platform, the current nvme target's RDMA and
> TCP transports use a bound workqueue to handle I/O. When there is other
> heavy workload on the system (e.g. kubernetes), the competition between
> the bound kworkers and that workload is fierce. To reduce this resource
> contention, this patchset enables an unbound workqueue for nvmet-rdma
> and nvmet-tcp; besides that, it also yields some performance
> improvement. This patchset is based on the previous discussion in the
> thread below.

So why aren't we using unbound workqueues by default?  Who makes the
policy decision and how does anyone know which one to choose?
Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Sagi Grimberg 1 year, 4 months ago


On 19/07/2024 8:31, Christoph Hellwig wrote:
> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>> When running nvmf on an SMP platform, the current nvme target's RDMA and
>> TCP transports use a bound workqueue to handle I/O. When there is other
>> heavy workload on the system (e.g. kubernetes), the competition between
>> the bound kworkers and that workload is fierce. To reduce this resource
>> contention, this patchset enables an unbound workqueue for nvmet-rdma
>> and nvmet-tcp; besides that, it also yields some performance
>> improvement. This patchset is based on the previous discussion in the
>> thread below.
> So why aren't we using unbound workqueues by default?  Who makes the
> policy decision and how does anyone know which one to choose?
>

The use-case presented is one where CPU resources are shared
between nvmet and other workloads running on the system. The ask is to
prevent nvmet from running I/O threads on specific CPU cores, and vice
versa, to minimize interference.

The decision is made by the administrator, who decides which resources
are dedicated to nvmet vs. other workloads (containers, in this case).

Changing to unbound workqueues universally needs proof that the change
is better in the general case, outside of this specific use-case:
that latency is not affected by having unbound kthreads accessing the
nvme device, the rdma qp, and/or the tcp socket.
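[Editor's note: as a concrete illustration of the administrator-side policy described above, unbound kworkers can already be confined to a subset of CPUs through the system-wide workqueue cpumask; a sketch, where the mask value is an arbitrary example and the write requires root:]

```shell
# Confine all unbound kworkers to CPUs 0-3 (mask 0x0f), leaving the
# remaining CPUs free for the container workload.
echo 0f > /sys/devices/virtual/workqueue/cpumask
```

Bound kworkers ignore this mask, which is why making the nvmet queues unbound matters for this isolation scheme.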
Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Hannes Reinecke 1 year, 5 months ago
On 7/19/24 07:31, Christoph Hellwig wrote:
> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>> When running nvmf on an SMP platform, the current nvme target's RDMA and
>> TCP transports use a bound workqueue to handle I/O. When there is other
>> heavy workload on the system (e.g. kubernetes), the competition between
>> the bound kworkers and that workload is fierce. To reduce this resource
>> contention, this patchset enables an unbound workqueue for nvmet-rdma
>> and nvmet-tcp; besides that, it also yields some performance
>> improvement. This patchset is based on the previous discussion in the
>> thread below.
> 
> So why aren't we using unbound workqueues by default?  Who makes the
> policy decision and how does anyone know which one to choose?
> 
I'd be happy to switch to unbound workqueues by default.
It actually might be a leftover from the various workqueue changes;
at one point 'unbound' meant that effectively only one CPU was used
for the workqueue, and you had to remove the 'unbound' parameter to
have the workqueue run on all CPUs. That has since changed, so I guess
switching to unbound by default is the better option here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Sagi Grimberg 1 year, 4 months ago


On 19/07/2024 9:28, Hannes Reinecke wrote:
> On 7/19/24 07:31, Christoph Hellwig wrote:
>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>> When running nvmf on an SMP platform, the current nvme target's RDMA and
>>> TCP transports use a bound workqueue to handle I/O. When there is other
>>> heavy workload on the system (e.g. kubernetes), the competition between
>>> the bound kworkers and that workload is fierce. To reduce this resource
>>> contention, this patchset enables an unbound workqueue for nvmet-rdma
>>> and nvmet-tcp; besides that, it also yields some performance
>>> improvement. This patchset is based on the previous discussion in the
>>> thread below.
>>
>> So why aren't we using unbound workqueues by default?  Who makes the
>> policy decision and how does anyone know which one to choose?
>>
> I'd be happy to switch to unbound workqueues by default.
> It actually might be a leftover from the various workqueue changes;
> at one point 'unbound' meant that effectively only one CPU was used
> for the workqueue, and you had to remove the 'unbound' parameter to
> have the workqueue run on all CPUs. That has since changed, so I guess
> switching to unbound by default is the better option here.

A guess needs to be backed by supporting data if we want to make this
change.
Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Ping Gan 1 year, 5 months ago
> On 7/19/24 07:31, Christoph Hellwig wrote:
>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>> When running nvmf on an SMP platform, the current nvme target's RDMA and
>>> TCP transports use a bound workqueue to handle I/O. When there is other
>>> heavy workload on the system (e.g. kubernetes), the competition between
>>> the bound kworkers and that workload is fierce. To reduce this resource
>>> contention, this patchset enables an unbound workqueue for nvmet-rdma
>>> and nvmet-tcp; besides that, it also yields some performance
>>> improvement. This patchset is based on the previous discussion in the
>>> thread below.
>> 
>> So why aren't we using unbound workqueues by default?  Who makes the
>> policy decision and how does anyone know which one to choose?
>> 
>> 
> I'd be happy to switch to unbound workqueues by default.
> It actually might be a leftover from the various workqueue changes;
> at one point 'unbound' meant that effectively only one CPU was used
> for the workqueue, and you had to remove the 'unbound' parameter to
> have the workqueue run on all CPUs. That has since changed, so I guess
> switching to unbound by default is the better option here.

I don't fully understand what you mean by 'by default'. Did you mean we
should just remove the 'unbounded' parameter and create the workqueue
with the WQ_UNBOUND flag, or should we also add another parameter to
switch between 'unbounded' and 'bounded' workqueues?

Thanks,
Ping
Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Hannes Reinecke 1 year, 5 months ago
On 7/19/24 10:07, Ping Gan wrote:
>> On 7/19/24 07:31, Christoph Hellwig wrote:
>>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>>> When running nvmf on an SMP platform, the current nvme target's RDMA
>>>> and TCP transports use a bound workqueue to handle I/O. When there is
>>>> other heavy workload on the system (e.g. kubernetes), the competition
>>>> between the bound kworkers and that workload is fierce. To reduce this
>>>> resource contention, this patchset enables an unbound workqueue for
>>>> nvmet-rdma and nvmet-tcp; besides that, it also yields some performance
>>>> improvement. This patchset is based on the previous discussion in the
>>>> thread below.
>>>
>>> So why aren't we using unbound workqueues by default?  Who makes the
>>> policy decision and how does anyone know which one to choose?
>>>
>>>
>> I'd be happy to switch to unbound workqueues by default.
>> It actually might be a leftover from the various workqueue changes;
>> at one point 'unbound' meant that effectively only one CPU was used
>> for the workqueue, and you had to remove the 'unbound' parameter to
>> have the workqueue run on all CPUs. That has since changed, so I guess
>> switching to unbound by default is the better option here.
> 
> I don't fully understand what you mean by 'by default'. Did you mean we
> should just remove the 'unbounded' parameter and create the workqueue
> with the WQ_UNBOUND flag, or should we also add another parameter to
> switch between 'unbounded' and 'bounded' workqueues?
> 
The former. Just remove the 'unbounded' parameter and always use the
'WQ_UNBOUND' flag when creating the workqueues.
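[Editor's note: for nvmet-tcp this suggestion would reduce to allocating the I/O workqueue with WQ_UNBOUND unconditionally, with no module parameter; a sketch against the in-tree nvmet_tcp_wq allocation, as an illustration rather than the actual patch:]

```c
/* nvmet-tcp init: always create the I/O workqueue unbound, so its
 * kworkers are not pinned to the submitting CPU and can be steered
 * away from CPUs dedicated to other workloads. */
nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
		WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);
if (!nvmet_tcp_wq)
	return -ENOMEM;
```

The nvmet-rdma patch would presumably add the flag to its workqueue allocation in the same way.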

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Posted by Ping Gan 1 year, 5 months ago
> On 7/19/24 10:07, Ping Gan wrote:
>>> On 7/19/24 07:31, Christoph Hellwig wrote:
>>>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>>>> When running nvmf on an SMP platform, the current nvme target's RDMA
>>>>> and TCP transports use a bound workqueue to handle I/O. When there is
>>>>> other heavy workload on the system (e.g. kubernetes), the competition
>>>>> between the bound kworkers and that workload is fierce. To reduce this
>>>>> resource contention, this patchset enables an unbound workqueue for
>>>>> nvmet-rdma and nvmet-tcp; besides that, it also yields some
>>>>> performance improvement. This patchset is based on the previous
>>>>> discussion in the thread below.
>>>>
>>>> So why aren't we using unbound workqueues by default?  Who makes the
>>>> policy decision and how does anyone know which one to choose?
>>>>
>>> I'd be happy to switch to unbound workqueues by default.
>>> It actually might be a leftover from the various workqueue changes;
>>> at one point 'unbound' meant that effectively only one CPU was used
>>> for the workqueue, and you had to remove the 'unbound' parameter to
>>> have the workqueue run on all CPUs. That has since changed, so I guess
>>> switching to unbound by default is the better option here.
>> 
>> I don't fully understand what you mean by 'by default'. Did you mean we
>> should just remove the 'unbounded' parameter and create the workqueue
>> with the WQ_UNBOUND flag, or should we also add another parameter to
>> switch between 'unbounded' and 'bounded' workqueues?
>> 
> The former. Just remove the 'unbounded' parameter and always use the
> 'WQ_UNBOUND' flag when creating the workqueues.

Okay, will do in the next version.

Thanks,
Ping