[PATCH v2 0/3] nvme: queue-depth multipath iopolicy

I'm re-issuing Ewan's queue-depth patches in preparation for LSFMM.

These patches were first shown at ALPSS 2023, where I shared the
following graphs, which measure the IO distribution across 4
active-optimized controllers using the round-robin versus the
queue-depth iopolicy.

 https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
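
For anyone who hasn't seen the idea before: rather than rotating
through paths the way round-robin does, the queue-depth policy sends
each I/O down the path with the fewest commands currently in flight.
Below is a minimal userspace sketch of that selection logic. The
names are illustrative, not the patch code; the kernel implementation
walks the sibling paths of a namespace head and tracks in-flight
commands with an atomic per-controller counter (ctrl->nr_active).

  /*
   * Illustrative sketch only -- not the patch code. Pick the path
   * with the fewest outstanding commands, as the queue-depth
   * iopolicy does for a multipath namespace.
   */
  #include <limits.h>
  #include <stdio.h>

  struct path {
          const char *name;
          int nr_active;  /* commands currently in flight */
  };

  static int select_least_busy(const struct path *p, int npaths)
  {
          int best = 0, min_depth = INT_MAX;

          for (int i = 0; i < npaths; i++) {
                  if (p[i].nr_active < min_depth) {
                          min_depth = p[i].nr_active;
                          best = i;
                  }
          }
          return best;
  }

  int main(void)
  {
          /* Four active-optimized controllers, as in the graphs above. */
          struct path paths[] = {
                  { "nvme0c0n1", 12 },
                  { "nvme0c1n1",  3 },
                  { "nvme0c2n1",  7 },
                  { "nvme0c3n1",  5 },
          };
          int sel = select_least_busy(paths, 4);

          printf("submit on %s (depth %d)\n",
                 paths[sel].name, paths[sel].nr_active);
          return 0;
  }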

Since then we have continued testing these patches with a number of
different NVMe-oF storage arrays and test bed configurations, and I've
codified the tests and methods we use to measure IO distribution.

All of my test results, together with the scripts I used to generate these
graphs, are available at:

 https://github.com/johnmeneghini/iopolicy

Please use the scripts in this repository to do your own testing.
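
To try a policy by hand, the iopolicy is selected per subsystem
through sysfs; the subsystem instance below is just an example. With
these patches applied, "queue-depth" becomes a valid value alongside
the existing "numa" and "round-robin":

  echo queue-depth > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

Reading the attribute back shows the currently active policy.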

These patches are based on nvme-v6.9.

Ewan D. Milne (3):
  nvme: multipath: Implemented new iopolicy "queue-depth"
  nvme: multipath: only update ctrl->nr_active when using queue-depth
    iopolicy
  nvme: multipath: Invalidate current_path when changing iopolicy
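
A note on patch 2: the nr_active counter is only consumed by the
queue-depth selector, so the submission and completion hot paths skip
the atomic updates under the other policies. Conceptually (this is an
illustration, not the literal hunk; the enum and field names may
differ in the patches):

  if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD)
          atomic_inc(&ns->ctrl->nr_active);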

 drivers/nvme/host/core.c      |  2 +-
 drivers/nvme/host/multipath.c | 77 +++++++++++++++++++++++++++++++++--
 drivers/nvme/host/nvme.h      |  8 ++++
 3 files changed, 82 insertions(+), 5 deletions(-)

-- 
2.39.3