I've addressed all of Christoph's review comments and rebased this to the
latest nvme-6.11 branch. I also retested everything. The test results can be
seen at:

https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data

Please add this to nvme-6.11.

Changes since V7:

  Broke the changes to nvme_find_path() out into a separate prepare patch.
  This made the patch much more readable.

  Removed the WARN_ON_ONCE in nvme_mpath_end_request() and changed the
  pr_notice message in nvme_subsys_iopolicy_update().

Changes since V6:

  Cleaned up tab formatting in nvme.h and removed extra blank lines.

  Removed the results variable from nvme_mpath_end_request().

Changes since V5:

  Refactored nvme_find_path() to reduce the spaghetti code. Cleaned up all
  comments, reduced the total size of the diff, and fixed the commit message.
  Thomas Song now gets credit as the first author.

Changes since V4:

  Removed the atomic_set() and return early if (old_iopolicy == iopolicy)
  at the beginning of nvme_subsys_iopolicy_update().

Changes since V3:

  Addressed all review comments, fixed the commit log, and moved the
  nr_counter initialization from nvme_mpath_init_ctlr() to
  nvme_mpath_init_identify().

Changes since V2:

  Added the NVME_MPATH_CNT_ACTIVE flag to eliminate a READ_ONCE in the
  completion path and increment/decrement the active_nr count on all mpath
  I/Os, including passthru commands.

  Send a pr_notice whenever the iopolicy on a subsystem is changed. This is
  important for support reasons; it is fully expected that users will change
  the iopolicy while I/O is in progress.

  Squashed everything and rebased to nvme-v6.10.

Changes since V1:

  I'm re-issuing Ewan's queue-depth patches in preparation for LSFMM.

  These patches were first shown at ALPSS 2023, where I shared the following
  graphs measuring the I/O distribution across 4 active-optimized controllers
  using the round-robin versus queue-depth iopolicy:

  https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf

  Since that time we have continued testing these patches with a number of
  different nvme-of storage arrays and test bed configurations, and I've
  codified the tests and methods we use to measure I/O distribution. All of
  my test results, together with the scripts I used to generate these graphs,
  are available at:

  https://github.com/johnmeneghini/iopolicy

  Please use the scripts in this repository to do your own testing.

  These patches are based on nvme-v6.9.

John Meneghini (1):
  nvme-multipath: prepare for "queue-depth" iopolicy

Thomas Song (1):
  nvme-multipath: implement "queue-depth" iopolicy

 drivers/nvme/host/core.c      |   2 +-
 drivers/nvme/host/multipath.c | 102 +++++++++++++++++++++++++++++++---
 drivers/nvme/host/nvme.h      |   4 ++
 3 files changed, 99 insertions(+), 9 deletions(-)

-- 
2.45.2
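[Editor's note: to illustrate the idea behind the queue-depth iopolicy, here
is a small, self-contained userspace C sketch. It is not the kernel code from
these patches; all names in it (demo_path, qd_select_path, nr_active as used
here, etc.) are invented for illustration. The concept it demonstrates matches
the cover letter: each path keeps an atomic count of in-flight I/Os that is
incremented at submission and decremented at completion, and the path selector
picks the usable path with the smallest count.]

/*
 * Illustrative sketch of queue-depth path selection. NOT the actual
 * nvme-multipath implementation; names are hypothetical.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_path {
	const char *name;	/* e.g. a controller/path identifier */
	bool live;		/* path is usable (connected, optimized) */
	atomic_int nr_active;	/* outstanding I/Os on this path */
};

/* Pick the live path with the fewest in-flight requests. */
static struct demo_path *qd_select_path(struct demo_path *paths, int n)
{
	struct demo_path *best = NULL;
	int best_depth = INT_MAX;

	for (int i = 0; i < n; i++) {
		if (!paths[i].live)
			continue;
		int depth = atomic_load(&paths[i].nr_active);
		if (depth < best_depth) {
			best_depth = depth;
			best = &paths[i];
		}
	}
	return best;
}

/* Submission bumps the counter; completion drops it. */
static void demo_submit(struct demo_path *p)   { atomic_fetch_add(&p->nr_active, 1); }
static void demo_complete(struct demo_path *p) { atomic_fetch_sub(&p->nr_active, 1); }

int main(void)
{
	struct demo_path paths[] = {
		{ .name = "pathA", .live = true },
		{ .name = "pathB", .live = true },
	};

	/* Load pathA with two in-flight I/Os; the selector then prefers pathB. */
	demo_submit(&paths[0]);
	demo_submit(&paths[0]);
	printf("selected: %s\n", qd_select_path(paths, 2)->name);
	demo_complete(&paths[0]);
	demo_complete(&paths[0]);
	return 0;
}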
On Tue, Jun 25, 2024 at 08:26:03AM -0400, John Meneghini wrote:
> Please add this to nvme-6.11.

This looks good to me! I'll give another day for potential comments/reviews,
but I think this is okay for 6.11.
On Tue, Jun 25, 2024 at 10:37:33AM -0600, Keith Busch wrote:
> On Tue, Jun 25, 2024 at 08:26:03AM -0400, John Meneghini wrote:
> > Please add this to nvme-6.11.
>
> This looks good to me! I'll give another day for potential
> comments/reviews, but I think this is okay for 6.11.

I fixed up the suggestions from Christoph while applying. Thanks, patches are
now in nvme-6.11.