From: John Meneghini <jmeneghi@redhat.com>
To: tj@kernel.org, josef@toxicpanda.com, axboe@kernel.dk, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me, emilne@redhat.com, hare@kernel.org
Cc: linux-block@vger.kernel.org, cgroups@vger.kernel.org, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, jmeneghi@redhat.com, jrani@purestorage.com, randyj@purestorage.com
Subject: [PATCH v4 1/6] nvme: multipath: Implemented new iopolicy "queue-depth"
Date: Tue, 14 May 2024 13:53:17 -0400
Message-Id: <20240514175322.19073-2-jmeneghi@redhat.com>
In-Reply-To: <20240514175322.19073-1-jmeneghi@redhat.com>
References: <20240514175322.19073-1-jmeneghi@redhat.com>

From: "Ewan D. Milne" <emilne@redhat.com>

The existing iopolicies are inefficient in some cases, such as in the
presence of a path with high latency. The round-robin policy would use
that path as often as the faster paths, which results in sub-optimal
performance.

The queue-depth policy instead sends I/O requests down the path with
the fewest requests in its request queue. Paths with lower latency will
clear requests more quickly and therefore have fewer requests queued
than "bad" paths. The aim is to use those paths the most, to bring down
overall latency.

This implementation adds an atomic variable to the nvme_ctrl struct to
represent the queue depth. It is updated each time a request specific
to that controller starts or ends.

[edm: patch developed by Thomas Song @ Pure Storage, fixed whitespace
      and compilation warnings, updated MODULE_PARM description, and
      fixed potential issue with ->current_path[] being used]

Tested-by: John Meneghini
Co-developed-by: Thomas Song
Signed-off-by: Thomas Song
Signed-off-by: Ewan D. Milne
---
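A note for reviewers and testers (below the tearline, so it stays out of
the git history): the snippet that follows is a small self-contained
userspace sketch of the selection algorithm, for anyone who wants to
experiment with the tie-breaking behavior before exercising the kernel
side. It mirrors the nvme_queue_depth_path() logic added below, but
every name in it (struct path, queue_depth_path(), the ana_state
values) is an illustrative stand-in rather than kernel API, and a plain
unsigned int takes the place of the kernel's atomic_t nr_active counter.

/*
 * Userspace sketch of the queue-depth path selection in this patch.
 * Build: gcc -o qdpath qdpath.c
 */
#include <limits.h>
#include <stdio.h>

enum ana_state { ANA_OPTIMIZED, ANA_NONOPTIMIZED, ANA_INACCESSIBLE };

struct path {
	const char *name;
	enum ana_state ana_state;
	unsigned int nr_active;		/* in-flight requests on this path */
};

/*
 * Prefer the least-busy optimized path; fall back to the least-busy
 * non-optimized path; skip everything else (e.g. inaccessible paths).
 */
static struct path *queue_depth_path(struct path *p, int n)
{
	struct path *best_opt = NULL, *best_nonopt = NULL;
	unsigned int min_opt = UINT_MAX, min_nonopt = UINT_MAX;

	for (int i = 0; i < n; i++) {
		unsigned int depth = p[i].nr_active;

		switch (p[i].ana_state) {
		case ANA_OPTIMIZED:
			if (depth < min_opt) {
				min_opt = depth;
				best_opt = &p[i];
			}
			break;
		case ANA_NONOPTIMIZED:
			if (depth < min_nonopt) {
				min_nonopt = depth;
				best_nonopt = &p[i];
			}
			break;
		default:
			break;
		}
	}
	return best_opt ? best_opt : best_nonopt;
}

int main(void)
{
	struct path paths[] = {
		{ "busy-optimized",    ANA_OPTIMIZED,    42 },
		{ "quiet-optimized",   ANA_OPTIMIZED,     3 },
		{ "idle-nonoptimized", ANA_NONOPTIMIZED,  0 },
	};

	/*
	 * Prints "quiet-optimized": an optimized path wins even when a
	 * non-optimized path is completely idle.
	 */
	printf("selected: %s\n", queue_depth_path(paths, 3)->name);
	return 0;
}

With the series applied, the new policy should be selectable like the
existing ones, e.g. with nvme_core.iopolicy=queue-depth on the kernel
command line or through each subsystem's iopolicy sysfs attribute.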
 drivers/nvme/host/multipath.c | 59 +++++++++++++++++++++++++++++++++--
 drivers/nvme/host/nvme.h      |  2 ++
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 5397fb428b24..9e36002d0831 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -17,6 +17,7 @@ MODULE_PARM_DESC(multipath,
 static const char *nvme_iopolicy_names[] = {
 	[NVME_IOPOLICY_NUMA]	= "numa",
 	[NVME_IOPOLICY_RR]	= "round-robin",
+	[NVME_IOPOLICY_QD]	= "queue-depth",
 };
 
 static int iopolicy = NVME_IOPOLICY_NUMA;
@@ -29,6 +30,8 @@ static int nvme_set_iopolicy(const char *val, const struct kernel_param *kp)
 		iopolicy = NVME_IOPOLICY_NUMA;
 	else if (!strncmp(val, "round-robin", 11))
 		iopolicy = NVME_IOPOLICY_RR;
+	else if (!strncmp(val, "queue-depth", 11))
+		iopolicy = NVME_IOPOLICY_QD;
 	else
 		return -EINVAL;
 
@@ -43,7 +46,7 @@ static int nvme_get_iopolicy(char *buf, const struct kernel_param *kp)
 module_param_call(iopolicy, nvme_set_iopolicy, nvme_get_iopolicy,
 	&iopolicy, 0644);
 MODULE_PARM_DESC(iopolicy,
-	"Default multipath I/O policy; 'numa' (default) or 'round-robin'");
+	"Default multipath I/O policy; 'numa' (default), 'round-robin' or 'queue-depth'");
 
 void nvme_mpath_default_iopolicy(struct nvme_subsystem *subsys)
 {
@@ -130,6 +133,7 @@ void nvme_mpath_start_request(struct request *rq)
 	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
 		return;
 
+	atomic_inc(&ns->ctrl->nr_active);
 	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
 	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
 					      req_op(rq), jiffies);
@@ -142,6 +146,8 @@ void nvme_mpath_end_request(struct request *rq)
 
 	if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
 		return;
+
+	atomic_dec(&ns->ctrl->nr_active);
 	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
 			 blk_rq_bytes(rq) >> SECTOR_SHIFT,
 			 nvme_req(rq)->start_time);
@@ -330,6 +336,40 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
 	return found;
 }
 
+static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
+{
+	struct nvme_ns *best_opt = NULL, *best_nonopt = NULL, *ns;
+	unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX;
+	unsigned int depth;
+
+	list_for_each_entry_rcu(ns, &head->list, siblings) {
+		if (nvme_path_is_disabled(ns))
+			continue;
+
+		depth = atomic_read(&ns->ctrl->nr_active);
+
+		switch (ns->ana_state) {
+		case NVME_ANA_OPTIMIZED:
+			if (depth < min_depth_opt) {
+				min_depth_opt = depth;
+				best_opt = ns;
+			}
+			break;
+
+		case NVME_ANA_NONOPTIMIZED:
+			if (depth < min_depth_nonopt) {
+				min_depth_nonopt = depth;
+				best_nonopt = ns;
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return best_opt ? best_opt : best_nonopt;
+}
+
 static inline bool nvme_path_is_optimized(struct nvme_ns *ns)
 {
 	return nvme_ctrl_state(ns->ctrl) == NVME_CTRL_LIVE &&
@@ -338,15 +378,27 @@ static inline bool nvme_path_is_optimized(struct nvme_ns *ns)
 
 inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
 {
-	int node = numa_node_id();
+	int iopolicy = READ_ONCE(head->subsys->iopolicy);
+	int node;
 	struct nvme_ns *ns;
 
+	/*
+	 * queue-depth iopolicy does not need to reference ->current_path
+	 * but round-robin needs the last path used to advance to the
+	 * next one, and numa will continue to use the last path unless
+	 * it is or has become not optimized
+	 */
+	if (iopolicy == NVME_IOPOLICY_QD)
+		return nvme_queue_depth_path(head);
+
+	node = numa_node_id();
 	ns = srcu_dereference(head->current_path[node], &head->srcu);
 	if (unlikely(!ns))
 		return __nvme_find_path(head, node);
 
-	if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR)
+	if (iopolicy == NVME_IOPOLICY_RR)
 		return nvme_round_robin_path(head, node, ns);
+
 	if (unlikely(!nvme_path_is_optimized(ns)))
 		return __nvme_find_path(head, node);
 	return ns;
@@ -905,6 +957,7 @@ void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl)
 	mutex_init(&ctrl->ana_lock);
 	timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0);
 	INIT_WORK(&ctrl->ana_work, nvme_ana_work);
+	atomic_set(&ctrl->nr_active, 0);
 }
 
 int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f243a5822c2b..e7d0a56d35d4 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -354,6 +354,7 @@ struct nvme_ctrl {
 	size_t ana_log_size;
 	struct timer_list anatt_timer;
 	struct work_struct ana_work;
+	atomic_t nr_active;
 #endif
 
 #ifdef CONFIG_NVME_HOST_AUTH
@@ -402,6 +403,7 @@ static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
 enum nvme_iopolicy {
 	NVME_IOPOLICY_NUMA,
 	NVME_IOPOLICY_RR,
+	NVME_IOPOLICY_QD,
 };
 
 struct nvme_subsystem {
-- 
2.39.3