From: Daniel Wagner
Date: Thu, 10 Apr 2025 13:41:12 +0200
Subject: [PATCH v4 09/14] nvmet-fcloop: allocate/free fcloop_lsreq directly
Message-Id: <20250410-nvmet-fcloop-v4-9-7e5c42b7b2cb@kernel.org>
References: <20250410-nvmet-fcloop-v4-0-7e5c42b7b2cb@kernel.org>
In-Reply-To: <20250410-nvmet-fcloop-v4-0-7e5c42b7b2cb@kernel.org>
To: James Smart, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni
Cc: Hannes Reinecke, Keith Busch, linux-nvme@lists.infradead.org,
    linux-kernel@vger.kernel.org, Daniel Wagner
X-Mailer: b4 0.14.2

fcloop depends on the host or the target to allocate the fcloop_lsreq
object. This means that the lifetime of the fcloop_lsreq is tied to
either the host or the target. Consequently, the host or the target
must cooperate during shutdown.

Unfortunately, this approach does not work well when the target forces
a shutdown, as there are dependencies that are difficult to resolve in
a clean way.

The simplest solution is to decouple the lifetime of the fcloop_lsreq
objects by managing them directly within fcloop.

Since this is not a performance-critical path and only a small number
of LS objects are used during setup and cleanup, it does not
significantly impact performance to allocate them during normal
operation.

Signed-off-by: Daniel Wagner
---
 drivers/nvme/target/fcloop.c | 63 +++++++++++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 18 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 014cc66f92cb1db0a81a79d2109eae3fff5fd38a..47ce51183a66b4b37fc850cced2ddf70be2cdb42 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -293,6 +293,9 @@ struct fcloop_ini_fcpreq {
 	spinlock_t inilock;
 };
 
+/* SLAB cache for fcloop_lsreq structures */
+struct kmem_cache *lsreq_cache;
+
 static inline struct fcloop_lsreq *
 ls_rsp_to_lsreq(struct nvmefc_ls_rsp *lsrsp)
 {
@@ -343,6 +346,7 @@ fcloop_rport_lsrqst_work(struct work_struct *work)
 		 * callee may free memory containing tls_req.
 		 * do not reference lsreq after this.
 		 */
+		kmem_cache_free(lsreq_cache, tls_req);
 
 		spin_lock(&rport->lock);
 	}
@@ -354,10 +358,13 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
 			struct nvme_fc_remote_port *remoteport,
 			struct nvmefc_ls_req *lsreq)
 {
-	struct fcloop_lsreq *tls_req = lsreq->private;
 	struct fcloop_rport *rport = remoteport->private;
+	struct fcloop_lsreq *tls_req;
 	int ret = 0;
 
+	tls_req = kmem_cache_alloc(lsreq_cache, GFP_KERNEL);
+	if (!tls_req)
+		return -ENOMEM;
 	tls_req->lsreq = lsreq;
 	INIT_LIST_HEAD(&tls_req->ls_list);
 
@@ -394,14 +401,17 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
 
 	lsrsp->done(lsrsp);
 
-	if (remoteport) {
-		rport = remoteport->private;
-		spin_lock(&rport->lock);
-		list_add_tail(&tls_req->ls_list, &rport->ls_list);
-		spin_unlock(&rport->lock);
-		queue_work(nvmet_wq, &rport->ls_work);
+	if (!remoteport) {
+		kmem_cache_free(lsreq_cache, tls_req);
+		return 0;
 	}
 
+	rport = remoteport->private;
+	spin_lock(&rport->lock);
+	list_add_tail(&tls_req->ls_list, &rport->ls_list);
+	spin_unlock(&rport->lock);
+	queue_work(nvmet_wq, &rport->ls_work);
+
 	return 0;
 }
 
@@ -427,6 +437,7 @@ fcloop_tport_lsrqst_work(struct work_struct *work)
 		 * callee may free memory containing tls_req.
 		 * do not reference lsreq after this.
 		 */
+		kmem_cache_free(lsreq_cache, tls_req);
 
 		spin_lock(&tport->lock);
 	}
@@ -437,8 +448,8 @@ static int
 fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
 			struct nvmefc_ls_req *lsreq)
 {
-	struct fcloop_lsreq *tls_req = lsreq->private;
 	struct fcloop_tport *tport = targetport->private;
+	struct fcloop_lsreq *tls_req;
 	int ret = 0;
 
 	/*
@@ -446,6 +457,10 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
 	 * hosthandle ignored as fcloop currently is
 	 * 1:1 tgtport vs remoteport
 	 */
+
+	tls_req = kmem_cache_alloc(lsreq_cache, GFP_KERNEL);
+	if (!tls_req)
+		return -ENOMEM;
 	tls_req->lsreq = lsreq;
 	INIT_LIST_HEAD(&tls_req->ls_list);
 
@@ -462,6 +477,9 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
 	ret = nvme_fc_rcv_ls_req(tport->remoteport, &tls_req->ls_rsp,
 				 lsreq->rqstaddr, lsreq->rqstlen);
 
+	if (ret)
+		kmem_cache_free(lsreq_cache, tls_req);
+
 	return ret;
 }
 
@@ -481,14 +499,17 @@ fcloop_t2h_xmt_ls_rsp(struct nvme_fc_local_port *localport,
 				lsreq->rsplen : lsrsp->rsplen));
 	lsrsp->done(lsrsp);
 
-	if (targetport) {
-		tport = targetport->private;
-		spin_lock(&tport->lock);
-		list_add_tail(&tls_req->ls_list, &tport->ls_list);
-		spin_unlock(&tport->lock);
-		queue_work(nvmet_wq, &tport->ls_work);
+	if (!targetport) {
+		kmem_cache_free(lsreq_cache, tls_req);
+		return 0;
 	}
 
+	tport = targetport->private;
+	spin_lock(&tport->lock);
+	list_add_tail(&tls_req->ls_list, &tport->ls_list);
+	spin_unlock(&tport->lock);
+	queue_work(nvmet_wq, &tport->ls_work);
+
 	return 0;
 }
 
@@ -1131,7 +1152,6 @@ static struct nvme_fc_port_template fctemplate = {
 	/* sizes of additional private data for data structures */
 	.local_priv_sz		= sizeof(struct fcloop_lport_priv),
 	.remote_priv_sz		= sizeof(struct fcloop_rport),
-	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
 	.fcprqst_priv_sz	= sizeof(struct fcloop_ini_fcpreq),
 };
 
@@ -1154,7 +1174,6 @@ static struct nvmet_fc_target_template tgttemplate = {
 	.target_features	= 0,
 	/* sizes of additional private data for data structures */
 	.target_priv_sz		= sizeof(struct fcloop_tport),
-	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
 };
 
 static ssize_t
@@ -1674,15 +1693,20 @@ static const struct class fcloop_class = {
 };
 static struct device *fcloop_device;
 
-
 static int __init fcloop_init(void)
 {
 	int ret;
 
+	lsreq_cache = kmem_cache_create("lsreq_cache",
+			sizeof(struct fcloop_lsreq), 0,
+			0, NULL);
+	if (!lsreq_cache)
+		return -ENOMEM;
+
 	ret = class_register(&fcloop_class);
 	if (ret) {
 		pr_err("couldn't register class fcloop\n");
-		return ret;
+		goto out_destroy_cache;
 	}
 
 	fcloop_device = device_create_with_groups(
@@ -1700,6 +1724,8 @@ static int __init fcloop_init(void)
 
 out_destroy_class:
 	class_unregister(&fcloop_class);
+out_destroy_cache:
+	kmem_cache_destroy(lsreq_cache);
 	return ret;
 }
 
@@ -1765,6 +1791,7 @@ static void __exit fcloop_exit(void)
 
 	device_destroy(&fcloop_class, MKDEV(0, 0));
 	class_unregister(&fcloop_class);
+	kmem_cache_destroy(lsreq_cache);
 }
 
 module_init(fcloop_init);
-- 
2.49.0
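
For reference, the pattern the patch adopts is the standard kmem_cache
lifecycle: the module owns the cache, each LS request object is
allocated on entry to the request path and freed on completion or on
the error path, and the cache is destroyed on module exit, mirroring
what fcloop_init()/fcloop_exit() and the LS paths above do with
lsreq_cache. Below is a minimal, self-contained sketch of that
lifecycle; it is illustrative only and uses hypothetical names
(demo_cache, demo_req, demo_submit), not fcloop code.

/*
 * Illustrative sketch (hypothetical names, not fcloop code): the
 * module owns a kmem_cache, objects are allocated per request and
 * freed on completion or on the error path.
 */
#include <linux/module.h>
#include <linux/slab.h>

struct demo_req {
	void *payload;
};

static struct kmem_cache *demo_cache;

static int demo_submit(void)
{
	struct demo_req *req;

	/* allocate per request instead of relying on caller-provided memory */
	req = kmem_cache_alloc(demo_cache, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	/* ... hand off to the completion path, which frees the object ... */
	kmem_cache_free(demo_cache, req);
	return 0;
}

static int __init demo_init(void)
{
	int ret;

	/* the module, not its users, owns the object cache */
	demo_cache = kmem_cache_create("demo_cache",
			sizeof(struct demo_req), 0, 0, NULL);
	if (!demo_cache)
		return -ENOMEM;

	ret = demo_submit();
	if (ret)
		kmem_cache_destroy(demo_cache);
	return ret;
}

static void __exit demo_exit(void)
{
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The error unwinding in demo_init() corresponds to the new
out_destroy_cache label in fcloop_init(): once the cache exists, every
failure path must destroy it before returning.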