From nobody Fri Oct 3 14:34:20 2025
From: Daniel Wagner
Date: Fri, 29 Aug 2025 17:43:00 +0200
Subject: [PATCH v2 1/2] nvmet-fc: move lsop put work to nvmet_fc_ls_req_op
Message-Id: <20250829-fix-nvmet-fc-v2-1-26620e2742c7@kernel.org>
References: <20250829-fix-nvmet-fc-v2-0-26620e2742c7@kernel.org>
In-Reply-To: <20250829-fix-nvmet-fc-v2-0-26620e2742c7@kernel.org>
To: James Smart, Christoph Hellwig, Sagi Grimberg, Keith Busch
Cc: Shinichiro Kawasaki, Hannes Reinecke, linux-nvme@lists.infradead.org,
    linux-kernel@vger.kernel.org, Daniel Wagner
X-Mailer: b4 0.14.2

It's possible for more than one async command to be in flight from
__nvmet_fc_send_ls_req. For each command, a tgtport reference is taken.
In the current code, only one put work item is queued at a time, which
results in a leaked reference.

To fix this, move the work item to the nvmet_fc_ls_req_op struct, which
already tracks all resources related to the command.
Fixes: 710c69dbaccd ("nvmet-fc: avoid deadlock on delete association path")
Signed-off-by: Daniel Wagner
---
 drivers/nvme/target/fc.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a9b18c051f5bd830a6ae45ff0f4892c3f28c8608..6725c34dd7c90ae38f8271368e609fd0ba267561 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -54,6 +54,8 @@ struct nvmet_fc_ls_req_op {	/* for an LS RQST XMT */
 	int			ls_error;
 	struct list_head	lsreq_list; /* tgtport->ls_req_list */
 	bool			req_queued;
+
+	struct work_struct	put_work;
 };
 
 
@@ -111,8 +113,6 @@ struct nvmet_fc_tgtport {
 	struct nvmet_fc_port_entry	*pe;
 	struct kref			ref;
 	u32				max_sg_cnt;
-
-	struct work_struct		put_work;
 };
 
 struct nvmet_fc_port_entry {
@@ -235,12 +235,13 @@ static int nvmet_fc_tgt_a_get(struct nvmet_fc_tgt_assoc *assoc);
 static void nvmet_fc_tgt_q_put(struct nvmet_fc_tgt_queue *queue);
 static int nvmet_fc_tgt_q_get(struct nvmet_fc_tgt_queue *queue);
 static void nvmet_fc_tgtport_put(struct nvmet_fc_tgtport *tgtport);
-static void nvmet_fc_put_tgtport_work(struct work_struct *work)
+static void nvmet_fc_put_lsop_work(struct work_struct *work)
 {
-	struct nvmet_fc_tgtport *tgtport =
-		container_of(work, struct nvmet_fc_tgtport, put_work);
+	struct nvmet_fc_ls_req_op *lsop =
+		container_of(work, struct nvmet_fc_ls_req_op, put_work);
 
-	nvmet_fc_tgtport_put(tgtport);
+	nvmet_fc_tgtport_put(lsop->tgtport);
+	kfree(lsop);
 }
 static int nvmet_fc_tgtport_get(struct nvmet_fc_tgtport *tgtport);
 static void nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
@@ -367,7 +368,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 				  DMA_BIDIRECTIONAL);
 
 out_putwork:
-	queue_work(nvmet_wq, &tgtport->put_work);
+	queue_work(nvmet_wq, &lsop->put_work);
 }
 
 static int
@@ -388,6 +389,7 @@ __nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
 	lsreq->done = done;
 	lsop->req_queued = false;
 	INIT_LIST_HEAD(&lsop->lsreq_list);
+	INIT_WORK(&lsop->put_work, nvmet_fc_put_lsop_work);
 
 	lsreq->rqstdma = fc_dma_map_single(tgtport->dev, lsreq->rqstaddr,
 				  lsreq->rqstlen + lsreq->rsplen,
@@ -447,8 +449,6 @@ nvmet_fc_disconnect_assoc_done(struct nvmefc_ls_req *lsreq, int status)
 	__nvmet_fc_finish_ls_req(lsop);
 
 	/* fc-nvme target doesn't care about success or failure of cmd */
-
-	kfree(lsop);
 }
 
 /*
@@ -1410,7 +1410,6 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	kref_init(&newrec->ref);
 	ida_init(&newrec->assoc_cnt);
 	newrec->max_sg_cnt = template->max_sgl_segments;
-	INIT_WORK(&newrec->put_work, nvmet_fc_put_tgtport_work);
 
 	ret = nvmet_fc_alloc_ls_iodlist(newrec);
 	if (ret) {
-- 
2.51.0

From nobody Fri Oct 3 14:34:20 2025
From: Daniel Wagner
Date: Fri, 29 Aug 2025 17:43:01 +0200
Subject: [PATCH v2 2/2] nvmet-fc: avoid scheduling association deletion twice
Message-Id: <20250829-fix-nvmet-fc-v2-2-26620e2742c7@kernel.org>
References: <20250829-fix-nvmet-fc-v2-0-26620e2742c7@kernel.org>
In-Reply-To: <20250829-fix-nvmet-fc-v2-0-26620e2742c7@kernel.org>
To: James Smart, Christoph Hellwig, Sagi Grimberg, Keith Busch
Cc: Shinichiro Kawasaki, Hannes Reinecke, linux-nvme@lists.infradead.org,
    linux-kernel@vger.kernel.org, Daniel Wagner
X-Mailer: b4 0.14.2

When forcefully shutting down a port via the configfs interface,
nvmet_port_subsys_drop_link() first calls nvmet_port_del_ctrls() and
then nvmet_disable_port(). Both functions will eventually schedule all
remaining associations for deletion.

The current implementation checks whether an association is about to be
removed, but only after the work item has already been scheduled.
As a result, it is possible for the first scheduled work item to free
all resources, and then for the same work item to be scheduled again
for deletion.

Because the association list is an RCU list, it is not possible to take
a lock and remove the list entry directly, so it cannot be looked up
again. Instead, a flag (terminating) must be used to determine whether
the association is already in the process of being deleted.

Signed-off-by: Daniel Wagner
Reviewed-by: Hannes Reinecke
---
 drivers/nvme/target/fc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 6725c34dd7c90ae38f8271368e609fd0ba267561..7d84527d5a43efe1d43ccf5fb8010a4884f99e3e 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1075,6 +1075,14 @@ nvmet_fc_delete_assoc_work(struct work_struct *work)
 static void
 nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
+	int terminating;
+
+	terminating = atomic_xchg(&assoc->terminating, 1);
+
+	/* if already terminating, do nothing */
+	if (terminating)
+		return;
+
 	nvmet_fc_tgtport_get(assoc->tgtport);
 	if (!queue_work(nvmet_wq, &assoc->del_work))
 		nvmet_fc_tgtport_put(assoc->tgtport);
@@ -1202,13 +1210,7 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	unsigned long flags;
-	int i, terminating;
-
-	terminating = atomic_xchg(&assoc->terminating, 1);
-
-	/* if already terminating, do nothing */
-	if (terminating)
-		return;
+	int i;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
 	list_del_rcu(&assoc->a_list);
-- 
2.51.0