From: Daisuke Matsuda
To: linux-rdma@vger.kernel.org, leonro@nvidia.com, jgg@nvidia.com, zyjzyj2000@gmail.com
Cc: nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, rpearsonhpe@gmail.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, y-goto@fujitsu.com, Daisuke Matsuda
Subject: [PATCH for-next v3 3/7] RDMA/rxe: Cleanup code for responder Atomic operations
Date: Fri, 23 Dec 2022 15:51:54 +0900

Currently, rxe_responder() directly calls the function to execute Atomic
operations. This needs to be modified so that conditional branches for the
ODP feature can be inserted. Additionally, rxe_resp.h is newly added so
that it can be used by rxe_odp.c in the near future.

Signed-off-by: Daisuke Matsuda
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 100 +++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_resp.h |   9 +++
 2 files changed, 71 insertions(+), 38 deletions(-)
 create mode 100644 drivers/infiniband/sw/rxe/rxe_resp.h

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 991550baef8c..e18bca076337 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -9,6 +9,7 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 #include "rxe_queue.h"
+#include "rxe_resp.h"
 
 enum resp_states {
 	RESPST_NONE,
@@ -733,60 +734,83 @@ static enum resp_states process_flush(struct rxe_qp *qp,
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
-static enum resp_states atomic_reply(struct rxe_qp *qp,
-				     struct rxe_pkt_info *pkt)
+enum resp_states rxe_process_atomic(struct rxe_qp *qp,
+				    struct rxe_pkt_info *pkt, u64 *vaddr)
 {
-	u64 *vaddr;
 	enum resp_states ret;
-	struct rxe_mr *mr = qp->resp.mr;
 	struct resp_res *res = qp->resp.res;
 	u64 value;
 
-	if (!res) {
-		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
-		qp->resp.res = res;
+	/* check vaddr is 8 bytes aligned. */
+	if (!vaddr || (uintptr_t)vaddr & 7) {
+		ret = RESPST_ERR_MISALIGNED_ATOMIC;
+		goto out;
 	}
 
-	if (!res->replay) {
-		if (mr->state != RXE_MR_STATE_VALID) {
-			ret = RESPST_ERR_RKEY_VIOLATION;
-			goto out;
-		}
+	spin_lock(&atomic_ops_lock);
+	res->atomic.orig_val = value = *vaddr;
 
-		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
-					sizeof(u64));
+	if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (value == atmeth_comp(pkt))
+			value = atmeth_swap_add(pkt);
+	} else {
+		value += atmeth_swap_add(pkt);
+	}
 
-		/* check vaddr is 8 bytes aligned. */
-		if (!vaddr || (uintptr_t)vaddr & 7) {
-			ret = RESPST_ERR_MISALIGNED_ATOMIC;
-			goto out;
-		}
+	*vaddr = value;
+	spin_unlock(&atomic_ops_lock);
 
-		spin_lock_bh(&atomic_ops_lock);
-		res->atomic.orig_val = value = *vaddr;
+	qp->resp.msn++;
 
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			if (value == atmeth_comp(pkt))
-				value = atmeth_swap_add(pkt);
-		} else {
-			value += atmeth_swap_add(pkt);
-		}
+	/* next expected psn, read handles this separately */
+	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+	qp->resp.ack_psn = qp->resp.psn;
 
-		*vaddr = value;
-		spin_unlock_bh(&atomic_ops_lock);
+	qp->resp.opcode = pkt->opcode;
+	qp->resp.status = IB_WC_SUCCESS;
 
-		qp->resp.msn++;
+	ret = RESPST_ACKNOWLEDGE;
+out:
+	return ret;
+}
+
+static enum resp_states rxe_atomic_ops(struct rxe_qp *qp,
+				       struct rxe_pkt_info *pkt,
+				       struct rxe_mr *mr)
+{
+	u64 *vaddr;
+	int ret;
+
+	vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
+			      sizeof(u64));
+
+	if (pkt->mask & RXE_ATOMIC_MASK)
+		ret = rxe_process_atomic(qp, pkt, vaddr);
+	else
+		ret = RESPST_ERR_UNSUPPORTED_OPCODE;
+
+	return ret;
+}
 
-		/* next expected psn, read handles this separately */
-		qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
-		qp->resp.ack_psn = qp->resp.psn;
+static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
+					 struct rxe_pkt_info *pkt)
+{
+	struct rxe_mr *mr = qp->resp.mr;
+	struct resp_res *res = qp->resp.res;
+	int ret;
 
-		qp->resp.opcode = pkt->opcode;
-		qp->resp.status = IB_WC_SUCCESS;
+	if (!res) {
+		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
+		qp->resp.res = res;
 	}
 
-	ret = RESPST_ACKNOWLEDGE;
-out:
+	if (!res->replay) {
+		if (mr->state != RXE_MR_STATE_VALID)
+			return RESPST_ERR_RKEY_VIOLATION;
+		ret = rxe_atomic_ops(qp, pkt, mr);
+	} else
+		ret = RESPST_ACKNOWLEDGE;
+
 	return ret;
 }
 
@@ -1556,7 +1580,7 @@ int rxe_responder(void *arg)
 		state = read_reply(qp, pkt);
 		break;
 	case RESPST_ATOMIC_REPLY:
-		state = atomic_reply(qp, pkt);
+		state = rxe_atomic_reply(qp, pkt);
 		break;
 	case RESPST_ATOMIC_WRITE_REPLY:
 		state = atomic_write_reply(qp, pkt);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.h b/drivers/infiniband/sw/rxe/rxe_resp.h
new file mode 100644
index 000000000000..94a4869fdab6
--- /dev/null
+++ b/drivers/infiniband/sw/rxe/rxe_resp.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+
+#ifndef RXE_RESP_H
+#define RXE_RESP_H
+
+enum resp_states rxe_process_atomic(struct rxe_qp *qp,
+				    struct rxe_pkt_info *pkt, u64 *vaddr);
+
+#endif /* RXE_RESP_H */
-- 
2.31.1
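
To illustrate the commit message's point that rxe_resp.h exists so that
rxe_odp.c can reuse rxe_process_atomic(), here is a minimal sketch of what
such a caller could look like. It assumes a hypothetical rxe_odp_get_vaddr()
helper standing in for the ODP fault-in/translation step added later in the
series, and assumes the resp_states values are visible to the caller; only
rxe_process_atomic() and the surrounding rxe types come from this patch.

/*
 * Sketch only, not part of this patch: a possible ODP-side caller of the
 * helper exported through rxe_resp.h. rxe_odp_get_vaddr() is a hypothetical
 * stand-in for the ODP page fault-in/translation logic; its name and
 * signature are assumptions, not something defined by this series as posted.
 */
#include "rxe.h"
#include "rxe_loc.h"
#include "rxe_resp.h"

static enum resp_states rxe_odp_atomic_ops(struct rxe_qp *qp,
					   struct rxe_pkt_info *pkt,
					   struct rxe_mr *mr)
{
	u64 *vaddr;

	/* assumed helper: fault in the ODP page and map the 8-byte target */
	vaddr = rxe_odp_get_vaddr(mr, qp->resp.va + qp->resp.offset,
				  sizeof(u64));

	/* rxe_process_atomic() itself rejects a NULL or misaligned vaddr */
	return rxe_process_atomic(qp, pkt, vaddr);
}

Keeping the machine-level update, locking, and PSN bookkeeping inside
rxe_process_atomic() is what would let the existing MR path
(rxe_atomic_ops() above) and such an ODP path share one implementation
instead of duplicating it.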