From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v6 01/14] net: ionic: Create an auxiliary device for rdma driver
Date: Wed, 3 Sep 2025 11:45:53 +0530
Message-ID: <20250903061606.4139957-2-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

To support an RDMA-capable Ethernet device, create an auxiliary device in
the ionic Ethernet driver. The RDMA device is modeled as an auxiliary
device to the Ethernet device.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
v5->v6
 - Updated documentation
v4->v5
 - Updated documentation
 - Fixed error path in aux device creation

 .../ethernet/pensando/ionic.rst               | 10 +++
 drivers/net/ethernet/pensando/Kconfig         |  1 +
 drivers/net/ethernet/pensando/ionic/Makefile  |  2 +-
 .../net/ethernet/pensando/ionic/ionic_api.h   | 21 +++++
 .../net/ethernet/pensando/ionic/ionic_aux.c   | 80 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.h   | 10 +++
 .../ethernet/pensando/ionic/ionic_bus_pci.c   |  5 ++
 .../net/ethernet/pensando/ionic/ionic_lif.c   |  7 ++
 .../net/ethernet/pensando/ionic/ionic_lif.h   |  3 +
 9 files changed, 138 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_api.h
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.c
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.h

diff --git a/Documentation/networking/device_drivers/ethernet/pensando/ionic.rst b/Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
index 05fe2b11bb18..a0029b6db31e 100644
--- a/Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
+++ b/Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
@@ -13,6 +13,7 @@ Contents
 - Identifying the Adapter
 - Enabling the driver
 - Configuring the driver
+- RDMA Support via Auxiliary Device
 - Statistics
 - Support
 
@@ -105,6 +106,15 @@ XDP
 Support for XDP includes the basics, plus Jumbo frames, Redirect and
 ndo_xmit. There is no current support for zero-copy sockets or HW offload.
 
+RDMA Support via Auxiliary Device
+=================================
+
+The ionic driver supports RDMA (Remote Direct Memory Access) functionality
+through the Linux auxiliary device framework when advertised by the firmware.
+RDMA capability is detected during device initialization, and if supported,
+the ethernet driver will create an auxiliary device that allows the RDMA
+driver to bind and provide InfiniBand/RoCE functionality.
+
 Statistics
 ==========
 
diff --git a/drivers/net/ethernet/pensando/Kconfig b/drivers/net/ethernet/pensando/Kconfig
index 01fe76786f77..c99758adf3ad 100644
--- a/drivers/net/ethernet/pensando/Kconfig
+++ b/drivers/net/ethernet/pensando/Kconfig
@@ -24,6 +24,7 @@ config IONIC
 	select NET_DEVLINK
 	select DIMLIB
 	select PAGE_POOL
+	select AUXILIARY_BUS
 	help
 	  This enables the support for the Pensando family of Ethernet
 	  adapters.  More specific information on this driver can be
diff --git a/drivers/net/ethernet/pensando/ionic/Makefile b/drivers/net/ethernet/pensando/ionic/Makefile
index 4e7642a2d25f..a598972fef41 100644
--- a/drivers/net/ethernet/pensando/ionic/Makefile
+++ b/drivers/net/ethernet/pensando/ionic/Makefile
@@ -5,5 +5,5 @@ obj-$(CONFIG_IONIC) := ionic.o
 
 ionic-y := ionic_main.o ionic_bus_pci.o ionic_devlink.o ionic_dev.o \
 	   ionic_debugfs.o ionic_lif.o ionic_rx_filter.o ionic_ethtool.o \
-	   ionic_txrx.o ionic_stats.o ionic_fw.o
+	   ionic_txrx.o ionic_stats.o ionic_fw.o ionic_aux.o
 ionic-$(CONFIG_PTP_1588_CLOCK) += ionic_phc.o
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
new file mode 100644
index 000000000000..f9fcd1b67b35
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_API_H_
+#define _IONIC_API_H_
+
+#include <linux/auxiliary_bus.h>
+
+/**
+ * struct ionic_aux_dev - Auxiliary device information
+ * @lif:  Logical interface
+ * @idx:  Index identifier
+ * @adev: Auxiliary device
+ */
+struct ionic_aux_dev {
+	struct ionic_lif *lif;
+	int idx;
+	struct auxiliary_device adev;
+};
+
+#endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
new file mode 100644
index 000000000000..f3c2a5227b36
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include "ionic.h"
+#include "ionic_lif.h"
+#include "ionic_aux.h"
+
+static DEFINE_IDA(aux_ida);
+
+static void ionic_auxbus_release(struct device *dev)
+{
+	struct ionic_aux_dev *ionic_adev;
+
+	ionic_adev = container_of(dev, struct ionic_aux_dev, adev.dev);
+	ida_free(&aux_ida, ionic_adev->adev.id);
+	kfree(ionic_adev);
+}
+
+int ionic_auxbus_register(struct ionic_lif *lif)
+{
+	struct ionic_aux_dev *ionic_adev;
+	struct auxiliary_device *aux_dev;
+	int err, id;
+
+	if (!(le64_to_cpu(lif->ionic->ident.lif.capabilities) & IONIC_LIF_CAP_RDMA))
+		return 0;
+
+	ionic_adev = kzalloc(sizeof(*ionic_adev), GFP_KERNEL);
+	if (!ionic_adev)
+		return -ENOMEM;
+
+	aux_dev = &ionic_adev->adev;
+
+	id = ida_alloc(&aux_ida, GFP_KERNEL);
+	if (id < 0) {
+		dev_err(lif->ionic->dev, "Failed to allocate aux id: %d\n", id);
+		kfree(ionic_adev);
+		return id;
+	}
+
+	aux_dev->id = id;
+	aux_dev->name = "rdma";
+	aux_dev->dev.parent = &lif->ionic->pdev->dev;
+	aux_dev->dev.release = ionic_auxbus_release;
+	ionic_adev->lif = lif;
+	err = auxiliary_device_init(aux_dev);
+	if (err) {
+		dev_err(lif->ionic->dev, "Failed to initialize %s aux device: %d\n",
+			aux_dev->name, err);
+		ida_free(&aux_ida, id);
+		kfree(ionic_adev);
+		return err;
+	}
+
+	err = auxiliary_device_add(aux_dev);
+	if (err) {
+		dev_err(lif->ionic->dev, "Failed to add %s aux device: %d\n",
+			aux_dev->name, err);
+		auxiliary_device_uninit(aux_dev);
+		return err;
+	}
+
+	lif->ionic_adev = ionic_adev;
+	return 0;
+}
+
+void ionic_auxbus_unregister(struct ionic_lif *lif)
+{
+	mutex_lock(&lif->adev_lock);
+	if (!lif->ionic_adev)
+		goto out;
+
+	auxiliary_device_delete(&lif->ionic_adev->adev);
+	auxiliary_device_uninit(&lif->ionic_adev->adev);
+
+	lif->ionic_adev = NULL;
+out:
+	mutex_unlock(&lif->adev_lock);
+}
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.h b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
new file mode 100644
index 000000000000..f5528a9f187d
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_AUX_H_
+#define _IONIC_AUX_H_
+
+int ionic_auxbus_register(struct ionic_lif *lif);
+void ionic_auxbus_unregister(struct ionic_lif *lif);
+
+#endif /* _IONIC_AUX_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index 136bfa3516d0..f8752b1d2790 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -9,6 +9,7 @@
 #include "ionic.h"
 #include "ionic_bus.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_debugfs.h"
 
 /* Supported devices */
@@ -375,6 +376,8 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out_deregister_devlink;
 	}
 
+	ionic_auxbus_register(ionic->lif);
+
 	mod_timer(&ionic->watchdog_timer,
 		  round_jiffies(jiffies + ionic->watchdog_period));
 	ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);
@@ -416,6 +419,7 @@ static void ionic_remove(struct pci_dev *pdev)
 
 	if (ionic->lif->doorbell_wa)
 		cancel_delayed_work_sync(&ionic->doorbell_check_dwork);
+	ionic_auxbus_unregister(ionic->lif);
 	ionic_lif_unregister(ionic->lif);
 	ionic_devlink_unregister(ionic);
 	ionic_lif_deinit(ionic->lif);
@@ -445,6 +449,7 @@ static void ionic_reset_prepare(struct pci_dev *pdev)
 	timer_delete_sync(&ionic->watchdog_timer);
 	cancel_work_sync(&lif->deferred.work);
 
+	ionic_auxbus_unregister(ionic->lif);
 	mutex_lock(&lif->queue_lock);
 	ionic_stop_queues_reconfig(lif);
 	ionic_txrx_free(lif);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 48cb5d30b5f6..8ed5d2e5fde4 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -19,6 +19,7 @@
 #include "ionic_bus.h"
 #include "ionic_dev.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_txrx.h"
 #include "ionic_ethtool.h"
 #include "ionic_debugfs.h"
@@ -3293,6 +3294,7 @@ int ionic_lif_alloc(struct ionic *ionic)
 
 	mutex_init(&lif->queue_lock);
 	mutex_init(&lif->config_lock);
+	mutex_init(&lif->adev_lock);
 
 	spin_lock_init(&lif->adminq_lock);
 
@@ -3349,6 +3351,7 @@ int ionic_lif_alloc(struct ionic *ionic)
 	lif->info = NULL;
 	lif->info_pa = 0;
 err_out_free_mutex:
+	mutex_destroy(&lif->adev_lock);
 	mutex_destroy(&lif->config_lock);
 	mutex_destroy(&lif->queue_lock);
 err_out_free_netdev:
@@ -3384,6 +3387,7 @@ static void ionic_lif_handle_fw_down(struct ionic_lif *lif)
 
 	netif_device_detach(lif->netdev);
 
+	ionic_auxbus_unregister(ionic->lif);
 	mutex_lock(&lif->queue_lock);
 	if (test_bit(IONIC_LIF_F_UP, lif->state)) {
 		dev_info(ionic->dev, "Surprise FW stop, stopping queues\n");
@@ -3446,6 +3450,8 @@ int ionic_restart_lif(struct ionic_lif *lif)
 	netif_device_attach(lif->netdev);
 	ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);
 
+	ionic_auxbus_register(ionic->lif);
+
 	return 0;
 
 err_txrx_free:
@@ -3528,6 +3534,7 @@ void ionic_lif_free(struct ionic_lif *lif)
 
 	mutex_destroy(&lif->config_lock);
 	mutex_destroy(&lif->queue_lock);
+	mutex_destroy(&lif->adev_lock);
 
 	/* free netdev & lif */
 	ionic_debugfs_del_lif(lif);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
index e01756fb7fdd..43bdd0fb8733 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include "ionic_rx_filter.h"
+#include "ionic_api.h"
 
 #define IONIC_ADMINQ_LENGTH	16	/* must be a power of two */
 #define IONIC_NOTIFYQ_LENGTH	64	/* must be a power of two */
@@ -225,6 +226,8 @@ struct ionic_lif {
 	dma_addr_t info_pa;
 	u32 info_sz;
 	struct ionic_qtype_info qtype_info[IONIC_QTYPE_MAX];
+	struct ionic_aux_dev *ionic_adev;
+	struct mutex adev_lock; /* lock for aux_dev actions */
 
 	u8 rss_hash_key[IONIC_RSS_HASH_KEY_SIZE];
 	u8 *rss_ind_tbl;
-- 
2.43.0
From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v6 02/14] net: ionic: Update LIF identity with additional RDMA capabilities
Date: Wed, 3 Sep 2025 11:45:54 +0530
Message-ID: <20250903061606.4139957-3-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Firmware sends the RDMA capability in the response to the LIF_IDENTIFY
device command. Update the LIF identity with the additional RDMA
capabilities used by the driver and firmware.

Reviewed-by: Shannon Nelson
Reviewed-by: Simon Horman
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_if.h    | 29 +++++++++++++++----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index 9886cd66ce68..77c3dc188264 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -494,6 +494,16 @@ union ionic_lif_config {
 	__le32 words[64];
 };
 
+/**
+ * enum ionic_lif_rdma_cap_stats - LIF stat type
+ * @IONIC_LIF_RDMA_STAT_GLOBAL:	Global stats
+ * @IONIC_LIF_RDMA_STAT_QP:	Queue pair stats
+ */
+enum ionic_lif_rdma_cap_stats {
+	IONIC_LIF_RDMA_STAT_GLOBAL = BIT(0),
+	IONIC_LIF_RDMA_STAT_QP = BIT(1),
+};
+
 /**
  * struct ionic_lif_identity - LIF identity information (type-specific)
  *
@@ -513,10 +523,10 @@ union ionic_lif_config {
  * @eth.config:		LIF config struct with features, mtu, mac, q counts
  *
  * @rdma:		RDMA identify structure
- * @rdma.version:	RDMA version of opcodes and queue descriptors
+ * @rdma.version:	RDMA capability version
  * @rdma.qp_opcodes:	Number of RDMA queue pair opcodes supported
  * @rdma.admin_opcodes:	Number of RDMA admin opcodes supported
- * @rdma.rsvd:		reserved byte(s)
+ * @rdma.minor_version:	RDMA capability minor version
  * @rdma.npts_per_lif:	Page table size per LIF
  * @rdma.nmrs_per_lif:	Number of memory regions per LIF
  * @rdma.nahs_per_lif:	Number of address handles per LIF
@@ -526,12 +536,17 @@
  * @rdma.rrq_stride:	Remote RQ work request stride
  * @rdma.rsq_stride:	Remote SQ work request stride
  * @rdma.dcqcn_profiles: Number of DCQCN profiles
- * @rdma.rsvd_dimensions: reserved byte(s)
+ * @rdma.udma_shift:	Log2 number of queues per queue group
+ * @rdma.rsvd_dimensions: Reserved byte
+ * @rdma.page_size_cap:	Supported page sizes
  * @rdma.aq_qtype:	RDMA Admin Qtype
  * @rdma.sq_qtype:	RDMA Send Qtype
  * @rdma.rq_qtype:	RDMA Receive Qtype
  * @rdma.cq_qtype:	RDMA Completion Qtype
  * @rdma.eq_qtype:	RDMA Event Qtype
+ * @rdma.stats_type:	Supported statistics type
+ *			(enum ionic_lif_rdma_cap_stats)
+ * @rdma.rsvd1:		Reserved byte(s)
  * @words:		word access to struct contents
  */
 union ionic_lif_identity {
@@ -557,7 +572,7 @@
 			u8 version;
 			u8 qp_opcodes;
 			u8 admin_opcodes;
-			u8 rsvd;
+			u8 minor_version;
 			__le32 npts_per_lif;
 			__le32 nmrs_per_lif;
 			__le32 nahs_per_lif;
@@ -567,12 +582,16 @@
 			u8 rrq_stride;
 			u8 rsq_stride;
 			u8 dcqcn_profiles;
-			u8 rsvd_dimensions[10];
+			u8 udma_shift;
+			u8 rsvd_dimensions;
+			__le64 page_size_cap;
 			struct ionic_lif_logical_qtype aq_qtype;
 			struct ionic_lif_logical_qtype sq_qtype;
 			struct ionic_lif_logical_qtype rq_qtype;
 			struct ionic_lif_logical_qtype cq_qtype;
 			struct ionic_lif_logical_qtype eq_qtype;
+			__le16 stats_type;
+			u8 rsvd1[162];
 		} __packed rdma;
 	} __packed;
 	__le32 words[478];
-- 
2.43.0
From: Abhijit Gangurde
Subject: [PATCH v6 03/14] net: ionic: Export the APIs from net driver to support device commands
Date: Wed, 3 Sep 2025 11:45:55 +0530
Message-ID: <20250903061606.4139957-4-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

RDMA driver needs to establish admin queues to support admin operations. Export the APIs to send device commands for the RDMA driver.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 drivers/net/ethernet/pensando/ionic/ionic.h   |  7 ----
 .../net/ethernet/pensando/ionic/ionic_api.h   | 36 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h   |  1 +
 .../net/ethernet/pensando/ionic/ionic_main.c  |  4 ++-
 4 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic.h b/drivers/net/ethernet/pensando/ionic/ionic.h
index 04f00ea94230..85198e6a806e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic.h
@@ -65,16 +65,9 @@ struct ionic {
 	int watchdog_period;
 };
 
-struct ionic_admin_ctx {
-	struct completion work;
-	union ionic_adminq_cmd cmd;
-	union ionic_adminq_comp comp;
-};
-
 int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
 		      const int err, const bool do_msg);
-int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 void ionic_adminq_netdev_err_print(struct ionic_lif *lif, u8 opcode,
 				   u8 status, int err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index f9fcd1b67b35..d75902ca34af 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -5,6 +5,8 @@
 #define _IONIC_API_H_
 
 #include
+#include "ionic_if.h"
+#include "ionic_regs.h"
 
 /**
  * struct ionic_aux_dev - Auxiliary device information
@@ -18,4 +20,38 @@ struct ionic_aux_dev {
 	struct auxiliary_device adev;
 };
 
+/**
+ * struct ionic_admin_ctx - Admin command context
+ * @work: Work completion wait queue element
+ * @cmd: Admin command (64B) to be copied to the queue
+ * @comp: Admin completion (16B) copied from the queue
+ */
+struct ionic_admin_ctx {
+	struct completion work;
+	union ionic_adminq_cmd cmd;
+	union ionic_adminq_comp comp;
+};
+
+/**
+ * ionic_adminq_post_wait - Post an admin command and wait for response
+ * @lif: Logical interface
+ * @ctx: API admin command context
+ *
+ * Post the command to an admin queue in the ethernet driver.  If this command
+ * succeeds, then the command has been posted, but that does not indicate a
+ * completion.  If this command returns success, then the completion callback
+ * will eventually be called.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
+
+/**
+ * ionic_error_to_errno - Transform ionic_if errors to os errno
+ * @code: Ionic error number
+ *
+ * Return: Negative OS error number or zero
+ */
+int ionic_error_to_errno(enum ionic_status_code code);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index c8c710cfe70c..bc26eb8f5779 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -12,6 +12,7 @@
 
 #include "ionic_if.h"
 #include "ionic_regs.h"
+#include "ionic_api.h"
 
 #define IONIC_MAX_TX_DESC 8192
 #define IONIC_MAX_RX_DESC 16384
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index 0e60a6bef99a..14dc055be3e9 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -72,7 +72,7 @@ static const char *ionic_error_to_str(enum ionic_status_code code)
 	}
 }
 
-static int ionic_error_to_errno(enum ionic_status_code code)
+int ionic_error_to_errno(enum ionic_status_code code)
 {
 	switch (code) {
 	case IONIC_RC_SUCCESS:
@@ -114,6 +114,7 @@ static int ionic_error_to_errno(enum ionic_status_code code)
 		return -EIO;
 	}
 }
+EXPORT_SYMBOL_NS(ionic_error_to_errno, "NET_IONIC");
 
 static const char *ionic_opcode_to_str(enum ionic_cmd_opcode opcode)
 {
@@ -480,6 +481,7 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
 	return __ionic_adminq_post_wait(lif, ctx, true);
 }
+EXPORT_SYMBOL_NS(ionic_adminq_post_wait, "NET_IONIC");
 
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
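[Editorial aside, not part of the patch series.] The newly exported `ionic_error_to_errno()` follows a common driver pattern: firmware status codes are translated into negative OS errnos, with unknown codes collapsing to `-EIO`. A minimal user-space sketch of that pattern, using illustrative `demo_*` names rather than the driver's real enums:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative device status codes (not the driver's real values). */
enum demo_status_code {
	DEMO_RC_SUCCESS = 0,
	DEMO_RC_EAGAIN  = 1,
	DEMO_RC_ENOENT  = 2,
};

/* Map a device status code to a negative errno, as the exported
 * ionic_error_to_errno() does for the ionic status codes. */
static int demo_error_to_errno(enum demo_status_code code)
{
	switch (code) {
	case DEMO_RC_SUCCESS:
		return 0;
	case DEMO_RC_EAGAIN:
		return -EAGAIN;
	case DEMO_RC_ENOENT:
		return -ENOENT;
	default:
		return -EIO;	/* unknown codes collapse to -EIO */
	}
}
```

Exporting the mapping lets the RDMA driver report consistent errnos for completions it receives from the shared admin queue.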
From: Abhijit Gangurde
Subject: [PATCH v6 04/14] net: ionic: Provide RDMA reset support for the RDMA driver
Date: Wed, 3 Sep 2025 11:45:56 +0530
Message-ID: <20250903061606.4139957-5-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The Ethernet driver holds the privilege to execute the device commands. Export the function to execute RDMA reset command for use by RDMA driver.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h |  9 ++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.c | 22 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index d75902ca34af..e0b766d1769f 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -54,4 +54,13 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
  */
 int ionic_error_to_errno(enum ionic_status_code code);
 
+/**
+ * ionic_request_rdma_reset - request reset or disable the device or lif
+ * @lif: Logical interface
+ *
+ * The reset is triggered asynchronously. It will wait until reset request
+ * completes or times out.
+ */
+void ionic_request_rdma_reset(struct ionic_lif *lif);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
index f3c2a5227b36..a2be338eb3e5 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_aux.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -78,3 +78,25 @@ void ionic_auxbus_unregister(struct ionic_lif *lif)
 out:
 	mutex_unlock(&lif->adev_lock);
 }
+
+void ionic_request_rdma_reset(struct ionic_lif *lif)
+{
+	struct ionic *ionic = lif->ionic;
+	int err;
+
+	union ionic_dev_cmd cmd = {
+		.cmd.opcode = IONIC_CMD_RDMA_RESET_LIF,
+		.cmd.lif_index = cpu_to_le16(lif->index),
+	};
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_go(&ionic->idev, &cmd);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err)
+		pr_warn("%s request_reset: error %d\n", __func__, err);
+}
+EXPORT_SYMBOL_NS(ionic_request_rdma_reset, "NET_IONIC");
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
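[Editorial aside, not part of the patch series.] `ionic_request_rdma_reset()` in the patch above posts a device command and waits for its completion while holding `dev_cmd_lock`, because the device exposes a single shared command register set. A dependency-free user-space sketch of that serialize/post/wait shape, with a busy flag standing in for the mutex and completion polling, and all `demo_*` names purely illustrative:

```c
#include <assert.h>
#include <errno.h>

/* A toy "device" with one command slot, mirroring the single devcmd
 * register set that forces serialization in the real driver. */
struct demo_dev {
	int cmd_busy;      /* stand-in for dev_cmd_lock */
	int last_opcode;   /* stand-in for the devcmd register */
};

/* Post one command and "wait" for it: take the slot, write the
 * opcode, observe completion, release the slot. */
static int demo_cmd_go_and_wait(struct demo_dev *dev, int opcode)
{
	if (dev->cmd_busy)
		return -EBUSY;          /* slot already in use */

	dev->cmd_busy = 1;              /* serialize access */
	dev->last_opcode = opcode;      /* "post" the command */
	/* a real driver polls or sleeps here until the device
	 * signals completion, or a timeout elapses */
	dev->cmd_busy = 0;              /* completion observed */

	return 0;
}
```

The real function additionally translates a timeout into a warning rather than an error return, since the reset is best-effort during teardown.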
(mail-mw2nam12on2074.outbound.protection.outlook.com [40.107.244.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 600582D7DFA; Wed, 3 Sep 2025 06:17:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.244.74 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880275; cv=fail; b=TNzSRaqPRkF1ESyykDzMOkvk1thCOaMMo/xqX8Iqw69sLlz1800xmQTIX3ASAqbbAVmXiU9HoXoUiEOZ60x7Hm8XBHogbEa1kIf1IV7w8+F1MZPDwOqcKqluslOVXNcRqpcosefJKF8N1NKIJmi6Wu0Z5qRfd8mRmdj8jPBdrsw= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880275; c=relaxed/simple; bh=fX3Z5SU+ffWK2r2Gdff9s1fM0ULMEjyDGIZPJ6N6974=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=CEl/lY83hxODIamEtl7NgivT6YLhkD0AhBKrdXAN+VtCyBFCHzdbtGCvWzeNMkU8LLRG+Bj1XAJDgpzjlhooQnaOHmp8BzWpMtwvFHuluTUnOfgdM/Gk+2xjA3p7fHmx6Lj44a/UyhCwjVKbMREXi3kQAG9xl7+sd+F/ZvA6C6I= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=Jq9thBIQ; arc=fail smtp.client-ip=40.107.244.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b="Jq9thBIQ" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; 
b=FlAI99l9KuCOdmOqV6UhhJL18mNojbclOoPhfhsVy09wVtrwFrZuIpHOq9+L+cyInK+yGWJnUvp4Teb0lY2W4o8ygpq1nfyaUn+3wbY1908lzyiUG90cJcHs8bOIee0O7IUlSqZV/Xh6FTPVx39Zk35k5ZRuBMwQXmKe2SiLhYwZJgBDFJQcQivP2nW78VnamqQ3H0cGmT8vWmiWLq8ClKCwPcbC+xB+r+hpAU1wH7rb++V5fnAujd2oWeuqms8/OGGi4x6MHkfJu2ujUXMTTTWtG4ukijs7MUmBAB5Fs/aZ6jjr5JNG+jmaa/JwxvgIWZQx5fpxsCJEUX7LkEFumg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=xflnG8/TfydJ8Y6B1S9sEjhxICv8C0Om/JD7epWP4ZU=; b=eOaX+oHbhQRyXYtvTzVmTqboCqxxGAuXW/kkPhzaaddn3DQyaXZfxQJ79KjjNHWvSG5YZu92n5eiLK4XPEO5pBhJ+sx6cu9jy8BbAOd/dHIXTa6d4y1yJLUhi3TkyGuhJnqI3VBe+FYceKP3vUt8m+8HYWkquY4RlV5JLiuL7LUUx3UV6SrTEPlje1ua1MfERB/C2DYMj77Pf1/RV9TWfx4WhH65VkyG2ZnE10jtOKrHDwieBMMJKaW8maChk1qiPljRHyampgHzX/7HUx0crVczkLg2D+KeFIO+7Bnt1p2XOgTGZOUVvT40O88X6T6bwWFa2LJxS079ogwlHUdpSQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=davemloft.net smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=xflnG8/TfydJ8Y6B1S9sEjhxICv8C0Om/JD7epWP4ZU=; b=Jq9thBIQz8Xcds73pPcsnfWeNxxjUs5aAcQqU/73Ukhn3ZV07Gxw9IV01sJ2ly8uS9dH1A0mwd6GYqdFwsz1Ef6PUwgYB6OCUUdAd19T5S2/gBpeZVprr4bhNTjFEQF+U3hdpVp4vOwyNmVIF1cZpMx7fHvYpBgj/99i/ufEi6U= Received: from MN0PR05CA0025.namprd05.prod.outlook.com (2603:10b6:208:52c::13) by DS5PPF266051432.namprd12.prod.outlook.com (2603:10b6:f:fc00::648) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9073.21; Wed, 3 Sep 2025 06:17:46 +0000 Received: from 
BL6PEPF0001AB75.namprd02.prod.outlook.com (2603:10b6:208:52c:cafe::73) by MN0PR05CA0025.outlook.office365.com (2603:10b6:208:52c::13) with Microsoft SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.9094.16 via Frontend Transport; Wed, 3 Sep 2025 06:17:46 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C Received: from SATLEXMB03.amd.com (165.204.84.17) by BL6PEPF0001AB75.mail.protection.outlook.com (10.167.242.168) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.9094.14 via Frontend Transport; Wed, 3 Sep 2025 06:17:45 +0000 Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 3 Sep 2025 01:17:40 -0500 Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 3 Sep 2025 01:17:40 -0500 Received: from xhdabhijitg41x.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2507.39 via Frontend Transport; Wed, 3 Sep 2025 01:17:35 -0500 From: Abhijit Gangurde To: , , , , , , , , CC: , , , , , , , Abhijit Gangurde , Shannon Nelson Subject: [PATCH v6 05/14] net: ionic: Provide interrupt allocation support for the RDMA driver Date: Wed, 3 Sep 2025 11:45:57 +0530 Message-ID: <20250903061606.4139957-6-abhijit.gangurde@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com> References: 
<20250903061606.4139957-1-abhijit.gangurde@amd.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: None (SATLEXMB05.amd.com: abhijit.gangurde@amd.com does not designate permitted sender hosts) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BL6PEPF0001AB75:EE_|DS5PPF266051432:EE_ X-MS-Office365-Filtering-Correlation-Id: 6abe8b48-1659-4f7f-7114-08ddeab19950 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|82310400026|1800799024|36860700013|376014|7416014; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?iQiM/jrxLjvPU8Mkk1Uam1DQWt1dKHVgiO2Kb5Mty8YzBfUUl8hIuPPYHrsb?= =?us-ascii?Q?cpiEIvCWQ6RxELUIN5QgEcZF7LhKSE8Ym1ktRlHP+0Z/zVpT5+7CqYIWFjr0?= =?us-ascii?Q?6POA1FjiYPq11+dkZJZbTP+ZdJjPt1WNMtNaKcrjfGXxi5hZ8045VKM/iJZZ?= =?us-ascii?Q?nWvhxYq2RCLoh30cvFwD6Tgdm0sNRrGJCMSm7BheeRXGBiCkV/JfRf3PTRLS?= =?us-ascii?Q?pXMY2RWaFGvZqnL6Y5p/fhmQlBcJ+JPy64/oo/o1zNq1f/+BqDgMn8Ld0/Rt?= =?us-ascii?Q?4AI2QWQPZv2pyRydnyDoXDcS5WSd4TK5Xp3qWgap9+UaRagSarMWLxQOktVH?= =?us-ascii?Q?zXlG+V3GMYFU+LDFNOlc6+iWjEx0Ldyw6SRp3f7ifNqIFUDzaf24YeIDIFnC?= =?us-ascii?Q?q14OkZ90R3Odyo6IZgMQHFwgwkXLzXQAZOTTy3YjREjQuyG3/C/3SzC9O7ir?= =?us-ascii?Q?OAT5eCjHZbDiXy812ugVsxKAKGftrpiV+ezFPAngYpA1JfNMD50xra8++K/n?= =?us-ascii?Q?gIPzMSgKchGPAUP5PsftJYiZQaZpCXT4WdCRn3YlqN8RwS8mLF5gZptHdT9c?= =?us-ascii?Q?pRS7/QWaEl6H08T1zWF+yIZNvcUmH64ftT63j12G1niugQ+MGAFBhYU0xdsH?= =?us-ascii?Q?5c9bRZszRlw/7H2KJfBG8YmQigi36zX1B92wB7odfM8GgyBhP81MxwvMHhKD?= =?us-ascii?Q?mB8SA5kdPDt/THmSgHVi6hbsHTxzn8tCugD6HeOcbRcqh2Z1MMWJwEvQNsJu?= =?us-ascii?Q?TtGAc7i/K0xkBAJ6pp44845Rsk1wqH1bdXVCsNcL5KlnWxl0Fb+xhGCxHlQB?= =?us-ascii?Q?coHjX7Z9pwRdEu6Qw9REIdm0hCaE1iBWLUiv+ZWFBSKvdFt3YI+CRPiGcztx?= =?us-ascii?Q?9Y9e8LPmBE/FnuEbLHqcE6YRi6EvxN0YF1efbFJaZ1DgXm/89zBO6NzQUYM5?= 
Content-Type: text/plain; charset="utf-8"

The RDMA driver needs an interrupt for an event queue. Export a
function from the net driver to allocate an interrupt.
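The allocation scheme this patch exports can be sketched in a minimal userspace model: a bitmap of interrupt indices, where allocation takes the first clear bit and free clears it again. This is only an illustration of the `find_first_zero_bit()`/`set_bit()`/`clear_bit()` pattern used in `ionic_intr_alloc()`; the `model_*` names and sizes below are hypothetical, not the driver's.

```c
#include <assert.h>

#define MODEL_NINTRS 32

/* One bit per interrupt index, standing in for ionic->intrs. */
static unsigned long intrs_bitmap;

/* Reserve the first free index, as find_first_zero_bit() + set_bit()
 * do in ionic_intr_alloc(); -1 stands in for -ENOSPC. */
static int model_intr_alloc(void)
{
	int i;

	for (i = 0; i < MODEL_NINTRS; i++) {
		if (!(intrs_bitmap & (1UL << i))) {
			intrs_bitmap |= 1UL << i;
			return i;
		}
	}
	return -1;
}

/* Release an index so it can be reserved again (clear_bit()),
 * guarding against unassigned/out-of-range indices as the
 * driver's ionic_intr_free() does. */
static void model_intr_free(int index)
{
	if (index >= 0 && index < MODEL_NINTRS)
		intrs_bitmap &= ~(1UL << index);
}
```

Freeing an index makes it the first candidate for the next allocation, which is why the driver must clear the bit on the `ionic_bus_get_irq()` failure path before returning the error.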
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h | 43 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h | 13 ------
 .../net/ethernet/pensando/ionic/ionic_lif.c | 38 ++++++++--------
 3 files changed, 62 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index e0b766d1769f..5fd23aa8c5a1 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -32,6 +32,29 @@ struct ionic_admin_ctx {
 	union ionic_adminq_comp comp;
 };
 
+#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
+#define IONIC_INTR_NAME_MAX_SZ		32
+
+/**
+ * struct ionic_intr_info - Interrupt information
+ * @name: Name identifier
+ * @rearm_count: Interrupt rearm count
+ * @index: Interrupt index position
+ * @vector: Interrupt number
+ * @dim_coal_hw: Interrupt coalesce value in hardware units
+ * @affinity_mask: CPU affinity mask
+ * @aff_notify: context for notification of IRQ affinity changes
+ */
+struct ionic_intr_info {
+	char name[IONIC_INTR_NAME_MAX_SZ];
+	u64 rearm_count;
+	unsigned int index;
+	unsigned int vector;
+	u32 dim_coal_hw;
+	cpumask_var_t *affinity_mask;
+	struct irq_affinity_notify aff_notify;
+};
+
 /**
  * ionic_adminq_post_wait - Post an admin command and wait for response
  * @lif: Logical interface
@@ -63,4 +86,24 @@ int ionic_error_to_errno(enum ionic_status_code code);
  */
 void ionic_request_rdma_reset(struct ionic_lif *lif);
 
+/**
+ * ionic_intr_alloc - Reserve a device interrupt
+ * @lif: Logical interface
+ * @intr: Reserved ionic interrupt structure
+ *
+ * Reserve an interrupt index and get irq number for that index.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
+
+/**
+ * ionic_intr_free - Release a device interrupt index
+ * @lif: Logical interface
+ * @intr: Interrupt index
+ *
+ * Mark the interrupt index unused so that it can be reserved again.
+ */
+void ionic_intr_free(struct ionic_lif *lif, int intr);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index bc26eb8f5779..68cf4da3c6b3 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -274,19 +274,6 @@ struct ionic_queue {
 	char name[IONIC_QUEUE_NAME_MAX_SZ];
 } ____cacheline_aligned_in_smp;
 
-#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
-#define IONIC_INTR_NAME_MAX_SZ		32
-
-struct ionic_intr_info {
-	char name[IONIC_INTR_NAME_MAX_SZ];
-	u64 rearm_count;
-	unsigned int index;
-	unsigned int vector;
-	u32 dim_coal_hw;
-	cpumask_var_t *affinity_mask;
-	struct irq_affinity_notify aff_notify;
-};
-
 struct ionic_cq {
 	struct ionic_lif *lif;
 	struct ionic_queue *bound_q;
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 8ed5d2e5fde4..276024002484 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -244,29 +244,36 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq)
 			       0, intr->name, &qcq->napi);
 }
 
-static int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
 {
 	struct ionic *ionic = lif->ionic;
-	int index;
+	int index, err;
 
 	index = find_first_zero_bit(ionic->intrs, ionic->nintrs);
-	if (index == ionic->nintrs) {
-		netdev_warn(lif->netdev, "%s: no intr, index=%d nintrs=%d\n",
-			    __func__, index, ionic->nintrs);
+	if (index == ionic->nintrs)
 		return -ENOSPC;
-	}
 
 	set_bit(index, ionic->intrs);
 	ionic_intr_init(&ionic->idev, intr, index);
 
+	err = ionic_bus_get_irq(ionic, intr->index);
+	if (err < 0) {
+		clear_bit(index, ionic->intrs);
+		return err;
+	}
+
+	intr->vector = err;
+
 	return 0;
 }
+EXPORT_SYMBOL_NS(ionic_intr_alloc, "NET_IONIC");
 
-static void ionic_intr_free(struct ionic *ionic, int index)
+void ionic_intr_free(struct ionic_lif *lif, int index)
 {
-	if (index != IONIC_INTR_INDEX_NOT_ASSIGNED && index < ionic->nintrs)
-		clear_bit(index, ionic->intrs);
+	if (index != IONIC_INTR_INDEX_NOT_ASSIGNED && index < lif->ionic->nintrs)
+		clear_bit(index, lif->ionic->intrs);
 }
+EXPORT_SYMBOL_NS(ionic_intr_free, "NET_IONIC");
 
 static void ionic_irq_aff_notify(struct irq_affinity_notify *notify,
 				 const cpumask_t *mask)
@@ -401,7 +408,7 @@ static void ionic_qcq_intr_free(struct ionic_lif *lif, struct ionic_qcq *qcq)
 	irq_set_affinity_hint(qcq->intr.vector, NULL);
 	devm_free_irq(lif->ionic->dev, qcq->intr.vector, &qcq->napi);
 	qcq->intr.vector = 0;
-	ionic_intr_free(lif->ionic, qcq->intr.index);
+	ionic_intr_free(lif, qcq->intr.index);
 	qcq->intr.index = IONIC_INTR_INDEX_NOT_ASSIGNED;
 }
 
@@ -511,13 +518,6 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq
 		goto err_out;
 	}
 
-	err = ionic_bus_get_irq(lif->ionic, qcq->intr.index);
-	if (err < 0) {
-		netdev_warn(lif->netdev, "no vector for %s: %d\n",
-			    qcq->q.name, err);
-		goto err_out_free_intr;
-	}
-	qcq->intr.vector = err;
 	ionic_intr_mask_assert(lif->ionic->idev.intr_ctrl, qcq->intr.index,
 			       IONIC_INTR_MASK_SET);
 
@@ -546,7 +546,7 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq
 	return 0;
 
 err_out_free_intr:
-	ionic_intr_free(lif->ionic, qcq->intr.index);
+	ionic_intr_free(lif, qcq->intr.index);
 err_out:
 	return err;
 }
@@ -741,7 +741,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
 err_out_free_irq:
 	if (flags & IONIC_QCQ_F_INTR) {
 		devm_free_irq(dev, new->intr.vector, &new->napi);
-		ionic_intr_free(lif->ionic, new->intr.index);
+		ionic_intr_free(lif, new->intr.index);
 	}
 err_out_free_page_pool:
 	page_pool_destroy(new->q.page_pool);
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
CC: Abhijit Gangurde, Shannon Nelson, Pablo Cascón
Subject: [PATCH v6 06/14] net: ionic: Provide doorbell and CMB region information
Date: Wed, 3 Sep 2025 11:45:58 +0530
Message-ID: <20250903061606.4139957-7-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To:
<20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The RDMA device needs information about the controller memory bar and
doorbell capability to share with the user context. Discover the CMB
regions and express-doorbell capabilities on device init.

Reviewed-by: Shannon Nelson
Co-developed-by: Pablo Cascón
Signed-off-by: Pablo Cascón
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h |  22 ++
 .../ethernet/pensando/ionic/ionic_bus_pci.c |   2 +
 .../net/ethernet/pensando/ionic/ionic_dev.c | 270 +++++++++++++++++-
 .../net/ethernet/pensando/ionic/ionic_dev.h |  14 +-
 .../net/ethernet/pensando/ionic/ionic_if.h  |  89 ++++++
 .../net/ethernet/pensando/ionic/ionic_lif.c |   2 +-
 6 files changed, 381 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index 5fd23aa8c5a1..bd88666836b8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -106,4 +106,26 @@ int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
  */
 void ionic_intr_free(struct ionic_lif *lif, int intr);
 
+/**
+ * ionic_get_cmb - Reserve cmb pages
+ * @lif: Logical interface
+ * @pgid: First page index
+ * @pgaddr: First page bus addr (contiguous)
+ * @order: Log base two number of pages (PAGE_SIZE)
+ * @stride_log2: Size of stride to determine CMB pool
+ * @expdb: Will be set to true if this CMB region has expdb enabled
+ *
+ * Return: zero or negative error status
+ */
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb);
+
+/**
+ * ionic_put_cmb - Release cmb pages
+ * @lif: Logical interface
+ * @pgid: First page index
+ * @order: Log base two number of pages (PAGE_SIZE)
+ */
+void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index f8752b1d2790..70d86c5f52fb 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -272,6 +272,8 @@ static int ionic_setup_one(struct ionic *ionic)
 	}
 	ionic_debugfs_add_ident(ionic);
 
+	ionic_map_cmb(ionic);
+
 	err = ionic_init(ionic);
 	if (err) {
 		dev_err(dev, "Cannot init device: %d, aborting\n", err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
index 093c5358b6e8..ab27e9225c1e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
@@ -199,13 +199,201 @@ void ionic_init_devinfo(struct ionic *ionic)
 	dev_dbg(ionic->dev, "fw_version %s\n", idev->dev_info.fw_version);
 }
 
+static void ionic_map_disc_cmb(struct ionic *ionic)
+{
+	struct ionic_identity *ident = &ionic->ident;
+	u32 length_reg0, length, offset, num_regions;
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int err, sz, i;
+	u64 end;
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_discover_cmb(idev);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+	if (!err) {
+		sz = min(sizeof(ident->cmb_layout),
+			 sizeof(idev->dev_cmd_regs->data));
+		memcpy_fromio(&ident->cmb_layout,
+			      &idev->dev_cmd_regs->data, sz);
+	}
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err) {
+		dev_warn(dev, "Cannot discover CMB layout, disabling CMB\n");
+		return;
+	}
+
+	bar += 2;
+
+	num_regions = le32_to_cpu(ident->cmb_layout.num_regions);
+	if (!num_regions || num_regions > IONIC_MAX_CMB_REGIONS) {
+		dev_warn(dev, "Invalid number of CMB entries (%d)\n",
+			 num_regions);
+		return;
+	}
+
+	dev_dbg(dev, "ionic_cmb_layout_identity num_regions %d flags %x:\n",
+		num_regions, ident->cmb_layout.flags);
+
+	for (i = 0; i < num_regions; i++) {
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+		length = le32_to_cpu(ident->cmb_layout.region[i].length);
+		end = offset + length;
+
+		dev_dbg(dev, "CMB entry %d: bar_num %u cmb_type %u offset %x length %u\n",
+			i, ident->cmb_layout.region[i].bar_num,
+			ident->cmb_layout.region[i].cmb_type,
+			offset, length);
+
+		if (end > (bar->len >> IONIC_CMB_SHIFT_64K)) {
+			dev_warn(dev, "Out of bounds CMB region %d offset %x length %u\n",
+				 i, offset, length);
+			return;
+		}
+	}
+
+	/* if first entry matches PCI config, expdb is not supported */
+	if (ident->cmb_layout.region[0].bar_num == bar->res_index &&
+	    le32_to_cpu(ident->cmb_layout.region[0].length) == bar->len &&
+	    !ident->cmb_layout.region[0].offset) {
+		dev_warn(dev, "No CMB mapping discovered\n");
+		return;
+	}
+
+	/* process first entry for regular mapping */
+	length_reg0 = le32_to_cpu(ident->cmb_layout.region[0].length);
+	if (!length_reg0) {
+		dev_warn(dev, "region len = 0. No CMB mapping discovered\n");
+		return;
+	}
+
+	/* Verify first entry size matches expected 8MB size (in 64KB pages) */
+	if (length_reg0 != IONIC_BAR2_CMB_ENTRY_SIZE >> IONIC_CMB_SHIFT_64K) {
+		dev_warn(dev, "Unexpected CMB size in entry 0: %u pages\n",
+			 length_reg0);
+		return;
+	}
+
+	sz = BITS_TO_LONGS((length_reg0 << IONIC_CMB_SHIFT_64K) /
+			   PAGE_SIZE) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		dev_warn(dev, "No memory for CMB, disabling\n");
+		idev->phy_cmb_pages = 0;
+		idev->phy_cmb_expdb64_pages = 0;
+		idev->phy_cmb_expdb128_pages = 0;
+		idev->phy_cmb_expdb256_pages = 0;
+		idev->phy_cmb_expdb512_pages = 0;
+		idev->cmb_npages = 0;
+		return;
+	}
+
+	for (i = 0; i < num_regions; i++) {
+		/* check this region matches first region length as to
+		 * ease implementation
+		 */
+		if (le32_to_cpu(ident->cmb_layout.region[i].length) !=
+		    length_reg0)
+			continue;
+
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+
+		switch (ident->cmb_layout.region[i].cmb_type) {
+		case IONIC_CMB_TYPE_DEVMEM:
+			idev->phy_cmb_pages = bar->bus_addr + offset;
+			idev->cmb_npages =
+				(length_reg0 << IONIC_CMB_SHIFT_64K) / PAGE_SIZE;
+			dev_dbg(dev, "regular cmb mapping: bar->bus_addr %pa region[%d].length %u\n",
+				&bar->bus_addr, i, length);
+			dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+				&idev->phy_cmb_pages, idev->cmb_npages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB64:
+			idev->phy_cmb_expdb64_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb64_pages %pad\n",
+				&idev->phy_cmb_expdb64_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB128:
+			idev->phy_cmb_expdb128_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb128_pages %pad\n",
+				&idev->phy_cmb_expdb128_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB256:
+			idev->phy_cmb_expdb256_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb256_pages %pad\n",
+				&idev->phy_cmb_expdb256_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB512:
+			idev->phy_cmb_expdb512_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb512_pages %pad\n",
+				&idev->phy_cmb_expdb512_pages);
+			break;
+
+		default:
+			dev_warn(dev, "[%d] Invalid cmb_type (%d)\n",
+				 i, ident->cmb_layout.region[i].cmb_type);
+			break;
+		}
+	}
+}
+
+static void ionic_map_classic_cmb(struct ionic *ionic)
+{
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int sz;
+
+	bar += 2;
+	/* classic CMB mapping */
+	idev->phy_cmb_pages = bar->bus_addr;
+	idev->cmb_npages = bar->len / PAGE_SIZE;
+	dev_dbg(dev, "classic cmb mapping: bar->bus_addr %pa bar->len %lu\n",
+		&bar->bus_addr, bar->len);
+	dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+		&idev->phy_cmb_pages, idev->cmb_npages);
+
+	sz = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
+	}
+}
+
+void ionic_map_cmb(struct ionic *ionic)
+{
+	struct pci_dev *pdev = ionic->pdev;
+	struct device *dev = ionic->dev;
+
+	if (!(pci_resource_flags(pdev, 4) & IORESOURCE_MEM)) {
+		dev_dbg(dev, "No CMB, disabling\n");
+		return;
+	}
+
+	if (ionic->ident.dev.capabilities & cpu_to_le64(IONIC_DEV_CAP_DISC_CMB))
+		ionic_map_disc_cmb(ionic);
+	else
+		ionic_map_classic_cmb(ionic);
+}
+
 int ionic_dev_setup(struct ionic *ionic)
 {
 	struct ionic_dev_bar *bar = ionic->bars;
 	unsigned int num_bars = ionic->num_bars;
 	struct ionic_dev *idev = &ionic->idev;
 	struct device *dev = ionic->dev;
-	int size;
 	u32 sig;
 	int err;
 
@@ -255,16 +443,11 @@ int ionic_dev_setup(struct ionic *ionic)
 	mutex_init(&idev->cmb_inuse_lock);
 	if (num_bars < 3 || !ionic->bars[IONIC_PCI_BAR_CMB].len) {
 		idev->cmb_inuse = NULL;
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
 		return 0;
 	}
 
-	idev->phy_cmb_pages = bar->bus_addr;
-	idev->cmb_npages = bar->len / PAGE_SIZE;
-	size = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
-	idev->cmb_inuse = kzalloc(size, GFP_KERNEL);
-	if (!idev->cmb_inuse)
-		dev_warn(dev, "No memory for CMB, disabling\n");
-
 	return 0;
 }
 
@@ -277,6 +460,11 @@ void ionic_dev_teardown(struct ionic *ionic)
 	idev->phy_cmb_pages = 0;
 	idev->cmb_npages = 0;
 
+	idev->phy_cmb_expdb64_pages = 0;
+	idev->phy_cmb_expdb128_pages = 0;
+	idev->phy_cmb_expdb256_pages = 0;
+	idev->phy_cmb_expdb512_pages = 0;
+
 	if (ionic->wq) {
 		destroy_workqueue(ionic->wq);
 		ionic->wq = NULL;
@@ -698,28 +886,79 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev)
+{
+	union ionic_dev_cmd cmd = {
+		.discover_cmb.opcode = IONIC_CMD_DISCOVER_CMB,
+	};
+
+	ionic_dev_cmd_go(idev, &cmd);
+}
+
 int ionic_db_page_num(struct ionic_lif *lif, int pid)
 {
 	return (lif->hw_index * lif->dbid_count) + pid;
 }
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order)
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb)
 {
 	struct ionic_dev *idev = &lif->ionic->idev;
-	int ret;
+	void __iomem *nonexpdb_pgptr;
+	phys_addr_t nonexpdb_pgaddr;
+	int i, idx;
 
 	mutex_lock(&idev->cmb_inuse_lock);
-	ret = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
+	idx = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 
-	if (ret < 0)
-		return ret;
+	if (idx < 0)
+		return idx;
+
+	*pgid = (u32)idx;
+
+	if (idev->phy_cmb_expdb64_pages &&
+	    stride_log2 == IONIC_EXPDB_64B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb64_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb128_pages &&
+		   stride_log2 == IONIC_EXPDB_128B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb128_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb256_pages &&
+		   stride_log2 == IONIC_EXPDB_256B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb256_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb512_pages &&
+		   stride_log2 == IONIC_EXPDB_512B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb512_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else {
+		*pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = false;
+	}
 
-	*pgid = ret;
-	*pgaddr = idev->phy_cmb_pages + ret * PAGE_SIZE;
+	/* clear the requested CMB region, 1 PAGE_SIZE ioremap at a time */
+	nonexpdb_pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+	for (i = 0; i < (1 << order); i++) {
+		nonexpdb_pgptr =
+			ioremap_wc(nonexpdb_pgaddr + i * PAGE_SIZE, PAGE_SIZE);
+		if (!nonexpdb_pgptr) {
+			ionic_put_cmb(lif, *pgid, order);
+			return -ENOMEM;
+		}
+		memset_io(nonexpdb_pgptr, 0, PAGE_SIZE);
+		iounmap(nonexpdb_pgptr);
+	}
 
 	return 0;
 }
+EXPORT_SYMBOL_NS(ionic_get_cmb, "NET_IONIC");
 
 void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 {
@@ -729,6 +968,7 @@ void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 	bitmap_release_region(idev->cmb_inuse, pgid, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 }
+EXPORT_SYMBOL_NS(ionic_put_cmb, "NET_IONIC");
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index 68cf4da3c6b3..35566f97eaea 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -35,6 +35,11 @@
 #define IONIC_RX_MIN_DOORBELL_DEADLINE	(HZ / 100)	/* 10ms */
 #define IONIC_RX_MAX_DOORBELL_DEADLINE	(HZ * 4)	/* 4s */
 
+#define IONIC_EXPDB_64B_WQE_LG2		6
+#define IONIC_EXPDB_128B_WQE_LG2	7
+#define IONIC_EXPDB_256B_WQE_LG2	8
+#define IONIC_EXPDB_512B_WQE_LG2	9
+
 struct ionic_dev_bar {
 	void __iomem *vaddr;
 	phys_addr_t bus_addr;
@@ -171,6 +176,11 @@ struct ionic_dev {
 	dma_addr_t phy_cmb_pages;
 	u32 cmb_npages;
 
+	dma_addr_t phy_cmb_expdb64_pages;
+	dma_addr_t phy_cmb_expdb128_pages;
+	dma_addr_t phy_cmb_expdb256_pages;
+	dma_addr_t phy_cmb_expdb512_pages;
+
 	u32 port_info_sz;
 	struct ionic_port_info *port_info;
 	dma_addr_t port_info_pa;
@@ -351,8 +361,8 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 
 int ionic_db_page_num(struct ionic_lif *lif, int pid);
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order);
-void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev);
+void ionic_map_cmb(struct ionic *ionic);
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index 77c3dc188264..47559c909c8b 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -56,6 +56,9 @@ enum ionic_cmd_opcode {
 	IONIC_CMD_VF_SETATTR			= 61,
 	IONIC_CMD_VF_CTRL			= 62,
 
+	/* CMB command */
+	IONIC_CMD_DISCOVER_CMB			= 80,
+
 	/* QoS commands */
 	IONIC_CMD_QOS_CLASS_IDENTIFY		= 240,
 	IONIC_CMD_QOS_CLASS_INIT		= 241,
@@ -269,9 +272,11 @@ union ionic_drv_identity {
 /**
  * enum ionic_dev_capability - Device capabilities
  * @IONIC_DEV_CAP_VF_CTRL:	Device supports VF ctrl operations
+ * @IONIC_DEV_CAP_DISC_CMB:	Device supports CMB discovery operations
  */
 enum ionic_dev_capability {
 	IONIC_DEV_CAP_VF_CTRL	= BIT(0),
+	IONIC_DEV_CAP_DISC_CMB	= BIT(1),
 };
 
 /**
@@ -395,6 +400,7 @@ enum ionic_logical_qtype {
  * @IONIC_Q_F_4X_DESC:		Quadruple main descriptor size
  * @IONIC_Q_F_4X_CQ_DESC:	Quadruple cq descriptor size
  * @IONIC_Q_F_4X_SG_DESC:	Quadruple sg descriptor size
+ * @IONIC_QIDENT_F_EXPDB:	Queue supports express doorbell
  */
 enum ionic_q_feature {
 	IONIC_QIDENT_F_CQ		= BIT_ULL(0),
@@ -407,6 +413,7 @@ enum ionic_q_feature {
 	IONIC_Q_F_4X_DESC		= BIT_ULL(7),
 	IONIC_Q_F_4X_CQ_DESC		= BIT_ULL(8),
 	IONIC_Q_F_4X_SG_DESC		= BIT_ULL(9),
+	IONIC_QIDENT_F_EXPDB		= BIT_ULL(10),
 };
 
 /**
@@ -2213,6 +2220,80 @@ struct ionic_vf_ctrl_comp {
 	u8 rsvd[15];
 };
 
+/**
+ * struct ionic_discover_cmb_cmd - CMB discovery command
+ * @opcode:	Opcode for the command
+ * @rsvd:	Reserved bytes
+ */
+struct ionic_discover_cmb_cmd {
+	u8 opcode;
+	u8 rsvd[63];
+};
+
+/**
+ * struct ionic_discover_cmb_comp - CMB discover command completion.
+ * @status:	Status of the command (enum ionic_status_code)
+ * @rsvd:	Reserved bytes
+ */
+struct ionic_discover_cmb_comp {
+	u8 status;
+	u8 rsvd[15];
+};
+
+#define IONIC_MAX_CMB_REGIONS	16
+#define IONIC_CMB_SHIFT_64K	16
+
+enum ionic_cmb_type {
+	IONIC_CMB_TYPE_DEVMEM	= 0,
+	IONIC_CMB_TYPE_EXPDB64	= 1,
+	IONIC_CMB_TYPE_EXPDB128	= 2,
+	IONIC_CMB_TYPE_EXPDB256	= 3,
+	IONIC_CMB_TYPE_EXPDB512	= 4,
+};
+
+/**
+ * union ionic_cmb_region - Configuration for CMB region
+ * @bar_num:	CMB mapping number from FW
+ * @cmb_type:	Type of CMB this region describes (enum ionic_cmb_type)
+ * @rsvd:	Reserved
+ * @offset:	Offset within BAR in 64KB pages
+ * @length:	Length of the CMB region
+ * @words:	32-bit words for direct access to the entire region
+ */
+union ionic_cmb_region {
+	struct {
+		u8	bar_num;
+		u8	cmb_type;
+		u8	rsvd[6];
+		__le32	offset;
+		__le32	length;
+	} __packed;
+	__le32	words[4];
+};
+
+/**
+ * union ionic_discover_cmb_identity - CMB layout identity structure
+ * @num_regions:	Number of CMB regions, up to 16
+ * @flags:		Feature and capability bits (0 for express
+ *			doorbell, 1 for 4K alignment indicator,
+ *			31-24 for version information)
+ * @region:		CMB mappings region, entry 0 for regular
+ *			mapping, entries 1-7 for WQE sizes 64,
+ *			128,
256, 512, 1024, 2048 and 4096 bytes + * @words: Full union buffer size + */ +union ionic_discover_cmb_identity { + struct { + __le32 num_regions; +#define IONIC_CMB_FLAG_EXPDB BIT(0) +#define IONIC_CMB_FLAG_4KALIGN BIT(1) +#define IONIC_CMB_FLAG_VERSION 0xff000000 + __le32 flags; + union ionic_cmb_region region[IONIC_MAX_CMB_REGIONS]; + }; + __le32 words[478]; +}; + /** * struct ionic_qos_identify_cmd - QoS identify command * @opcode: opcode @@ -3073,6 +3154,8 @@ union ionic_dev_cmd { struct ionic_vf_getattr_cmd vf_getattr; struct ionic_vf_ctrl_cmd vf_ctrl; =20 + struct ionic_discover_cmb_cmd discover_cmb; + struct ionic_lif_identify_cmd lif_identify; struct ionic_lif_init_cmd lif_init; struct ionic_lif_reset_cmd lif_reset; @@ -3112,6 +3195,8 @@ union ionic_dev_cmd_comp { struct ionic_vf_getattr_comp vf_getattr; struct ionic_vf_ctrl_comp vf_ctrl; =20 + struct ionic_discover_cmb_comp discover_cmb; + struct ionic_lif_identify_comp lif_identify; struct ionic_lif_init_comp lif_init; ionic_lif_reset_comp lif_reset; @@ -3253,6 +3338,9 @@ union ionic_adminq_comp { #define IONIC_BAR0_DEV_CMD_DATA_REGS_OFFSET 0x0c00 #define IONIC_BAR0_INTR_STATUS_OFFSET 0x1000 #define IONIC_BAR0_INTR_CTRL_OFFSET 0x2000 + +/* BAR2 */ +#define IONIC_BAR2_CMB_ENTRY_SIZE 0x800000 #define IONIC_DEV_CMD_DONE 0x00000001 =20 #define IONIC_ASIC_TYPE_NONE 0 @@ -3306,6 +3394,7 @@ struct ionic_identity { union ionic_port_identity port; union ionic_qos_identity qos; union ionic_q_identity txq; + union ionic_discover_cmb_identity cmb_layout; }; =20 #endif /* _IONIC_IF_H_ */ diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/= ethernet/pensando/ionic/ionic_lif.c index 276024002484..b28966ae50c2 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -673,7 +673,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsig= ned int type, new->cmb_order =3D order_base_2(new->cmb_q_size / PAGE_SIZE); =20 err =3D 
ionic_get_cmb(lif, &new->cmb_pgid, &new->cmb_q_base_pa, - new->cmb_order); + new->cmb_order, 0, NULL); if (err) { netdev_err(lif->netdev, "Cannot allocate queue order %d from cmb: err %d\n", --=20 2.43.0 From nobody Fri Oct 3 08:51:12 2025 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2085.outbound.protection.outlook.com [40.107.220.85]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E0382D7DF1; Wed, 3 Sep 2025 06:17:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.220.85 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880275; cv=fail; b=CI7iHTBe1FL/RkUhC3Jl8AkxaCWwsCEe7I6ZPaZV7rHOrxa73JYn6HOgP3QhlEy07gPzUozpelOq+jNXNH8FeDCNDYh+gdcFZpzaOZG0yT68CaaK+fjhTaJZxPu9VSDIuDh3m9iNi6NUKYmt6P5X2d18a120vaDGpDmLYotthnQ= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880275; c=relaxed/simple; bh=NfROQPYeGxHoicQQxDjEDs5h42zqtaKn9+xjzZTPuQY=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=Asq0hkueuHFQLzr2xvBBjCBJZHZ2spxUxG0jf9/vpjTTGmZU1M5wEd/bnPihI9SPicfDEXII8BXPpasX3XDG5vHK3yiTeN/raOmtlTaVCrHqC/j2vOtJp/RMJsWhRpt4+zQ3d5IdopGi0lwwWjGPoyREHeSL7fC27Yk93SZqwXA= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=iuXEf1pg; arc=fail smtp.client-ip=40.107.220.85 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b="iuXEf1pg" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; 
From: Abhijit Gangurde
Subject: [PATCH v6 07/14] RDMA: Add IONIC to rdma_driver_id definition
Date: Wed, 3 Sep 2025 11:45:59 +0530
Message-ID: <20250903061606.4139957-8-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Define RDMA_DRIVER_IONIC in enum rdma_driver_id.
Signed-off-by: Abhijit Gangurde
---
 include/uapi/rdma/ib_user_ioctl_verbs.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index fe15bc7e9f70..89e6a3f13191 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -255,6 +255,7 @@ enum rdma_driver_id {
 	RDMA_DRIVER_SIW,
 	RDMA_DRIVER_ERDMA,
 	RDMA_DRIVER_MANA,
+	RDMA_DRIVER_IONIC,
 };
 
 enum ib_uverbs_gid_type {
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
CC: Andrew Boyer
Subject: [PATCH v6 08/14] RDMA/ionic: Register auxiliary module for ionic ethernet adapter
Date: Wed, 3 Sep 2025 11:46:00 +0530
Message-ID: <20250903061606.4139957-9-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Register auxiliary module to create ibdevice for ionic ethernet adapter.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v1->v2
- Removed netdev references from ionic RDMA driver
- Moved to ionic_lif* instead of void* to convey information between
  aux devices and drivers.

 drivers/infiniband/hw/ionic/ionic_ibdev.c   | 131 ++++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.h   |  18 +++
 drivers/infiniband/hw/ionic/ionic_lif_cfg.c | 101 +++++++++++++++
 drivers/infiniband/hw/ionic/ionic_lif_cfg.h |  64 ++++++++++
 4 files changed, 314 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.h

diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
new file mode 100644
index 000000000000..d79470dae13a
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+#include
+
+#include "ionic_ibdev.h"
+
+#define DRIVER_DESCRIPTION "AMD Pensando RoCE HCA driver"
+#define DEVICE_DESCRIPTION "AMD Pensando RoCE HCA"
+
+MODULE_AUTHOR("Allen Hubbe ");
+MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("NET_IONIC");
+
+static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
+{
+	ib_unregister_device(&dev->ibdev);
+	ib_dealloc_device(&dev->ibdev);
+}
+
+static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
+{
+	struct ib_device *ibdev;
+	struct ionic_ibdev *dev;
+	struct net_device *ndev;
+	int rc;
+
+	dev = ib_alloc_device(ionic_ibdev, ibdev);
+	if (!dev)
+		return ERR_PTR(-EINVAL);
+
+	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
+
+	ibdev = &dev->ibdev;
+	ibdev->dev.parent = dev->lif_cfg.hwdev;
+
+	strscpy(ibdev->name, "ionic_%d", IB_DEVICE_NAME_MAX);
+	strscpy(ibdev->node_desc, DEVICE_DESCRIPTION, IB_DEVICE_NODE_DESC_MAX);
+
+	ibdev->node_type = RDMA_NODE_IB_CA;
+	ibdev->phys_port_cnt = 1;
+
+	/* the first two eq are reserved for async events */
+	ibdev->num_comp_vectors = dev->lif_cfg.eq_count - 2;
+
+	ndev = ionic_lif_netdev(ionic_adev->lif);
+	addrconf_ifid_eui48((u8 *)&ibdev->node_guid, ndev);
+	rc = ib_device_set_netdev(ibdev, ndev, 1);
+	/* ionic_lif_netdev() returns ndev with refcount held */
+	dev_put(ndev);
+	if (rc)
+		goto err_admin;
+
+	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
+	if (rc)
+		goto err_register;
+
+	return dev;
+
+err_register:
+err_admin:
+	ib_dealloc_device(&dev->ibdev);
+
+	return ERR_PTR(rc);
+}
+
+static int ionic_aux_probe(struct auxiliary_device *adev,
+			   const struct auxiliary_device_id *id)
+{
+	struct ionic_aux_dev *ionic_adev;
+	struct ionic_ibdev *dev;
+
+	ionic_adev = container_of(adev, struct ionic_aux_dev, adev);
+	dev = ionic_create_ibdev(ionic_adev);
+	if (IS_ERR(dev))
+		return dev_err_probe(&adev->dev, PTR_ERR(dev),
+				     "Failed to register ibdev\n");
+
+	auxiliary_set_drvdata(adev, dev);
+	ibdev_dbg(&dev->ibdev, "registered\n");
+
+	return 0;
+}
+
+static void ionic_aux_remove(struct auxiliary_device *adev)
+{
+	struct ionic_ibdev *dev = auxiliary_get_drvdata(adev);
+
+	dev_dbg(&adev->dev, "unregister ibdev\n");
+	ionic_destroy_ibdev(dev);
+	dev_dbg(&adev->dev, "unregistered\n");
+}
+
+static const struct auxiliary_device_id ionic_aux_id_table[] = {
+	{ .name = "ionic.rdma", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(auxiliary, ionic_aux_id_table);
+
+static struct auxiliary_driver ionic_aux_r_driver = {
+	.name = "rdma",
+	.probe = ionic_aux_probe,
+	.remove = ionic_aux_remove,
+	.id_table = ionic_aux_id_table,
+};
+
+static int __init ionic_mod_init(void)
+{
+	int rc;
+
+	rc = auxiliary_driver_register(&ionic_aux_r_driver);
+	if (rc)
+		goto err_aux;
+
+	return 0;
+
+err_aux:
+	return rc;
+}
+
+static void __exit ionic_mod_exit(void)
+{
+	auxiliary_driver_unregister(&ionic_aux_r_driver);
+}
+
+module_init(ionic_mod_init);
+module_exit(ionic_mod_exit);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
new file mode 100644
index 000000000000..224e5e74056d
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_IBDEV_H_
+#define _IONIC_IBDEV_H_
+
+#include
+#include
+
+#include "ionic_lif_cfg.h"
+
+struct ionic_ibdev {
+	struct ib_device ibdev;
+
+	struct ionic_lif_cfg lif_cfg;
+};
+
+#endif /* _IONIC_IBDEV_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
new file mode 100644
index 000000000000..8d0d209227e9
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+
+#include
+#include
+
+#include "ionic_lif_cfg.h"
+
+#define IONIC_MIN_RDMA_VERSION 0
+#define IONIC_MAX_RDMA_VERSION 2
+
+static u8 ionic_get_expdb(struct ionic_lif *lif)
+{
+	u8 expdb_support = 0;
+
+	if (lif->ionic->idev.phy_cmb_expdb64_pages)
+		expdb_support |= IONIC_EXPDB_64B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb128_pages)
+		expdb_support |= IONIC_EXPDB_128B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb256_pages)
+		expdb_support |= IONIC_EXPDB_256B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb512_pages)
+		expdb_support |= IONIC_EXPDB_512B_WQE;
+
+	return expdb_support;
+}
+
+void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg)
+{
+	union ionic_lif_identity *ident = &lif->ionic->ident.lif;
+
+	cfg->lif = lif;
+	cfg->hwdev = &lif->ionic->pdev->dev;
+	cfg->lif_index = lif->index;
+	cfg->lif_hw_index = lif->hw_index;
+
+	cfg->dbid = lif->kern_pid;
+	cfg->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+	cfg->dbpage = lif->kern_dbpage;
+	cfg->intr_ctrl = lif->ionic->idev.intr_ctrl;
+
+	cfg->db_phys = lif->ionic->bars[IONIC_PCI_BAR_DBELL].bus_addr;
+
+	if (IONIC_VERSION(ident->rdma.version, ident->rdma.minor_version) >=
+	    IONIC_VERSION(2, 1))
+		cfg->page_size_supported =
+			le64_to_cpu(ident->rdma.page_size_cap);
+	else
+		cfg->page_size_supported = IONIC_PAGE_SIZE_SUPPORTED;
+
+	cfg->rdma_version = ident->rdma.version;
+	cfg->qp_opcodes = ident->rdma.qp_opcodes;
+	cfg->admin_opcodes = ident->rdma.admin_opcodes;
+
+	cfg->stats_type = le16_to_cpu(ident->rdma.stats_type);
+	cfg->npts_per_lif = le32_to_cpu(ident->rdma.npts_per_lif);
+	cfg->nmrs_per_lif = le32_to_cpu(ident->rdma.nmrs_per_lif);
+	cfg->nahs_per_lif = le32_to_cpu(ident->rdma.nahs_per_lif);
+
+	cfg->aq_base = le32_to_cpu(ident->rdma.aq_qtype.qid_base);
+	cfg->cq_base = le32_to_cpu(ident->rdma.cq_qtype.qid_base);
+	cfg->eq_base = le32_to_cpu(ident->rdma.eq_qtype.qid_base);
+
+	/*
+	 * ionic_create_rdma_admin() may reduce aq_count or eq_count if
+	 * it is unable to allocate all that were requested.
+	 * aq_count is tunable; see ionic_aq_count
+	 * eq_count is tunable; see ionic_eq_count
+	 */
+	cfg->aq_count = le32_to_cpu(ident->rdma.aq_qtype.qid_count);
+	cfg->eq_count = le32_to_cpu(ident->rdma.eq_qtype.qid_count);
+	cfg->cq_count = le32_to_cpu(ident->rdma.cq_qtype.qid_count);
+	cfg->qp_count = le32_to_cpu(ident->rdma.sq_qtype.qid_count);
+	cfg->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+
+	cfg->aq_qtype = ident->rdma.aq_qtype.qtype;
+	cfg->sq_qtype = ident->rdma.sq_qtype.qtype;
+	cfg->rq_qtype = ident->rdma.rq_qtype.qtype;
+	cfg->cq_qtype = ident->rdma.cq_qtype.qtype;
+	cfg->eq_qtype = ident->rdma.eq_qtype.qtype;
+	cfg->udma_qgrp_shift = ident->rdma.udma_shift;
+	cfg->udma_count = 2;
+
+	cfg->max_stride = ident->rdma.max_stride;
+	cfg->expdb_mask = ionic_get_expdb(lif);
+
+	cfg->sq_expdb =
+		!!(lif->qtype_info[IONIC_QTYPE_TXQ].features & IONIC_QIDENT_F_EXPDB);
+	cfg->rq_expdb =
+		!!(lif->qtype_info[IONIC_QTYPE_RXQ].features & IONIC_QIDENT_F_EXPDB);
+}
+
+struct net_device *ionic_lif_netdev(struct ionic_lif *lif)
+{
+	struct net_device *netdev = lif->netdev;
+
+	dev_hold(netdev);
+	return netdev;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
new file mode 100644
index 000000000000..5b04b8a9937e
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_LIF_CFG_H_
+
+#define IONIC_VERSION(a, b) (((a) << 16) + ((b) << 8))
+#define IONIC_PAGE_SIZE_SUPPORTED	0x40201000 /* 4kb, 2Mb, 1Gb */
+
+#define IONIC_EXPDB_64B_WQE	BIT(0)
+#define IONIC_EXPDB_128B_WQE	BIT(1)
+#define IONIC_EXPDB_256B_WQE	BIT(2)
+#define IONIC_EXPDB_512B_WQE	BIT(3)
+
+struct ionic_lif_cfg {
+	struct device *hwdev;
+	struct ionic_lif *lif;
+
+	int lif_index;
+	int lif_hw_index;
+
+	u32 dbid;
+	int dbid_count;
+	u64 __iomem *dbpage;
+	struct ionic_intr __iomem *intr_ctrl;
+	phys_addr_t db_phys;
+
+	u64 page_size_supported;
+	u32 npts_per_lif;
+	u32 nmrs_per_lif;
+	u32 nahs_per_lif;
+
+	u32 aq_base;
+	u32 cq_base;
+	u32 eq_base;
+
+	int aq_count;
+	int eq_count;
+	int cq_count;
+	int qp_count;
+
+	u16 stats_type;
+	u8 aq_qtype;
+	u8 sq_qtype;
+	u8 rq_qtype;
+	u8 cq_qtype;
+	u8 eq_qtype;
+
+	u8 udma_count;
+	u8 udma_qgrp_shift;
+
+	u8 rdma_version;
+	u8 qp_opcodes;
+	u8 admin_opcodes;
+
+	u8 max_stride;
+	bool sq_expdb;
+	bool rq_expdb;
+	u8 expdb_mask;
+};
+
+void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
+struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+
+#endif /* _IONIC_LIF_CFG_H_ */
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
CC: Abhijit Gangurde, Andrew Boyer
Subject: [PATCH v6 09/14] RDMA/ionic: Create device queues to support admin operations
Date: Wed, 3 Sep 2025 11:46:01 +0530
Message-ID: <20250903061606.4139957-10-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Set up RDMA admin queues using a device command exposed over the
auxiliary device, and manage these queues using an IDA.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v3->v4
- Used xa lock instead of rcu lock for qp and cq access to handle async events
- Improved comments
- Removed unwanted warning and error prints
v2->v3
- Fixed lockdep warning
- Used IDA for resource id allocation
- Removed rw locks around xarrays

 drivers/infiniband/hw/ionic/ionic_admin.c     | 1124 +++++++++++++++++
 .../infiniband/hw/ionic/ionic_controlpath.c   |  181 +++
 drivers/infiniband/hw/ionic/ionic_fw.h        |  164 +++
 drivers/infiniband/hw/ionic/ionic_ibdev.c     |   56 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h     |  222 ++++
 drivers/infiniband/hw/ionic/ionic_pgtbl.c     |  113 ++
 drivers/infiniband/hw/ionic/ionic_queue.c     |   52 +
 drivers/infiniband/hw/ionic/ionic_queue.h     |  234 ++++
 drivers/infiniband/hw/ionic/ionic_res.h       |  154 +++
 9 files changed, 2300 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_admin.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_controlpath.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_fw.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_pgtbl.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_res.h

diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband/hw/ionic/ionic_admin.c
new file mode 100644
index 000000000000..845c03f6d9fb
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_admin.c
@@ -0,0 +1,1124 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
 */
+
+#include
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+#define IONIC_EQ_COUNT_MIN	4
+#define IONIC_AQ_COUNT_MIN	1
+
+/* not a valid queue position or negative error status */
+#define IONIC_ADMIN_POSTED	0x10000
+
+/* cpu can be held with irq disabled for COUNT * MS (for create/destroy_ah) */
+#define IONIC_ADMIN_BUSY_RETRY_COUNT	2000
+#define IONIC_ADMIN_BUSY_RETRY_MS	1
+
+/* admin queue will be considered failed if a command takes longer */
+#define IONIC_ADMIN_TIMEOUT	(HZ * 2)
+#define IONIC_ADMIN_WARN	(HZ / 8)
+
+/* will poll the admin cq to tolerate and report a missed event */
+#define IONIC_ADMIN_DELAY	(HZ / 8)
+
+/* work queue for polling the event queue and admin cq */
+struct workqueue_struct *ionic_evt_workq;
+
+static void ionic_admin_timedout(struct ionic_aq *aq)
+{
+	struct ionic_ibdev *dev = aq->dev;
+	unsigned long irqflags;
+	u16 pos;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	if (ionic_queue_empty(&aq->q))
+		goto out;
+
+	/* Reset ALL adminq if any one times out */
+	if (atomic_read(&aq->admin_state) < IONIC_ADMIN_KILLED)
+		queue_work(ionic_evt_workq, &dev->reset_work);
+
+	ibdev_err(&dev->ibdev, "admin command timed out, aq %d after: %ums\n",
+		  aq->aqid, (u32)jiffies_to_msecs(jiffies - aq->stamp));
+
+	pos = (aq->q.prod - 1) & aq->q.mask;
+	if (pos == aq->q.cons)
+		goto out;
+
+	ibdev_warn(&dev->ibdev, "admin pos %u (last posted)\n", pos);
+	print_hex_dump(KERN_WARNING, "cmd ", DUMP_PREFIX_OFFSET, 16, 1,
+		       ionic_queue_at(&aq->q, pos),
+		       BIT(aq->q.stride_log2), true);
+
+out:
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_admin_reset_dwork(struct ionic_ibdev *dev)
+{
+	if (atomic_read(&dev->admin_state) == IONIC_ADMIN_KILLED)
+		return;
+
+	queue_delayed_work(ionic_evt_workq, &dev->admin_dwork,
+			   IONIC_ADMIN_DELAY);
+}
+
+static void ionic_admin_reset_wdog(struct ionic_aq *aq)
+{
+	if (atomic_read(&aq->admin_state) == IONIC_ADMIN_KILLED)
+		return;
+
+	aq->stamp = jiffies;
+	ionic_admin_reset_dwork(aq->dev);
+}
+
+static bool ionic_admin_next_cqe(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				 struct ionic_v1_cqe **cqe)
+{
+	struct ionic_v1_cqe *qcqe = ionic_queue_at_prod(&cq->q);
+
+	if (unlikely(cq->color != ionic_v1_cqe_color(qcqe)))
+		return false;
+
+	/* Prevent out-of-order reads of the CQE */
+	dma_rmb();
+	*cqe = qcqe;
+
+	return true;
+}
+
+static void ionic_admin_poll_locked(struct ionic_aq *aq)
+{
+	struct ionic_cq *cq = &aq->vcq->cq[0];
+	struct ionic_admin_wr *wr, *wr_next;
+	struct ionic_ibdev *dev = aq->dev;
+	u32 wr_strides, avlbl_strides;
+	struct ionic_v1_cqe *cqe;
+	u32 qtf, qid;
+	u16 old_prod;
+	u8 type;
+
+	lockdep_assert_held(&aq->lock);
+
+	if (atomic_read(&aq->admin_state) == IONIC_ADMIN_KILLED) {
+		list_for_each_entry_safe(wr, wr_next, &aq->wr_prod, aq_ent) {
+			INIT_LIST_HEAD(&wr->aq_ent);
+			aq->q_wr[wr->status].wr = NULL;
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+		INIT_LIST_HEAD(&aq->wr_prod);
+
+		list_for_each_entry_safe(wr, wr_next, &aq->wr_post, aq_ent) {
+			INIT_LIST_HEAD(&wr->aq_ent);
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+		INIT_LIST_HEAD(&aq->wr_post);
+
+		return;
+	}
+
+	old_prod = cq->q.prod;
+
+	while (ionic_admin_next_cqe(dev, cq, &cqe)) {
+		qtf = ionic_v1_cqe_qtf(cqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		if (unlikely(type != IONIC_V1_CQE_TYPE_ADMIN)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe type %u\n", type);
+			goto cq_next;
+		}
+
+		if (unlikely(qid != aq->aqid)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe qid %u\n", qid);
+			goto cq_next;
+		}
+
+		if (unlikely(be16_to_cpu(cqe->admin.cmd_idx) != aq->q.cons)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad idx %u cons %u qid %u\n",
+					       be16_to_cpu(cqe->admin.cmd_idx),
+					       aq->q.cons, qid);
+			goto cq_next;
+		}
+
+		if (unlikely(ionic_queue_empty(&aq->q))) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe for empty adminq\n");
+			goto cq_next;
+		}
+
+		wr = aq->q_wr[aq->q.cons].wr;
+		if (wr) {
+			aq->q_wr[aq->q.cons].wr = NULL;
+			list_del_init(&wr->aq_ent);
+
+			wr->cqe = *cqe;
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+
+		ionic_queue_consume_entries(&aq->q,
+					    aq->q_wr[aq->q.cons].wqe_strides);
+
+cq_next:
+		ionic_queue_produce(&cq->q);
+		cq->color = ionic_color_wrap(cq->q.prod, cq->color);
+	}
+
+	if (old_prod != cq->q.prod) {
+		ionic_admin_reset_wdog(aq);
+		cq->q.cons = cq->q.prod;
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 ionic_queue_dbell_val(&cq->q));
+		queue_work(ionic_evt_workq, &aq->work);
+	} else if (!aq->armed) {
+		aq->armed = true;
+		cq->arm_any_prod = ionic_queue_next(&cq->q, cq->arm_any_prod);
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 cq->q.dbell | IONIC_CQ_RING_ARM |
+				 cq->arm_any_prod);
+		queue_work(ionic_evt_workq, &aq->work);
+	}
+
+	if (atomic_read(&aq->admin_state) != IONIC_ADMIN_ACTIVE)
+		return;
+
+	old_prod = aq->q.prod;
+
+	if (ionic_queue_empty(&aq->q) && !list_empty(&aq->wr_post))
+		ionic_admin_reset_wdog(aq);
+
+	if (list_empty(&aq->wr_post))
+		return;
+
+	do {
+		u8 *src;
+		int i, src_len;
+		size_t stride_len;
+
+		wr = list_first_entry(&aq->wr_post, struct ionic_admin_wr,
+				      aq_ent);
+		wr_strides = (le16_to_cpu(wr->wqe.len) + ADMIN_WQE_HDR_LEN +
+			      (ADMIN_WQE_STRIDE - 1)) >> aq->q.stride_log2;
+		avlbl_strides = ionic_queue_length_remaining(&aq->q);
+
+		if (wr_strides > avlbl_strides)
+			break;
+
+		list_move(&wr->aq_ent, &aq->wr_prod);
+		wr->status = aq->q.prod;
+		aq->q_wr[aq->q.prod].wr = wr;
+		aq->q_wr[aq->q.prod].wqe_strides = wr_strides;
+
+		src_len = le16_to_cpu(wr->wqe.len);
+		src = (uint8_t *)&wr->wqe.cmd;
+
+		/* First stride */
+		memcpy(ionic_queue_at_prod(&aq->q), &wr->wqe,
+		       ADMIN_WQE_HDR_LEN);
+		stride_len = ADMIN_WQE_STRIDE - ADMIN_WQE_HDR_LEN;
+		if (stride_len > src_len)
+			stride_len = src_len;
+		memcpy(ionic_queue_at_prod(&aq->q) + ADMIN_WQE_HDR_LEN,
+		       src, stride_len);
+		ibdev_dbg(&dev->ibdev, "post admin prod %u (%u strides)\n",
+			  aq->q.prod, wr_strides);
+		print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1,
+				     ionic_queue_at_prod(&aq->q),
+				     BIT(aq->q.stride_log2), true);
+		ionic_queue_produce(&aq->q);
+
+		/* Remaining strides */
+		for (i = stride_len; i < src_len; i += stride_len) {
+			stride_len = ADMIN_WQE_STRIDE;
+
+			if (i + stride_len > src_len)
+				stride_len = src_len - i;
+
+			memcpy(ionic_queue_at_prod(&aq->q), src + i,
+			       stride_len);
+			print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1,
+					     ionic_queue_at_prod(&aq->q),
+					     BIT(aq->q.stride_log2), true);
+			ionic_queue_produce(&aq->q);
+		}
+	} while (!list_empty(&aq->wr_post));
+
+	if (old_prod != aq->q.prod)
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.aq_qtype,
+				 ionic_queue_dbell_val(&aq->q));
+}
+
+static void ionic_admin_dwork(struct work_struct *ws)
+{
+	struct ionic_ibdev *dev =
+		container_of(ws, struct ionic_ibdev, admin_dwork.work);
+	struct ionic_aq *aq, *bad_aq = NULL;
+	bool do_reschedule = false;
+	unsigned long irqflags;
+	bool do_reset = false;
+	u16 pos;
+	int i;
+
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		aq = dev->aq_vec[i];
+
+		spin_lock_irqsave(&aq->lock, irqflags);
+
+		if (ionic_queue_empty(&aq->q))
+			goto next_aq;
+
+		/* Reschedule if any queue has outstanding work */
+		do_reschedule = true;
+
+		if (time_is_after_eq_jiffies(aq->stamp + IONIC_ADMIN_WARN))
+			/* Warning threshold not met, nothing to do */
+			goto next_aq;
+
+		/* See if polling now makes some progress */
+		pos = aq->q.cons;
+		ionic_admin_poll_locked(aq);
+		if (pos != aq->q.cons) {
+			ibdev_dbg(&dev->ibdev,
+				  "missed event for acq %d\n", aq->cqid);
+			goto next_aq;
+		}
+
+		if (time_is_after_eq_jiffies(aq->stamp +
+					     IONIC_ADMIN_TIMEOUT)) {
+			/* Timeout threshold not met */
+			ibdev_dbg(&dev->ibdev, "no progress after %ums\n",
+				  (u32)jiffies_to_msecs(jiffies - aq->stamp));
+			goto next_aq;
+		}
+
+		/* Queue timed out */
+		bad_aq = aq;
+		do_reset = true;
+next_aq:
+		spin_unlock_irqrestore(&aq->lock, irqflags);
+	}
+
+	if (do_reset)
+		/* Reset RDMA lif on a timeout */
+		ionic_admin_timedout(bad_aq);
+	else if (do_reschedule)
+		/* Try to poll again later */
+		ionic_admin_reset_dwork(dev);
+}
+
+static void ionic_admin_work(struct work_struct *ws)
+{
+	struct ionic_aq *aq = container_of(ws, struct ionic_aq, work);
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	ionic_admin_poll_locked(aq);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_admin_post_aq(struct ionic_aq *aq, struct ionic_admin_wr *wr)
+{
+	unsigned long irqflags;
+	bool poll;
+
+	wr->status = IONIC_ADMIN_POSTED;
+	wr->aq = aq;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	poll = list_empty(&aq->wr_post);
+	list_add(&wr->aq_ent, &aq->wr_post);
+	if (poll)
+		ionic_admin_poll_locked(aq);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr)
+{
+	int aq_idx;
+
+	/* Use cpu id for the adminq selection */
+	aq_idx = raw_smp_processor_id() % dev->lif_cfg.aq_count;
+	ionic_admin_post_aq(dev->aq_vec[aq_idx], wr);
+}
+
+static void ionic_admin_cancel(struct ionic_admin_wr *wr)
+{
+	struct ionic_aq *aq = wr->aq;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+
+	if (!list_empty(&wr->aq_ent)) {
+		list_del(&wr->aq_ent);
+		if (wr->status != IONIC_ADMIN_POSTED)
+			aq->q_wr[wr->status].wr = NULL;
+	}
+
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static int ionic_admin_busy_wait(struct ionic_admin_wr *wr)
+{
+	struct ionic_aq *aq = wr->aq;
+	unsigned long irqflags;
+	int try_i;
+
+	for (try_i = 0; try_i < IONIC_ADMIN_BUSY_RETRY_COUNT; ++try_i) {
+		if (completion_done(&wr->work))
+			return 0;
+
+		mdelay(IONIC_ADMIN_BUSY_RETRY_MS);
+
+		spin_lock_irqsave(&aq->lock, irqflags);
+		ionic_admin_poll_locked(aq);
+		spin_unlock_irqrestore(&aq->lock, irqflags);
+	}
+
+	/*
+	 * We timed out.  Initiate RDMA LIF reset and indicate
+	 * error to caller.
+	 */
+	ionic_admin_timedout(aq);
+	return -ETIMEDOUT;
+}
+
+int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr,
+		     enum ionic_admin_flags flags)
+{
+	int rc, timo;
+
+	if (flags & IONIC_ADMIN_F_BUSYWAIT) {
+		/* Spin */
+		rc = ionic_admin_busy_wait(wr);
+	} else if (flags & IONIC_ADMIN_F_INTERRUPT) {
+		/*
+		 * Interruptible sleep, 1s timeout
+		 * This is used for commands which are safe for the caller
+		 * to clean up without killing and resetting the adminq.
+		 */
+		timo = wait_for_completion_interruptible_timeout(&wr->work,
+								 HZ);
+		if (timo > 0)
+			rc = 0;
+		else if (timo == 0)
+			rc = -ETIMEDOUT;
+		else
+			rc = timo;
+	} else {
+		/*
+		 * Uninterruptible sleep
+		 * This is used for commands which are NOT safe for the
+		 * caller to clean up. Cleanup must be handled by the
+		 * adminq kill and reset process so that host memory is
+		 * not corrupted by the device.
+		 */
+		wait_for_completion(&wr->work);
+		rc = 0;
+	}
+
+	if (rc) {
+		ibdev_warn(&dev->ibdev, "wait status %d\n", rc);
+		ionic_admin_cancel(wr);
+	} else if (wr->status == IONIC_ADMIN_KILLED) {
+		ibdev_dbg(&dev->ibdev, "admin killed\n");
+
+		/* No error if admin already killed during teardown */
+		rc = (flags & IONIC_ADMIN_F_TEARDOWN) ? 0 : -ENODEV;
+	} else if (ionic_v1_cqe_error(&wr->cqe)) {
+		ibdev_warn(&dev->ibdev, "opcode %u error %u\n",
+			   wr->wqe.op,
+			   be32_to_cpu(wr->cqe.status_length));
+		rc = -EINVAL;
+	}
+	return rc;
+}
+
+static int ionic_rdma_devcmd(struct ionic_ibdev *dev,
+			     struct ionic_admin_ctx *admin)
+{
+	int rc;
+
+	rc = ionic_adminq_post_wait(dev->lif_cfg.lif, admin);
+	if (rc)
+		return rc;
+
+	return ionic_error_to_errno(admin->comp.comp.status);
+}
+
+int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev)
+{
+	struct ionic_admin_ctx admin = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(admin.work),
+		.cmd.rdma_reset = {
+			.opcode = IONIC_CMD_RDMA_RESET_LIF,
+			.lif_index = cpu_to_le16(dev->lif_cfg.lif_index),
+		},
+	};
+
+	return ionic_rdma_devcmd(dev, &admin);
+}
+
+static int ionic_rdma_queue_devcmd(struct ionic_ibdev *dev,
+				   struct ionic_queue *q,
+				   u32 qid, u32 cid, u16 opcode)
+{
+	struct ionic_admin_ctx admin = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(admin.work),
+		.cmd.rdma_queue = {
+			.opcode = opcode,
+			.lif_index = cpu_to_le16(dev->lif_cfg.lif_index),
+			.qid_ver = cpu_to_le32(qid),
+			.cid = cpu_to_le32(cid),
+			.dbid = cpu_to_le16(dev->lif_cfg.dbid),
+			.depth_log2 = q->depth_log2,
+			.stride_log2 = q->stride_log2,
+			.dma_addr = cpu_to_le64(q->dma),
+		},
+	};
+
+	return ionic_rdma_devcmd(dev, &admin);
+}
+
+static void ionic_rdma_admincq_comp(struct ib_cq *ibcq, void *cq_context)
+{
+	struct ionic_aq *aq = cq_context;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	aq->armed = false;
+	if (atomic_read(&aq->admin_state) < IONIC_ADMIN_KILLED)
+		queue_work(ionic_evt_workq, &aq->work);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_rdma_admincq_event(struct ib_event *event, void *cq_context)
+{
+	struct ionic_aq *aq = cq_context;
+
+	ibdev_err(&aq->dev->ibdev, "admincq event %d\n", event->event);
+}
+
+static struct ionic_vcq *ionic_create_rdma_admincq(struct ionic_ibdev *dev,
+						   int comp_vector)
+{
+	struct ib_cq_init_attr attr = {
+		.cqe = IONIC_AQ_DEPTH,
+		.comp_vector = comp_vector,
+	};
+	struct ionic_tbl_buf buf = {};
+	struct ionic_vcq *vcq;
+	struct ionic_cq *cq;
+	int rc;
+
+	vcq = kzalloc(sizeof(*vcq), GFP_KERNEL);
+	if (!vcq)
+		return ERR_PTR(-ENOMEM);
+
+	vcq->ibcq.device = &dev->ibdev;
+	vcq->ibcq.comp_handler = ionic_rdma_admincq_comp;
+	vcq->ibcq.event_handler = ionic_rdma_admincq_event;
+	atomic_set(&vcq->ibcq.usecnt, 0);
+
+	vcq->udma_mask = 1;
+	cq = &vcq->cq[0];
+
+	rc = ionic_create_cq_common(vcq, &buf, &attr, NULL, NULL,
+				    NULL, NULL, 0);
+	if (rc)
+		goto err_init;
+
+	rc = ionic_rdma_queue_devcmd(dev, &cq->q, cq->cqid, cq->eqid,
+				     IONIC_CMD_RDMA_CREATE_CQ);
+	if (rc)
+		goto err_cmd;
+
+	return vcq;
+
+err_cmd:
+	ionic_destroy_cq_common(dev, cq);
+err_init:
+	kfree(vcq);
+
+	return ERR_PTR(rc);
+}
+
+static struct ionic_aq *__ionic_create_rdma_adminq(struct ionic_ibdev *dev,
+						   u32 aqid, u32 cqid)
+{
+	struct ionic_aq *aq;
+	int rc;
+
+	aq = kzalloc(sizeof(*aq), GFP_KERNEL);
+	if (!aq)
+		return ERR_PTR(-ENOMEM);
+
+	atomic_set(&aq->admin_state, IONIC_ADMIN_KILLED);
+	aq->dev = dev;
+	aq->aqid = aqid;
+	aq->cqid = cqid;
+	spin_lock_init(&aq->lock);
+
+	rc = ionic_queue_init(&aq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH,
+			      ADMIN_WQE_STRIDE);
+	if (rc)
+		goto err_q;
+
+	ionic_queue_dbell_init(&aq->q, aq->aqid);
+
+	aq->q_wr = kcalloc((u32)aq->q.mask + 1, sizeof(*aq->q_wr), GFP_KERNEL);
+	if (!aq->q_wr) {
+		rc = -ENOMEM;
+		goto err_wr;
+	}
+
+	INIT_LIST_HEAD(&aq->wr_prod);
+	INIT_LIST_HEAD(&aq->wr_post);
+
+	INIT_WORK(&aq->work, ionic_admin_work);
+	aq->armed = false;
+
+	return aq;
+
+err_wr:
+	ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev);
+err_q:
+	kfree(aq);
+
+	return ERR_PTR(rc);
+}
+
+static void __ionic_destroy_rdma_adminq(struct ionic_ibdev *dev,
+					struct ionic_aq *aq)
+{
+	ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev);
+	kfree(aq);
+}
+
+static struct ionic_aq *ionic_create_rdma_adminq(struct ionic_ibdev *dev,
+						 u32 aqid, u32 cqid)
+{
+	struct ionic_aq *aq;
+	int rc;
+
+	aq = __ionic_create_rdma_adminq(dev, aqid, cqid);
+	if (IS_ERR(aq))
+		return aq;
+
+	rc = ionic_rdma_queue_devcmd(dev, &aq->q, aq->aqid, aq->cqid,
+				     IONIC_CMD_RDMA_CREATE_ADMINQ);
+	if (rc)
+		goto err_cmd;
+
+	return aq;
+
+err_cmd:
+	__ionic_destroy_rdma_adminq(dev, aq);
+
+	return ERR_PTR(rc);
+}
+
+static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
+{
+	unsigned long irqflags;
+	bool do_flush = false;
+	int i;
+
+	/* Mark AQs for drain and flush the QPs while irq is disabled */
+	local_irq_save(irqflags);
+
+	/* Mark the admin queue, flushing at most once */
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		struct ionic_aq *aq = dev->aq_vec[i];
+
+		spin_lock(&aq->lock);
+		if (atomic_read(&aq->admin_state) != IONIC_ADMIN_KILLED) {
+			atomic_set(&aq->admin_state, IONIC_ADMIN_KILLED);
+			/* Flush incomplete admin commands */
+			ionic_admin_poll_locked(aq);
+			do_flush = true;
+		}
+		spin_unlock(&aq->lock);
+	}
+
+	local_irq_restore(irqflags);
+
+	/* Post a fatal event if requested */
+	if (fatal_path) {
+		struct ib_event ev;
+
+		ev.device = &dev->ibdev;
+		ev.element.port_num = 1;
+		ev.event = IB_EVENT_DEVICE_FATAL;
+
+		ib_dispatch_event(&ev);
+	}
+
+	atomic_set(&dev->admin_state, IONIC_ADMIN_KILLED);
+}
+
+void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path)
+{
+	enum ionic_admin_state old_state;
+	unsigned long irqflags = 0;
+	int i, rc;
+
+	if (!dev->aq_vec)
+		return;
+
+	/*
+	 * Admin queues are transitioned from active to paused to killed state.
+	 * When in paused state, no new commands are issued to the device,
+	 * nor are any completed locally. After resetting the lif, it will be
+	 * safe to resume the rdma admin queues in the killed state. Commands
+	 * will not be issued to the device, but will complete locally with
+	 * status IONIC_ADMIN_KILLED. Handling completion will ensure that
+	 * creating or modifying resources fails, but destroying resources
+	 * succeeds. If there was a failure resetting the lif using this
+	 * strategy, then the state of the device is unknown.
+	 */
+	old_state = atomic_cmpxchg(&dev->admin_state, IONIC_ADMIN_ACTIVE,
+				   IONIC_ADMIN_PAUSED);
+	if (old_state != IONIC_ADMIN_ACTIVE)
+		return;
+
+	/* Pause all the AQs */
+	local_irq_save(irqflags);
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		struct ionic_aq *aq = dev->aq_vec[i];
+
+		spin_lock(&aq->lock);
+		/* pause rdma admin queues to reset lif */
+		if (atomic_read(&aq->admin_state) == IONIC_ADMIN_ACTIVE)
+			atomic_set(&aq->admin_state, IONIC_ADMIN_PAUSED);
+		spin_unlock(&aq->lock);
+	}
+	local_irq_restore(irqflags);
+
+	rc = ionic_rdma_reset_devcmd(dev);
+	if (unlikely(rc)) {
+		ibdev_err(&dev->ibdev, "failed to reset rdma %d\n", rc);
+		ionic_request_rdma_reset(dev->lif_cfg.lif);
+	}
+
+	ionic_kill_ibdev(dev, fatal_path);
+}
+
+static void ionic_reset_work(struct work_struct *ws)
+{
+	struct ionic_ibdev *dev =
+		container_of(ws, struct ionic_ibdev, reset_work);
+
+	ionic_kill_rdma_admin(dev, true);
+}
+
+static bool ionic_next_eqe(struct ionic_eq *eq, struct ionic_v1_eqe *eqe)
+{
+	struct ionic_v1_eqe *qeqe;
+	bool color;
+
+	qeqe = ionic_queue_at_prod(&eq->q);
+	color = ionic_v1_eqe_color(qeqe);
+
+	/* cons is color for eq */
+	if (eq->q.cons != color)
+		return false;
+
+	/* Prevent out-of-order reads of the EQE */
+	dma_rmb();
+
+	ibdev_dbg(&eq->dev->ibdev, "poll eq prod %u\n", eq->q.prod);
+	print_hex_dump_debug("eqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			     qeqe, BIT(eq->q.stride_log2), true);
+	*eqe = *qeqe;
+
+	return true;
+}
+
+static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code)
+{
+	unsigned long irqflags;
+	struct ib_event ibev;
+	struct ionic_cq *cq;
+
+	xa_lock_irqsave(&dev->cq_tbl, irqflags);
+	cq = xa_load(&dev->cq_tbl, cqid);
+	if (cq)
+		kref_get(&cq->cq_kref);
+	xa_unlock_irqrestore(&dev->cq_tbl, irqflags);
+
+	if (!cq) {
+		ibdev_dbg(&dev->ibdev,
+			  "missing cqid %#x code %u\n", cqid, code);
+		return;
+	}
+
+	switch (code) {
+	case IONIC_V1_EQE_CQ_NOTIFY:
+		if (cq->vcq->ibcq.comp_handler)
+			cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq,
+						   cq->vcq->ibcq.cq_context);
+		break;
+
+	case IONIC_V1_EQE_CQ_ERR:
+		if (cq->vcq->ibcq.event_handler) {
+			ibev.event = IB_EVENT_CQ_ERR;
+			ibev.device = &dev->ibdev;
+			ibev.element.cq = &cq->vcq->ibcq;
+
+			cq->vcq->ibcq.event_handler(&ibev,
+						    cq->vcq->ibcq.cq_context);
+		}
+		break;
+
+	default:
+		ibdev_dbg(&dev->ibdev,
+			  "unrecognized cqid %#x code %u\n", cqid, code);
+		break;
+	}
+
+	kref_put(&cq->cq_kref, ionic_cq_complete);
+}
+
+static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
+{
+	struct ionic_ibdev *dev = eq->dev;
+	struct ionic_v1_eqe eqe;
+	u16 npolled = 0;
+	u8 type, code;
+	u32 evt, qid;
+
+	while (npolled < budget) {
+		if (!ionic_next_eqe(eq, &eqe))
+			break;
+
+		ionic_queue_produce(&eq->q);
+
+		/* cons is color for eq */
+		eq->q.cons = ionic_color_wrap(eq->q.prod, eq->q.cons);
+
+		++npolled;
+
+		evt = ionic_v1_eqe_evt(&eqe);
+		type = ionic_v1_eqe_evt_type(evt);
+		code = ionic_v1_eqe_evt_code(evt);
+		qid = ionic_v1_eqe_evt_qid(evt);
+
+		switch (type) {
+		case IONIC_V1_EQE_TYPE_CQ:
+			ionic_cq_event(dev, qid, code);
+			break;
+
+		default:
+			ibdev_dbg(&dev->ibdev,
+				  "unknown event %#x type %u\n", evt, type);
+		}
+	}
+
+	return npolled;
+}
+
+static void ionic_poll_eq_work(struct work_struct *work)
+{
+	struct ionic_eq *eq = container_of(work, struct ionic_eq, work);
+	u32 npolled;
+
+	if (unlikely(!eq->enable) || WARN_ON(eq->armed))
+		return;
+
+	npolled = ionic_poll_eq(eq, IONIC_EQ_WORK_BUDGET);
+	if (npolled == IONIC_EQ_WORK_BUDGET) {
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   npolled, 0);
+		queue_work(ionic_evt_workq, &eq->work);
+	} else {
+		xchg(&eq->armed, true);
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   0, IONIC_INTR_CRED_UNMASK);
+	}
+}
+
+static irqreturn_t ionic_poll_eq_isr(int irq, void *eqptr)
+{
+	struct ionic_eq *eq = eqptr;
+	bool was_armed;
+	u32 npolled;
+
+	was_armed = xchg(&eq->armed, false);
+
+	if (unlikely(!eq->enable) || !was_armed)
+		return IRQ_HANDLED;
+
+	npolled = ionic_poll_eq(eq, IONIC_EQ_ISR_BUDGET);
+	if (npolled == IONIC_EQ_ISR_BUDGET) {
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   npolled, 0);
+		queue_work(ionic_evt_workq, &eq->work);
+	} else {
+		xchg(&eq->armed, true);
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   0, IONIC_INTR_CRED_UNMASK);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static struct ionic_eq *ionic_create_eq(struct ionic_ibdev *dev, int eqid)
+{
+	struct ionic_intr_info intr_obj = { };
+	struct ionic_eq *eq;
+	int rc;
+
+	eq = kzalloc(sizeof(*eq), GFP_KERNEL);
+	if (!eq)
+		return ERR_PTR(-ENOMEM);
+
+	eq->dev = dev;
+
+	rc = ionic_queue_init(&eq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH,
+			      sizeof(struct ionic_v1_eqe));
+	if (rc)
+		goto err_q;
+
+	eq->eqid = eqid;
+
+	eq->armed = true;
+	eq->enable = false;
+	INIT_WORK(&eq->work, ionic_poll_eq_work);
+
+	rc = ionic_intr_alloc(dev->lif_cfg.lif, &intr_obj);
+	if (rc < 0)
+		goto err_intr;
+
+	eq->irq = intr_obj.vector;
+	eq->intr = intr_obj.index;
+
+	ionic_queue_dbell_init(&eq->q, eq->eqid);
+
+	/* cons is color for eq */
+	eq->q.cons = true;
+
+	snprintf(eq->name, sizeof(eq->name), "%s-%d-%d-eq",
+		 "ionr", dev->lif_cfg.lif_index, eq->eqid);
+
+	ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_SET);
+	ionic_intr_mask_assert(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_SET);
+	ionic_intr_coal_init(dev->lif_cfg.intr_ctrl, eq->intr, 0);
+	ionic_intr_clean(dev->lif_cfg.intr_ctrl, eq->intr);
+
+	eq->enable = true;
+
+	rc = request_irq(eq->irq, ionic_poll_eq_isr, 0, eq->name, eq);
+	if (rc)
+		goto err_irq;
+
+	rc = ionic_rdma_queue_devcmd(dev, &eq->q, eq->eqid, eq->intr,
+				     IONIC_CMD_RDMA_CREATE_EQ);
+	if (rc)
+		goto err_cmd;
+
+	ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_CLEAR);
+
+	return eq;
+
+err_cmd:
+	eq->enable = false;
+	free_irq(eq->irq, eq);
+	flush_work(&eq->work);
+err_irq:
+	ionic_intr_free(dev->lif_cfg.lif, eq->intr);
+err_intr:
+	ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev);
+err_q:
+	kfree(eq);
+
+	return ERR_PTR(rc);
+}
+
+static void ionic_destroy_eq(struct ionic_eq *eq)
+{
+	struct ionic_ibdev *dev = eq->dev;
+
+	eq->enable = false;
+	free_irq(eq->irq, eq);
+	flush_work(&eq->work);
+
+	ionic_intr_free(dev->lif_cfg.lif, eq->intr);
+	ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev);
+	kfree(eq);
+}
+
+int ionic_create_rdma_admin(struct ionic_ibdev *dev)
+{
+	int eq_i = 0, aq_i = 0, rc = 0;
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	dev->eq_vec = NULL;
+	dev->aq_vec = NULL;
+
+	INIT_WORK(&dev->reset_work, ionic_reset_work);
+	INIT_DELAYED_WORK(&dev->admin_dwork, ionic_admin_dwork);
+	atomic_set(&dev->admin_state, IONIC_ADMIN_KILLED);
+
+	if (dev->lif_cfg.aq_count > IONIC_AQ_COUNT) {
+		ibdev_dbg(&dev->ibdev, "limiting adminq count to %d\n",
+			  IONIC_AQ_COUNT);
+		dev->lif_cfg.aq_count = IONIC_AQ_COUNT;
+	}
+
+	if (dev->lif_cfg.eq_count > IONIC_EQ_COUNT) {
+		dev_dbg(&dev->ibdev.dev, "limiting eventq count to %d\n",
+			IONIC_EQ_COUNT);
+		dev->lif_cfg.eq_count = IONIC_EQ_COUNT;
+	}
+
+	/* need at least IONIC_EQ_COUNT_MIN eqs and IONIC_AQ_COUNT_MIN aqs */
+	if (dev->lif_cfg.eq_count < IONIC_EQ_COUNT_MIN ||
+	    dev->lif_cfg.aq_count < IONIC_AQ_COUNT_MIN) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	dev->eq_vec = kmalloc_array(dev->lif_cfg.eq_count, sizeof(*dev->eq_vec),
+				    GFP_KERNEL);
+	if (!dev->eq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	for (eq_i = 0; eq_i < dev->lif_cfg.eq_count; ++eq_i) {
+		eq = ionic_create_eq(dev, eq_i + dev->lif_cfg.eq_base);
+		if (IS_ERR(eq)) {
+			rc = PTR_ERR(eq);
+
+			if (eq_i < IONIC_EQ_COUNT_MIN) {
+				ibdev_err(&dev->ibdev,
+					  "fail create eq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer eq than device supports */
+			ibdev_dbg(&dev->ibdev, "eq count %d want %d rc %d\n",
+				  eq_i, dev->lif_cfg.eq_count, rc);
+
+			rc = 0;
+			break;
+		}
+
+		dev->eq_vec[eq_i] = eq;
+	}
+
+	dev->lif_cfg.eq_count = eq_i;
+
+	dev->aq_vec = kmalloc_array(dev->lif_cfg.aq_count, sizeof(*dev->aq_vec),
+				    GFP_KERNEL);
+	if (!dev->aq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* Create one CQ per AQ */
+	for (aq_i = 0; aq_i < dev->lif_cfg.aq_count; ++aq_i) {
+		vcq = ionic_create_rdma_admincq(dev, aq_i % eq_i);
+		if (IS_ERR(vcq)) {
+			rc = PTR_ERR(vcq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create acq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "acq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		aq = ionic_create_rdma_adminq(dev, aq_i + dev->lif_cfg.aq_base,
+					      vcq->cq[0].cqid);
+		if (IS_ERR(aq)) {
+			/* Clean up the dangling CQ */
+			ionic_destroy_cq_common(dev, &vcq->cq[0]);
+			kfree(vcq);
+
+			rc = PTR_ERR(aq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create aq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "aq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		vcq->ibcq.cq_context = aq;
+		aq->vcq = vcq;
+
+		atomic_set(&aq->admin_state, IONIC_ADMIN_ACTIVE);
+		dev->aq_vec[aq_i] = aq;
+	}
+
+	atomic_set(&dev->admin_state, IONIC_ADMIN_ACTIVE);
+out:
+	dev->lif_cfg.eq_count = eq_i;
+	dev->lif_cfg.aq_count = aq_i;
+
+	return rc;
+}
+
+void ionic_destroy_rdma_admin(struct ionic_ibdev *dev)
+{
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	/*
+	 * Killing the admin before destroy makes sure all admin and
+	 * completions are flushed. admin_state = IONIC_ADMIN_KILLED
+	 * stops queueing up further works.
+ */ + cancel_delayed_work_sync(&dev->admin_dwork); + cancel_work_sync(&dev->reset_work); + + if (dev->aq_vec) { + while (dev->lif_cfg.aq_count > 0) { + aq =3D dev->aq_vec[--dev->lif_cfg.aq_count]; + vcq =3D aq->vcq; + + cancel_work_sync(&aq->work); + + __ionic_destroy_rdma_adminq(dev, aq); + if (vcq) { + ionic_destroy_cq_common(dev, &vcq->cq[0]); + kfree(vcq); + } + } + + kfree(dev->aq_vec); + } + + if (dev->eq_vec) { + while (dev->lif_cfg.eq_count > 0) { + eq =3D dev->eq_vec[--dev->lif_cfg.eq_count]; + ionic_destroy_eq(eq); + } + + kfree(dev->eq_vec); + } +} diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infi= niband/hw/ionic/ionic_controlpath.c new file mode 100644 index 000000000000..e1130573bd39 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c @@ -0,0 +1,181 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#include "ionic_ibdev.h" + +static int ionic_validate_qdesc(struct ionic_qdesc *q) +{ + if (!q->addr || !q->size || !q->mask || + !q->depth_log2 || !q->stride_log2) + return -EINVAL; + + if (q->addr & (PAGE_SIZE - 1)) + return -EINVAL; + + if (q->mask !=3D BIT(q->depth_log2) - 1) + return -EINVAL; + + if (q->size < BIT_ULL(q->depth_log2 + q->stride_log2)) + return -EINVAL; + + return 0; +} + +static u32 ionic_get_eqid(struct ionic_ibdev *dev, u32 comp_vector, u8 udm= a_idx) +{ + /* EQ per vector per udma, and the first eqs reserved for async events. + * The rest of the vectors can be requested for completions. 
+ */ + u32 comp_vec_count =3D dev->lif_cfg.eq_count / dev->lif_cfg.udma_count - = 1; + + return (comp_vector % comp_vec_count + 1) * dev->lif_cfg.udma_count + udm= a_idx; +} + +static int ionic_get_cqid(struct ionic_ibdev *dev, u32 *cqid, u8 udma_idx) +{ + unsigned int size, base, bound; + int rc; + + size =3D dev->lif_cfg.cq_count / dev->lif_cfg.udma_count; + base =3D size * udma_idx; + bound =3D base + size; + + rc =3D ionic_resid_get_shared(&dev->inuse_cqid, base, bound); + if (rc >=3D 0) { + /* cq_base is zero or a multiple of two queue groups */ + *cqid =3D dev->lif_cfg.cq_base + + ionic_bitid_to_qid(rc, dev->lif_cfg.udma_qgrp_shift, + dev->half_cqid_udma_shift); + + rc =3D 0; + } + + return rc; +} + +static void ionic_put_cqid(struct ionic_ibdev *dev, u32 cqid) +{ + u32 bitid =3D ionic_qid_to_bitid(cqid - dev->lif_cfg.cq_base, + dev->lif_cfg.udma_qgrp_shift, + dev->half_cqid_udma_shift); + + ionic_resid_put(&dev->inuse_cqid, bitid); +} + +int ionic_create_cq_common(struct ionic_vcq *vcq, + struct ionic_tbl_buf *buf, + const struct ib_cq_init_attr *attr, + struct ionic_ctx *ctx, + struct ib_udata *udata, + struct ionic_qdesc *req_cq, + __u32 *resp_cqid, + int udma_idx) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(vcq->ibcq.device); + struct ionic_cq *cq =3D &vcq->cq[udma_idx]; + void *entry; + int rc; + + cq->vcq =3D vcq; + + if (attr->cqe < 1 || attr->cqe + IONIC_CQ_GRACE > 0xffff) { + rc =3D -EINVAL; + goto err_args; + } + + rc =3D ionic_get_cqid(dev, &cq->cqid, udma_idx); + if (rc) + goto err_args; + + cq->eqid =3D ionic_get_eqid(dev, attr->comp_vector, udma_idx); + + spin_lock_init(&cq->lock); + INIT_LIST_HEAD(&cq->poll_sq); + INIT_LIST_HEAD(&cq->flush_sq); + INIT_LIST_HEAD(&cq->flush_rq); + + if (udata) { + rc =3D ionic_validate_qdesc(req_cq); + if (rc) + goto err_qdesc; + + cq->umem =3D ib_umem_get(&dev->ibdev, req_cq->addr, req_cq->size, + IB_ACCESS_LOCAL_WRITE); + if (IS_ERR(cq->umem)) { + rc =3D PTR_ERR(cq->umem); + goto err_qdesc; + } + + 
cq->q.ptr =3D NULL; + cq->q.size =3D req_cq->size; + cq->q.mask =3D req_cq->mask; + cq->q.depth_log2 =3D req_cq->depth_log2; + cq->q.stride_log2 =3D req_cq->stride_log2; + + *resp_cqid =3D cq->cqid; + } else { + rc =3D ionic_queue_init(&cq->q, dev->lif_cfg.hwdev, + attr->cqe + IONIC_CQ_GRACE, + sizeof(struct ionic_v1_cqe)); + if (rc) + goto err_q_init; + + ionic_queue_dbell_init(&cq->q, cq->cqid); + cq->color =3D true; + cq->credit =3D cq->q.mask; + } + + rc =3D ionic_pgtbl_init(dev, buf, cq->umem, cq->q.dma, 1, PAGE_SIZE); + if (rc) + goto err_pgtbl_init; + + init_completion(&cq->cq_rel_comp); + kref_init(&cq->cq_kref); + + entry =3D xa_store_irq(&dev->cq_tbl, cq->cqid, cq, GFP_KERNEL); + if (entry) { + if (!xa_is_err(entry)) + rc =3D -EINVAL; + else + rc =3D xa_err(entry); + + goto err_xa; + } + + return 0; + +err_xa: + ionic_pgtbl_unbuf(dev, buf); +err_pgtbl_init: + if (!udata) + ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev); +err_q_init: + if (cq->umem) + ib_umem_release(cq->umem); +err_qdesc: + ionic_put_cqid(dev, cq->cqid); +err_args: + cq->vcq =3D NULL; + + return rc; +} + +void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq) +{ + if (!cq->vcq) + return; + + xa_erase_irq(&dev->cq_tbl, cq->cqid); + + kref_put(&cq->cq_kref, ionic_cq_complete); + wait_for_completion(&cq->cq_rel_comp); + + if (cq->umem) + ib_umem_release(cq->umem); + else + ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev); + + ionic_put_cqid(dev, cq->cqid); + + cq->vcq =3D NULL; +} diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw= /ionic/ionic_fw.h new file mode 100644 index 000000000000..44ec69487519 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_fw.h @@ -0,0 +1,164 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
*/ + +#ifndef _IONIC_FW_H_ +#define _IONIC_FW_H_ + +#include + +/* completion queue v1 cqe */ +struct ionic_v1_cqe { + union { + struct { + __be16 cmd_idx; + __u8 cmd_op; + __u8 rsvd[17]; + __le16 old_sq_cindex; + __le16 old_rq_cq_cindex; + } admin; + struct { + __u64 wqe_id; + __be32 src_qpn_op; + __u8 src_mac[6]; + __be16 vlan_tag; + __be32 imm_data_rkey; + } recv; + struct { + __u8 rsvd[4]; + __be32 msg_msn; + __u8 rsvd2[8]; + __u64 npg_wqe_id; + } send; + }; + __be32 status_length; + __be32 qid_type_flags; +}; + +/* bits for cqe qid_type_flags */ +enum ionic_v1_cqe_qtf_bits { + IONIC_V1_CQE_COLOR =3D BIT(0), + IONIC_V1_CQE_ERROR =3D BIT(1), + IONIC_V1_CQE_TYPE_SHIFT =3D 5, + IONIC_V1_CQE_TYPE_MASK =3D 0x7, + IONIC_V1_CQE_QID_SHIFT =3D 8, + + IONIC_V1_CQE_TYPE_ADMIN =3D 0, + IONIC_V1_CQE_TYPE_RECV =3D 1, + IONIC_V1_CQE_TYPE_SEND_MSN =3D 2, + IONIC_V1_CQE_TYPE_SEND_NPG =3D 3, +}; + +static inline bool ionic_v1_cqe_color(struct ionic_v1_cqe *cqe) +{ + return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_COLOR); +} + +static inline bool ionic_v1_cqe_error(struct ionic_v1_cqe *cqe) +{ + return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR); +} + +static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe) +{ + cqe->qid_type_flags |=3D cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT); +} + +static inline u32 ionic_v1_cqe_qtf(struct ionic_v1_cqe *cqe) +{ + return be32_to_cpu(cqe->qid_type_flags); +} + +static inline u8 ionic_v1_cqe_qtf_type(u32 qtf) +{ + return (qtf >> IONIC_V1_CQE_TYPE_SHIFT) & IONIC_V1_CQE_TYPE_MASK; +} + +static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf) +{ + return qtf >> IONIC_V1_CQE_QID_SHIFT; +} + +#define ADMIN_WQE_STRIDE 64 +#define ADMIN_WQE_HDR_LEN 4 + +/* admin queue v1 wqe */ +struct ionic_v1_admin_wqe { + __u8 op; + __u8 rsvd; + __le16 len; + + union { + } cmd; +}; + +/* admin queue v1 cqe status */ +enum ionic_v1_admin_status { + IONIC_V1_ASTS_OK, + IONIC_V1_ASTS_BAD_CMD, + IONIC_V1_ASTS_BAD_INDEX, + IONIC_V1_ASTS_BAD_STATE, + 
IONIC_V1_ASTS_BAD_TYPE, + IONIC_V1_ASTS_BAD_ATTR, + IONIC_V1_ASTS_MSG_TOO_BIG, +}; + +/* event queue v1 eqe */ +struct ionic_v1_eqe { + __be32 evt; +}; + +/* bits for cqe queue_type_flags */ +enum ionic_v1_eqe_evt_bits { + IONIC_V1_EQE_COLOR =3D BIT(0), + IONIC_V1_EQE_TYPE_SHIFT =3D 1, + IONIC_V1_EQE_TYPE_MASK =3D 0x7, + IONIC_V1_EQE_CODE_SHIFT =3D 4, + IONIC_V1_EQE_CODE_MASK =3D 0xf, + IONIC_V1_EQE_QID_SHIFT =3D 8, + + /* cq events */ + IONIC_V1_EQE_TYPE_CQ =3D 0, + /* cq normal events */ + IONIC_V1_EQE_CQ_NOTIFY =3D 0, + /* cq error events */ + IONIC_V1_EQE_CQ_ERR =3D 8, + + /* qp and srq events */ + IONIC_V1_EQE_TYPE_QP =3D 1, + /* qp normal events */ + IONIC_V1_EQE_SRQ_LEVEL =3D 0, + IONIC_V1_EQE_SQ_DRAIN =3D 1, + IONIC_V1_EQE_QP_COMM_EST =3D 2, + IONIC_V1_EQE_QP_LAST_WQE =3D 3, + /* qp error events */ + IONIC_V1_EQE_QP_ERR =3D 8, + IONIC_V1_EQE_QP_ERR_REQUEST =3D 9, + IONIC_V1_EQE_QP_ERR_ACCESS =3D 10, +}; + +static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe) +{ + return eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR); +} + +static inline u32 ionic_v1_eqe_evt(struct ionic_v1_eqe *eqe) +{ + return be32_to_cpu(eqe->evt); +} + +static inline u8 ionic_v1_eqe_evt_type(u32 evt) +{ + return (evt >> IONIC_V1_EQE_TYPE_SHIFT) & IONIC_V1_EQE_TYPE_MASK; +} + +static inline u8 ionic_v1_eqe_evt_code(u32 evt) +{ + return (evt >> IONIC_V1_EQE_CODE_SHIFT) & IONIC_V1_EQE_CODE_MASK; +} + +static inline u32 ionic_v1_eqe_evt_qid(u32 evt) +{ + return evt >> IONIC_V1_EQE_QID_SHIFT; +} + +#endif /* _IONIC_FW_H_ */ diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband= /hw/ionic/ionic_ibdev.c index d79470dae13a..7710190ff65f 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.c +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -15,9 +15,41 @@ MODULE_DESCRIPTION(DRIVER_DESCRIPTION); MODULE_LICENSE("GPL"); MODULE_IMPORT_NS("NET_IONIC"); =20 +static void ionic_init_resids(struct ionic_ibdev *dev) +{ + ionic_resid_init(&dev->inuse_cqid, 
dev->lif_cfg.cq_count); + dev->half_cqid_udma_shift =3D + order_base_2(dev->lif_cfg.cq_count / dev->lif_cfg.udma_count); + ionic_resid_init(&dev->inuse_pdid, IONIC_MAX_PD); + ionic_resid_init(&dev->inuse_ahid, dev->lif_cfg.nahs_per_lif); + ionic_resid_init(&dev->inuse_mrid, dev->lif_cfg.nmrs_per_lif); + /* skip reserved lkey */ + dev->next_mrkey =3D 1; + ionic_resid_init(&dev->inuse_qpid, dev->lif_cfg.qp_count); + /* skip reserved SMI and GSI qpids */ + dev->half_qpid_udma_shift =3D + order_base_2(dev->lif_cfg.qp_count / dev->lif_cfg.udma_count); + ionic_resid_init(&dev->inuse_dbid, dev->lif_cfg.dbid_count); +} + +static void ionic_destroy_resids(struct ionic_ibdev *dev) +{ + ionic_resid_destroy(&dev->inuse_cqid); + ionic_resid_destroy(&dev->inuse_pdid); + ionic_resid_destroy(&dev->inuse_ahid); + ionic_resid_destroy(&dev->inuse_mrid); + ionic_resid_destroy(&dev->inuse_qpid); + ionic_resid_destroy(&dev->inuse_dbid); +} + static void ionic_destroy_ibdev(struct ionic_ibdev *dev) { + ionic_kill_rdma_admin(dev, false); ib_unregister_device(&dev->ibdev); + ionic_destroy_rdma_admin(dev); + ionic_destroy_resids(dev); + WARN_ON(!xa_empty(&dev->cq_tbl)); + xa_destroy(&dev->cq_tbl); ib_dealloc_device(&dev->ibdev); } =20 @@ -34,6 +66,18 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ion= ic_aux_dev *ionic_adev) =20 ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg); =20 + xa_init_flags(&dev->cq_tbl, GFP_ATOMIC); + + ionic_init_resids(dev); + + rc =3D ionic_rdma_reset_devcmd(dev); + if (rc) + goto err_reset; + + rc =3D ionic_create_rdma_admin(dev); + if (rc) + goto err_admin; + ibdev =3D &dev->ibdev; ibdev->dev.parent =3D dev->lif_cfg.hwdev; =20 @@ -62,6 +106,11 @@ static struct ionic_ibdev *ionic_create_ibdev(struct io= nic_aux_dev *ionic_adev) =20 err_register: err_admin: + ionic_kill_rdma_admin(dev, false); + ionic_destroy_rdma_admin(dev); +err_reset: + ionic_destroy_resids(dev); + xa_destroy(&dev->cq_tbl); ib_dealloc_device(&dev->ibdev); =20 return ERR_PTR(rc); @@ 
-112,6 +161,10 @@ static int __init ionic_mod_init(void) { int rc; =20 + ionic_evt_workq =3D create_workqueue(DRIVER_NAME "-evt"); + if (!ionic_evt_workq) + return -ENOMEM; + rc =3D auxiliary_driver_register(&ionic_aux_r_driver); if (rc) goto err_aux; @@ -119,12 +172,15 @@ static int __init ionic_mod_init(void) return 0; =20 err_aux: + destroy_workqueue(ionic_evt_workq); + return rc; } =20 static void __exit ionic_mod_exit(void) { auxiliary_driver_unregister(&ionic_aux_r_driver); + destroy_workqueue(ionic_evt_workq); } =20 module_init(ionic_mod_init); diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h index 224e5e74056d..490897628f41 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.h +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -4,15 +4,237 @@ #ifndef _IONIC_IBDEV_H_ #define _IONIC_IBDEV_H_ =20 +#include #include + #include +#include + +#include "ionic_fw.h" +#include "ionic_queue.h" +#include "ionic_res.h" =20 #include "ionic_lif_cfg.h" =20 +/* Config knobs */ +#define IONIC_EQ_DEPTH 511 +#define IONIC_EQ_COUNT 32 +#define IONIC_AQ_DEPTH 63 +#define IONIC_AQ_COUNT 4 +#define IONIC_EQ_ISR_BUDGET 10 +#define IONIC_EQ_WORK_BUDGET 1000 +#define IONIC_MAX_PD 1024 + +#define IONIC_CQ_GRACE 100 + +struct ionic_aq; +struct ionic_cq; +struct ionic_eq; +struct ionic_vcq; + +enum ionic_admin_state { + IONIC_ADMIN_ACTIVE, /* submitting admin commands to queue */ + IONIC_ADMIN_PAUSED, /* not submitting, but may complete normally */ + IONIC_ADMIN_KILLED, /* not submitting, locally completed */ +}; + +enum ionic_admin_flags { + IONIC_ADMIN_F_BUSYWAIT =3D BIT(0), /* Don't sleep */ + IONIC_ADMIN_F_TEARDOWN =3D BIT(1), /* In destroy path */ + IONIC_ADMIN_F_INTERRUPT =3D BIT(2), /* Interruptible w/timeout */ +}; + +struct ionic_qdesc { + __aligned_u64 addr; + __u32 size; + __u16 mask; + __u8 depth_log2; + __u8 stride_log2; +}; + +enum ionic_mmap_flag { + IONIC_MMAP_WC =3D BIT(0), +}; + +struct ionic_mmap_entry { + struct 
rdma_user_mmap_entry rdma_entry; + unsigned long size; + unsigned long pfn; + u8 mmap_flags; +}; + struct ionic_ibdev { struct ib_device ibdev; =20 struct ionic_lif_cfg lif_cfg; + + struct xarray qp_tbl; + struct xarray cq_tbl; + + struct ionic_resid_bits inuse_dbid; + struct ionic_resid_bits inuse_pdid; + struct ionic_resid_bits inuse_ahid; + struct ionic_resid_bits inuse_mrid; + struct ionic_resid_bits inuse_qpid; + struct ionic_resid_bits inuse_cqid; + + u8 half_cqid_udma_shift; + u8 half_qpid_udma_shift; + u8 next_qpid_udma_idx; + u8 next_mrkey; + + struct work_struct reset_work; + bool reset_posted; + u32 reset_cnt; + + struct delayed_work admin_dwork; + struct ionic_aq **aq_vec; + atomic_t admin_state; + + struct ionic_eq **eq_vec; +}; + +struct ionic_eq { + struct ionic_ibdev *dev; + + u32 eqid; + u32 intr; + + struct ionic_queue q; + + bool armed; + bool enable; + + struct work_struct work; + + int irq; + char name[32]; +}; + +struct ionic_admin_wr { + struct completion work; + struct list_head aq_ent; + struct ionic_v1_admin_wqe wqe; + struct ionic_v1_cqe cqe; + struct ionic_aq *aq; + int status; +}; + +struct ionic_admin_wr_q { + struct ionic_admin_wr *wr; + int wqe_strides; }; =20 +struct ionic_aq { + struct ionic_ibdev *dev; + struct ionic_vcq *vcq; + + struct work_struct work; + + atomic_t admin_state; + unsigned long stamp; + bool armed; + + u32 aqid; + u32 cqid; + + spinlock_t lock; /* for posting */ + struct ionic_queue q; + struct ionic_admin_wr_q *q_wr; + struct list_head wr_prod; + struct list_head wr_post; +}; + +struct ionic_ctx { + struct ib_ucontext ibctx; + u32 dbid; + struct rdma_user_mmap_entry *mmap_dbell; +}; + +struct ionic_tbl_buf { + u32 tbl_limit; + u32 tbl_pages; + size_t tbl_size; + __le64 *tbl_buf; + dma_addr_t tbl_dma; + u8 page_size_log2; +}; + +struct ionic_cq { + struct ionic_vcq *vcq; + + u32 cqid; + u32 eqid; + + spinlock_t lock; /* for polling */ + struct list_head poll_sq; + bool flush; + struct list_head flush_sq; + 
struct list_head flush_rq; + struct list_head ibkill_flush_ent; + + struct ionic_queue q; + bool color; + int credit; + u16 arm_any_prod; + u16 arm_sol_prod; + + struct kref cq_kref; + struct completion cq_rel_comp; + + /* infrequently accessed, keep at end */ + struct ib_umem *umem; +}; + +struct ionic_vcq { + struct ib_cq ibcq; + struct ionic_cq cq[2]; + u8 udma_mask; + u8 poll_idx; +}; + +static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev) +{ + return container_of(ibdev, struct ionic_ibdev, ibdev); +} + +static inline void ionic_cq_complete(struct kref *kref) +{ + struct ionic_cq *cq =3D container_of(kref, struct ionic_cq, cq_kref); + + complete(&cq->cq_rel_comp); +} + +/* ionic_admin.c */ +extern struct workqueue_struct *ionic_evt_workq; +void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr); +int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr, + enum ionic_admin_flags); + +int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev); + +int ionic_create_rdma_admin(struct ionic_ibdev *dev); +void ionic_destroy_rdma_admin(struct ionic_ibdev *dev); +void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path); + +/* ionic_controlpath.c */ +int ionic_create_cq_common(struct ionic_vcq *vcq, + struct ionic_tbl_buf *buf, + const struct ib_cq_init_attr *attr, + struct ionic_ctx *ctx, + struct ib_udata *udata, + struct ionic_qdesc *req_cq, + __u32 *resp_cqid, + int udma_idx); +void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq); + +/* ionic_pgtbl.c */ +int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma); +int ionic_pgtbl_init(struct ionic_ibdev *dev, + struct ionic_tbl_buf *buf, + struct ib_umem *umem, + dma_addr_t dma, + int limit, + u64 page_size); +void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf); #endif /* _IONIC_IBDEV_H_ */ diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband= /hw/ionic/ionic_pgtbl.c new file mode 100644 index 
000000000000..11461f7642bc --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#include +#include + +#include "ionic_fw.h" +#include "ionic_ibdev.h" + +int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma) +{ + if (unlikely(buf->tbl_pages =3D=3D buf->tbl_limit)) + return -ENOMEM; + + if (buf->tbl_buf) + buf->tbl_buf[buf->tbl_pages] =3D cpu_to_le64(dma); + else + buf->tbl_dma =3D dma; + + ++buf->tbl_pages; + + return 0; +} + +static int ionic_tbl_buf_alloc(struct ionic_ibdev *dev, + struct ionic_tbl_buf *buf) +{ + int rc; + + buf->tbl_size =3D buf->tbl_limit * sizeof(*buf->tbl_buf); + buf->tbl_buf =3D kmalloc(buf->tbl_size, GFP_KERNEL); + if (!buf->tbl_buf) + return -ENOMEM; + + buf->tbl_dma =3D dma_map_single(dev->lif_cfg.hwdev, buf->tbl_buf, + buf->tbl_size, DMA_TO_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, buf->tbl_dma); + if (rc) { + kfree(buf->tbl_buf); + return rc; + } + + return 0; +} + +static int ionic_pgtbl_umem(struct ionic_tbl_buf *buf, struct ib_umem *ume= m) +{ + struct ib_block_iter biter; + u64 page_dma; + int rc; + + rdma_umem_for_each_dma_block(umem, &biter, BIT_ULL(buf->page_size_log2)) { + page_dma =3D rdma_block_iter_dma_address(&biter); + rc =3D ionic_pgtbl_page(buf, page_dma); + if (rc) + return rc; + } + + return 0; +} + +void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf) +{ + if (buf->tbl_buf) + dma_unmap_single(dev->lif_cfg.hwdev, buf->tbl_dma, + buf->tbl_size, DMA_TO_DEVICE); + + kfree(buf->tbl_buf); + memset(buf, 0, sizeof(*buf)); +} + +int ionic_pgtbl_init(struct ionic_ibdev *dev, + struct ionic_tbl_buf *buf, + struct ib_umem *umem, + dma_addr_t dma, + int limit, + u64 page_size) +{ + int rc; + + memset(buf, 0, sizeof(*buf)); + + if (umem) { + limit =3D ib_umem_num_dma_blocks(umem, page_size); + buf->page_size_log2 =3D order_base_2(page_size); + } + + if (limit < 1) + 
return -EINVAL; + + buf->tbl_limit =3D limit; + + /* skip pgtbl if contiguous / direct translation */ + if (limit > 1) { + rc =3D ionic_tbl_buf_alloc(dev, buf); + if (rc) + return rc; + } + + if (umem) + rc =3D ionic_pgtbl_umem(buf, umem); + else + rc =3D ionic_pgtbl_page(buf, dma); + + if (rc) + goto err_unbuf; + + return 0; + +err_unbuf: + ionic_pgtbl_unbuf(dev, buf); + return rc; +} diff --git a/drivers/infiniband/hw/ionic/ionic_queue.c b/drivers/infiniband= /hw/ionic/ionic_queue.c new file mode 100644 index 000000000000..aa897ed2a412 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_queue.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#include + +#include "ionic_queue.h" + +int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev, + int depth, size_t stride) +{ + if (depth < 0 || depth > 0xffff) + return -EINVAL; + + if (stride =3D=3D 0 || stride > 0x10000) + return -EINVAL; + + if (depth =3D=3D 0) + depth =3D 1; + + q->depth_log2 =3D order_base_2(depth + 1); + q->stride_log2 =3D order_base_2(stride); + + if (q->depth_log2 + q->stride_log2 < PAGE_SHIFT) + q->depth_log2 =3D PAGE_SHIFT - q->stride_log2; + + if (q->depth_log2 > 16 || q->stride_log2 > 16) + return -EINVAL; + + q->size =3D BIT_ULL(q->depth_log2 + q->stride_log2); + q->mask =3D BIT(q->depth_log2) - 1; + + q->ptr =3D dma_alloc_coherent(dma_dev, q->size, &q->dma, GFP_KERNEL); + if (!q->ptr) + return -ENOMEM; + + /* it will always be page aligned, but just to be sure... 
*/ + if (!PAGE_ALIGNED(q->ptr)) { + dma_free_coherent(dma_dev, q->size, q->ptr, q->dma); + return -ENOMEM; + } + + q->prod =3D 0; + q->cons =3D 0; + q->dbell =3D 0; + + return 0; +} + +void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev) +{ + dma_free_coherent(dma_dev, q->size, q->ptr, q->dma); +} diff --git a/drivers/infiniband/hw/ionic/ionic_queue.h b/drivers/infiniband= /hw/ionic/ionic_queue.h new file mode 100644 index 000000000000..d18020d4cad5 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_queue.h @@ -0,0 +1,234 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#ifndef _IONIC_QUEUE_H_ +#define _IONIC_QUEUE_H_ + +#include +#include + +#define IONIC_MAX_DEPTH 0xffff +#define IONIC_MAX_CQ_DEPTH 0xffff +#define IONIC_CQ_RING_ARM IONIC_DBELL_RING_1 +#define IONIC_CQ_RING_SOL IONIC_DBELL_RING_2 + +/** + * struct ionic_queue - Ring buffer used between device and driver + * @size: Size of the buffer, in bytes + * @dma: Dma address of the buffer + * @ptr: Buffer virtual address + * @prod: Driver position in the queue + * @cons: Device position in the queue + * @mask: Capacity of the queue, subtracting the hole + * This value is equal to ((1 << depth_log2) - 1) + * @depth_log2: Log base two size depth of the queue + * @stride_log2: Log base two size of an element in the queue + * @dbell: Doorbell identifying bits + */ +struct ionic_queue { + size_t size; + dma_addr_t dma; + void *ptr; + u16 prod; + u16 cons; + u16 mask; + u8 depth_log2; + u8 stride_log2; + u64 dbell; +}; + +/** + * ionic_queue_init() - Initialize user space queue + * @q: Uninitialized queue structure + * @dma_dev: DMA device for mapping + * @depth: Depth of the queue + * @stride: Size of each element of the queue + * + * Return: status code + */ +int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev, + int depth, size_t stride); + +/** + * ionic_queue_destroy() - Destroy user space queue + * @q: Queue 
structure + * @dma_dev: DMA device for mapping + * + * Return: status code + */ +void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev); + +/** + * ionic_queue_empty() - Test if queue is empty + * @q: Queue structure + * + * This is only valid for to-device queues. + * + * Return: is empty + */ +static inline bool ionic_queue_empty(struct ionic_queue *q) +{ + return q->prod =3D=3D q->cons; +} + +/** + * ionic_queue_length() - Get the current length of the queue + * @q: Queue structure + * + * This is only valid for to-device queues. + * + * Return: length + */ +static inline u16 ionic_queue_length(struct ionic_queue *q) +{ + return (q->prod - q->cons) & q->mask; +} + +/** + * ionic_queue_length_remaining() - Get the remaining length of the queue + * @q: Queue structure + * + * This is only valid for to-device queues. + * + * Return: length remaining + */ +static inline u16 ionic_queue_length_remaining(struct ionic_queue *q) +{ + return q->mask - ionic_queue_length(q); +} + +/** + * ionic_queue_full() - Test if queue is full + * @q: Queue structure + * + * This is only valid for to-device queues. + * + * Return: is full + */ +static inline bool ionic_queue_full(struct ionic_queue *q) +{ + return q->mask =3D=3D ionic_queue_length(q); +} + +/** + * ionic_color_wrap() - Flip the color if prod is wrapped + * @prod: Queue index just after advancing + * @color: Queue color just prior to advancing the index + * + * Return: color after advancing the index + */ +static inline bool ionic_color_wrap(u16 prod, bool color) +{ + /* logical xor color with (prod =3D=3D 0) */ + return color !=3D (prod =3D=3D 0); +} + +/** + * ionic_queue_at() - Get the element at the given index + * @q: Queue structure + * @idx: Index in the queue + * + * The index must be within the bounds of the queue. It is not checked he= re. 
+ * + * Return: pointer to element at index + */ +static inline void *ionic_queue_at(struct ionic_queue *q, u16 idx) +{ + return q->ptr + ((unsigned long)idx << q->stride_log2); +} + +/** + * ionic_queue_at_prod() - Get the element at the producer index + * @q: Queue structure + * + * Return: pointer to element at producer index + */ +static inline void *ionic_queue_at_prod(struct ionic_queue *q) +{ + return ionic_queue_at(q, q->prod); +} + +/** + * ionic_queue_at_cons() - Get the element at the consumer index + * @q: Queue structure + * + * Return: pointer to element at consumer index + */ +static inline void *ionic_queue_at_cons(struct ionic_queue *q) +{ + return ionic_queue_at(q, q->cons); +} + +/** + * ionic_queue_next() - Compute the next index + * @q: Queue structure + * @idx: Index + * + * Return: next index after idx + */ +static inline u16 ionic_queue_next(struct ionic_queue *q, u16 idx) +{ + return (idx + 1) & q->mask; +} + +/** + * ionic_queue_produce() - Increase the producer index + * @q: Queue structure + * + * Caller must ensure that the queue is not full. It is not checked here. + */ +static inline void ionic_queue_produce(struct ionic_queue *q) +{ + q->prod =3D ionic_queue_next(q, q->prod); +} + +/** + * ionic_queue_consume() - Increase the consumer index + * @q: Queue structure + * + * Caller must ensure that the queue is not empty. It is not checked here. + * + * This is only valid for to-device queues. + */ +static inline void ionic_queue_consume(struct ionic_queue *q) +{ + q->cons =3D ionic_queue_next(q, q->cons); +} + +/** + * ionic_queue_consume_entries() - Increase the consumer index by entries + * @q: Queue structure + * @entries: Number of entries to increment + * + * Caller must ensure that the queue is not empty. It is not checked here. + * + * This is only valid for to-device queues. 
+ */
+static inline void ionic_queue_consume_entries(struct ionic_queue *q,
+					       u16 entries)
+{
+	q->cons = (q->cons + entries) & q->mask;
+}
+
+/**
+ * ionic_queue_dbell_init() - Initialize doorbell bits for queue id
+ * @q:   Queue structure
+ * @qid: Queue identifying number
+ */
+static inline void ionic_queue_dbell_init(struct ionic_queue *q, u32 qid)
+{
+	q->dbell = IONIC_DBELL_QID(qid);
+}
+
+/**
+ * ionic_queue_dbell_val() - Get current doorbell update value
+ * @q: Queue structure
+ *
+ * Return: current doorbell update value
+ */
+static inline u64 ionic_queue_dbell_val(struct ionic_queue *q)
+{
+	return q->dbell | q->prod;
+}
+
+#endif /* _IONIC_QUEUE_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_res.h b/drivers/infiniband/hw/ionic/ionic_res.h
new file mode 100644
index 000000000000..46c8c584bd9a
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_res.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_RES_H_
+#define _IONIC_RES_H_
+
+#include
+#include
+
+/**
+ * struct ionic_resid_bits - Number allocator based on IDA
+ *
+ * @inuse:      IDA handle
+ * @inuse_size: Highest ID limit for IDA
+ */
+struct ionic_resid_bits {
+	struct ida inuse;
+	unsigned int inuse_size;
+};
+
+/**
+ * ionic_resid_init() - Initialize a resid allocator
+ * @resid: Uninitialized resid allocator
+ * @size:  Capacity of the allocator
+ */
+static inline void ionic_resid_init(struct ionic_resid_bits *resid,
+				    unsigned int size)
+{
+	resid->inuse_size = size;
+	ida_init(&resid->inuse);
+}
+
+/**
+ * ionic_resid_destroy() - Destroy a resid allocator
+ * @resid: Resid allocator
+ */
+static inline void ionic_resid_destroy(struct ionic_resid_bits *resid)
+{
+	ida_destroy(&resid->inuse);
+}
+
+/**
+ * ionic_resid_get_shared() - Allocate an available shared resource id
+ * @resid: Resid allocator
+ * @min:   Smallest valid resource id
+ * @size:  One after largest valid resource id
+ *
+ * Return: Resource id, or negative error number
+ */
+static inline int ionic_resid_get_shared(struct ionic_resid_bits *resid,
+					 unsigned int min,
+					 unsigned int size)
+{
+	return ida_alloc_range(&resid->inuse, min, size - 1, GFP_KERNEL);
+}
+
+/**
+ * ionic_resid_get() - Allocate an available resource id
+ * @resid: Resid allocator
+ *
+ * Return: Resource id, or negative error number
+ */
+static inline int ionic_resid_get(struct ionic_resid_bits *resid)
+{
+	return ionic_resid_get_shared(resid, 0, resid->inuse_size);
+}
+
+/**
+ * ionic_resid_put() - Free a resource id
+ * @resid: Resid allocator
+ * @id:    Resource id
+ */
+static inline void ionic_resid_put(struct ionic_resid_bits *resid, int id)
+{
+	ida_free(&resid->inuse, id);
+}
+
+/**
+ * ionic_bitid_to_qid() - Transform a resource bit index into a queue id
+ * @bitid:          Bit index
+ * @qgrp_shift:     Log2 number of queues per queue group
+ * @half_qid_shift: Log2 of half the total number of queues
+ *
+ * Return: Queue id
+ *
+ * Udma-constrained queues (QPs and CQs) are associated with their udma by
+ * queue group.  Even queue groups are associated with udma0, and odd queue
+ * groups with udma1.
+ *
+ * For allocating queue ids, we want to arrange the bits into two halves,
+ * with the even queue groups of udma0 in the lower half of the bitset,
+ * and the odd queue groups of udma1 in the upper half of the bitset.
+ * Then, one or two calls of find_next_zero_bit can examine all the bits
+ * for queues of an entire udma.
+ *
+ * For example, assuming eight queue groups with qgrp qids per group:
+ *
+ * bitid 0*qgrp..1*qgrp-1 : qid 0*qgrp..1*qgrp-1
+ * bitid 1*qgrp..2*qgrp-1 : qid 2*qgrp..3*qgrp-1
+ * bitid 2*qgrp..3*qgrp-1 : qid 4*qgrp..5*qgrp-1
+ * bitid 3*qgrp..4*qgrp-1 : qid 6*qgrp..7*qgrp-1
+ * bitid 4*qgrp..5*qgrp-1 : qid 1*qgrp..2*qgrp-1
+ * bitid 5*qgrp..6*qgrp-1 : qid 3*qgrp..4*qgrp-1
+ * bitid 6*qgrp..7*qgrp-1 : qid 5*qgrp..6*qgrp-1
+ * bitid 7*qgrp..8*qgrp-1 : qid 7*qgrp..8*qgrp-1
+ *
+ * There are three important ranges of bits in the qid.  There is the udma
+ * bit "U" at qgrp_shift, which is the least significant bit of the group
+ * index, and determines which udma a queue is associated with.
+ * The bits of lesser significance we can call the idx bits "I", which are
+ * the index of the queue within the group.  The bits of greater significance
+ * we can call the grp bits "G", which are other bits of the group index that
+ * do not determine the udma.  Those bits are just rearranged in the bit index
+ * in the bitset.  A bitid has the udma bit in the most significant place,
+ * then the grp bits, then the idx bits.
+ *
+ * bitid: 00000000000000 U GGG IIIIII
+ * qid:   00000000000000 GGG U IIIIII
+ *
+ * Transforming from bit index to qid, or from qid to bit index, can be
+ * accomplished by rearranging the bits by masking and shifting.
+ */
+static inline u32 ionic_bitid_to_qid(u32 bitid, u8 qgrp_shift,
+				     u8 half_qid_shift)
+{
+	u32 udma_bit =
+		(bitid & BIT(half_qid_shift)) >> (half_qid_shift - qgrp_shift);
+	u32 grp_bits = (bitid & GENMASK(half_qid_shift - 1, qgrp_shift)) << 1;
+	u32 idx_bits = bitid & (BIT(qgrp_shift) - 1);
+
+	return grp_bits | udma_bit | idx_bits;
+}
+
+/**
+ * ionic_qid_to_bitid() - Transform a queue id into a resource bit index
+ * @qid:            Queue index
+ * @qgrp_shift:     Log2 number of queues per queue group
+ * @half_qid_shift: Log2 of half the total number of queues
+ *
+ * Return: Resource bit index
+ *
+ * This is the inverse of ionic_bitid_to_qid().
+ */
+static inline u32 ionic_qid_to_bitid(u32 qid, u8 qgrp_shift, u8 half_qid_shift)
+{
+	u32 udma_bit = (qid & BIT(qgrp_shift)) << (half_qid_shift - qgrp_shift);
+	u32 grp_bits = (qid & GENMASK(half_qid_shift, qgrp_shift + 1)) >> 1;
+	u32 idx_bits = qid & (BIT(qgrp_shift) - 1);
+
+	return udma_bit | grp_bits | idx_bits;
+}
+#endif /* _IONIC_RES_H_ */
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
To: , , , , , , , ,
CC: , , , , , , , Abhijit Gangurde , Andrew Boyer
Subject: [PATCH v6 10/14] RDMA/ionic: Register device ops for control path
Date: Wed, 3 Sep 2025 11:46:02 +0530
Message-ID: <20250903061606.4139957-11-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement the device-supported verb APIs for the control path.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v4->v5
- Updated APIs to latest DMAH changes
v3->v4
- Removed empty labels
- Used xa lock instead of rcu lock for qp and cq access and removed sync_rcu
v2->v3
- Registered main ib ops at once
- Removed uverbs_cmd_mask
- Used rdma_user_mmap_* APIs for mappings
- Removed rw locks around xarrays
- Fixed sparse checks

 drivers/infiniband/hw/ionic/ionic_admin.c     |  104 +
 .../infiniband/hw/ionic/ionic_controlpath.c   | 2498 +++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_fw.h        |  717 +++++
 drivers/infiniband/hw/ionic/ionic_ibdev.c     |   46 +-
 drivers/infiniband/hw/ionic/ionic_ibdev.h     |  251 +-
 drivers/infiniband/hw/ionic/ionic_pgtbl.c     |   19 +
 include/uapi/rdma/ionic-abi.h                 |  115 +
 7 files changed, 3741 insertions(+), 9 deletions(-)
 create mode 100644 include/uapi/rdma/ionic-abi.h

diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband/hw/ionic/ionic_admin.c
index 845c03f6d9fb..1ba7a8ecc073 100644
--- a/drivers/infiniband/hw/ionic/ionic_admin.c
+++ b/drivers/infiniband/hw/ionic/ionic_admin.c
@@ -627,6 +627,44 @@ static struct ionic_aq *ionic_create_rdma_adminq(struct ionic_ibdev *dev,
 	return ERR_PTR(rc);
 }
 
+static void ionic_flush_qs(struct ionic_ibdev *dev)
+{
+	struct ionic_qp *qp, *qp_tmp;
+	struct ionic_cq *cq, *cq_tmp;
+	LIST_HEAD(flush_list);
+	unsigned long index;
+
+	WARN_ON(!irqs_disabled());
+
+	/* Flush qp send and recv */
+	xa_lock(&dev->qp_tbl);
+	xa_for_each(&dev->qp_tbl, index, qp) {
+		kref_get(&qp->qp_kref);
+		list_add_tail(&qp->ibkill_flush_ent, &flush_list);
+	}
+	xa_unlock(&dev->qp_tbl);
+
+	list_for_each_entry_safe(qp, qp_tmp, &flush_list, ibkill_flush_ent) {
+		ionic_flush_qp(dev, qp);
+		kref_put(&qp->qp_kref, ionic_qp_complete);
+		list_del(&qp->ibkill_flush_ent);
+	}
+
+	/* Notify completions */
+	xa_lock(&dev->cq_tbl);
+	xa_for_each(&dev->cq_tbl, index, cq) {
+		kref_get(&cq->cq_kref);
+		list_add_tail(&cq->ibkill_flush_ent, &flush_list);
+	}
+	xa_unlock(&dev->cq_tbl);
+
+	list_for_each_entry_safe(cq, cq_tmp, &flush_list, ibkill_flush_ent) {
+		ionic_notify_flush_cq(cq);
+		kref_put(&cq->cq_kref, ionic_cq_complete);
+		list_del(&cq->ibkill_flush_ent);
+	}
+}
+
 static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
 {
 	unsigned long irqflags;
@@ -650,6 +688,9 @@ static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
 		spin_unlock(&aq->lock);
 	}
 
+	if (do_flush)
+		ionic_flush_qs(dev);
+
 	local_irq_restore(irqflags);
 
 	/* Post a fatal event if requested */
@@ -789,6 +830,65 @@ static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code)
 	kref_put(&cq->cq_kref, ionic_cq_complete);
 }
 
+static void ionic_qp_event(struct ionic_ibdev *dev, u32 qpid, u8 code)
+{
+	unsigned long irqflags;
+	struct ib_event ibev;
+	struct ionic_qp *qp;
+
+	xa_lock_irqsave(&dev->qp_tbl, irqflags);
+	qp = xa_load(&dev->qp_tbl, qpid);
+	if (qp)
+		kref_get(&qp->qp_kref);
+	xa_unlock_irqrestore(&dev->qp_tbl, irqflags);
+
+	if (!qp) {
+		ibdev_dbg(&dev->ibdev,
+			  "missing qpid %#x code %u\n", qpid, code);
+		return;
+	}
+
+	ibev.device = &dev->ibdev;
+	ibev.element.qp = &qp->ibqp;
+
+	switch (code) {
+	case IONIC_V1_EQE_SQ_DRAIN:
+		ibev.event = IB_EVENT_SQ_DRAINED;
+		break;
+
+	case IONIC_V1_EQE_QP_COMM_EST:
+		ibev.event = IB_EVENT_COMM_EST;
+		break;
+
+	case IONIC_V1_EQE_QP_LAST_WQE:
+		ibev.event = IB_EVENT_QP_LAST_WQE_REACHED;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR:
+		ibev.event = IB_EVENT_QP_FATAL;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_REQUEST:
+		ibev.event = IB_EVENT_QP_REQ_ERR;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_ACCESS:
+		ibev.event = IB_EVENT_QP_ACCESS_ERR;
+		break;
+
+	default:
+		ibdev_dbg(&dev->ibdev,
+			  "unrecognized qpid %#x code %u\n", qpid, code);
+		goto out;
+	}
+
+	if (qp->ibqp.event_handler)
+		qp->ibqp.event_handler(&ibev, qp->ibqp.qp_context);
+
+out:
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+}
+
 static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 {
 	struct ionic_ibdev *dev = eq->dev;
@@ -818,6 +918,10 @@ static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 		ionic_cq_event(dev, qid, code);
 		break;
 
+	case IONIC_V1_EQE_TYPE_QP:
+		ionic_qp_event(dev, qid, code);
+		break;
+
 	default:
 		ibdev_dbg(&dev->ibdev,
 			  "unknown event %#x type %u\n", evt, type);
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
index e1130573bd39..9ce7c2e6d7a8 100644
--- a/drivers/infiniband/hw/ionic/ionic_controlpath.c
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -1,8 +1,19 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
 
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ionic_fw.h"
 #include "ionic_ibdev.h"
 
+#define ionic_set_ecn(tos)   (((tos) | 2u) & ~1u)
+#define ionic_clear_ecn(tos) ((tos) & ~3u)
+
 static int ionic_validate_qdesc(struct ionic_qdesc *q)
 {
 	if (!q->addr || !q->size || !q->mask ||
@@ -179,3 +190,2490 @@ void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq)
 
 	cq->vcq = NULL;
 }
+
+static int ionic_validate_qdesc_zero(struct ionic_qdesc *q)
+{
+	if (q->addr || q->size || q->mask || q->depth_log2 || q->stride_log2)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int ionic_get_pdid(struct ionic_ibdev *dev, u32 *pdid)
+{
+	int rc;
+
+	rc = ionic_resid_get(&dev->inuse_pdid);
+	if (rc < 0)
+		return rc;
+
+	*pdid = rc;
+	return 0;
+}
+
+static int ionic_get_ahid(struct ionic_ibdev *dev, u32 *ahid)
+{
+	int rc;
+
+	rc = ionic_resid_get(&dev->inuse_ahid);
+	if (rc < 0)
+		return rc;
+
+	*ahid = rc;
+	return 0;
+}
+
+static int ionic_get_mrid(struct ionic_ibdev *dev, u32 *mrid)
+{
+	int rc;
+
+	/* wrap to 1, skip reserved lkey */
+	rc = ionic_resid_get_shared(&dev->inuse_mrid, 1,
+				    dev->inuse_mrid.inuse_size);
+	if (rc < 0)
+		return rc;
+
+	*mrid = ionic_mrid(rc, dev->next_mrkey++);
+	return 0;
+}
+
+static int ionic_get_gsi_qpid(struct ionic_ibdev *dev, u32 *qpid)
+{
+	int rc = 0;
+
+	rc = ionic_resid_get_shared(&dev->inuse_qpid, IB_QPT_GSI, IB_QPT_GSI + 1);
+	if (rc < 0)
+		return rc;
+
+	*qpid = IB_QPT_GSI;
+	return 0;
+}
+
+static int ionic_get_qpid(struct ionic_ibdev *dev, u32 *qpid,
+			  u8 *udma_idx, u8 udma_mask)
+{
+	unsigned int size, base, bound;
+	int udma_i, udma_x, udma_ix;
+	int rc = -EINVAL;
+
+	udma_x = dev->next_qpid_udma_idx;
+
+	dev->next_qpid_udma_idx ^= dev->lif_cfg.udma_count - 1;
+
+	for (udma_i = 0; udma_i < dev->lif_cfg.udma_count; ++udma_i) {
+		udma_ix = udma_i ^ udma_x;
+
+		if (!(udma_mask & BIT(udma_ix)))
+			continue;
+
+		size = dev->lif_cfg.qp_count / dev->lif_cfg.udma_count;
+		base = size * udma_ix;
+		bound = base + size;
+
+		/* skip reserved SMI and GSI qpids in group zero */
+		if (!base)
+			base = 2;
+
+		rc = ionic_resid_get_shared(&dev->inuse_qpid, base, bound);
+		if (rc >= 0) {
+			*qpid = ionic_bitid_to_qid(rc,
+						   dev->lif_cfg.udma_qgrp_shift,
+						   dev->half_qpid_udma_shift);
+			*udma_idx = udma_ix;
+
+			rc = 0;
+			break;
+		}
+	}
+
+	return rc;
+}
+
+static int ionic_get_dbid(struct ionic_ibdev *dev, u32 *dbid, phys_addr_t *addr)
+{
+	int rc, dbpage_num;
+
+	/* wrap to 1, skip kernel reserved */
+	rc = ionic_resid_get_shared(&dev->inuse_dbid, 1,
+				    dev->inuse_dbid.inuse_size);
+	if (rc < 0)
+		return rc;
+
+	dbpage_num = (dev->lif_cfg.lif_hw_index * dev->lif_cfg.dbid_count) + rc;
+	*addr = dev->lif_cfg.db_phys + ((phys_addr_t)dbpage_num << PAGE_SHIFT);
+
+	*dbid = rc;
+
+	return 0;
+}
+
+static void ionic_put_pdid(struct ionic_ibdev *dev, u32 pdid)
+{
+	ionic_resid_put(&dev->inuse_pdid, pdid);
+}
+
+static void ionic_put_ahid(struct ionic_ibdev *dev, u32 ahid)
+{
+	ionic_resid_put(&dev->inuse_ahid, ahid);
+}
+
+static void ionic_put_mrid(struct ionic_ibdev *dev, u32 mrid)
+{
+	ionic_resid_put(&dev->inuse_mrid, ionic_mrid_index(mrid));
+}
+
+static void ionic_put_qpid(struct ionic_ibdev *dev, u32 qpid)
+{
+	u32 bitid = ionic_qid_to_bitid(qpid,
+				       dev->lif_cfg.udma_qgrp_shift,
+				       dev->half_qpid_udma_shift);
+
+	ionic_resid_put(&dev->inuse_qpid, bitid);
+}
+
+static void ionic_put_dbid(struct ionic_ibdev *dev, u32 dbid)
+{
+	ionic_resid_put(&dev->inuse_dbid, dbid);
+}
+
+static struct rdma_user_mmap_entry*
+ionic_mmap_entry_insert(struct ionic_ctx *ctx, unsigned long size,
+			unsigned long pfn, u8 mmap_flags, u64 *offset)
+{
+	struct ionic_mmap_entry *entry;
+	int rc;
+
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return NULL;
+
+	entry->size = size;
+	entry->pfn = pfn;
+	entry->mmap_flags = mmap_flags;
+
+	rc = rdma_user_mmap_entry_insert(&ctx->ibctx, &entry->rdma_entry,
+					 entry->size);
+	if (rc) {
+		kfree(entry);
+		return NULL;
+	}
+
+	if (offset)
+		*offset = rdma_user_mmap_get_offset(&entry->rdma_entry);
+
+	return &entry->rdma_entry;
+}
+
+int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct ionic_ctx_resp resp = {};
+	struct ionic_ctx_req req;
+	phys_addr_t db_phys = 0;
+	int rc;
+
+	rc = ib_copy_from_udata(&req, udata, sizeof(req));
+	if (rc)
+		return rc;
+
+	/* try to allocate dbid for user ctx */
+	rc = ionic_get_dbid(dev, &ctx->dbid, &db_phys);
+	if (rc < 0)
+		return rc;
+
+	ibdev_dbg(&dev->ibdev, "user space dbid %u\n", ctx->dbid);
+
+	ctx->mmap_dbell = ionic_mmap_entry_insert(ctx, PAGE_SIZE,
+						  PHYS_PFN(db_phys), 0, NULL);
+	if (!ctx->mmap_dbell) {
+		rc = -ENOMEM;
+		goto err_mmap_dbell;
+	}
+
+	resp.page_shift = PAGE_SHIFT;
+
+	resp.dbell_offset = db_phys & ~PAGE_MASK;
+
+	resp.version = dev->lif_cfg.rdma_version;
+	resp.qp_opcodes = dev->lif_cfg.qp_opcodes;
+	resp.admin_opcodes = dev->lif_cfg.admin_opcodes;
+
+	resp.sq_qtype = dev->lif_cfg.sq_qtype;
+	resp.rq_qtype = dev->lif_cfg.rq_qtype;
+	resp.cq_qtype = dev->lif_cfg.cq_qtype;
+	resp.admin_qtype = dev->lif_cfg.aq_qtype;
+	resp.max_stride = dev->lif_cfg.max_stride;
+	resp.max_spec = IONIC_SPEC_HIGH;
+
+	resp.udma_count = dev->lif_cfg.udma_count;
+	resp.expdb_mask = dev->lif_cfg.expdb_mask;
+
+	if (dev->lif_cfg.sq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_SQ;
+	if (dev->lif_cfg.rq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_RQ;
+
+	rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+	if (rc)
+		goto err_resp;
+
+	return 0;
+
+err_resp:
+	rdma_user_mmap_entry_remove(ctx->mmap_dbell);
+err_mmap_dbell:
+	ionic_put_dbid(dev, ctx->dbid);
+
+	return rc;
+}
+
+void ionic_dealloc_ucontext(struct ib_ucontext *ibctx)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+
+	rdma_user_mmap_entry_remove(ctx->mmap_dbell);
+	ionic_put_dbid(dev, ctx->dbid);
+}
+
+int ionic_mmap(struct ib_ucontext *ibctx, struct vm_area_struct *vma)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct rdma_user_mmap_entry *rdma_entry;
+	struct ionic_mmap_entry *ionic_entry;
+	int rc = 0;
+
+	rdma_entry = rdma_user_mmap_entry_get(&ctx->ibctx, vma);
+	if (!rdma_entry) {
+		ibdev_dbg(&dev->ibdev, "not found %#lx\n",
+			  vma->vm_pgoff << PAGE_SHIFT);
+		return -EINVAL;
+	}
+
+	ionic_entry = container_of(rdma_entry, struct ionic_mmap_entry,
+				   rdma_entry);
+
+	ibdev_dbg(&dev->ibdev, "writecombine? %d\n",
+		  ionic_entry->mmap_flags & IONIC_MMAP_WC);
+	if (ionic_entry->mmap_flags & IONIC_MMAP_WC)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	ibdev_dbg(&dev->ibdev, "remap st %#lx pf %#lx sz %#lx\n",
+		  vma->vm_start, ionic_entry->pfn, ionic_entry->size);
+	rc = rdma_user_mmap_io(&ctx->ibctx, vma, ionic_entry->pfn,
+			       ionic_entry->size, vma->vm_page_prot,
+			       rdma_entry);
+	if (rc)
+		ibdev_dbg(&dev->ibdev, "remap failed %d\n", rc);
+
+	rdma_user_mmap_entry_put(rdma_entry);
+	return rc;
+}
+
+void ionic_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
+{
+	struct ionic_mmap_entry *ionic_entry;
+
+	ionic_entry = container_of(rdma_entry, struct ionic_mmap_entry,
+				   rdma_entry);
+	kfree(ionic_entry);
+}
+
+int ionic_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+
+	return ionic_get_pdid(dev, &pd->pdid);
+}
+
+int ionic_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+
+	ionic_put_pdid(dev, pd->pdid);
+
+	return 0;
+}
+
+static int ionic_build_hdr(struct ionic_ibdev *dev,
+			   struct ib_ud_header *hdr,
+			   const struct rdma_ah_attr *attr,
+			   u16 sport, bool want_ecn)
+{
+	const struct ib_global_route *grh;
+	enum rdma_network_type net;
+	u16 vlan;
+	int rc;
+
+	if (attr->ah_flags != IB_AH_GRH)
+		return -EINVAL;
+	if (attr->type != RDMA_AH_ATTR_TYPE_ROCE)
+		return -EINVAL;
+
+	grh = rdma_ah_read_grh(attr);
+
+	rc = rdma_read_gid_l2_fields(grh->sgid_attr, &vlan, &hdr->eth.smac_h[0]);
+	if (rc)
+		return rc;
+
+	net = rdma_gid_attr_network_type(grh->sgid_attr);
+
+	rc = ib_ud_header_init(0,	/* no payload */
+			       0,	/* no lrh */
+			       1,	/* yes eth */
+			       vlan != 0xffff,
+			       0,	/* no grh */
+			       net == RDMA_NETWORK_IPV4 ? 4 : 6,
+			       1,	/* yes udp */
+			       0,	/* no imm */
+			       hdr);
+	if (rc)
+		return rc;
+
+	ether_addr_copy(hdr->eth.dmac_h, attr->roce.dmac);
+
+	if (net == RDMA_NETWORK_IPV4) {
+		hdr->eth.type = cpu_to_be16(ETH_P_IP);
+		hdr->ip4.frag_off = cpu_to_be16(0x4000); /* don't fragment */
+		hdr->ip4.ttl = grh->hop_limit;
+		hdr->ip4.tot_len = cpu_to_be16(0xffff);
+		hdr->ip4.saddr =
+			*(const __be32 *)(grh->sgid_attr->gid.raw + 12);
+		hdr->ip4.daddr = *(const __be32 *)(grh->dgid.raw + 12);
+
+		if (want_ecn)
+			hdr->ip4.tos = ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->ip4.tos = ionic_clear_ecn(grh->traffic_class);
+	} else {
+		hdr->eth.type = cpu_to_be16(ETH_P_IPV6);
+		hdr->grh.flow_label = cpu_to_be32(grh->flow_label);
+		hdr->grh.hop_limit = grh->hop_limit;
+		hdr->grh.source_gid = grh->sgid_attr->gid;
+		hdr->grh.destination_gid = grh->dgid;
+
+		if (want_ecn)
+			hdr->grh.traffic_class =
+				ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->grh.traffic_class =
+				ionic_clear_ecn(grh->traffic_class);
+	}
+
+	if (vlan != 0xffff) {
+		vlan |= rdma_ah_get_sl(attr) << VLAN_PRIO_SHIFT;
+		hdr->vlan.tag = cpu_to_be16(vlan);
+		hdr->vlan.type = hdr->eth.type;
+		hdr->eth.type = cpu_to_be16(ETH_P_8021Q);
+	}
+
+	hdr->udp.sport = cpu_to_be16(sport);
+	hdr->udp.dport = cpu_to_be16(ROCE_V2_UDP_DPORT);
+
+	return 0;
+}
+
+static void ionic_set_ah_attr(struct ionic_ibdev *dev,
+			      struct rdma_ah_attr *ah_attr,
+			      struct ib_ud_header *hdr,
+			      int sgid_index)
+{
+	u32 flow_label;
+	u16 vlan = 0;
+	u8 tos, ttl;
+
+	if (hdr->vlan_present)
+		vlan = be16_to_cpu(hdr->vlan.tag);
+
+	if (hdr->ipv4_present) {
+		flow_label = 0;
+		ttl = hdr->ip4.ttl;
+		tos = hdr->ip4.tos;
+		*(__be16 *)(hdr->grh.destination_gid.raw + 10) = cpu_to_be16(0xffff);
+		*(__be32 *)(hdr->grh.destination_gid.raw + 12) = hdr->ip4.daddr;
+	} else {
+		flow_label = be32_to_cpu(hdr->grh.flow_label);
+		ttl = hdr->grh.hop_limit;
+		tos = hdr->grh.traffic_class;
+	}
+
+	memset(ah_attr, 0, sizeof(*ah_attr));
+	ah_attr->type = RDMA_AH_ATTR_TYPE_ROCE;
+	if (hdr->eth_present)
+		memcpy(&ah_attr->roce.dmac, &hdr->eth.dmac_h, ETH_ALEN);
+	rdma_ah_set_sl(ah_attr, vlan >> VLAN_PRIO_SHIFT);
+	rdma_ah_set_port_num(ah_attr, 1);
+	rdma_ah_set_grh(ah_attr, NULL, flow_label, sgid_index, ttl, tos);
+	rdma_ah_set_dgid_raw(ah_attr, &hdr->grh.destination_gid);
+}
+
+static int ionic_create_ah_cmd(struct ionic_ibdev *dev,
+			       struct ionic_ah *ah,
+			       struct ionic_pd *pd,
+			       struct rdma_ah_attr *attr,
+			       u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_CREATE_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_CREATE_AH_IN_V1_LEN),
+			.cmd.create_ah = {
+				.pd_id = cpu_to_le32(pd->pdid),
+				.dbid_flags = cpu_to_le16(dev->lif_cfg.dbid),
+				.id_ver = cpu_to_le32(ah->ahid),
+			}
+		}
+	};
+	enum ionic_admin_flags admin_flags = 0;
+	dma_addr_t hdr_dma = 0;
+	void *hdr_buf;
+	gfp_t gfp = GFP_ATOMIC;
+	int rc, hdr_len = 0;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_CREATE_AH)
+		return -EBADRQC;
+
+	if (flags & RDMA_CREATE_AH_SLEEPABLE)
+		gfp = GFP_KERNEL;
+	else
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	rc = ionic_build_hdr(dev, &ah->hdr, attr, IONIC_ROCE_UDP_SPORT, false);
+	if (rc)
+		return rc;
+
+	if (ah->hdr.eth.type == cpu_to_be16(ETH_P_8021Q)) {
+		if (ah->hdr.vlan.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP;
+	} else {
+		if (ah->hdr.eth.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP;
+	}
+
+	ah->sgid_index = rdma_ah_read_grh(attr)->sgid_index;
+
+	hdr_buf = kmalloc(PAGE_SIZE, gfp);
+	if (!hdr_buf)
+		return -ENOMEM;
+
+	hdr_len = ib_ud_header_pack(&ah->hdr, hdr_buf);
+	hdr_len -= IB_BTH_BYTES;
+	hdr_len -= IB_DETH_BYTES;
+	ibdev_dbg(&dev->ibdev, "roce packet header template\n");
+	print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1,
+			     hdr_buf, hdr_len, true);
+
+	hdr_dma = dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len,
+				 DMA_TO_DEVICE);
+
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma);
+	if (rc)
+		goto err_dma;
+
+	wr.wqe.cmd.create_ah.dma_addr = cpu_to_le64(hdr_dma);
+	wr.wqe.cmd.create_ah.length = cpu_to_le32(hdr_len);
+
+	ionic_admin_post(dev, &wr);
+	rc = ionic_admin_wait(dev, &wr, admin_flags);
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len,
+			 DMA_TO_DEVICE);
+err_dma:
+	kfree(hdr_buf);
+
+	return rc;
+}
+
+static int ionic_destroy_ah_cmd(struct ionic_ibdev *dev, u32 ahid, u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_DESTROY_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_DESTROY_AH_IN_V1_LEN),
+			.cmd.destroy_ah = {
+				.ah_id = cpu_to_le32(ahid),
+			},
+		}
+	};
+	enum ionic_admin_flags admin_flags = IONIC_ADMIN_F_TEARDOWN;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_DESTROY_AH)
+		return -EBADRQC;
+
+	if (!(flags & RDMA_CREATE_AH_SLEEPABLE))
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	ionic_admin_post(dev, &wr);
+	ionic_admin_wait(dev, &wr, admin_flags);
+
+	/* No host-memory resource is associated with ah, so it is ok
+	 * to "succeed" and complete this destroy ah on the host.
+	 */
+	return 0;
+}
+
+int ionic_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct rdma_ah_attr *attr = init_attr->ah_attr;
+	struct ionic_pd *pd = to_ionic_pd(ibah->pd);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+	struct ionic_ah_resp resp = {};
+	u32 flags = init_attr->flags;
+	int rc;
+
+	rc = ionic_get_ahid(dev, &ah->ahid);
+	if (rc)
+		return rc;
+
+	rc = ionic_create_ah_cmd(dev, ah, pd, attr, flags);
+	if (rc)
+		goto err_cmd;
+
+	if (udata) {
+		resp.ahid = ah->ahid;
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc)
+			goto err_resp;
+	}
+
+	return 0;
+
+err_resp:
+	ionic_destroy_ah_cmd(dev, ah->ahid, flags);
+err_cmd:
+	ionic_put_ahid(dev, ah->ahid);
+	return rc;
+}
+
+int ionic_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+
+	ionic_set_ah_attr(dev, ah_attr, &ah->hdr, ah->sgid_index);
+
+	return 0;
+}
+
+int ionic_destroy_ah(struct ib_ah *ibah, u32 flags)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+	int rc;
+
+	rc = ionic_destroy_ah_cmd(dev, ah->ahid, flags);
+	if (rc)
+		return rc;
+
+	ionic_put_ahid(dev, ah->ahid);
+
+	return 0;
+}
+
+static int ionic_create_mr_cmd(struct ionic_ibdev *dev,
+			       struct ionic_pd *pd,
+			       struct ionic_mr *mr,
+			       u64 addr,
+			       u64 length)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_CREATE_MR,
+			.len = cpu_to_le16(IONIC_ADMIN_CREATE_MR_IN_V1_LEN),
+			.cmd.create_mr = {
+				.va = cpu_to_le64(addr),
+				.length = cpu_to_le64(length),
+				.pd_id = cpu_to_le32(pd->pdid),
+				.page_size_log2 = mr->buf.page_size_log2,
+				.tbl_index = cpu_to_le32(~0),
+				.map_count = cpu_to_le32(mr->buf.tbl_pages),
+				.dma_addr = ionic_pgtbl_dma(&mr->buf, addr),
+				.dbid_flags = cpu_to_le16(mr->flags),
+				.id_ver = cpu_to_le32(mr->mrid),
+			}
+		}
+	};
+	int rc;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_CREATE_MR)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+	rc = ionic_admin_wait(dev, &wr, 0);
+	if (!rc)
+		mr->created = true;
+
+	return rc;
+}
+
+static int ionic_destroy_mr_cmd(struct ionic_ibdev *dev, u32 mrid)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_DESTROY_MR,
+			.len = cpu_to_le16(IONIC_ADMIN_DESTROY_MR_IN_V1_LEN),
+			.cmd.destroy_mr = {
+				.mr_id = cpu_to_le32(mrid),
+			},
+		}
+	};
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_DESTROY_MR)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+
+	return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN);
+}
+
+struct ib_mr *ionic_get_dma_mr(struct ib_pd *ibpd, int access)
+{
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	struct ionic_mr *mr;
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	mr->ibmr.lkey = IONIC_DMA_LKEY;
+	mr->ibmr.rkey = IONIC_DMA_RKEY;
+
+	if (pd)
+		pd->flags |= IONIC_QPF_PRIVILEGED;
+
+	return &mr->ibmr;
+}
+
+struct ib_mr *ionic_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length,
+				u64 addr, int access, struct ib_dmah *dmah,
+				struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	struct ionic_mr *mr;
+	unsigned long pg_sz;
+	int rc;
+
+	if (dmah)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	rc = ionic_get_mrid(dev, &mr->mrid);
+	if (rc)
+		goto err_mrid;
+
+	mr->ibmr.lkey = mr->mrid;
+	mr->ibmr.rkey = mr->mrid;
+	mr->ibmr.iova = addr;
+	mr->ibmr.length = length;
+
+	mr->flags = IONIC_MRF_USER_MR | to_ionic_mr_flags(access);
+
+	mr->umem = ib_umem_get(&dev->ibdev, start, length, access);
+	if (IS_ERR(mr->umem)) {
+		rc = PTR_ERR(mr->umem);
+		goto err_umem;
+	}
+
+	pg_sz = ib_umem_find_best_pgsz(mr->umem,
+				       dev->lif_cfg.page_size_supported,
+				       addr);
+	if (!pg_sz) {
+		rc = -EINVAL;
+		goto err_pgtbl;
+	}
+
+	rc = ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz);
+	if (rc)
+		goto err_pgtbl;
+
+	rc = ionic_create_mr_cmd(dev, pd, mr, addr, length);
+	if (rc)
+		goto err_cmd;
+
+	ionic_pgtbl_unbuf(dev, &mr->buf);
+
+	return &mr->ibmr;
+
+err_cmd:
+	ionic_pgtbl_unbuf(dev, &mr->buf);
+err_pgtbl:
+	ib_umem_release(mr->umem);
+err_umem:
+	ionic_put_mrid(dev, mr->mrid);
+err_mrid:
+	kfree(mr);
+	return ERR_PTR(rc);
+}
+
+struct ib_mr *ionic_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 offset,
+				       u64 length, u64 addr, int fd, int access,
+				       struct ib_dmah *dmah,
+				       struct uverbs_attr_bundle *attrs)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	struct ib_umem_dmabuf *umem_dmabuf;
+	struct ionic_mr *mr;
+	u64 pg_sz;
+	int rc;
+
+	if (dmah)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	rc = ionic_get_mrid(dev, &mr->mrid);
+	if (rc)
+		goto err_mrid;
+
+	mr->ibmr.lkey = mr->mrid;
+	mr->ibmr.rkey = mr->mrid;
+	mr->ibmr.iova = addr;
+	mr->ibmr.length = length;
+
+	mr->flags = IONIC_MRF_USER_MR | to_ionic_mr_flags(access);
+
+	umem_dmabuf = ib_umem_dmabuf_get_pinned(&dev->ibdev, offset, length,
+						fd, access);
+	if (IS_ERR(umem_dmabuf)) {
+		rc = PTR_ERR(umem_dmabuf);
+		goto err_umem;
+	}
+
+	mr->umem = &umem_dmabuf->umem;
+
+	pg_sz = ib_umem_find_best_pgsz(mr->umem,
+				       dev->lif_cfg.page_size_supported,
+				       addr);
+	if (!pg_sz) {
+		rc = -EINVAL;
+		goto err_pgtbl;
+	}
+ rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) + goto err_pgtbl; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, addr, length); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &mr->buf); + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); +err_umem: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); + return ERR_PTR(rc); +} + +int ionic_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + if (!mr->ibmr.lkey) + goto out; + + if (mr->created) { + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + } + + ionic_pgtbl_unbuf(dev, &mr->buf); + + if (mr->umem) + ib_umem_release(mr->umem); + + ionic_put_mrid(dev, mr->mrid); + +out: + kfree(mr); + + return 0; +} + +struct ib_mr *ionic_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type type, + u32 max_sg) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ionic_mr *mr; + int rc; + + if (type !=3D IB_MR_TYPE_MEM_REG) + return ERR_PTR(-EINVAL); + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + + mr->flags =3D IONIC_MRF_PHYS_MR; + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, max_sg, PAGE_SIZE); + if (rc) + goto err_pgtbl; + + mr->buf.tbl_pages =3D 0; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); + return ERR_PTR(rc); +} + +static int ionic_map_mr_page(struct ib_mr *ibmr, u64 dma) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + + 
ibdev_dbg(&dev->ibdev, "dma %p\n", (void *)dma); + return ionic_pgtbl_page(&mr->buf, dma); +} + +int ionic_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nen= ts, + unsigned int *sg_offset) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + /* mr must be allocated using ib_alloc_mr() */ + if (unlikely(!mr->buf.tbl_limit)) + return -EINVAL; + + mr->buf.tbl_pages =3D 0; + + if (mr->buf.tbl_buf) + dma_sync_single_for_cpu(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + ibdev_dbg(&dev->ibdev, "sg %p nent %d\n", sg, sg_nents); + rc =3D ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, ionic_map_mr_page); + + mr->buf.page_size_log2 =3D order_base_2(ibmr->page_size); + + if (mr->buf.tbl_buf) + dma_sync_single_for_device(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + return rc; +} + +int ionic_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_pd *pd =3D to_ionic_pd(ibmw->pd); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + return rc; + + mr->ibmw.rkey =3D mr->mrid; + + if (mr->ibmw.type =3D=3D IB_MW_TYPE_1) + mr->flags =3D IONIC_MRF_MW_1; + else + mr->flags =3D IONIC_MRF_MW_2; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return 0; + +err_cmd: + ionic_put_mrid(dev, mr->mrid); + return rc; +} + +int ionic_dealloc_mw(struct ib_mw *ibmw) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + + ionic_put_mrid(dev, mr->mrid); + + return 0; +} + +static int ionic_create_cq_cmd(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_cq *cq, + struct ionic_tbl_buf *buf) +{ + const u16 dbid =3D ionic_ctx_dbid(dev, ctx); + struct 
ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_CQ, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_CQ_IN_V1_LEN), + .cmd.create_cq =3D { + .eq_id =3D cpu_to_le32(cq->eqid), + .depth_log2 =3D cq->q.depth_log2, + .stride_log2 =3D cq->q.stride_log2, + .page_size_log2 =3D buf->page_size_log2, + .tbl_index =3D cpu_to_le32(~0), + .map_count =3D cpu_to_le32(buf->tbl_pages), + .dma_addr =3D ionic_pgtbl_dma(buf, 0), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(cq->cqid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_destroy_cq_cmd(struct ionic_ibdev *dev, u32 cqid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_CQ, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN), + .cmd.destroy_cq =3D { + .cq_id =3D cpu_to_le32(cqid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct uverbs_attr_bundle *attrs) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ib_udata *udata =3D &attrs->driver_udata; + struct ionic_ctx *ctx =3D + rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + struct ionic_tbl_buf buf =3D {}; + struct ionic_cq_resp resp; + struct ionic_cq_req req; + int udma_idx =3D 0, rc; + + if (udata) { + rc =3D ib_copy_from_udata(&req, udata, sizeof(req)); + if (rc) + return rc; + } + + vcq->udma_mask =3D BIT(dev->lif_cfg.udma_count) - 1; + + if (udata) + vcq->udma_mask &=3D req.udma_mask; + + if (!vcq->udma_mask) { + rc =3D -EINVAL; + goto 
err_init; + } + + for (; udma_idx < dev->lif_cfg.udma_count; ++udma_idx) { + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc =3D ionic_create_cq_common(vcq, &buf, attr, ctx, udata, + &req.cq[udma_idx], + &resp.cqid[udma_idx], + udma_idx); + if (rc) + goto err_init; + + rc =3D ionic_create_cq_cmd(dev, ctx, &vcq->cq[udma_idx], &buf); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &buf); + } + + vcq->ibcq.cqe =3D attr->cqe; + + if (udata) { + resp.udma_mask =3D vcq->udma_mask; + + rc =3D ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (rc) + goto err_resp; + } + + return 0; + +err_resp: + while (udma_idx) { + --udma_idx; + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); +err_cmd: + ionic_pgtbl_unbuf(dev, &buf); + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); +err_init: + ; + } + + return rc; +} + +int ionic_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int udma_idx, rc_tmp, rc =3D 0; + + for (udma_idx =3D dev->lif_cfg.udma_count; udma_idx; ) { + --udma_idx; + + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc_tmp =3D ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); + if (rc_tmp) { + if (!rc) + rc =3D rc_tmp; + + continue; + } + + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); + } + + return rc; +} + +static bool pd_remote_privileged(struct ib_pd *pd) +{ + return pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; +} + +static int ionic_create_qp_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_cq *send_cq, + struct ionic_cq *recv_cq, + struct ionic_qp *qp, + struct ionic_tbl_buf *sq_buf, + struct ionic_tbl_buf *rq_buf, + struct ib_qp_init_attr *attr) +{ + const u16 dbid =3D ionic_obj_dbid(dev, pd->ibpd.uobject); + const u32 flags =3D to_ionic_qp_flags(0, 0, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + 
pd->flags & IONIC_QPF_PRIVILEGED, + pd_remote_privileged(&pd->ibpd)); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_QP_IN_V1_LEN), + .cmd.create_qp =3D { + .pd_id =3D cpu_to_le32(pd->pdid), + .priv_flags =3D cpu_to_be32(flags), + .type_state =3D to_ionic_qp_type(attr->qp_type), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_QP) + return -EBADRQC; + + if (qp->has_sq) { + wr.wqe.cmd.create_qp.sq_cq_id =3D cpu_to_le32(send_cq->cqid); + wr.wqe.cmd.create_qp.sq_depth_log2 =3D qp->sq.depth_log2; + wr.wqe.cmd.create_qp.sq_stride_log2 =3D qp->sq.stride_log2; + wr.wqe.cmd.create_qp.sq_page_size_log2 =3D sq_buf->page_size_log2; + wr.wqe.cmd.create_qp.sq_tbl_index_xrcd_id =3D cpu_to_le32(~0); + wr.wqe.cmd.create_qp.sq_map_count =3D + cpu_to_le32(sq_buf->tbl_pages); + wr.wqe.cmd.create_qp.sq_dma_addr =3D ionic_pgtbl_dma(sq_buf, 0); + } + + if (qp->has_rq) { + wr.wqe.cmd.create_qp.rq_cq_id =3D cpu_to_le32(recv_cq->cqid); + wr.wqe.cmd.create_qp.rq_depth_log2 =3D qp->rq.depth_log2; + wr.wqe.cmd.create_qp.rq_stride_log2 =3D qp->rq.stride_log2; + wr.wqe.cmd.create_qp.rq_page_size_log2 =3D rq_buf->page_size_log2; + wr.wqe.cmd.create_qp.rq_tbl_index_srq_id =3D cpu_to_le32(~0); + wr.wqe.cmd.create_qp.rq_map_count =3D + cpu_to_le32(rq_buf->tbl_pages); + wr.wqe.cmd.create_qp.rq_dma_addr =3D ionic_pgtbl_dma(rq_buf, 0); + } + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_modify_qp_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_qp *qp, + struct ib_qp_attr *attr, + int mask) +{ + const u32 flags =3D to_ionic_qp_flags(attr->qp_access_flags, + attr->en_sqd_async_notify, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + pd->flags & 
IONIC_QPF_PRIVILEGED, + pd_remote_privileged(qp->ibqp.pd)); + const u8 state =3D to_ionic_qp_modify_state(attr->qp_state, + attr->cur_qp_state); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_MODIFY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_MODIFY_QP_IN_V1_LEN), + .cmd.mod_qp =3D { + .attr_mask =3D cpu_to_be32(mask), + .access_flags =3D cpu_to_be16(flags), + .rq_psn =3D cpu_to_le32(attr->rq_psn), + .sq_psn =3D cpu_to_le32(attr->sq_psn), + .rate_limit_kbps =3D + cpu_to_le32(attr->rate_limit), + .pmtu =3D (attr->path_mtu + 7), + .retry =3D (attr->retry_cnt | + (attr->rnr_retry << 4)), + .rnr_timer =3D attr->min_rnr_timer, + .retry_timeout =3D attr->timeout, + .type_state =3D state, + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + const struct ib_global_route *grh =3D rdma_ah_read_grh(&attr->ah_attr); + void *hdr_buf =3D NULL; + dma_addr_t hdr_dma =3D 0; + int rc, hdr_len =3D 0; + u16 sport; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_MODIFY_QP) + return -EBADRQC; + + if ((mask & IB_QP_MAX_DEST_RD_ATOMIC) && attr->max_dest_rd_atomic) { + /* Note, round up/down was already done for allocating + * resources on the device. The allocation order is in cache + * line size. We can't use the order of the resource + * allocation to determine the order wqes here, because for + * queue length <=3D one cache line it is not distinct. + * + * Therefore, order wqes is computed again here. + * + * Account for hole and round up to the next order. 
+ */ + wr.wqe.cmd.mod_qp.rsq_depth =3D + order_base_2(attr->max_dest_rd_atomic + 1); + wr.wqe.cmd.mod_qp.rsq_index =3D cpu_to_le32(~0); + } + + if ((mask & IB_QP_MAX_QP_RD_ATOMIC) && attr->max_rd_atomic) { + /* Account for hole and round down to the next order */ + wr.wqe.cmd.mod_qp.rrq_depth =3D + order_base_2(attr->max_rd_atomic + 2) - 1; + wr.wqe.cmd.mod_qp.rrq_index =3D cpu_to_le32(~0); + } + + if (qp->ibqp.qp_type =3D=3D IB_QPT_RC || qp->ibqp.qp_type =3D=3D IB_QPT_U= C) + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D + cpu_to_le32(attr->dest_qp_num); + else + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D cpu_to_le32(attr->qkey); + + if (mask & IB_QP_AV) { + if (!qp->hdr) + return -ENOMEM; + + sport =3D rdma_get_udp_sport(grh->flow_label, + qp->qpid, + attr->dest_qp_num); + + rc =3D ionic_build_hdr(dev, qp->hdr, &attr->ah_attr, sport, true); + if (rc) + return rc; + + qp->sgid_index =3D grh->sgid_index; + + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) + return -ENOMEM; + + hdr_len =3D ib_ud_header_pack(qp->hdr, hdr_buf); + hdr_len -=3D IB_BTH_BYTES; + hdr_len -=3D IB_DETH_BYTES; + ibdev_dbg(&dev->ibdev, "roce packet header template\n"); + print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1, + hdr_buf, hdr_len, true); + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len, + DMA_TO_DEVICE); + + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_dma; + + if (qp->hdr->ipv4_present) { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? + IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP; + } else { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? 
+ IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP; + } + + wr.wqe.cmd.mod_qp.ah_id_len =3D + cpu_to_le32(qp->ahid | (hdr_len << 24)); + wr.wqe.cmd.mod_qp.dma_addr =3D cpu_to_le64(hdr_dma); + + wr.wqe.cmd.mod_qp.en_pcp =3D attr->ah_attr.sl; + wr.wqe.cmd.mod_qp.ip_dscp =3D grh->traffic_class >> 2; + } + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (mask & IB_QP_AV) + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len, + DMA_TO_DEVICE); +err_dma: + if (mask & IB_QP_AV) + kfree(hdr_buf); + + return rc; +} + +static int ionic_query_qp_cmd(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_qp_attr *attr, + int mask) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_QUERY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_QUERY_QP_IN_V1_LEN), + .cmd.query_qp =3D { + .id_ver =3D cpu_to_le32(qp->qpid), + }, + } + }; + struct ionic_v1_admin_query_qp_sq *query_sqbuf; + struct ionic_v1_admin_query_qp_rq *query_rqbuf; + dma_addr_t query_sqdma; + dma_addr_t query_rqdma; + dma_addr_t hdr_dma =3D 0; + void *hdr_buf =3D NULL; + int flags, rc; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_QUERY_QP) + return -EBADRQC; + + if (qp->has_sq) { + bool expdb =3D !!(qp->sq_cmb & IONIC_CMB_EXPDB); + + attr->cap.max_send_sge =3D + ionic_v1_send_wqe_max_sge(qp->sq.stride_log2, + qp->sq_spec, + expdb); + attr->cap.max_inline_data =3D + ionic_v1_send_wqe_max_data(qp->sq.stride_log2, expdb); + } + + if (qp->has_rq) { + attr->cap.max_recv_sge =3D + ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, + qp->rq_spec, + qp->rq_cmb & IONIC_CMB_EXPDB); + } + + query_sqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_sqbuf) + return -ENOMEM; + + query_rqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_rqbuf) { + rc =3D -ENOMEM; + goto err_rqbuf; + } + + query_sqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_sqbuf, PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D 
dma_mapping_error(dev->lif_cfg.hwdev, query_sqdma); + if (rc) + goto err_sqdma; + + query_rqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_rqbuf, PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, query_rqdma); + if (rc) + goto err_rqdma; + + if (mask & IB_QP_AV) { + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) { + rc =3D -ENOMEM; + goto err_hdrbuf; + } + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_hdrdma; + } + + wr.wqe.cmd.query_qp.sq_dma_addr =3D cpu_to_le64(query_sqdma); + wr.wqe.cmd.query_qp.rq_dma_addr =3D cpu_to_le64(query_rqdma); + wr.wqe.cmd.query_qp.hdr_dma_addr =3D cpu_to_le64(hdr_dma); + wr.wqe.cmd.query_qp.ah_id =3D cpu_to_le32(qp->ahid); + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (rc) + goto err_hdrdma; + + flags =3D be16_to_cpu(query_sqbuf->access_perms_flags | + query_rqbuf->access_perms_flags); + + print_hex_dump_debug("sqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_sqbuf, sizeof(*query_sqbuf), true); + print_hex_dump_debug("rqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_rqbuf, sizeof(*query_rqbuf), true); + ibdev_dbg(&dev->ibdev, "query qp %u state_pmtu %#x flags %#x", + qp->qpid, query_rqbuf->state_pmtu, flags); + + attr->qp_state =3D from_ionic_qp_state(query_rqbuf->state_pmtu >> 4); + attr->cur_qp_state =3D attr->qp_state; + attr->path_mtu =3D (query_rqbuf->state_pmtu & 0xf) - 7; + attr->path_mig_state =3D IB_MIG_MIGRATED; + attr->qkey =3D be32_to_cpu(query_sqbuf->qkey_dest_qpn); + attr->rq_psn =3D be32_to_cpu(query_sqbuf->rq_psn); + attr->sq_psn =3D be32_to_cpu(query_rqbuf->sq_psn); + attr->dest_qp_num =3D attr->qkey; + attr->qp_access_flags =3D from_ionic_qp_flags(flags); + attr->pkey_index =3D 0; + attr->alt_pkey_index =3D 0; + attr->en_sqd_async_notify =3D !!(flags & IONIC_QPF_SQD_NOTIFY); + attr->sq_draining =3D !!(flags & 
IONIC_QPF_SQ_DRAINING); + attr->max_rd_atomic =3D BIT(query_rqbuf->rrq_depth) - 1; + attr->max_dest_rd_atomic =3D BIT(query_rqbuf->rsq_depth) - 1; + attr->min_rnr_timer =3D query_sqbuf->rnr_timer; + attr->port_num =3D 0; + attr->timeout =3D query_sqbuf->retry_timeout; + attr->retry_cnt =3D query_rqbuf->retry_rnrtry & 0xf; + attr->rnr_retry =3D query_rqbuf->retry_rnrtry >> 4; + attr->alt_port_num =3D 0; + attr->alt_timeout =3D 0; + attr->rate_limit =3D be32_to_cpu(query_sqbuf->rate_limit_kbps); + + if (mask & IB_QP_AV) + ionic_set_ah_attr(dev, &attr->ah_attr, + qp->hdr, qp->sgid_index); + +err_hdrdma: + if (mask & IB_QP_AV) { + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, + PAGE_SIZE, DMA_FROM_DEVICE); + kfree(hdr_buf); + } +err_hdrbuf: + dma_unmap_single(dev->lif_cfg.hwdev, query_rqdma, sizeof(*query_rqbuf), + DMA_FROM_DEVICE); +err_rqdma: + dma_unmap_single(dev->lif_cfg.hwdev, query_sqdma, sizeof(*query_sqbuf), + DMA_FROM_DEVICE); +err_sqdma: + kfree(query_rqbuf); +err_rqbuf: + kfree(query_sqbuf); + + return rc; +} + +static int ionic_destroy_qp_cmd(struct ionic_ibdev *dev, u32 qpid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_QP_IN_V1_LEN), + .cmd.destroy_qp =3D { + .qp_id =3D cpu_to_le32(qpid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_QP) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +static bool ionic_expdb_wqe_size_supported(struct ionic_ibdev *dev, + uint32_t wqe_size) +{ + switch (wqe_size) { + case 64: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_64; + case 128: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_128; + case 256: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_256; + case 512: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_512; + } + + return false; +} + +static void ionic_qp_sq_init_cmb(struct ionic_ibdev 
*dev, + struct ionic_qp *qp, + struct ib_udata *udata, + int max_data) +{ + u8 expdb_stride_log2 =3D 0; + bool expdb; + int rc; + + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + goto not_in_cmb; + + if (qp->sq_cmb & ~IONIC_CMB_SUPPORTED) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D IONIC_CMB_SUPPORTED; + } + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.sq_expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + qp->sq_cmb_order =3D order_base_2(qp->sq.size / PAGE_SIZE); + + if (qp->sq_cmb_order >=3D IONIC_SQCMB_ORDER) + goto not_in_cmb; + + if (qp->sq_cmb & IONIC_CMB_EXPDB) + expdb_stride_log2 =3D qp->sq.stride_log2; + + rc =3D ionic_get_cmb(dev->lif_cfg.lif, &qp->sq_cmb_pgid, + &qp->sq_cmb_addr, qp->sq_cmb_order, + expdb_stride_log2, &expdb); + if (rc) + goto not_in_cmb; + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto err_map; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + return; + +err_map: + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +not_in_cmb: + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + ibdev_dbg(&dev->ibdev, "could not place sq in cmb as required\n"); + + qp->sq_cmb =3D 0; + qp->sq_cmb_order =3D IONIC_RES_INVALID; + qp->sq_cmb_pgid =3D 0; + qp->sq_cmb_addr =3D 0; +} + +static void ionic_qp_sq_destroy_cmb(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_qp *qp) +{ + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + return; + + if (ctx) + rdma_user_mmap_entry_remove(qp->mmap_sq_cmb); + + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +} + +static int ionic_qp_sq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx, + struct ionic_qp *qp, struct ionic_qdesc *sq, + struct ionic_tbl_buf *buf, int max_wr, int max_sge, + int max_data, int sq_spec, struct ib_udata *udata) +{ + u32 wqe_size; + int rc =3D 0; + + qp->sq_msn_prod =3D 0; + qp->sq_msn_cons =3D 0; + + if (!qp->has_sq) { + 
if (buf) { + buf->tbl_buf =3D NULL; + buf->tbl_limit =3D 0; + buf->tbl_pages =3D 0; + } + if (udata) + rc =3D ionic_validate_qdesc_zero(sq); + + return rc; + } + + rc =3D -EINVAL; + + if (max_wr < 0 || max_wr > 0xffff) + return rc; + + if (max_sge < 1) + return rc; + + if (max_sge > min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, + qp->sq_cmb & + IONIC_CMB_EXPDB), + IONIC_SPEC_HIGH)) + return rc; + + if (max_data < 0) + return rc; + + if (max_data > ionic_v1_send_wqe_max_data(dev->lif_cfg.max_stride, + qp->sq_cmb & IONIC_CMB_EXPDB)) + return rc; + + if (udata) { + rc =3D ionic_validate_qdesc(sq); + if (rc) + return rc; + + qp->sq_spec =3D sq_spec; + + qp->sq.ptr =3D NULL; + qp->sq.size =3D sq->size; + qp->sq.mask =3D sq->mask; + qp->sq.depth_log2 =3D sq->depth_log2; + qp->sq.stride_log2 =3D sq->stride_log2; + + qp->sq_meta =3D NULL; + qp->sq_msn_idx =3D NULL; + + qp->sq_umem =3D ib_umem_get(&dev->ibdev, sq->addr, sq->size, 0); + if (IS_ERR(qp->sq_umem)) + return PTR_ERR(qp->sq_umem); + } else { + qp->sq_umem =3D NULL; + + qp->sq_spec =3D ionic_v1_use_spec_sge(max_sge, sq_spec); + if (sq_spec && !qp->sq_spec) + ibdev_dbg(&dev->ibdev, + "init sq: max_sge %u disables spec\n", + max_sge); + + if (qp->sq_cmb & IONIC_CMB_EXPDB) { + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + true); + + if (!ionic_expdb_wqe_size_supported(dev, wqe_size)) + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + if (!(qp->sq_cmb & IONIC_CMB_EXPDB)) + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + false); + + rc =3D ionic_queue_init(&qp->sq, dev->lif_cfg.hwdev, + max_wr, wqe_size); + if (rc) + return rc; + + ionic_queue_dbell_init(&qp->sq, qp->qpid); + + qp->sq_meta =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_meta), + GFP_KERNEL); + if (!qp->sq_meta) { + rc =3D -ENOMEM; + goto err_sq_meta; + } + + qp->sq_msn_idx =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_msn_idx), + GFP_KERNEL); + if (!qp->sq_msn_idx) { + rc 
=3D -ENOMEM; + goto err_sq_msn; + } + } + + ionic_qp_sq_init_cmb(dev, qp, udata, max_data); + + if (qp->sq_cmb & IONIC_CMB_ENABLE) + rc =3D ionic_pgtbl_init(dev, buf, NULL, + (u64)qp->sq_cmb_pgid << PAGE_SHIFT, + 1, PAGE_SIZE); + else + rc =3D ionic_pgtbl_init(dev, buf, + qp->sq_umem, qp->sq.dma, 1, PAGE_SIZE); + if (rc) + goto err_sq_tbl; + + return 0; + +err_sq_tbl: + ionic_qp_sq_destroy_cmb(dev, ctx, qp); + kfree(qp->sq_msn_idx); +err_sq_msn: + kfree(qp->sq_meta); +err_sq_meta: + if (qp->sq_umem) + ib_umem_release(qp->sq_umem); + else + ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev); + return rc; +} + +static void ionic_qp_sq_destroy(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_qp *qp) +{ + if (!qp->has_sq) + return; + + ionic_qp_sq_destroy_cmb(dev, ctx, qp); + + kfree(qp->sq_msn_idx); + kfree(qp->sq_meta); + + if (qp->sq_umem) + ib_umem_release(qp->sq_umem); + else + ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev); +} + +static void ionic_qp_rq_init_cmb(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_udata *udata) +{ + u8 expdb_stride_log2 =3D 0; + bool expdb; + int rc; + + if (!(qp->rq_cmb & IONIC_CMB_ENABLE)) + goto not_in_cmb; + + if (qp->rq_cmb & ~IONIC_CMB_SUPPORTED) { + if (qp->rq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->rq_cmb &=3D IONIC_CMB_SUPPORTED; + } + + if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.rq_expdb) { + if (qp->rq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->rq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + qp->rq_cmb_order =3D order_base_2(qp->rq.size / PAGE_SIZE); + + if (qp->rq_cmb_order >=3D IONIC_RQCMB_ORDER) + goto not_in_cmb; + + if (qp->rq_cmb & IONIC_CMB_EXPDB) + expdb_stride_log2 =3D qp->rq.stride_log2; + + rc =3D ionic_get_cmb(dev->lif_cfg.lif, &qp->rq_cmb_pgid, + &qp->rq_cmb_addr, qp->rq_cmb_order, + expdb_stride_log2, &expdb); + if (rc) + goto not_in_cmb; + + if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !expdb) { + if (qp->rq_cmb & IONIC_CMB_REQUIRE) + goto err_map; + + qp->rq_cmb &=3D 
~IONIC_CMB_EXPDB;
+	}
+
+	return;
+
+err_map:
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+not_in_cmb:
+	if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+		ibdev_dbg(&dev->ibdev, "could not place rq in cmb as required\n");
+
+	qp->rq_cmb = 0;
+	qp->rq_cmb_order = IONIC_RES_INVALID;
+	qp->rq_cmb_pgid = 0;
+	qp->rq_cmb_addr = 0;
+}
+
+static void ionic_qp_rq_destroy_cmb(struct ionic_ibdev *dev,
+				    struct ionic_ctx *ctx,
+				    struct ionic_qp *qp)
+{
+	if (!(qp->rq_cmb & IONIC_CMB_ENABLE))
+		return;
+
+	if (ctx)
+		rdma_user_mmap_entry_remove(qp->mmap_rq_cmb);
+
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+}
+
+static int ionic_qp_rq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx,
+			    struct ionic_qp *qp, struct ionic_qdesc *rq,
+			    struct ionic_tbl_buf *buf, int max_wr, int max_sge,
+			    int rq_spec, struct ib_udata *udata)
+{
+	int rc = 0, i;
+	u32 wqe_size;
+
+	if (!qp->has_rq) {
+		if (buf) {
+			buf->tbl_buf = NULL;
+			buf->tbl_limit = 0;
+			buf->tbl_pages = 0;
+		}
+		if (udata)
+			rc = ionic_validate_qdesc_zero(rq);
+
+		return rc;
+	}
+
+	rc = -EINVAL;
+
+	if (max_wr < 0 || max_wr > 0xffff)
+		return rc;
+
+	if (max_sge < 1)
+		return rc;
+
+	if (max_sge > min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+			  IONIC_SPEC_HIGH))
+		return rc;
+
+	if (udata) {
+		rc = ionic_validate_qdesc(rq);
+		if (rc)
+			return rc;
+
+		qp->rq_spec = rq_spec;
+
+		qp->rq.ptr = NULL;
+		qp->rq.size = rq->size;
+		qp->rq.mask = rq->mask;
+		qp->rq.depth_log2 = rq->depth_log2;
+		qp->rq.stride_log2 = rq->stride_log2;
+
+		qp->rq_meta = NULL;
+
+		qp->rq_umem = ib_umem_get(&dev->ibdev, rq->addr, rq->size, 0);
+		if (IS_ERR(qp->rq_umem))
+			return PTR_ERR(qp->rq_umem);
+	} else {
+		qp->rq_umem = NULL;
+
+		qp->rq_spec = ionic_v1_use_spec_sge(max_sge, rq_spec);
+		if (rq_spec && !qp->rq_spec)
+			ibdev_dbg(&dev->ibdev,
+				  "init rq: max_sge %u disables spec\n",
+				  max_sge);
+
+		if (qp->rq_cmb & IONIC_CMB_EXPDB) {
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      true);
+
+			if (!ionic_expdb_wqe_size_supported(dev, wqe_size))
+				qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+		}
+
+		if (!(qp->rq_cmb & IONIC_CMB_EXPDB))
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      false);
+
+		rc = ionic_queue_init(&qp->rq, dev->lif_cfg.hwdev,
+				      max_wr, wqe_size);
+		if (rc)
+			return rc;
+
+		ionic_queue_dbell_init(&qp->rq, qp->qpid);
+
+		qp->rq_meta = kmalloc_array((u32)qp->rq.mask + 1,
+					    sizeof(*qp->rq_meta),
+					    GFP_KERNEL);
+		if (!qp->rq_meta) {
+			rc = -ENOMEM;
+			goto err_rq_meta;
+		}
+
+		for (i = 0; i < qp->rq.mask; ++i)
+			qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+		qp->rq_meta[i].next = IONIC_META_LAST;
+		qp->rq_meta_head = &qp->rq_meta[0];
+	}
+
+	ionic_qp_rq_init_cmb(dev, qp, udata);
+
+	if (qp->rq_cmb & IONIC_CMB_ENABLE)
+		rc = ionic_pgtbl_init(dev, buf, NULL,
+				      (u64)qp->rq_cmb_pgid << PAGE_SHIFT,
+				      1, PAGE_SIZE);
+	else
+		rc = ionic_pgtbl_init(dev, buf,
+				      qp->rq_umem, qp->rq.dma, 1, PAGE_SIZE);
+	if (rc)
+		goto err_rq_tbl;
+
+	return 0;
+
+err_rq_tbl:
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+	kfree(qp->rq_meta);
+err_rq_meta:
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+	return rc;
+}
+
+static void ionic_qp_rq_destroy(struct ionic_ibdev *dev,
+				struct ionic_ctx *ctx,
+				struct ionic_qp *qp)
+{
+	if (!qp->has_rq)
+		return;
+
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+
+	kfree(qp->rq_meta);
+
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+}
+
+int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_tbl_buf sq_buf = {}, rq_buf = {};
+	struct ionic_pd *pd = to_ionic_pd(ibqp->pd);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_qp_resp resp = {};
+	struct ionic_qp_req req = {};
+	struct ionic_cq *cq;
+	u8 udma_mask;
+	void *entry;
+	int rc;
+
+	if (udata) {
+		rc = ib_copy_from_udata(&req, udata, sizeof(req));
+		if (rc)
+			return rc;
+	} else {
+		req.sq_spec = IONIC_SPEC_HIGH;
+		req.rq_spec = IONIC_SPEC_HIGH;
+	}
+
+	if (attr->qp_type == IB_QPT_SMI || attr->qp_type > IB_QPT_UD)
+		return -EOPNOTSUPP;
+
+	qp->state = IB_QPS_RESET;
+
+	INIT_LIST_HEAD(&qp->cq_poll_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_rq);
+
+	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
+
+	qp->has_sq = 1;
+	qp->has_rq = 1;
+
+	if (attr->qp_type == IB_QPT_GSI) {
+		rc = ionic_get_gsi_qpid(dev, &qp->qpid);
+	} else {
+		udma_mask = BIT(dev->lif_cfg.udma_count) - 1;
+
+		if (qp->has_sq)
+			udma_mask &= to_ionic_vcq(attr->send_cq)->udma_mask;
+
+		if (qp->has_rq)
+			udma_mask &= to_ionic_vcq(attr->recv_cq)->udma_mask;
+
+		if (udata && req.udma_mask)
+			udma_mask &= req.udma_mask;
+
+		if (!udma_mask)
+			return -EINVAL;
+
+		rc = ionic_get_qpid(dev, &qp->qpid, &qp->udma_idx, udma_mask);
+	}
+	if (rc)
+		return rc;
+
+	qp->sig_all = attr->sq_sig_type == IB_SIGNAL_ALL_WR;
+	qp->has_ah = attr->qp_type == IB_QPT_RC;
+
+	if (qp->has_ah) {
+		qp->hdr = kzalloc(sizeof(*qp->hdr), GFP_KERNEL);
+		if (!qp->hdr) {
+			rc = -ENOMEM;
+			goto err_ah_alloc;
+		}
+
+		rc = ionic_get_ahid(dev, &qp->ahid);
+		if (rc)
+			goto err_ahid;
+	}
+
+	if (udata) {
+		if (req.rq_cmb & IONIC_CMB_ENABLE)
+			qp->rq_cmb = req.rq_cmb;
+
+		if (req.sq_cmb & IONIC_CMB_ENABLE)
+			qp->sq_cmb = req.sq_cmb;
+	}
+
+	rc = ionic_qp_sq_init(dev, ctx, qp, &req.sq, &sq_buf,
+			      attr->cap.max_send_wr, attr->cap.max_send_sge,
+			      attr->cap.max_inline_data, req.sq_spec, udata);
+	if (rc)
+		goto err_sq;
+
+	rc = ionic_qp_rq_init(dev, ctx, qp, &req.rq, &rq_buf,
+			      attr->cap.max_recv_wr, attr->cap.max_recv_sge,
+			      req.rq_spec, udata);
+	if (rc)
+		goto err_rq;
+
+	rc = ionic_create_qp_cmd(dev, pd,
+				 to_ionic_vcq_cq(attr->send_cq, qp->udma_idx),
+				 to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx),
+				 qp, &sq_buf, &rq_buf, attr);
+	if (rc)
+		goto err_cmd;
+
+	if (udata) {
+		resp.qpid = qp->qpid;
+		resp.udma_idx = qp->udma_idx;
+
+		if (qp->sq_cmb & IONIC_CMB_ENABLE) {
+			bool wc;
+
+			if ((qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_dbg(&dev->ibdev,
+					  "Both sq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->sq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			wc = (qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+				!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (wc)
+				qp->sq_cmb |= IONIC_CMB_WC;
+			else
+				qp->sq_cmb |= IONIC_CMB_UC;
+
+			qp->mmap_sq_cmb =
+				ionic_mmap_entry_insert(ctx,
+							qp->sq.size,
+							PHYS_PFN(qp->sq_cmb_addr),
+							wc ? IONIC_MMAP_WC : 0,
+							&resp.sq_cmb_offset);
+			if (!qp->mmap_sq_cmb) {
+				rc = -ENOMEM;
+				goto err_mmap_sq;
+			}
+
+			resp.sq_cmb = qp->sq_cmb;
+		}
+
+		if (qp->rq_cmb & IONIC_CMB_ENABLE) {
+			bool wc;
+
+			if ((qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_dbg(&dev->ibdev,
+					  "Both rq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->rq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			if (qp->rq_cmb & IONIC_CMB_EXPDB)
+				wc = (qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					== IONIC_CMB_WC;
+			else
+				wc = (qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (wc)
+				qp->rq_cmb |= IONIC_CMB_WC;
+			else
+				qp->rq_cmb |= IONIC_CMB_UC;
+
+			qp->mmap_rq_cmb =
+				ionic_mmap_entry_insert(ctx,
+							qp->rq.size,
+							PHYS_PFN(qp->rq_cmb_addr),
+							wc ? IONIC_MMAP_WC : 0,
+							&resp.rq_cmb_offset);
+			if (!qp->mmap_rq_cmb) {
+				rc = -ENOMEM;
+				goto err_mmap_rq;
+			}
+
+			resp.rq_cmb = qp->rq_cmb;
+		}
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc)
+			goto err_resp;
+	}
+
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+
+	qp->ibqp.qp_num = qp->qpid;
+
+	init_completion(&qp->qp_rel_comp);
+	kref_init(&qp->qp_kref);
+
+	entry = xa_store_irq(&dev->qp_tbl, qp->qpid, qp, GFP_KERNEL);
+	if (entry) {
+		if (!xa_is_err(entry))
+			rc = -EINVAL;
+		else
+			rc = xa_err(entry);
+
+		goto err_resp;
+	}
+
+	if (qp->has_sq) {
+		cq = to_ionic_vcq_cq(attr->send_cq, qp->udma_idx);
+
+		attr->cap.max_send_wr = qp->sq.mask;
+		attr->cap.max_send_sge =
+			ionic_v1_send_wqe_max_sge(qp->sq.stride_log2,
+						  qp->sq_spec,
+						  qp->sq_cmb & IONIC_CMB_EXPDB);
+		attr->cap.max_inline_data =
+			ionic_v1_send_wqe_max_data(qp->sq.stride_log2,
+						   qp->sq_cmb &
+						   IONIC_CMB_EXPDB);
+		qp->sq_cqid = cq->cqid;
+	}
+
+	if (qp->has_rq) {
+		cq = to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx);
+
+		attr->cap.max_recv_wr = qp->rq.mask;
+		attr->cap.max_recv_sge =
+			ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2,
+						  qp->rq_spec,
+						  qp->rq_cmb & IONIC_CMB_EXPDB);
+		qp->rq_cqid = cq->cqid;
+	}
+
+	return 0;
+
+err_resp:
+	if (udata && (qp->rq_cmb & IONIC_CMB_ENABLE))
+		rdma_user_mmap_entry_remove(qp->mmap_rq_cmb);
+err_mmap_rq:
+	if (udata && (qp->sq_cmb & IONIC_CMB_ENABLE))
+		rdma_user_mmap_entry_remove(qp->mmap_sq_cmb);
+err_mmap_sq:
+	ionic_destroy_qp_cmd(dev, qp->qpid);
+err_cmd:
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_qp_rq_destroy(dev, ctx, qp);
+err_rq:
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+err_sq:
+	if (qp->has_ah)
+		ionic_put_ahid(dev, qp->ahid);
+err_ahid:
+	kfree(qp->hdr);
+err_ah_alloc:
+	ionic_put_qpid(dev, qp->qpid);
+	return rc;
+}
+
+void ionic_notify_flush_cq(struct ionic_cq *cq)
+{
+	if (cq->flush && cq->vcq->ibcq.comp_handler)
+		cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq,
+					   cq->vcq->ibcq.cq_context);
+}
+
+static void ionic_notify_qp_cqs(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	if (qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.send_cq,
+						      qp->udma_idx));
+	if (qp->ibqp.recv_cq && qp->ibqp.recv_cq != qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.recv_cq,
+						      qp->udma_idx));
+}
+
+void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP sq_lock to set up flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = true;
+		if (!ionic_queue_empty(&qp->sq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+		}
+		spin_unlock(&qp->sq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP rq_lock to set up flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = true;
+		if (!ionic_queue_empty(&qp->rq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+		}
+		spin_unlock(&qp->rq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+}
+
+static void ionic_clean_cq(struct ionic_cq *cq, u32 qpid)
+{
+	struct ionic_v1_cqe *qcqe;
+	int prod, qtf, qid, type;
+	bool color;
+
+	if (!cq->q.ptr)
+		return;
+
+	color = cq->color;
+	prod = cq->q.prod;
+	qcqe = ionic_queue_at(&cq->q, prod);
+
+	while (color == ionic_v1_cqe_color(qcqe)) {
+		qtf = ionic_v1_cqe_qtf(qcqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		if (qid == qpid && type != IONIC_V1_CQE_TYPE_ADMIN)
+			ionic_v1_cqe_clean(qcqe);
+
+		prod = ionic_queue_next(&cq->q, prod);
+		qcqe = ionic_queue_at(&cq->q, prod);
+		color = ionic_color_wrap(prod, color);
+	}
+}
+
+static void ionic_reset_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int i;
+
+	local_irq_save(irqflags);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->has_sq) {
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = false;
+		qp->sq_flush_rcvd = false;
+		qp->sq_msn_prod = 0;
+		qp->sq_msn_cons = 0;
+		qp->sq.prod = 0;
+		qp->sq.cons = 0;
+		spin_unlock(&qp->sq_lock);
+	}
+
+	if (qp->has_rq) {
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = false;
+		qp->rq.prod = 0;
+		qp->rq.cons = 0;
+		if (qp->rq_meta) {
+			for (i = 0; i < qp->rq.mask; ++i)
+				qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+			qp->rq_meta[i].next = IONIC_META_LAST;
+		}
+		qp->rq_meta_head = &qp->rq_meta[0];
+		spin_unlock(&qp->rq_lock);
+	}
+
+	local_irq_restore(irqflags);
+}
+
+static bool ionic_qp_cur_state_is_ok(enum ib_qp_state q_state,
+				     enum ib_qp_state attr_state)
+{
+	if (q_state == attr_state)
+		return true;
+
+	if (attr_state == IB_QPS_ERR)
+		return true;
+
+	if (attr_state == IB_QPS_SQE)
+		return q_state == IB_QPS_RTS || q_state == IB_QPS_SQD;
+
+	return false;
+}
+
+static int ionic_check_modify_qp(struct ionic_qp *qp, struct ib_qp_attr *attr,
+				 int mask)
+{
+	enum ib_qp_state cur_state = (mask & IB_QP_CUR_STATE) ?
+		attr->cur_qp_state : qp->state;
+	enum ib_qp_state next_state = (mask & IB_QP_STATE) ?
+		attr->qp_state : cur_state;
+
+	if ((mask & IB_QP_CUR_STATE) &&
+	    !ionic_qp_cur_state_is_ok(qp->state, attr->cur_qp_state))
+		return -EINVAL;
+
+	if (!ib_modify_qp_is_ok(cur_state, next_state, qp->ibqp.qp_type, mask))
+		return -EINVAL;
+
+	/* unprivileged qp not allowed privileged qkey */
+	if ((mask & IB_QP_QKEY) && (attr->qkey & 0x80000000) &&
+	    qp->ibqp.uobject)
+		return -EPERM;
+
+	return 0;
+}
+
+int ionic_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_pd *pd = to_ionic_pd(ibqp->pd);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	rc = ionic_check_modify_qp(qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_CAP)
+		return -EINVAL;
+
+	rc = ionic_modify_qp_cmd(dev, pd, qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_STATE) {
+		qp->state = attr->qp_state;
+
+		if (attr->qp_state == IB_QPS_ERR) {
+			ionic_flush_qp(dev, qp);
+			ionic_notify_qp_cqs(dev, qp);
+		} else if (attr->qp_state == IB_QPS_RESET) {
+			ionic_reset_qp(dev, qp);
+		}
+	}
+
+	return 0;
+}
+
+int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+		   int mask, struct ib_qp_init_attr *init_attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	memset(attr, 0, sizeof(*attr));
+	memset(init_attr, 0, sizeof(*init_attr));
+
+	rc = ionic_query_qp_cmd(dev, qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (qp->has_sq)
+		attr->cap.max_send_wr = qp->sq.mask;
+
+	if (qp->has_rq)
+		attr->cap.max_recv_wr = qp->rq.mask;
+
+	init_attr->event_handler = ibqp->event_handler;
+	init_attr->qp_context = ibqp->qp_context;
+	init_attr->send_cq = ibqp->send_cq;
+	init_attr->recv_cq = ibqp->recv_cq;
+	init_attr->srq = ibqp->srq;
+	init_attr->xrcd = ibqp->xrcd;
+	init_attr->cap = attr->cap;
+	init_attr->sq_sig_type = qp->sig_all ?
+		IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
+	init_attr->qp_type = ibqp->qp_type;
+	init_attr->create_flags = 0;
+	init_attr->port_num = 0;
+	init_attr->rwq_ind_tbl = ibqp->rwq_ind_tbl;
+	init_attr->source_qpn = 0;
+
+	return rc;
+}
+
+int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+{
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int rc;
+
+	rc = ionic_destroy_qp_cmd(dev, qp->qpid);
+	if (rc)
+		return rc;
+
+	xa_erase_irq(&dev->qp_tbl, qp->qpid);
+
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+	wait_for_completion(&qp->qp_rel_comp);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_poll_sq);
+		list_del(&qp->cq_flush_sq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_flush_rq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	ionic_qp_rq_destroy(dev, ctx, qp);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+	if (qp->has_ah) {
+		ionic_put_ahid(dev, qp->ahid);
+		kfree(qp->hdr);
+	}
+	ionic_put_qpid(dev, qp->qpid);
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index 44ec69487519..8c1c0a07c527 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -5,6 +5,266 @@
 #define _IONIC_FW_H_
 
 #include
+#include
+
+/* common for ib spec */
+
+#define IONIC_EXP_DBELL_SZ		8
+
+enum ionic_mrid_bits {
+	IONIC_MRID_INDEX_SHIFT		= 8,
+};
+
+static inline u32 ionic_mrid(u32 index, u8 key)
+{
+	return (index << IONIC_MRID_INDEX_SHIFT) | key;
+}
+
+static inline u32 ionic_mrid_index(u32 lrkey)
+{
+	return lrkey >> IONIC_MRID_INDEX_SHIFT;
+}
+
+/* common to all versions */
+
+/* wqe scatter gather element */
+struct ionic_sge {
+	__be64 va;
+	__be32 len;
+	__be32 lkey;
+};
+
+/* admin queue mr type */
+enum ionic_mr_flags {
+	/* bits that determine mr access */
+	IONIC_MRF_LOCAL_WRITE		= BIT(0),
+	IONIC_MRF_REMOTE_WRITE		= BIT(1),
+	IONIC_MRF_REMOTE_READ		= BIT(2),
+	IONIC_MRF_REMOTE_ATOMIC		= BIT(3),
+	IONIC_MRF_MW_BIND		= BIT(4),
+	IONIC_MRF_ZERO_BASED		= BIT(5),
+	IONIC_MRF_ON_DEMAND		= BIT(6),
+	IONIC_MRF_PB			= BIT(7),
+	IONIC_MRF_ACCESS_MASK		= BIT(12) - 1,
+
+	/* bits that determine mr type */
+	IONIC_MRF_UKEY_EN		= BIT(13),
+	IONIC_MRF_IS_MW			= BIT(14),
+	IONIC_MRF_INV_EN		= BIT(15),
+
+	/* base flags combinations for mr types */
+	IONIC_MRF_USER_MR		= 0,
+	IONIC_MRF_PHYS_MR		= (IONIC_MRF_UKEY_EN |
+					   IONIC_MRF_INV_EN),
+	IONIC_MRF_MW_1			= (IONIC_MRF_UKEY_EN |
+					   IONIC_MRF_IS_MW),
+	IONIC_MRF_MW_2			= (IONIC_MRF_UKEY_EN |
+					   IONIC_MRF_IS_MW |
+					   IONIC_MRF_INV_EN),
+};
+
+static inline int to_ionic_mr_flags(int access)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_LOCAL_WRITE)
+		flags |= IONIC_MRF_LOCAL_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_MRF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_MRF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_MRF_REMOTE_ATOMIC;
+
+	if (access & IB_ACCESS_MW_BIND)
+		flags |= IONIC_MRF_MW_BIND;
+
+	if (access & IB_ZERO_BASED)
+		flags |= IONIC_MRF_ZERO_BASED;
+
+	return flags;
+}
+
+enum ionic_qp_flags {
+	/* bits that determine qp access */
+	IONIC_QPF_REMOTE_WRITE		= BIT(0),
+	IONIC_QPF_REMOTE_READ		= BIT(1),
+	IONIC_QPF_REMOTE_ATOMIC		= BIT(2),
+
+	/* bits that determine other qp behavior */
+	IONIC_QPF_SQ_PB			= BIT(6),
+	IONIC_QPF_RQ_PB			= BIT(7),
+	IONIC_QPF_SQ_SPEC		= BIT(8),
+	IONIC_QPF_RQ_SPEC		= BIT(9),
+	IONIC_QPF_REMOTE_PRIVILEGED	= BIT(10),
+	IONIC_QPF_SQ_DRAINING		= BIT(11),
+	IONIC_QPF_SQD_NOTIFY		= BIT(12),
+	IONIC_QPF_SQ_CMB		= BIT(13),
+	IONIC_QPF_RQ_CMB		= BIT(14),
+	IONIC_QPF_PRIVILEGED		= BIT(15),
+};
+
+static inline int from_ionic_qp_flags(int flags)
+{
+	int access_flags = 0;
+
+	if (flags & IONIC_QPF_REMOTE_WRITE)
+		access_flags |= IB_ACCESS_REMOTE_WRITE;
+
+	if (flags & IONIC_QPF_REMOTE_READ)
+		access_flags |= IB_ACCESS_REMOTE_READ;
+
+	if (flags & IONIC_QPF_REMOTE_ATOMIC)
+		access_flags |= IB_ACCESS_REMOTE_ATOMIC;
+
+	return access_flags;
+}
+
+static inline int to_ionic_qp_flags(int access, bool sqd_notify,
+				    bool sq_is_cmb, bool rq_is_cmb,
+				    bool sq_spec, bool rq_spec,
+				    bool privileged, bool remote_privileged)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_QPF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_QPF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_QPF_REMOTE_ATOMIC;
+
+	if (sqd_notify)
+		flags |= IONIC_QPF_SQD_NOTIFY;
+
+	if (sq_is_cmb)
+		flags |= IONIC_QPF_SQ_CMB;
+
+	if (rq_is_cmb)
+		flags |= IONIC_QPF_RQ_CMB;
+
+	if (sq_spec)
+		flags |= IONIC_QPF_SQ_SPEC;
+
+	if (rq_spec)
+		flags |= IONIC_QPF_RQ_SPEC;
+
+	if (privileged)
+		flags |= IONIC_QPF_PRIVILEGED;
+
+	if (remote_privileged)
+		flags |= IONIC_QPF_REMOTE_PRIVILEGED;
+
+	return flags;
+}
+
+/* admin queue qp type */
+enum ionic_qp_type {
+	IONIC_QPT_RC,
+	IONIC_QPT_UC,
+	IONIC_QPT_RD,
+	IONIC_QPT_UD,
+	IONIC_QPT_SRQ,
+	IONIC_QPT_XRC_INI,
+	IONIC_QPT_XRC_TGT,
+	IONIC_QPT_XRC_SRQ,
+};
+
+static inline int to_ionic_qp_type(enum ib_qp_type type)
+{
+	switch (type) {
+	case IB_QPT_GSI:
+	case IB_QPT_UD:
+		return IONIC_QPT_UD;
+	case IB_QPT_RC:
+		return IONIC_QPT_RC;
+	case IB_QPT_UC:
+		return IONIC_QPT_UC;
+	case IB_QPT_XRC_INI:
+		return IONIC_QPT_XRC_INI;
+	case IB_QPT_XRC_TGT:
+		return IONIC_QPT_XRC_TGT;
+	default:
+		return -EINVAL;
+	}
+}
+
+/* admin queue qp state */
+enum ionic_qp_state {
+	IONIC_QPS_RESET,
+	IONIC_QPS_INIT,
+	IONIC_QPS_RTR,
+	IONIC_QPS_RTS,
+	IONIC_QPS_SQD,
+	IONIC_QPS_SQE,
+	IONIC_QPS_ERR,
+};
+
+static inline int from_ionic_qp_state(enum ionic_qp_state state)
+{
+	switch (state) {
+	case IONIC_QPS_RESET:
+		return IB_QPS_RESET;
+	case IONIC_QPS_INIT:
+		return IB_QPS_INIT;
+	case IONIC_QPS_RTR:
+		return IB_QPS_RTR;
+	case IONIC_QPS_RTS:
+		return IB_QPS_RTS;
+	case IONIC_QPS_SQD:
+		return IB_QPS_SQD;
+	case IONIC_QPS_SQE:
+		return IB_QPS_SQE;
+	case IONIC_QPS_ERR:
+		return IB_QPS_ERR;
+	default:
+		return -EINVAL;
+	}
+}
+
+static inline int to_ionic_qp_state(enum ib_qp_state state)
+{
+	switch (state) {
+	case IB_QPS_RESET:
+		return IONIC_QPS_RESET;
+	case IB_QPS_INIT:
+		return IONIC_QPS_INIT;
+	case IB_QPS_RTR:
+		return IONIC_QPS_RTR;
+	case IB_QPS_RTS:
+		return IONIC_QPS_RTS;
+	case IB_QPS_SQD:
+		return IONIC_QPS_SQD;
+	case IB_QPS_SQE:
+		return IONIC_QPS_SQE;
+	case IB_QPS_ERR:
+		return IONIC_QPS_ERR;
+	default:
+		return 0;
+	}
+}
+
+static inline int to_ionic_qp_modify_state(enum ib_qp_state to_state,
+					   enum ib_qp_state from_state)
+{
+	return to_ionic_qp_state(to_state) |
+		(to_ionic_qp_state(from_state) << 4);
+}
+
+/* fw abi v1 */
+
+/* data payload part of v1 wqe */
+union ionic_v1_pld {
+	struct ionic_sge sgl[2];
+	__be32 spec32[8];
+	__be16 spec16[16];
+	__u8 data[32];
+};
 
 /* completion queue v1 cqe */
 struct ionic_v1_cqe {
@@ -78,6 +338,390 @@ static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf)
 	return qtf >> IONIC_V1_CQE_QID_SHIFT;
 }
 
+/* v1 base wqe header */
+struct ionic_v1_base_hdr {
+	__u64 wqe_id;
+	__u8 op;
+	__u8 num_sge_key;
+	__be16 flags;
+	__be32 imm_data_key;
+};
+
+/* v1 receive wqe body */
+struct ionic_v1_recv_bdy {
+	__u8 rsvd[16];
+	union ionic_v1_pld pld;
+};
+
+/* v1 send/rdma wqe body (common, has sgl) */
+struct ionic_v1_common_bdy {
+	union {
+		struct {
+			__be32 ah_id;
+			__be32 dest_qpn;
+			__be32 dest_qkey;
+		} send;
+		struct {
+			__be32 remote_va_high;
+			__be32 remote_va_low;
+			__be32 remote_rkey;
+		} rdma;
+	};
+	__be32 length;
+	union ionic_v1_pld pld;
+};
+
+/* v1 atomic wqe body */
+struct ionic_v1_atomic_bdy {
+	__be32 remote_va_high;
+	__be32 remote_va_low;
+	__be32 remote_rkey;
+	__be32 swap_add_high;
+	__be32 swap_add_low;
+	__be32 compare_high;
+	__be32 compare_low;
+	__u8 rsvd[4];
+	struct ionic_sge sge;
+};
+
+/* v1 reg mr wqe body */
+struct ionic_v1_reg_mr_bdy {
+	__be64 va;
+	__be64 length;
+	__be64 offset;
+	__be64 dma_addr;
+	__be32 map_count;
+	__be16 flags;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+	__u8 rsvd[8];
+};
+
+/* v1 bind mw wqe body */
+struct ionic_v1_bind_mw_bdy {
+	__be64 va;
+	__be64 length;
+	__be32 lkey;
+	__be16 flags;
+	__u8 rsvd[26];
+};
+
+/* v1 send/recv wqe */
+struct ionic_v1_wqe {
+	struct ionic_v1_base_hdr base;
+	union {
+		struct ionic_v1_recv_bdy recv;
+		struct ionic_v1_common_bdy common;
+		struct ionic_v1_atomic_bdy atomic;
+		struct ionic_v1_reg_mr_bdy reg_mr;
+		struct ionic_v1_bind_mw_bdy bind_mw;
+	};
+};
+
+/* queue pair v1 send opcodes */
+enum ionic_v1_op {
+	IONIC_V1_OP_SEND,
+	IONIC_V1_OP_SEND_INV,
+	IONIC_V1_OP_SEND_IMM,
+	IONIC_V1_OP_RDMA_READ,
+	IONIC_V1_OP_RDMA_WRITE,
+	IONIC_V1_OP_RDMA_WRITE_IMM,
+	IONIC_V1_OP_ATOMIC_CS,
+	IONIC_V1_OP_ATOMIC_FA,
+	IONIC_V1_OP_REG_MR,
+	IONIC_V1_OP_LOCAL_INV,
+	IONIC_V1_OP_BIND_MW,
+
+	/* flags */
+	IONIC_V1_FLAG_FENCE		= BIT(0),
+	IONIC_V1_FLAG_SOL		= BIT(1),
+	IONIC_V1_FLAG_INL		= BIT(2),
+	IONIC_V1_FLAG_SIG		= BIT(3),
+
+	/* flags last four bits for sgl spec format */
+	IONIC_V1_FLAG_SPEC32		= (1u << 12),
+	IONIC_V1_FLAG_SPEC16		= (2u << 12),
+	IONIC_V1_SPEC_FIRST_SGE		= 2,
+};
+
+static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data,
+						int spec, bool expdb)
+{
+	size_t sz_wqe, sz_sgl, sz_data;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb) {
+		min_sge += 1;
+		min_data += IONIC_EXP_DBELL_SZ;
+	}
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, common.pld.sgl[min_sge]);
+	sz_data = offsetof(struct ionic_v1_wqe, common.pld.data[min_data]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	if (sz_data > sz_wqe)
+		sz_wqe = sz_data;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_send_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->common.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_send_wqe_max_data(u8 stride_log2, bool expdb)
+{
+	struct ionic_v1_wqe *wqe = (void *)0;
+	__u8 *data = (void *)(1ull << stride_log2);
+
+	if (expdb)
+		data -= IONIC_EXP_DBELL_SZ;
+
+	return data - wqe->common.pld.data;
+}
+
+static inline size_t ionic_v1_recv_wqe_min_size(int min_sge, int spec,
+						bool expdb)
+{
+	size_t sz_wqe, sz_sgl;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb)
+		min_sge += 1;
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, recv.pld.sgl[min_sge]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_recv_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->recv.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_use_spec_sge(int min_sge, int spec)
+{
+	if (!spec || min_sge > spec)
+		return 0;
+
+	if (min_sge <= IONIC_V1_SPEC_FIRST_SGE)
+		return IONIC_V1_SPEC_FIRST_SGE;
+
+	return spec;
+}
+
+struct ionic_admin_create_ah {
+	__le64 dma_addr;
+	__le32 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 csum_profile;
+	__u8 crypto;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_AH_IN_V1_LEN 24
+static_assert(sizeof(struct ionic_admin_create_ah) ==
+	      IONIC_ADMIN_CREATE_AH_IN_V1_LEN);
+
+struct ionic_admin_destroy_ah {
+	__le32 ah_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_AH_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_ah) ==
+	      IONIC_ADMIN_DESTROY_AH_IN_V1_LEN);
+
+struct ionic_admin_query_ah {
+	__le64 dma_addr;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_AH_IN_V1_LEN 8
+static_assert(sizeof(struct ionic_admin_query_ah) ==
+	      IONIC_ADMIN_QUERY_AH_IN_V1_LEN);
+
+struct ionic_admin_create_mr {
+	__le64 va;
+	__le64 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+	__u8 pt_type;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_MR_IN_V1_LEN 45
+static_assert(sizeof(struct ionic_admin_create_mr) ==
+	      IONIC_ADMIN_CREATE_MR_IN_V1_LEN);
+
+struct ionic_admin_destroy_mr {
+	__le32 mr_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_MR_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_mr) ==
+	      IONIC_ADMIN_DESTROY_MR_IN_V1_LEN);
+
+struct ionic_admin_create_cq {
+	__le32 eq_id;
+	__u8 depth_log2;
+	__u8 stride_log2;
+	__u8 dir_size_log2_rsvd;
+	__u8 page_size_log2;
+	__le32 cq_flags;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_CQ_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_create_cq) ==
+	      IONIC_ADMIN_CREATE_CQ_IN_V1_LEN);
+
+struct ionic_admin_destroy_cq {
+	__le32 cq_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_cq) ==
+	      IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN);
+
+struct ionic_admin_create_qp {
+	__le32 pd_id;
+	__be32 priv_flags;
+	__le32 sq_cq_id;
+	__u8 sq_depth_log2;
+	__u8 sq_stride_log2;
+	__u8 sq_dir_size_log2_rsvd;
+	__u8 sq_page_size_log2;
+	__le32 sq_tbl_index_xrcd_id;
+	__le32 sq_map_count;
+	__le64 sq_dma_addr;
+	__le32 rq_cq_id;
+	__u8 rq_depth_log2;
+	__u8 rq_stride_log2;
+	__u8 rq_dir_size_log2_rsvd;
+	__u8 rq_page_size_log2;
+	__le32 rq_tbl_index_srq_id;
+	__le32 rq_map_count;
+	__le64 rq_dma_addr;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 type_state;
+	__u8 rsvd;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_QP_IN_V1_LEN 64
+static_assert(sizeof(struct ionic_admin_create_qp) ==
+	      IONIC_ADMIN_CREATE_QP_IN_V1_LEN);
+
+struct ionic_admin_destroy_qp {
+	__le32 qp_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_QP_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_qp) ==
+	      IONIC_ADMIN_DESTROY_QP_IN_V1_LEN);
+
+struct ionic_admin_mod_qp {
+	__be32 attr_mask;
+	__u8 dcqcn_profile;
+	__u8 tfp_csum_profile;
+	__be16 access_flags;
+	__le32 rq_psn;
+	__le32 sq_psn;
+	__le32 qkey_dest_qpn;
+	__le32 rate_limit_kbps;
+	__u8 pmtu;
+	__u8 retry;
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__u8 rsq_depth;
+	__u8 rrq_depth;
+	__le16 pkey_id;
+	__le32 ah_id_len;
+	__u8 en_pcp;
+	__u8 ip_dscp;
+	__u8 rsvd2;
+	__u8 type_state;
+	union {
+		struct {
+			__le16 rsvd1;
+		};
+		__le32 rrq_index;
+	};
+	__le32 rsq_index;
+	__le64 dma_addr;
+	__le32 id_ver;
+} __packed;
+
+#define IONIC_ADMIN_MODIFY_QP_IN_V1_LEN 60
+static_assert(sizeof(struct ionic_admin_mod_qp) ==
+	      IONIC_ADMIN_MODIFY_QP_IN_V1_LEN);
+
+struct ionic_admin_query_qp {
+	__le64 hdr_dma_addr;
+	__le64 sq_dma_addr;
+	__le64 rq_dma_addr;
+	__le32 ah_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_QP_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_query_qp) ==
+	      IONIC_ADMIN_QUERY_QP_IN_V1_LEN);
+
 #define ADMIN_WQE_STRIDE	64
 #define ADMIN_WQE_HDR_LEN	4
 
@@ -88,9 +732,66 @@ struct ionic_v1_admin_wqe {
 	__le16 len;
 
 	union {
+		struct ionic_admin_create_ah create_ah;
+		struct ionic_admin_destroy_ah destroy_ah;
+		struct ionic_admin_query_ah query_ah;
+		struct ionic_admin_create_mr create_mr;
+		struct ionic_admin_destroy_mr destroy_mr;
+		struct ionic_admin_create_cq create_cq;
+		struct ionic_admin_destroy_cq destroy_cq;
+		struct ionic_admin_create_qp create_qp;
+		struct ionic_admin_destroy_qp destroy_qp;
+		struct ionic_admin_mod_qp mod_qp;
+		struct ionic_admin_query_qp query_qp;
 	} cmd;
 };
 
+/* side data for query qp */
+struct ionic_v1_admin_query_qp_sq {
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+	__be16 pkey_id;
+	__be32 qkey_dest_qpn;
+	__be32 rate_limit_kbps;
+	__be32 rq_psn;
+};
+
+struct ionic_v1_admin_query_qp_rq {
+	__u8 state_pmtu;
+	__u8 retry_rnrtry;
+	__u8 rrq_depth;
+	__u8 rsq_depth;
+	__be32 sq_psn;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+};
+
+/* admin queue v1 opcodes */
+enum ionic_v1_admin_op {
+	IONIC_V1_ADMIN_NOOP,
+	IONIC_V1_ADMIN_CREATE_CQ,
+	IONIC_V1_ADMIN_CREATE_QP,
+	IONIC_V1_ADMIN_CREATE_MR,
+	IONIC_V1_ADMIN_STATS_HDRS,
+	IONIC_V1_ADMIN_STATS_VALS,
+	IONIC_V1_ADMIN_DESTROY_MR,
+	IONIC_v1_ADMIN_RSVD_7,		/* RESIZE_CQ */
+	IONIC_V1_ADMIN_DESTROY_CQ,
+	IONIC_V1_ADMIN_MODIFY_QP,
+	IONIC_V1_ADMIN_QUERY_QP,
+	IONIC_V1_ADMIN_DESTROY_QP,
+	IONIC_V1_ADMIN_DEBUG,
+	IONIC_V1_ADMIN_CREATE_AH,
+	IONIC_V1_ADMIN_QUERY_AH,
+	IONIC_V1_ADMIN_MODIFY_DCQCN,
+	IONIC_V1_ADMIN_DESTROY_AH,
+	IONIC_V1_ADMIN_QP_STATS_HDRS,
+	IONIC_V1_ADMIN_QP_STATS_VALS,
+	IONIC_V1_ADMIN_OPCODES_MAX,
+};
+
 /* admin queue v1 cqe status */
 enum ionic_v1_admin_status {
 	IONIC_V1_ASTS_OK,
@@ -136,6 +837,22 @@ enum ionic_v1_eqe_evt_bits {
 	IONIC_V1_EQE_QP_ERR_ACCESS	= 10,
 };
 
+enum ionic_tfp_csum_profiles {
+	IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP			= 0,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP			= 1,
+	IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP			= 2,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP			= 3,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP	= 4,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP	= 5,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP = 6,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP = 7,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_IPV4_UDP	= 8,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_ESP_UDP		= 9,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_UDP		= 10,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_ESP_UDP		= 11,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_CSUM		= 12,
+};
+
 static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe)
 {
 	return eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index 7710190ff65f..6833abbfb1dc 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -15,6 +15,44 @@
 MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
 MODULE_LICENSE("GPL");
 MODULE_IMPORT_NS("NET_IONIC");
 
+static const struct ib_device_ops ionic_dev_ops = {
+	.owner = THIS_MODULE,
+	.driver_id = RDMA_DRIVER_IONIC,
+	.uverbs_abi_ver = IONIC_ABI_VERSION,
+
+	.alloc_ucontext = ionic_alloc_ucontext,
+	.dealloc_ucontext = ionic_dealloc_ucontext,
+	.mmap = ionic_mmap,
+	.mmap_free = ionic_mmap_free,
+	.alloc_pd = ionic_alloc_pd,
+	.dealloc_pd = ionic_dealloc_pd,
+	.create_ah = ionic_create_ah,
+	.query_ah = ionic_query_ah,
+	.destroy_ah = ionic_destroy_ah,
+	.create_user_ah = ionic_create_ah,
+	.get_dma_mr = ionic_get_dma_mr,
+	.reg_user_mr = ionic_reg_user_mr,
+	.reg_user_mr_dmabuf = ionic_reg_user_mr_dmabuf,
+	.dereg_mr = ionic_dereg_mr,
+	.alloc_mr = ionic_alloc_mr,
+	.map_mr_sg = ionic_map_mr_sg,
+	.alloc_mw = ionic_alloc_mw,
+	.dealloc_mw = ionic_dealloc_mw,
+	.create_cq = ionic_create_cq,
+	.destroy_cq = ionic_destroy_cq,
+	.create_qp = ionic_create_qp,
+	.modify_qp = ionic_modify_qp,
+	.query_qp = ionic_query_qp,
+	.destroy_qp = ionic_destroy_qp,
+
+	INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx),
+	INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd),
+	INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah),
+	INIT_RDMA_OBJ_SIZE(ib_cq, ionic_vcq, ibcq),
+	INIT_RDMA_OBJ_SIZE(ib_qp, ionic_qp, ibqp),
+	INIT_RDMA_OBJ_SIZE(ib_mw, ionic_mr, ibmw),
+};
+
 static void ionic_init_resids(struct ionic_ibdev *dev)
 {
 	ionic_resid_init(&dev->inuse_cqid, dev->lif_cfg.cq_count);
@@ -48,6 +86,8 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 	ib_unregister_device(&dev->ibdev);
 	ionic_destroy_rdma_admin(dev);
 	ionic_destroy_resids(dev);
+	WARN_ON(!xa_empty(&dev->qp_tbl));
+	xa_destroy(&dev->qp_tbl);
 	WARN_ON(!xa_empty(&dev->cq_tbl));
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
@@ -66,6 +106,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
 
+	xa_init_flags(&dev->qp_tbl, GFP_ATOMIC);
 	xa_init_flags(&dev->cq_tbl, GFP_ATOMIC);
 
 	ionic_init_resids(dev);
@@ -98,6 +139,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	if (rc)
 		goto err_admin;
 
+	ib_set_device_ops(&dev->ibdev, &ionic_dev_ops);
+
 	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
 	if (rc)
 		goto err_register;
@@ -110,6 +153,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	ionic_destroy_rdma_admin(dev);
 err_reset:
 	ionic_destroy_resids(dev);
+	xa_destroy(&dev->qp_tbl);
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 
@@ -161,7 +205,7 @@ static int __init ionic_mod_init(void)
 {
 	int rc;
 
-	ionic_evt_workq = create_workqueue(DRIVER_NAME "-evt");
+	ionic_evt_workq = create_workqueue(KBUILD_MODNAME "-evt");
 	if (!ionic_evt_workq)
 		return -ENOMEM;
 
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index 490897628f41..94ef76aaca43 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -6,7 +6,10 @@
 
 #include
 #include
+#include
+#include
 
+#include
 #include
 #include
 
@@ -24,9 +27,26 @@
 #define IONIC_EQ_ISR_BUDGET		10
 #define IONIC_EQ_WORK_BUDGET		1000
 #define IONIC_MAX_PD			1024
+#define IONIC_SPEC_HIGH			8
+#define IONIC_SQCMB_ORDER		5
+#define IONIC_RQCMB_ORDER		0
+
+#define IONIC_META_LAST			((void *)1ul)
+#define IONIC_META_POSTED		((void *)2ul)
 
 #define IONIC_CQ_GRACE			100
 
+#define IONIC_ROCE_UDP_SPORT		28272
+#define IONIC_DMA_LKEY			0
+#define IONIC_DMA_RKEY			IONIC_DMA_LKEY
+
+#define IONIC_CMB_SUPPORTED \
+	(IONIC_CMB_ENABLE | IONIC_CMB_REQUIRE | IONIC_CMB_EXPDB | \
+	 IONIC_CMB_WC | IONIC_CMB_UC)
+
+/* resource is not reserved on the device, indicated in tbl_order */
+#define IONIC_RES_INVALID	-1
+
 struct ionic_aq;
 struct ionic_cq;
 struct ionic_eq;
@@ -44,14 +64,6 @@ enum ionic_admin_flags {
 	IONIC_ADMIN_F_INTERRUPT	= BIT(2),	/* Interruptible w/timeout */
 };
 
-struct ionic_qdesc {
-	__aligned_u64 addr;
-	__u32 size;
-	__u16 mask;
-	__u8 depth_log2;
-	__u8 stride_log2;
-};
-
 enum ionic_mmap_flag {
 	IONIC_MMAP_WC = BIT(0),
 };
@@ -160,6 +172,13 @@ struct ionic_tbl_buf {
 	u8 page_size_log2;
 };
 
+struct ionic_pd {
+	struct ib_pd ibpd;
+
+	u32 pdid;
+	u32 flags;
+};
+
 struct ionic_cq {
 	struct ionic_vcq *vcq;
 
@@ -193,11 +212,188 @@ struct ionic_vcq {
 	u8 poll_idx;
 };
 
+struct ionic_sq_meta {
+	u64 wrid;
+	u32 len;
+	u16 seq;
+	u8 ibop;
+	u8 ibsts;
+	u8 remote:1;
+	u8 signal:1;
+	u8 local_comp:1;
+};
+
+struct ionic_rq_meta {
+	struct ionic_rq_meta *next;
+	u64 wrid;
+};
+
+struct ionic_qp {
+	struct ib_qp ibqp;
+	enum ib_qp_state state;
+
+	u32 qpid;
+	u32 ahid;
+	u32 sq_cqid;
+	u32 rq_cqid;
+	u8 udma_idx;
+	u8 has_ah:1;
+	u8 has_sq:1;
+	u8 has_rq:1;
+	u8 sig_all:1;
+
+	struct list_head qp_list_counter;
+
+	struct list_head cq_poll_sq;
+	struct list_head cq_flush_sq;
+	struct list_head cq_flush_rq;
+	struct list_head ibkill_flush_ent;
+
+	spinlock_t sq_lock; /* for posting and polling */
+	struct ionic_queue sq;
+	struct ionic_sq_meta *sq_meta;
+	u16 *sq_msn_idx;
+	int sq_spec;
+	u16 sq_old_prod;
+	u16 sq_msn_prod;
+	u16 sq_msn_cons;
+	u8 sq_cmb;
+	bool sq_flush;
+	bool sq_flush_rcvd;
+
+	spinlock_t rq_lock; /* for posting and polling */
+	struct ionic_queue rq;
+	struct ionic_rq_meta *rq_meta;
+	struct ionic_rq_meta *rq_meta_head;
+	int rq_spec;
+	u16 rq_old_prod;
+	u8 rq_cmb;
+	bool rq_flush;
+
+	struct kref qp_kref;
+	struct completion qp_rel_comp;
+
+	/* infrequently accessed, keep at end */
+	int sgid_index;
+	int sq_cmb_order;
+	u32 sq_cmb_pgid;
+	phys_addr_t sq_cmb_addr;
+	struct rdma_user_mmap_entry *mmap_sq_cmb;
+
+	struct ib_umem *sq_umem;
+
+	int rq_cmb_order;
+	u32 rq_cmb_pgid;
+	phys_addr_t rq_cmb_addr;
+	struct rdma_user_mmap_entry *mmap_rq_cmb;
+
+	struct ib_umem *rq_umem;
+
+	int dcqcn_profile;
+
+	struct ib_ud_header *hdr;
+};
+
+struct ionic_ah {
+	struct ib_ah ibah;
+	u32 ahid;
+	int sgid_index;
+	struct ib_ud_header hdr;
+};
+
+struct ionic_mr {
+	union {
+		struct ib_mr ibmr;
+		struct ib_mw ibmw;
+	};
+
+	u32 mrid;
+	int flags;
+
+	struct ib_umem *umem;
+	struct ionic_tbl_buf buf;
+	bool created;
+};
+
 static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev)
 {
 	return container_of(ibdev, struct ionic_ibdev, ibdev);
 }
 
+static inline struct ionic_ctx *to_ionic_ctx(struct ib_ucontext *ibctx)
+{
+	return container_of(ibctx, struct ionic_ctx, ibctx);
+}
+
+static inline struct ionic_ctx *to_ionic_ctx_uobj(struct ib_uobject *uobj)
+{
+	if (!uobj)
+		return NULL;
+
+	if (!uobj->context)
+		return NULL;
+
+	return to_ionic_ctx(uobj->context);
+}
+
+static inline struct ionic_pd *to_ionic_pd(struct ib_pd *ibpd)
+{
+	return container_of(ibpd, struct ionic_pd, ibpd);
+}
+
+static inline struct ionic_mr *to_ionic_mr(struct ib_mr *ibmr)
+{
+	return container_of(ibmr, struct ionic_mr, ibmr);
+}
+
+static inline struct ionic_mr *to_ionic_mw(struct ib_mw *ibmw)
+{
+	return container_of(ibmw, struct ionic_mr, ibmw);
+}
+
+static inline struct ionic_vcq *to_ionic_vcq(struct ib_cq *ibcq)
+{
+	return container_of(ibcq, struct ionic_vcq, ibcq);
+}
+
+static
inline struct ionic_cq *to_ionic_vcq_cq(struct ib_cq *ibcq, + uint8_t udma_idx) +{ + return &to_ionic_vcq(ibcq)->cq[udma_idx]; +} + +static inline struct ionic_qp *to_ionic_qp(struct ib_qp *ibqp) +{ + return container_of(ibqp, struct ionic_qp, ibqp); +} + +static inline struct ionic_ah *to_ionic_ah(struct ib_ah *ibah) +{ + return container_of(ibah, struct ionic_ah, ibah); +} + +static inline u32 ionic_ctx_dbid(struct ionic_ibdev *dev, + struct ionic_ctx *ctx) +{ + if (!ctx) + return dev->lif_cfg.dbid; + + return ctx->dbid; +} + +static inline u32 ionic_obj_dbid(struct ionic_ibdev *dev, + struct ib_uobject *uobj) +{ + return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj)); +} + +static inline void ionic_qp_complete(struct kref *kref) +{ + struct ionic_qp *qp =3D container_of(kref, struct ionic_qp, qp_kref); + + complete(&qp->qp_rel_comp); +} + static inline void ionic_cq_complete(struct kref *kref) { struct ionic_cq *cq =3D container_of(kref, struct ionic_cq, cq_kref); @@ -227,8 +423,47 @@ int ionic_create_cq_common(struct ionic_vcq *vcq, __u32 *resp_cqid, int udma_idx); void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq); +void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp); +void ionic_notify_flush_cq(struct ionic_cq *cq); + +int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata= ); +void ionic_dealloc_ucontext(struct ib_ucontext *ibctx); +int ionic_mmap(struct ib_ucontext *ibctx, struct vm_area_struct *vma); +void ionic_mmap_free(struct rdma_user_mmap_entry *rdma_entry); +int ionic_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); +int ionic_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); +int ionic_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_att= r, + struct ib_udata *udata); +int ionic_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); +int ionic_destroy_ah(struct ib_ah *ibah, u32 flags); +struct ib_mr *ionic_get_dma_mr(struct ib_pd *ibpd, int access); +struct ib_mr 
*ionic_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length, + u64 addr, int access, struct ib_dmah *dmah, + struct ib_udata *udata); +struct ib_mr *ionic_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 offset, + u64 length, u64 addr, int fd, int access, + struct ib_dmah *dmah, + struct uverbs_attr_bundle *attrs); +int ionic_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); +struct ib_mr *ionic_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type type, + u32 max_sg); +int ionic_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nen= ts, + unsigned int *sg_offset); +int ionic_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); +int ionic_dealloc_mw(struct ib_mw *ibmw); +int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct uverbs_attr_bundle *attrs); +int ionic_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata); +int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr, + struct ib_udata *udata); +int ionic_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask, + struct ib_udata *udata); +int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask, + struct ib_qp_init_attr *init_attr); +int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata); =20 /* ionic_pgtbl.c */ +__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma); int ionic_pgtbl_init(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf, diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband= /hw/ionic/ionic_pgtbl.c index 11461f7642bc..a8eb73be6f86 100644 --- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c +++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c @@ -7,6 +7,25 @@ #include "ionic_fw.h" #include "ionic_ibdev.h" =20 +__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va) +{ + u64 pg_mask =3D BIT_ULL(buf->page_size_log2) - 1; + u64 dma; + + if (!buf->tbl_pages) + return cpu_to_le64(0); + + if (buf->tbl_pages > 1) + return 
cpu_to_le64(buf->tbl_dma); + + if (buf->tbl_buf) + dma =3D le64_to_cpu(buf->tbl_buf[0]); + else + dma =3D buf->tbl_dma; + + return cpu_to_le64(dma + (va & pg_mask)); +} + int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma) { if (unlikely(buf->tbl_pages =3D=3D buf->tbl_limit)) diff --git a/include/uapi/rdma/ionic-abi.h b/include/uapi/rdma/ionic-abi.h new file mode 100644 index 000000000000..7b589d3e9728 --- /dev/null +++ b/include/uapi/rdma/ionic-abi.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc */ + +#ifndef IONIC_ABI_H +#define IONIC_ABI_H + +#include + +#define IONIC_ABI_VERSION 1 + +#define IONIC_EXPDB_64 1 +#define IONIC_EXPDB_128 2 +#define IONIC_EXPDB_256 4 +#define IONIC_EXPDB_512 8 + +#define IONIC_EXPDB_SQ 1 +#define IONIC_EXPDB_RQ 2 + +#define IONIC_CMB_ENABLE 1 +#define IONIC_CMB_REQUIRE 2 +#define IONIC_CMB_EXPDB 4 +#define IONIC_CMB_WC 8 +#define IONIC_CMB_UC 16 + +struct ionic_ctx_req { + __u32 rsvd[2]; +}; + +struct ionic_ctx_resp { + __u32 rsvd; + __u32 page_shift; + + __aligned_u64 dbell_offset; + + __u16 version; + __u8 qp_opcodes; + __u8 admin_opcodes; + + __u8 sq_qtype; + __u8 rq_qtype; + __u8 cq_qtype; + __u8 admin_qtype; + + __u8 max_stride; + __u8 max_spec; + __u8 udma_count; + __u8 expdb_mask; + __u8 expdb_qtypes; + + __u8 rsvd2[3]; +}; + +struct ionic_qdesc { + __aligned_u64 addr; + __u32 size; + __u16 mask; + __u8 depth_log2; + __u8 stride_log2; +}; + +struct ionic_ah_resp { + __u32 ahid; + __u32 pad; +}; + +struct ionic_cq_req { + struct ionic_qdesc cq[2]; + __u8 udma_mask; + __u8 rsvd[7]; +}; + +struct ionic_cq_resp { + __u32 cqid[2]; + __u8 udma_mask; + __u8 rsvd[7]; +}; + +struct ionic_qp_req { + struct ionic_qdesc sq; + struct ionic_qdesc rq; + __u8 sq_spec; + __u8 rq_spec; + __u8 sq_cmb; + __u8 rq_cmb; + __u8 udma_mask; + __u8 rsvd[3]; +}; + +struct ionic_qp_resp { + __u32 qpid; + __u8 sq_cmb; + __u8 rq_cmb; + __u8 udma_idx; + __u8 
rsvd[1];
+	__aligned_u64 sq_cmb_offset;
+	__aligned_u64 rq_cmb_offset;
+};
+
+struct ionic_srq_req {
+	struct ionic_qdesc rq;
+	__u8 rq_spec;
+	__u8 rq_cmb;
+	__u8 udma_mask;
+	__u8 rsvd[5];
+};
+
+struct ionic_srq_resp {
+	__u32 qpid;
+	__u8 rq_cmb;
+	__u8 udma_idx;
+	__u8 rsvd[2];
+	__aligned_u64 rq_cmb_offset;
+};
+
+#endif /* IONIC_ABI_H */
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
Subject: [PATCH v6 11/14] RDMA/ionic: Register device ops for datapath
Date: Wed, 3 Sep 2025 11:46:03 +0530
Message-ID: <20250903061606.4139957-12-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement device supported verb APIs for datapath.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v5->v6
- Added misc comment
v2->v3
- Registered main ib ops at once
- Removed uverbs_cmd_mask

 drivers/infiniband/hw/ionic/ionic_datapath.c | 1399 ++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_fw.h       |  105 ++
 drivers/infiniband/hw/ionic/ionic_ibdev.c    |    5 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h    |   14 +
 drivers/infiniband/hw/ionic/ionic_pgtbl.c    |   11 +
 5 files changed, 1534 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_datapath.c

diff --git a/drivers/infiniband/hw/ionic/ionic_datapath.c b/drivers/infiniband/hw/ionic/ionic_datapath.c
new file mode 100644
index 000000000000..aa2944887f23
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_datapath.c
@@ -0,0 +1,1399 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+#define IONIC_OP(version, opname) \
+	((version) < 2 ? IONIC_V1_OP_##opname : IONIC_V2_OP_##opname)
+
+static bool ionic_next_cqe(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_v1_cqe **cqe)
+{
+	struct ionic_v1_cqe *qcqe = ionic_queue_at_prod(&cq->q);
+
+	if (unlikely(cq->color != ionic_v1_cqe_color(qcqe)))
+		return false;
+
+	/* Prevent out-of-order reads of the CQE */
+	dma_rmb();
+
+	*cqe = qcqe;
+
+	return true;
+}
+
+static int ionic_flush_recv(struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_rq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	if (!qp->rq_flush)
+		return 0;
+
+	if (ionic_queue_empty(&qp->rq))
+		return 0;
+
+	wqe = ionic_queue_at_cons(&qp->rq);
+
+	/* wqe_id must be a valid queue index */
+	if (unlikely(wqe->base.wqe_id >> qp->rq.depth_log2)) {
+		ibdev_warn(qp->ibqp.device,
+			   "flush qp %u recv index %llu invalid\n",
+			   qp->qpid, (unsigned long long)wqe->base.wqe_id);
+		return -EIO;
+	}
+
+	/* wqe_id must indicate a request that is outstanding */
+	meta = &qp->rq_meta[wqe->base.wqe_id];
+	if (unlikely(meta->next != IONIC_META_POSTED)) {
+		ibdev_warn(qp->ibqp.device,
+			   "flush qp %u recv index %llu not posted\n",
+			   qp->qpid, (unsigned long long)wqe->base.wqe_id);
+		return -EIO;
+	}
+
+	ionic_queue_consume(&qp->rq);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = IB_WC_WR_FLUSH_ERR;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	meta->next = qp->rq_meta_head;
+	qp->rq_meta_head = meta;
+
+	return 1;
+}
+
+static int ionic_flush_recv_many(struct ionic_qp *qp,
+				 struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_flush_recv(qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_flush_send(struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_sq_meta *meta;
+
+	if (!qp->sq_flush)
+		return 0;
+
+	if (ionic_queue_empty(&qp->sq))
+		return 0;
+
+	meta = &qp->sq_meta[qp->sq.cons];
+
+	ionic_queue_consume(&qp->sq);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = IB_WC_WR_FLUSH_ERR;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	return 1;
+}
+
+static int ionic_flush_send_many(struct ionic_qp *qp,
+				 struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_flush_send(qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_poll_recv(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_qp *cqe_qp, struct ionic_v1_cqe *cqe,
+			   struct ib_wc *wc)
+{
+	struct ionic_qp *qp = NULL;
+	struct ionic_rq_meta *meta;
+	u32 src_qpn, st_len;
+	u16 vlan_tag;
+	u8 op;
+
+	if (cqe_qp->rq_flush)
+		return 0;
+
+	qp = cqe_qp;
+
+	st_len = be32_to_cpu(cqe->status_length);
+
+	/* ignore wqe_id in case of flush error */
+	if (ionic_v1_cqe_error(cqe) && st_len == IONIC_STS_WQE_FLUSHED_ERR) {
+		cqe_qp->rq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+
+		/* posted recvs (if any) flushed by ionic_flush_recv */
+		return 0;
+	}
+
+	/* there had better be something in the recv queue to complete */
+	if (ionic_queue_empty(&qp->rq)) {
+		ibdev_warn(&dev->ibdev, "qp %u is empty\n", qp->qpid);
+		return -EIO;
+	}
+
+	/* wqe_id must be a valid queue index */
+	if (unlikely(cqe->recv.wqe_id >> qp->rq.depth_log2)) {
+		ibdev_warn(&dev->ibdev,
+			   "qp %u recv index %llu invalid\n",
+			   qp->qpid, (unsigned long long)cqe->recv.wqe_id);
+		return -EIO;
+	}
+
+	/* wqe_id must indicate a request that is outstanding */
+	meta = &qp->rq_meta[cqe->recv.wqe_id];
+	if (unlikely(meta->next != IONIC_META_POSTED)) {
+		ibdev_warn(&dev->ibdev,
+			   "qp %u recv index %llu not posted\n",
+			   qp->qpid, (unsigned long long)cqe->recv.wqe_id);
+		return -EIO;
+	}
+
+	meta->next = qp->rq_meta_head;
+	qp->rq_meta_head = meta;
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->wr_id = meta->wrid;
+
+	wc->qp = &cqe_qp->ibqp;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		wc->vendor_err = st_len;
+		wc->status = ionic_to_ib_status(st_len);
+
+		cqe_qp->rq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+
+		ibdev_warn(&dev->ibdev,
+			   "qp %d recv cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, BIT(cq->q.stride_log2), true);
+		goto out;
+	}
+
+	wc->vendor_err = 0;
+	wc->status = IB_WC_SUCCESS;
+
+	src_qpn = be32_to_cpu(cqe->recv.src_qpn_op);
+	op = src_qpn >> IONIC_V1_CQE_RECV_OP_SHIFT;
+
+	src_qpn &= IONIC_V1_CQE_RECV_QPN_MASK;
+	op &= IONIC_V1_CQE_RECV_OP_MASK;
+
+	wc->opcode = IB_WC_RECV;
+	switch (op) {
+	case IONIC_V1_CQE_RECV_OP_RDMA_IMM:
+		wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		wc->ex.imm_data = cqe->recv.imm_data_rkey;	/* be32 in wc */
+		break;
+	case IONIC_V1_CQE_RECV_OP_SEND_IMM:
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		wc->ex.imm_data = cqe->recv.imm_data_rkey;	/* be32 in wc */
+		break;
+	case IONIC_V1_CQE_RECV_OP_SEND_INV:
+		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
+		wc->ex.invalidate_rkey = be32_to_cpu(cqe->recv.imm_data_rkey);
+		break;
+	}
+
+	wc->byte_len = st_len;
+	wc->src_qp = src_qpn;
+
+	if (qp->ibqp.qp_type == IB_QPT_UD ||
+	    qp->ibqp.qp_type == IB_QPT_GSI) {
+		wc->wc_flags |= IB_WC_GRH | IB_WC_WITH_SMAC;
+		ether_addr_copy(wc->smac, cqe->recv.src_mac);
+
+		wc->wc_flags |= IB_WC_WITH_NETWORK_HDR_TYPE;
+		if (ionic_v1_cqe_recv_is_ipv4(cqe))
+			wc->network_hdr_type = RDMA_NETWORK_IPV4;
+		else
+			wc->network_hdr_type = RDMA_NETWORK_IPV6;
+
+		if (ionic_v1_cqe_recv_is_vlan(cqe))
+			wc->wc_flags |= IB_WC_WITH_VLAN;
+
+		/* vlan_tag in cqe will be valid from dpath even if no vlan */
+		vlan_tag = be16_to_cpu(cqe->recv.vlan_tag);
+		wc->vlan_id = vlan_tag & 0xfff;		/* 802.1q VID */
+		wc->sl = vlan_tag >> VLAN_PRIO_SHIFT;	/* 802.1q PCP */
+	}
+
+	wc->pkey_index = 0;
+	wc->port_num = 1;
+
+out:
+	ionic_queue_consume(&qp->rq);
+
+	return 1;
+}
+
+static bool ionic_peek_send(struct ionic_qp *qp)
+{
+	struct ionic_sq_meta *meta;
+
+	if (qp->sq_flush)
+		return false;
+
+	/* completed all send queue requests */
+	if (ionic_queue_empty(&qp->sq))
+		return false;
+
+	meta = &qp->sq_meta[qp->sq.cons];
+
+	/* waiting for remote completion */
+	if (meta->remote && meta->seq == qp->sq_msn_cons)
+		return false;
+
+	/* waiting for local completion */
+	if (!meta->remote && !meta->local_comp)
+		return false;
+
+	return true;
+}
+
+static int ionic_poll_send(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_sq_meta *meta;
+
+	if (qp->sq_flush)
+		return 0;
+
+	do {
+		/* completed all send queue requests */
+		if (ionic_queue_empty(&qp->sq))
+			goto out_empty;
+
+		meta = &qp->sq_meta[qp->sq.cons];
+
+		/* waiting for remote completion */
+		if (meta->remote && meta->seq == qp->sq_msn_cons)
+			goto out_empty;
+
+		/* waiting for local completion */
+		if (!meta->remote && !meta->local_comp)
+			goto out_empty;
+
+		ionic_queue_consume(&qp->sq);
+
+		/* produce wc only if signaled or error status */
+	} while (!meta->signal && meta->ibsts == IB_WC_SUCCESS);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = meta->ibsts;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	if (meta->ibsts == IB_WC_SUCCESS) {
+		wc->byte_len = meta->len;
+		wc->opcode = meta->ibop;
+	} else {
+		wc->vendor_err = meta->len;
+
+		qp->sq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+	}
+
+	return 1;
+
+out_empty:
+	if (qp->sq_flush_rcvd) {
+		qp->sq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+	}
+	return 0;
+}
+
+static int ionic_poll_send_many(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				struct ionic_qp *qp, struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_poll_send(dev, cq, qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_validate_cons(u16 prod, u16 cons,
+			       u16 comp, u16 mask)
+{
+	if (((prod - cons) & mask) <= ((comp - cons) & mask))
+		return -EIO;
+
+	return 0;
+}
+
+static int ionic_comp_msn(struct ionic_qp *qp, struct ionic_v1_cqe *cqe)
+{
+	struct ionic_sq_meta *meta;
+	u16 cqe_seq, cqe_idx;
+	int rc;
+
+	if (qp->sq_flush)
+		return 0;
+
+	cqe_seq = be32_to_cpu(cqe->send.msg_msn) & qp->sq.mask;
+
+	rc = ionic_validate_cons(qp->sq_msn_prod,
+				 qp->sq_msn_cons,
+				 cqe_seq - 1,
+				 qp->sq.mask);
+	if (rc) {
+		ibdev_warn(qp->ibqp.device,
+			   "qp %u bad msn %#x seq %u for prod %u cons %u\n",
+			   qp->qpid, be32_to_cpu(cqe->send.msg_msn),
+			   cqe_seq, qp->sq_msn_prod, qp->sq_msn_cons);
+		return rc;
+	}
+
+	qp->sq_msn_cons = cqe_seq;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		cqe_idx = qp->sq_msn_idx[(cqe_seq - 1) & qp->sq.mask];
+
+		meta = &qp->sq_meta[cqe_idx];
+		meta->len = be32_to_cpu(cqe->status_length);
+		meta->ibsts = ionic_to_ib_status(meta->len);
+
+		ibdev_warn(qp->ibqp.device,
+			   "qp %d msn cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, sizeof(*cqe), true);
+	}
+
+	return 0;
+}
+
+static int ionic_comp_npg(struct ionic_qp *qp, struct ionic_v1_cqe *cqe)
+{
+	struct ionic_sq_meta *meta;
+	u16 cqe_idx;
+	u32 st_len;
+
+	if (qp->sq_flush)
+		return 0;
+
+	st_len = be32_to_cpu(cqe->status_length);
+
+	if (ionic_v1_cqe_error(cqe) && st_len == IONIC_STS_WQE_FLUSHED_ERR) {
+		/*
+		 * Flush cqe does not consume a wqe on the device, and maybe
+		 * no such work request is posted.
+		 *
+		 * The driver should begin flushing after the last indicated
+		 * normal or error completion.  Here, only set a hint that the
+		 * flush request was indicated.  In poll_send, if nothing more
+		 * can be polled normally, then begin flushing.
+		 */
+		qp->sq_flush_rcvd = true;
+		return 0;
+	}
+
+	cqe_idx = cqe->send.npg_wqe_id & qp->sq.mask;
+	meta = &qp->sq_meta[cqe_idx];
+	meta->local_comp = true;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		meta->len = st_len;
+		meta->ibsts = ionic_to_ib_status(st_len);
+		meta->remote = false;
+		ibdev_warn(qp->ibqp.device,
+			   "qp %d npg cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, sizeof(*cqe), true);
+	}
+
+	return 0;
+}
+
+static void ionic_reserve_sync_cq(struct ionic_ibdev *dev, struct ionic_cq *cq)
+{
+	if (!ionic_queue_empty(&cq->q)) {
+		cq->credit += ionic_queue_length(&cq->q);
+		cq->q.cons = cq->q.prod;
+
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 ionic_queue_dbell_val(&cq->q));
+	}
+}
+
+static void ionic_reserve_cq(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			     int spend)
+{
+	cq->credit -= spend;
+
+	if (cq->credit <= 0)
+		ionic_reserve_sync_cq(dev, cq);
+}
+
+static int ionic_poll_vcq_cq(struct ionic_ibdev *dev,
+			     struct ionic_cq *cq,
+			     int nwc, struct ib_wc *wc)
+{
+	struct ionic_qp *qp, *qp_next;
+	struct ionic_v1_cqe *cqe;
+	int rc = 0, npolled = 0;
+	unsigned long irqflags;
+	u32 qtf, qid;
+	bool peek;
+	u8 type;
+
+	if (nwc < 1)
+		return 0;
+
+	spin_lock_irqsave(&cq->lock, irqflags);
+
+	/* poll already indicated work completions for send queue */
+	list_for_each_entry_safe(qp, qp_next, &cq->poll_sq, cq_poll_sq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->sq_lock);
+		rc = ionic_poll_send_many(dev, cq, qp, wc + npolled,
+					  nwc - npolled);
+		spin_unlock(&qp->sq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_poll_sq);
+	}
+
+	/* poll for more work completions */
+	while (likely(ionic_next_cqe(dev, cq, &cqe))) {
+		if (npolled == nwc)
+			goto out;
+
+		qtf = ionic_v1_cqe_qtf(cqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		/*
+		 * Safe to access QP without additional reference here as,
+		 * 1. We hold cq->lock throughout
+		 * 2. ionic_destroy_qp() acquires the same cq->lock before cleanup
+		 * 3. QP is removed from qp_tbl before any cleanup begins
+		 * This ensures no concurrent access between polling and destruction.
+		 */
+		qp = xa_load(&dev->qp_tbl, qid);
+		if (unlikely(!qp)) {
+			ibdev_dbg(&dev->ibdev, "missing qp for qid %u\n", qid);
+			goto cq_next;
+		}
+
+		switch (type) {
+		case IONIC_V1_CQE_TYPE_RECV:
+			spin_lock(&qp->rq_lock);
+			rc = ionic_poll_recv(dev, cq, qp, cqe, wc + npolled);
+			spin_unlock(&qp->rq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			break;
+
+		case IONIC_V1_CQE_TYPE_SEND_MSN:
+			spin_lock(&qp->sq_lock);
+			rc = ionic_comp_msn(qp, cqe);
+			if (!rc) {
+				rc = ionic_poll_send_many(dev, cq, qp,
+							  wc + npolled,
+							  nwc - npolled);
+				peek = ionic_peek_send(qp);
+			}
+			spin_unlock(&qp->sq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			if (peek)
+				list_move_tail(&qp->cq_poll_sq, &cq->poll_sq);
+			break;
+
+		case IONIC_V1_CQE_TYPE_SEND_NPG:
+			spin_lock(&qp->sq_lock);
+			rc = ionic_comp_npg(qp, cqe);
+			if (!rc) {
+				rc = ionic_poll_send_many(dev, cq, qp,
+							  wc + npolled,
+							  nwc - npolled);
+				peek = ionic_peek_send(qp);
+			}
+			spin_unlock(&qp->sq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			if (peek)
+				list_move_tail(&qp->cq_poll_sq, &cq->poll_sq);
+			break;
+
+		default:
+			ibdev_warn(&dev->ibdev,
+				   "unexpected cqe type %u\n", type);
+			rc = -EIO;
+			goto out;
+		}
+
+cq_next:
+		ionic_queue_produce(&cq->q);
+		cq->color = ionic_color_wrap(cq->q.prod, cq->color);
+	}
+
+	/* lastly, flush send and recv queues */
+	if (likely(!cq->flush))
+		goto out;
+
+	cq->flush = false;
+
+	list_for_each_entry_safe(qp, qp_next, &cq->flush_sq, cq_flush_sq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->sq_lock);
+		rc = ionic_flush_send_many(qp, wc + npolled, nwc - npolled);
+		spin_unlock(&qp->sq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_flush_sq);
+		else
+			cq->flush = true;
+	}
+
+	list_for_each_entry_safe(qp, qp_next, &cq->flush_rq, cq_flush_rq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->rq_lock);
+		rc = ionic_flush_recv_many(qp, wc + npolled, nwc - npolled);
+		spin_unlock(&qp->rq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_flush_rq);
+		else
+			cq->flush = true;
+	}
+
+out:
+	/* in case credit was depleted (more work posted than cq depth) */
+	if (cq->credit <= 0)
+		ionic_reserve_sync_cq(dev, cq);
+
+	spin_unlock_irqrestore(&cq->lock, irqflags);
+
+	return npolled ?: rc;
+}
+
+int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibcq->device);
+	struct ionic_vcq *vcq = to_ionic_vcq(ibcq);
+	int rc_tmp, rc = 0, npolled = 0;
+	int cq_i, cq_x, cq_ix;
+
+	cq_x = vcq->poll_idx;
+	vcq->poll_idx ^= dev->lif_cfg.udma_count - 1;
+
+	for (cq_i = 0; npolled < nwc && cq_i < dev->lif_cfg.udma_count; ++cq_i) {
+		cq_ix = cq_i ^ cq_x;
+
+		if (!(vcq->udma_mask & BIT(cq_ix)))
+			continue;
+
+		rc_tmp = ionic_poll_vcq_cq(dev, &vcq->cq[cq_ix],
+					   nwc - npolled,
+					   wc + npolled);
+
+		if (rc_tmp >= 0)
+			npolled += rc_tmp;
+		else if (!rc)
+			rc = rc_tmp;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_req_notify_vcq_cq(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				   enum ib_cq_notify_flags flags)
+{
+	u64 dbell_val = cq->q.dbell;
+
+	if (flags & IB_CQ_SOLICITED) {
+		cq->arm_sol_prod = ionic_queue_next(&cq->q, cq->arm_sol_prod);
+		dbell_val |= cq->arm_sol_prod | IONIC_CQ_RING_SOL;
+	} else {
+		cq->arm_any_prod = ionic_queue_next(&cq->q, cq->arm_any_prod);
+		dbell_val |= cq->arm_any_prod | IONIC_CQ_RING_ARM;
+	}
+
+	ionic_reserve_sync_cq(dev, cq);
+
+	ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype, dbell_val);
+
+	/*
+	 * IB_CQ_REPORT_MISSED_EVENTS:
+	 *
+	 * The queue index in ring zero guarantees no missed events.
+ * + * Here, we check if the color bit in the next cqe is flipped. If it + * is flipped, then progress can be made by immediately polling the cq. + * Still, the cq will be armed, and an event will be generated. The cq + * may be empty when polled after the event, because the next poll + * after arming the cq can empty it. + */ + return (flags & IB_CQ_REPORT_MISSED_EVENTS) && + cq->color =3D=3D ionic_v1_cqe_color(ionic_queue_at_prod(&cq->q)); +} + +int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int rc =3D 0, cq_i; + + for (cq_i =3D 0; cq_i < dev->lif_cfg.udma_count; ++cq_i) { + if (!(vcq->udma_mask & BIT(cq_i))) + continue; + + if (ionic_req_notify_vcq_cq(dev, &vcq->cq[cq_i], flags)) + rc =3D 1; + } + + return rc; +} + +static s64 ionic_prep_inline(void *data, u32 max_data, + const struct ib_sge *ib_sgl, int num_sge) +{ + static const s64 bit_31 =3D 1u << 31; + s64 len =3D 0, sg_len; + int sg_i; + + for (sg_i =3D 0; sg_i < num_sge; ++sg_i) { + sg_len =3D ib_sgl[sg_i].length; + + /* sge length zero means 2GB */ + if (unlikely(sg_len =3D=3D 0)) + sg_len =3D bit_31; + + /* greater than max inline data is invalid */ + if (unlikely(len + sg_len > max_data)) + return -EINVAL; + + memcpy(data + len, (void *)ib_sgl[sg_i].addr, sg_len); + + len +=3D sg_len; + } + + return len; +} + +static s64 ionic_prep_pld(struct ionic_v1_wqe *wqe, + union ionic_v1_pld *pld, + int spec, u32 max_sge, + const struct ib_sge *ib_sgl, + int num_sge) +{ + static const s64 bit_31 =3D 1l << 31; + struct ionic_sge *sgl; + __be32 *spec32 =3D NULL; + __be16 *spec16 =3D NULL; + s64 len =3D 0, sg_len; + int sg_i =3D 0; + + if (unlikely(num_sge < 0 || (u32)num_sge > max_sge)) + return -EINVAL; + + if (spec && num_sge > IONIC_V1_SPEC_FIRST_SGE) { + sg_i =3D IONIC_V1_SPEC_FIRST_SGE; + + if (num_sge > 8) { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SPEC16); + 
spec16 =3D pld->spec16; + } else { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SPEC32); + spec32 =3D pld->spec32; + } + } + + sgl =3D &pld->sgl[sg_i]; + + for (sg_i =3D 0; sg_i < num_sge; ++sg_i) { + sg_len =3D ib_sgl[sg_i].length; + + /* sge length zero means 2GB */ + if (unlikely(sg_len =3D=3D 0)) + sg_len =3D bit_31; + + /* greater than 2GB data is invalid */ + if (unlikely(len + sg_len > bit_31)) + return -EINVAL; + + sgl[sg_i].va =3D cpu_to_be64(ib_sgl[sg_i].addr); + sgl[sg_i].len =3D cpu_to_be32(sg_len); + sgl[sg_i].lkey =3D cpu_to_be32(ib_sgl[sg_i].lkey); + + if (spec32) { + spec32[sg_i] =3D sgl[sg_i].len; + } else if (spec16) { + if (unlikely(sg_len > U16_MAX)) + return -EINVAL; + spec16[sg_i] =3D cpu_to_be16(sg_len); + } + + len +=3D sg_len; + } + + return len; +} + +static void ionic_prep_base(struct ionic_qp *qp, + const struct ib_send_wr *wr, + struct ionic_sq_meta *meta, + struct ionic_v1_wqe *wqe) +{ + meta->wrid =3D wr->wr_id; + meta->ibsts =3D IB_WC_SUCCESS; + meta->signal =3D false; + meta->local_comp =3D false; + + wqe->base.wqe_id =3D qp->sq.prod; + + if (wr->send_flags & IB_SEND_FENCE) + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_FENCE); + + if (wr->send_flags & IB_SEND_SOLICITED) + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SOL); + + if (qp->sig_all || wr->send_flags & IB_SEND_SIGNALED) { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SIG); + meta->signal =3D true; + } + + meta->seq =3D qp->sq_msn_prod; + meta->remote =3D + qp->ibqp.qp_type !=3D IB_QPT_UD && + qp->ibqp.qp_type !=3D IB_QPT_GSI && + !ionic_ibop_is_local(wr->opcode); + + if (meta->remote) { + qp->sq_msn_idx[meta->seq] =3D qp->sq.prod; + qp->sq_msn_prod =3D ionic_queue_next(&qp->sq, qp->sq_msn_prod); + } + + ionic_queue_produce(&qp->sq); +} + +static int ionic_prep_common(struct ionic_qp *qp, + const struct ib_send_wr *wr, + struct ionic_sq_meta *meta, + struct ionic_v1_wqe *wqe) +{ + s64 signed_len; + u32 mval; + + if (wr->send_flags & IB_SEND_INLINE) { + 
wqe->base.num_sge_key =3D 0; + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_INL); + mval =3D ionic_v1_send_wqe_max_data(qp->sq.stride_log2, false); + signed_len =3D ionic_prep_inline(wqe->common.pld.data, mval, + wr->sg_list, wr->num_sge); + } else { + wqe->base.num_sge_key =3D wr->num_sge; + mval =3D ionic_v1_send_wqe_max_sge(qp->sq.stride_log2, + qp->sq_spec, + false); + signed_len =3D ionic_prep_pld(wqe, &wqe->common.pld, + qp->sq_spec, mval, + wr->sg_list, wr->num_sge); + } + + if (unlikely(signed_len < 0)) + return signed_len; + + meta->len =3D signed_len; + wqe->common.length =3D cpu_to_be32(signed_len); + + ionic_prep_base(qp, wr, meta, wqe); + + return 0; +} + +static void ionic_prep_sq_wqe(struct ionic_qp *qp, void *wqe) +{ + memset(wqe, 0, 1u << qp->sq.stride_log2); +} + +static void ionic_prep_rq_wqe(struct ionic_qp *qp, void *wqe) +{ + memset(wqe, 0, 1u << qp->rq.stride_log2); +} + +static int ionic_prep_send(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_SEND; + + switch (wr->opcode) { + case IB_WR_SEND: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND); + break; + case IB_WR_SEND_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM); + wqe->base.imm_data_key =3D wr->ex.imm_data; + break; + case IB_WR_SEND_WITH_INV: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_INV); + wqe->base.imm_data_key =3D + cpu_to_be32(wr->ex.invalidate_rkey); + break; + default: + return -EINVAL; + } + + return ionic_prep_common(qp, wr, meta, wqe); +} + +static int ionic_prep_send_ud(struct ionic_qp *qp, + const struct ib_ud_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + struct 
ionic_ah *ah; + + if (unlikely(!wr->ah)) + return -EINVAL; + + ah =3D to_ionic_ah(wr->ah); + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + wqe->common.send.ah_id =3D cpu_to_be32(ah->ahid); + wqe->common.send.dest_qpn =3D cpu_to_be32(wr->remote_qpn); + wqe->common.send.dest_qkey =3D cpu_to_be32(wr->remote_qkey); + + meta->ibop =3D IB_WC_SEND; + + switch (wr->wr.opcode) { + case IB_WR_SEND: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND); + break; + case IB_WR_SEND_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM); + wqe->base.imm_data_key =3D wr->wr.ex.imm_data; + break; + default: + return -EINVAL; + } + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + +static int ionic_prep_rdma(struct ionic_qp *qp, + const struct ib_rdma_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_RDMA_WRITE; + + switch (wr->wr.opcode) { + case IB_WR_RDMA_READ: + if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + meta->ibop =3D IB_WC_RDMA_READ; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_READ); + break; + case IB_WR_RDMA_WRITE: + if (wr->wr.send_flags & IB_SEND_SOLICITED) + return -EINVAL; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE); + break; + case IB_WR_RDMA_WRITE_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE_IMM); + wqe->base.imm_data_key =3D wr->wr.ex.imm_data; + break; + default: + return -EINVAL; + } + + wqe->common.rdma.remote_va_high =3D cpu_to_be32(wr->remote_addr >> 32); + wqe->common.rdma.remote_va_low =3D cpu_to_be32(wr->remote_addr); + wqe->common.rdma.remote_rkey =3D cpu_to_be32(wr->rkey); + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + 
+static int ionic_prep_atomic(struct ionic_qp *qp, + const struct ib_atomic_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + if (wr->wr.num_sge !=3D 1 || wr->wr.sg_list[0].length !=3D 8) + return -EINVAL; + + if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_RDMA_WRITE; + + switch (wr->wr.opcode) { + case IB_WR_ATOMIC_CMP_AND_SWP: + meta->ibop =3D IB_WC_COMP_SWAP; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_CS); + wqe->atomic.swap_add_high =3D cpu_to_be32(wr->swap >> 32); + wqe->atomic.swap_add_low =3D cpu_to_be32(wr->swap); + wqe->atomic.compare_high =3D cpu_to_be32(wr->compare_add >> 32); + wqe->atomic.compare_low =3D cpu_to_be32(wr->compare_add); + break; + case IB_WR_ATOMIC_FETCH_AND_ADD: + meta->ibop =3D IB_WC_FETCH_ADD; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_FA); + wqe->atomic.swap_add_high =3D cpu_to_be32(wr->compare_add >> 32); + wqe->atomic.swap_add_low =3D cpu_to_be32(wr->compare_add); + break; + default: + return -EINVAL; + } + + wqe->atomic.remote_va_high =3D cpu_to_be32(wr->remote_addr >> 32); + wqe->atomic.remote_va_low =3D cpu_to_be32(wr->remote_addr); + wqe->atomic.remote_rkey =3D cpu_to_be32(wr->rkey); + + wqe->base.num_sge_key =3D 1; + wqe->atomic.sge.va =3D cpu_to_be64(wr->wr.sg_list[0].addr); + wqe->atomic.sge.len =3D cpu_to_be32(8); + wqe->atomic.sge.lkey =3D cpu_to_be32(wr->wr.sg_list[0].lkey); + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + +static int ionic_prep_inv(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + if (wr->send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + meta =3D 
&qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, LOCAL_INV); + wqe->base.imm_data_key =3D cpu_to_be32(wr->ex.invalidate_rkey); + + meta->len =3D 0; + meta->ibop =3D IB_WC_LOCAL_INV; + + ionic_prep_base(qp, wr, meta, wqe); + + return 0; +} + +static int ionic_prep_reg(struct ionic_qp *qp, + const struct ib_reg_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_mr *mr =3D to_ionic_mr(wr->mr); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + __le64 dma_addr; + int flags; + + if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + /* must call ib_map_mr_sg before posting reg wr */ + if (!mr->buf.tbl_pages) + return -EINVAL; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + flags =3D to_ionic_mr_flags(wr->access); + + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, REG_MR); + wqe->base.num_sge_key =3D wr->key; + wqe->base.imm_data_key =3D cpu_to_be32(mr->ibmr.lkey); + wqe->reg_mr.va =3D cpu_to_be64(mr->ibmr.iova); + wqe->reg_mr.length =3D cpu_to_be64(mr->ibmr.length); + wqe->reg_mr.offset =3D ionic_pgtbl_off(&mr->buf, mr->ibmr.iova); + dma_addr =3D ionic_pgtbl_dma(&mr->buf, mr->ibmr.iova); + wqe->reg_mr.dma_addr =3D cpu_to_be64(le64_to_cpu(dma_addr)); + + wqe->reg_mr.map_count =3D cpu_to_be32(mr->buf.tbl_pages); + wqe->reg_mr.flags =3D cpu_to_be16(flags); + wqe->reg_mr.dir_size_log2 =3D 0; + wqe->reg_mr.page_size_log2 =3D order_base_2(mr->ibmr.page_size); + + meta->len =3D 0; + meta->ibop =3D IB_WC_REG_MR; + + ionic_prep_base(qp, &wr->wr, meta, wqe); + + return 0; +} + +static int ionic_prep_one_rc(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + int rc =3D 0; + + switch (wr->opcode) { + case IB_WR_SEND: + case IB_WR_SEND_WITH_IMM: + case 
IB_WR_SEND_WITH_INV: + rc =3D ionic_prep_send(qp, wr); + break; + case IB_WR_RDMA_READ: + case IB_WR_RDMA_WRITE: + case IB_WR_RDMA_WRITE_WITH_IMM: + rc =3D ionic_prep_rdma(qp, rdma_wr(wr)); + break; + case IB_WR_ATOMIC_CMP_AND_SWP: + case IB_WR_ATOMIC_FETCH_AND_ADD: + rc =3D ionic_prep_atomic(qp, atomic_wr(wr)); + break; + case IB_WR_LOCAL_INV: + rc =3D ionic_prep_inv(qp, wr); + break; + case IB_WR_REG_MR: + rc =3D ionic_prep_reg(qp, reg_wr(wr)); + break; + default: + ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode); + rc =3D -EINVAL; + } + + return rc; +} + +static int ionic_prep_one_ud(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + int rc =3D 0; + + switch (wr->opcode) { + case IB_WR_SEND: + case IB_WR_SEND_WITH_IMM: + rc =3D ionic_prep_send_ud(qp, ud_wr(wr)); + break; + default: + ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode); + rc =3D -EINVAL; + } + + return rc; +} + +static int ionic_prep_recv(struct ionic_qp *qp, + const struct ib_recv_wr *wr) +{ + struct ionic_rq_meta *meta; + struct ionic_v1_wqe *wqe; + s64 signed_len; + u32 mval; + + wqe =3D ionic_queue_at_prod(&qp->rq); + + /* if wqe is owned by device, caller can try posting again soon */ + if (wqe->base.flags & cpu_to_be16(IONIC_V1_FLAG_FENCE)) + return -EAGAIN; + + meta =3D qp->rq_meta_head; + if (unlikely(meta =3D=3D IONIC_META_LAST) || + unlikely(meta =3D=3D IONIC_META_POSTED)) + return -EIO; + + ionic_prep_rq_wqe(qp, wqe); + + mval =3D ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, qp->rq_spec, + false); + signed_len =3D ionic_prep_pld(wqe, &wqe->recv.pld, + qp->rq_spec, mval, + wr->sg_list, wr->num_sge); + if (signed_len < 0) + return signed_len; + + meta->wrid =3D wr->wr_id; + + wqe->base.wqe_id =3D meta - qp->rq_meta; + wqe->base.num_sge_key =3D wr->num_sge; + + /* total length for recv goes in base imm_data_key */ + wqe->base.imm_data_key =3D cpu_to_be32(signed_len); + + ionic_queue_produce(&qp->rq); + + 
qp->rq_meta_head =3D meta->next; + meta->next =3D IONIC_META_POSTED; + + return 0; +} + +static int ionic_post_send_common(struct ionic_ibdev *dev, + struct ionic_vcq *vcq, + struct ionic_cq *cq, + struct ionic_qp *qp, + const struct ib_send_wr *wr, + const struct ib_send_wr **bad) +{ + unsigned long irqflags; + bool notify =3D false; + int spend, rc =3D 0; + + if (!bad) + return -EINVAL; + + if (!qp->has_sq) { + *bad =3D wr; + return -EINVAL; + } + + if (qp->state < IB_QPS_RTS) { + *bad =3D wr; + return -EINVAL; + } + + spin_lock_irqsave(&qp->sq_lock, irqflags); + + while (wr) { + if (ionic_queue_full(&qp->sq)) { + ibdev_dbg(&dev->ibdev, "queue full"); + rc =3D -ENOMEM; + goto out; + } + + if (qp->ibqp.qp_type =3D=3D IB_QPT_UD || + qp->ibqp.qp_type =3D=3D IB_QPT_GSI) + rc =3D ionic_prep_one_ud(qp, wr); + else + rc =3D ionic_prep_one_rc(qp, wr); + if (rc) + goto out; + + wr =3D wr->next; + } + +out: + spin_unlock_irqrestore(&qp->sq_lock, irqflags); + + spin_lock_irqsave(&cq->lock, irqflags); + spin_lock(&qp->sq_lock); + + if (likely(qp->sq.prod !=3D qp->sq_old_prod)) { + /* ring cq doorbell just in time */ + spend =3D (qp->sq.prod - qp->sq_old_prod) & qp->sq.mask; + ionic_reserve_cq(dev, cq, spend); + + qp->sq_old_prod =3D qp->sq.prod; + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.sq_qtype, + ionic_queue_dbell_val(&qp->sq)); + } + + if (qp->sq_flush) { + notify =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_sq, &cq->flush_sq); + } + + spin_unlock(&qp->sq_lock); + spin_unlock_irqrestore(&cq->lock, irqflags); + + if (notify && vcq->ibcq.comp_handler) + vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context); + + *bad =3D wr; + return rc; +} + +static int ionic_post_recv_common(struct ionic_ibdev *dev, + struct ionic_vcq *vcq, + struct ionic_cq *cq, + struct ionic_qp *qp, + const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad) +{ + unsigned long irqflags; + bool notify =3D false; + int spend, rc =3D 0; + + if (!bad) + return -EINVAL; 
+ + if (!qp->has_rq) { + *bad =3D wr; + return -EINVAL; + } + + if (qp->state < IB_QPS_INIT) { + *bad =3D wr; + return -EINVAL; + } + + spin_lock_irqsave(&qp->rq_lock, irqflags); + + while (wr) { + if (ionic_queue_full(&qp->rq)) { + ibdev_dbg(&dev->ibdev, "queue full"); + rc =3D -ENOMEM; + goto out; + } + + rc =3D ionic_prep_recv(qp, wr); + if (rc) + goto out; + + wr =3D wr->next; + } + +out: + if (!cq) { + spin_unlock_irqrestore(&qp->rq_lock, irqflags); + goto out_unlocked; + } + spin_unlock_irqrestore(&qp->rq_lock, irqflags); + + spin_lock_irqsave(&cq->lock, irqflags); + spin_lock(&qp->rq_lock); + + if (likely(qp->rq.prod !=3D qp->rq_old_prod)) { + /* ring cq doorbell just in time */ + spend =3D (qp->rq.prod - qp->rq_old_prod) & qp->rq.mask; + ionic_reserve_cq(dev, cq, spend); + + qp->rq_old_prod =3D qp->rq.prod; + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.rq_qtype, + ionic_queue_dbell_val(&qp->rq)); + } + + if (qp->rq_flush) { + notify =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_rq, &cq->flush_rq); + } + + spin_unlock(&qp->rq_lock); + spin_unlock_irqrestore(&cq->lock, irqflags); + + if (notify && vcq->ibcq.comp_handler) + vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context); + +out_unlocked: + *bad =3D wr; + return rc; +} + +int ionic_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, + const struct ib_send_wr **bad) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibqp->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibqp->send_cq); + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + struct ionic_cq *cq =3D + to_ionic_vcq_cq(ibqp->send_cq, qp->udma_idx); + + return ionic_post_send_common(dev, vcq, cq, qp, wr, bad); +} + +int ionic_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibqp->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibqp->recv_cq); + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + struct ionic_cq *cq =3D + 
to_ionic_vcq_cq(ibqp->recv_cq, qp->udma_idx); + + return ionic_post_recv_common(dev, vcq, cq, qp, wr, bad); +} diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw= /ionic/ionic_fw.h index 8c1c0a07c527..d48ee000f334 100644 --- a/drivers/infiniband/hw/ionic/ionic_fw.h +++ b/drivers/infiniband/hw/ionic/ionic_fw.h @@ -163,6 +163,61 @@ static inline int to_ionic_qp_flags(int access, bool s= qd_notify, return flags; } =20 +/* cqe non-admin status indicated in status_length field when err bit is s= et */ +enum ionic_status { + IONIC_STS_OK, + IONIC_STS_LOCAL_LEN_ERR, + IONIC_STS_LOCAL_QP_OPER_ERR, + IONIC_STS_LOCAL_PROT_ERR, + IONIC_STS_WQE_FLUSHED_ERR, + IONIC_STS_MEM_MGMT_OPER_ERR, + IONIC_STS_BAD_RESP_ERR, + IONIC_STS_LOCAL_ACC_ERR, + IONIC_STS_REMOTE_INV_REQ_ERR, + IONIC_STS_REMOTE_ACC_ERR, + IONIC_STS_REMOTE_OPER_ERR, + IONIC_STS_RETRY_EXCEEDED, + IONIC_STS_RNR_RETRY_EXCEEDED, + IONIC_STS_XRC_VIO_ERR, + IONIC_STS_LOCAL_SGL_INV_ERR, +}; + +static inline int ionic_to_ib_status(int sts) +{ + switch (sts) { + case IONIC_STS_OK: + return IB_WC_SUCCESS; + case IONIC_STS_LOCAL_LEN_ERR: + return IB_WC_LOC_LEN_ERR; + case IONIC_STS_LOCAL_QP_OPER_ERR: + case IONIC_STS_LOCAL_SGL_INV_ERR: + return IB_WC_LOC_QP_OP_ERR; + case IONIC_STS_LOCAL_PROT_ERR: + return IB_WC_LOC_PROT_ERR; + case IONIC_STS_WQE_FLUSHED_ERR: + return IB_WC_WR_FLUSH_ERR; + case IONIC_STS_MEM_MGMT_OPER_ERR: + return IB_WC_MW_BIND_ERR; + case IONIC_STS_BAD_RESP_ERR: + return IB_WC_BAD_RESP_ERR; + case IONIC_STS_LOCAL_ACC_ERR: + return IB_WC_LOC_ACCESS_ERR; + case IONIC_STS_REMOTE_INV_REQ_ERR: + return IB_WC_REM_INV_REQ_ERR; + case IONIC_STS_REMOTE_ACC_ERR: + return IB_WC_REM_ACCESS_ERR; + case IONIC_STS_REMOTE_OPER_ERR: + return IB_WC_REM_OP_ERR; + case IONIC_STS_RETRY_EXCEEDED: + return IB_WC_RETRY_EXC_ERR; + case IONIC_STS_RNR_RETRY_EXCEEDED: + return IB_WC_RNR_RETRY_EXC_ERR; + case IONIC_STS_XRC_VIO_ERR: + default: + return IB_WC_GENERAL_ERR; + } +} + /* admin queue qp type */ enum 
ionic_qp_type { IONIC_QPT_RC, @@ -294,6 +349,24 @@ struct ionic_v1_cqe { __be32 qid_type_flags; }; =20 +/* bits for cqe recv */ +enum ionic_v1_cqe_src_qpn_bits { + IONIC_V1_CQE_RECV_QPN_MASK =3D 0xffffff, + IONIC_V1_CQE_RECV_OP_SHIFT =3D 24, + + /* MASK could be 0x3, but need 0x1f for makeshift values: + * OP_TYPE_RDMA_OPER_WITH_IMM, OP_TYPE_SEND_RCVD + */ + IONIC_V1_CQE_RECV_OP_MASK =3D 0x1f, + IONIC_V1_CQE_RECV_OP_SEND =3D 0, + IONIC_V1_CQE_RECV_OP_SEND_INV =3D 1, + IONIC_V1_CQE_RECV_OP_SEND_IMM =3D 2, + IONIC_V1_CQE_RECV_OP_RDMA_IMM =3D 3, + + IONIC_V1_CQE_RECV_IS_IPV4 =3D BIT(7 + IONIC_V1_CQE_RECV_OP_SHIFT), + IONIC_V1_CQE_RECV_IS_VLAN =3D BIT(6 + IONIC_V1_CQE_RECV_OP_SHIFT), +}; + /* bits for cqe qid_type_flags */ enum ionic_v1_cqe_qtf_bits { IONIC_V1_CQE_COLOR =3D BIT(0), @@ -318,6 +391,16 @@ static inline bool ionic_v1_cqe_error(struct ionic_v1_= cqe *cqe) return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR); } =20 +static inline bool ionic_v1_cqe_recv_is_ipv4(struct ionic_v1_cqe *cqe) +{ + return cqe->recv.src_qpn_op & cpu_to_be32(IONIC_V1_CQE_RECV_IS_IPV4); +} + +static inline bool ionic_v1_cqe_recv_is_vlan(struct ionic_v1_cqe *cqe) +{ + return cqe->recv.src_qpn_op & cpu_to_be32(IONIC_V1_CQE_RECV_IS_VLAN); +} + static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe) { cqe->qid_type_flags |=3D cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT); @@ -444,6 +527,28 @@ enum ionic_v1_op { IONIC_V1_SPEC_FIRST_SGE =3D 2, }; =20 +/* queue pair v2 send opcodes */ +enum ionic_v2_op { + IONIC_V2_OPSL_OUT =3D 0x20, + IONIC_V2_OPSL_IMM =3D 0x40, + IONIC_V2_OPSL_INV =3D 0x80, + + IONIC_V2_OP_SEND =3D 0x0 | IONIC_V2_OPSL_OUT, + IONIC_V2_OP_SEND_IMM =3D IONIC_V2_OP_SEND | IONIC_V2_OPSL_IMM, + IONIC_V2_OP_SEND_INV =3D IONIC_V2_OP_SEND | IONIC_V2_OPSL_INV, + + IONIC_V2_OP_RDMA_WRITE =3D 0x1 | IONIC_V2_OPSL_OUT, + IONIC_V2_OP_RDMA_WRITE_IMM =3D IONIC_V2_OP_RDMA_WRITE | IONIC_V2_OPSL_IMM, + + IONIC_V2_OP_RDMA_READ =3D 0x2, + + IONIC_V2_OP_ATOMIC_CS =3D 0x4, + 
IONIC_V2_OP_ATOMIC_FA =3D 0x5, + IONIC_V2_OP_REG_MR =3D 0x6, + IONIC_V2_OP_LOCAL_INV =3D 0x7, + IONIC_V2_OP_BIND_MW =3D 0x8, +}; + static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data, int spec, bool expdb) { diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband= /hw/ionic/ionic_ibdev.c index 6833abbfb1dc..ab080d945a13 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.c +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -45,6 +45,11 @@ static const struct ib_device_ops ionic_dev_ops =3D { .query_qp =3D ionic_query_qp, .destroy_qp =3D ionic_destroy_qp, =20 + .post_send =3D ionic_post_send, + .post_recv =3D ionic_post_recv, + .poll_cq =3D ionic_poll_cq, + .req_notify_cq =3D ionic_req_notify_cq, + INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx), INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd), INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah), diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h index 94ef76aaca43..bbec041b378b 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.h +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -387,6 +387,11 @@ static inline u32 ionic_obj_dbid(struct ionic_ibdev *d= ev, return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj)); } =20 +static inline bool ionic_ibop_is_local(enum ib_wr_opcode op) +{ + return op =3D=3D IB_WR_LOCAL_INV || op =3D=3D IB_WR_REG_MR; +} + static inline void ionic_qp_complete(struct kref *kref) { struct ionic_qp *qp =3D container_of(kref, struct ionic_qp, qp_kref); @@ -462,8 +467,17 @@ int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_at= tr *attr, int mask, struct ib_qp_init_attr *init_attr); int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata); =20 +/* ionic_datapath.c */ +int ionic_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, + const struct ib_send_wr **bad); +int ionic_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad); +int ionic_poll_cq(struct ib_cq *ibcq, 
int nwc, struct ib_wc *wc); +int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags); + /* ionic_pgtbl.c */ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); +__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va); int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma); int ionic_pgtbl_init(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf, diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband= /hw/ionic/ionic_pgtbl.c index a8eb73be6f86..e74db73c9246 100644 --- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c +++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c @@ -26,6 +26,17 @@ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va) return cpu_to_le64(dma + (va & pg_mask)); } =20 +__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va) +{ + if (buf->tbl_pages > 1) { + u64 pg_mask =3D BIT_ULL(buf->page_size_log2) - 1; + + return cpu_to_be64(va & pg_mask); + } + + return 0; +} + int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma) { if (unlikely(buf->tbl_pages =3D=3D buf->tbl_limit)) --=20 2.43.0 From nobody Fri Oct 3 08:51:12 2025
cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 3 Sep 2025 01:18:14 -0500 Received: from SATLEXMB04.amd.com (10.181.40.145) by satlexmb10.amd.com (10.181.42.219) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.2.1748.10; Tue, 2 Sep 2025 23:18:14 -0700 Received: from xhdabhijitg41x.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2507.39 via Frontend Transport; Wed, 3 Sep 2025 01:18:10 -0500 From: Abhijit Gangurde To: , , , , , , , , CC: , , , , , , , Abhijit Gangurde , Andrew Boyer Subject: [PATCH v6 12/14] RDMA/ionic: Register device ops for miscellaneous functionality Date: Wed, 3 Sep 2025 11:46:04 +0530 Message-ID: <20250903061606.4139957-13-abhijit.gangurde@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com> References: <20250903061606.4139957-1-abhijit.gangurde@amd.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS2PEPF00003441:EE_|PH8PR12MB6868:EE_ X-MS-Office365-Filtering-Correlation-Id: 76ce20e8-1d02-44dc-9df4-08ddeab1ab33 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|7416014|36860700013|1800799024|82310400026|376014; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?WVrBrW1hXAYAjx0Tc7r4f4B2GQkHjeKYO/7yTef2lv+LgQnmKUetpGhTMmyn?= =?us-ascii?Q?I7BHILjaW8gzXB/crxSOFwzFXuAoIcNAeRP0rNHkyXgj5K1ENCbtwvgW1NXn?= =?us-ascii?Q?gD/FIz6sD1/3zLrbTjvef/5/6Zz2csXOhoiZeXN9XqwhJfXU515m9Uky1TA0?= =?us-ascii?Q?DeIBoGzucMQ8nRWP//7ddxwpfISmjzkWgXys90XWqQFu2ZBkd/wu6XdykmZC?= =?us-ascii?Q?K4smpVWwXWcN4bqWgr6+kkgf7IZnrQ/5StGzGGHvv5h/SkDKrwIyeW+xPjrp?= =?us-ascii?Q?pCCwB2KMUik9PRAtkjDwTgxspvNOshOWeIq2FFLjjTGhssweCbVq6vT4F0cH?= 
=?us-ascii?Q?Zw2aN0PoDxgeJXSgRK0mlTfqZM0QbnhxX8IU9UgRrqSwE9+F+WVX4pGOAw6e?= =?us-ascii?Q?d9Uw9acELOkqv+dxvRj/IsRoQ0dK/bqCYQQliMH0Jp0/gmRRzjalhzczyG1b?= =?us-ascii?Q?bv8AYLxNTCga6HCgU4t6hhRCaqpaEW6upfK2gvdhqUP9EMavxOvKeJybkrLG?= =?us-ascii?Q?zDrp/wkFOYL0ClLaulgZKEgP4gwdqC7vBC3zwOaUftKP1dU9myk75jb6cADN?= =?us-ascii?Q?AOhoDig9QPeai/UO2ovygnwAIlWy72J1Gytu1hqFAvoD6LG/0MkuljUFjcvt?= =?us-ascii?Q?Cw0rYL7k0tlFilogSSofIRTZzHe8aEDakBZFnObAhrM9DpRPrkjeeFQ7wxMF?= =?us-ascii?Q?fOCi5eE/Tu7QeVclP9p678njaDFwkIblTQ7L6sQqYnQEGI/Be55LPFw8hVmc?= =?us-ascii?Q?Oh/3ash/Lf581TbEoTDN78WmXhp+x1aizGI66aYI5juMIT3UzCZcXa+B3Zxc?= =?us-ascii?Q?ocVoP1EvRuH1JIKNvUmCRZcmRRrfKsBlofQ3wb6ISzhsf/sqWx3UKmnv00b9?= =?us-ascii?Q?B6rPgiL6L0QGPaJ2QVaTmOl3iKml2nmn6gAssLJjpee029T7OZUpZwVyqdHk?= =?us-ascii?Q?ACtjPd45KHgeYCX4xHaS2xF2DN4Qm3DQWGAY+u/WZN+tebX/1gbcXaY5vAOA?= =?us-ascii?Q?LUwgEBxuCBYaO1IMa1qt4tPuw3gHmfCe9N24dGywZYpKpIiiFHy32FfjpTLq?= =?us-ascii?Q?3tx02ie+Fem6ANH4aMNNZgszfeUC141We2ihSLL63bRHcAqQfmg9yH46+hR+?= =?us-ascii?Q?ixreImsD65rX1KE+veuAL3CRMgslvzKW0znG3dLoaNGwAOkPa5o/wgdK7lI0?= =?us-ascii?Q?nQBWoN0DKENcqkEScjNqFFagTogCHFq7PRwIKSc79i/HV/i4HJpnPCMN3QhF?= =?us-ascii?Q?RvYNiMAdbo8S+HgV7GLYkVDRepF1YOeb/fjMDldpJxYkipxlgizq0mDtyM7U?= =?us-ascii?Q?auj8z2OV0ctOLWklBod5imNwlD/NvulS4rf2DzQNOw4+AOKuUSP9tzVj/cF6?= =?us-ascii?Q?ObEOSv7q79nN/RErLOsxSdYbMyyoqDPN2uvyF44x2D2VymV7nnC7dRjfWnel?= =?us-ascii?Q?3/9MqxQgBuBbfn3pVrhM3ntEuFVBbZn4EpFqcTX2iaYDu7mR66vFrYmAj9Yc?= =?us-ascii?Q?45PpmDm95IXqDnJ/YO8ZRF/psmzSvK94a0PL?= X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(7416014)(36860700013)(1800799024)(82310400026)(376014);DIR:OUT;SFP:1101; X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Sep 2025 06:18:15.8072 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 76ce20e8-1d02-44dc-9df4-08ddeab1ab33 X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d 
Content-Type: text/plain; charset="utf-8"

Implement ibdev ops for device and port information.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v5->v6
- Removed ib callback for get_vector_affinity
v2->v3
- Registered main ib ops at once
- Removed uverbs_cmd_mask

 drivers/infiniband/hw/ionic/ionic_ibdev.c   | 200 ++++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.h   |   5 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.c |  10 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.h |   2 +
 4 files changed, 217 insertions(+)

diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index ab080d945a13..5f51873af350 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -3,7 +3,11 @@

 #include
 #include
+#include
+#include
 #include
+#include
+#include

 #include "ionic_ibdev.h"

@@ -15,6 +19,192 @@ MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
 MODULE_LICENSE("GPL");
 MODULE_IMPORT_NS("NET_IONIC");

+static int ionic_query_device(struct ib_device *ibdev,
+			      struct ib_device_attr *attr,
+			      struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	struct net_device *ndev;
+
+	ndev = ib_device_get_netdev(ibdev, 1);
+	addrconf_ifid_eui48((u8 *)&attr->sys_image_guid, ndev);
+	dev_put(ndev);
+	attr->max_mr_size = dev->lif_cfg.npts_per_lif * PAGE_SIZE / 2;
+	attr->page_size_cap = dev->lif_cfg.page_size_supported;
+
+	attr->vendor_id = to_pci_dev(dev->lif_cfg.hwdev)->vendor;
+	attr->vendor_part_id = to_pci_dev(dev->lif_cfg.hwdev)->device;
+
+	attr->hw_ver = ionic_lif_asic_rev(dev->lif_cfg.lif);
+	attr->fw_ver = 0;
+	attr->max_qp = dev->lif_cfg.qp_count;
+	attr->max_qp_wr = IONIC_MAX_DEPTH;
+	attr->device_cap_flags =
+		IB_DEVICE_MEM_WINDOW |
+		IB_DEVICE_MEM_MGT_EXTENSIONS |
+		IB_DEVICE_MEM_WINDOW_TYPE_2B |
+		0;
+	attr->max_send_sge =
+		min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_recv_sge =
+		min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_sge_rd = attr->max_send_sge;
+	attr->max_cq = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	attr->max_cqe = IONIC_MAX_CQ_DEPTH - IONIC_CQ_GRACE;
+	attr->max_mr = dev->lif_cfg.nmrs_per_lif;
+	attr->max_pd = IONIC_MAX_PD;
+	attr->max_qp_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_rd_atom = 0;
+	attr->max_res_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_qp_init_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_init_rd_atom = 0;
+	attr->atomic_cap = IB_ATOMIC_GLOB;
+	attr->masked_atomic_cap = IB_ATOMIC_GLOB;
+	attr->max_mw = dev->lif_cfg.nmrs_per_lif;
+	attr->max_mcast_grp = 0;
+	attr->max_mcast_qp_attach = 0;
+	attr->max_ah = dev->lif_cfg.nahs_per_lif;
+	attr->max_fast_reg_page_list_len = dev->lif_cfg.npts_per_lif / 2;
+	attr->max_pkeys = IONIC_PKEY_TBL_LEN;
+
+	return 0;
+}
+
+static int ionic_query_port(struct ib_device *ibdev, u32 port,
+			    struct ib_port_attr *attr)
+{
+	struct net_device *ndev;
+
+	if (port != 1)
+		return -EINVAL;
+
+	ndev = ib_device_get_netdev(ibdev, port);
+
+	if (netif_running(ndev) && netif_carrier_ok(ndev)) {
+		attr->state = IB_PORT_ACTIVE;
+		attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
+	} else if (netif_running(ndev)) {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
+	} else {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
+	}
+
+	attr->max_mtu = iboe_get_mtu(ndev->max_mtu);
+	attr->active_mtu = min(attr->max_mtu, iboe_get_mtu(ndev->mtu));
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->ip_gids = true;
+	attr->port_cap_flags = 0;
+	attr->max_msg_sz = 0x80000000;
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->max_vl_num = 1;
+	attr->subnet_prefix = 0xfe80000000000000ull;
+
+	dev_put(ndev);
+
+	return ib_get_eth_speed(ibdev, port,
+				&attr->active_speed,
+				&attr->active_width);
+}
+
+static enum rdma_link_layer ionic_get_link_layer(struct ib_device *ibdev,
+						 u32 port)
+{
+	return IB_LINK_LAYER_ETHERNET;
+}
+
+static int ionic_query_pkey(struct ib_device *ibdev, u32 port, u16 index,
+			    u16 *pkey)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	if (index != 0)
+		return -EINVAL;
+
+	*pkey = IB_DEFAULT_PKEY_FULL;
+
+	return 0;
+}
+
+static int ionic_modify_device(struct ib_device *ibdev, int mask,
+			       struct ib_device_modify *attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (mask & ~IB_DEVICE_MODIFY_NODE_DESC)
+		return -EOPNOTSUPP;
+
+	if (mask & IB_DEVICE_MODIFY_NODE_DESC)
+		memcpy(dev->ibdev.node_desc, attr->node_desc,
+		       IB_DEVICE_NODE_DESC_MAX);
+
+	return 0;
+}
+
+static int ionic_get_port_immutable(struct ib_device *ibdev, u32 port,
+				    struct ib_port_immutable *attr)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	attr->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP;
+
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->max_mad_size = IB_MGMT_MAD_SIZE;
+
+	return 0;
+}
+
+static void ionic_get_dev_fw_str(struct ib_device *ibdev, char *str)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	ionic_lif_fw_version(dev->lif_cfg.lif, str, IB_FW_VERSION_NAME_MAX);
+}
+
+static ssize_t hw_rev_show(struct device *device, struct device_attribute *attr,
+			   char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "0x%x\n", ionic_lif_asic_rev(dev->lif_cfg.lif));
+}
+static DEVICE_ATTR_RO(hw_rev);
+
+static ssize_t hca_type_show(struct device *device,
+			     struct device_attribute *attr, char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "%s\n", dev->ibdev.node_desc);
+}
+static DEVICE_ATTR_RO(hca_type);
+
+static struct attribute *ionic_rdma_attributes[] = {
+	&dev_attr_hw_rev.attr,
+	&dev_attr_hca_type.attr,
+	NULL
+};
+
+static const struct attribute_group ionic_rdma_attr_group = {
+	.attrs = ionic_rdma_attributes,
+};
+
+static void ionic_disassociate_ucontext(struct ib_ucontext *ibcontext)
+{
+	/*
+	 * Dummy define disassociate_ucontext so that it does not
+	 * wait for user context before cleaning up hw resources.
+	 */
+}
+
 static const struct ib_device_ops ionic_dev_ops = {
 	.owner = THIS_MODULE,
 	.driver_id = RDMA_DRIVER_IONIC,
@@ -50,6 +240,16 @@ static const struct ib_device_ops ionic_dev_ops = {
 	.poll_cq = ionic_poll_cq,
 	.req_notify_cq = ionic_req_notify_cq,

+	.query_device = ionic_query_device,
+	.query_port = ionic_query_port,
+	.get_link_layer = ionic_get_link_layer,
+	.query_pkey = ionic_query_pkey,
+	.modify_device = ionic_modify_device,
+	.get_port_immutable = ionic_get_port_immutable,
+	.get_dev_fw_str = ionic_get_dev_fw_str,
+	.device_group = &ionic_rdma_attr_group,
+	.disassociate_ucontext = ionic_disassociate_ucontext,
+
 	INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx),
 	INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd),
 	INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah),
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index bbec041b378b..c750e049fecc 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -26,6 +26,11 @@
 #define IONIC_AQ_COUNT 4
 #define IONIC_EQ_ISR_BUDGET 10
 #define IONIC_EQ_WORK_BUDGET 1000
+#define IONIC_MAX_RD_ATOM 16
+#define IONIC_PKEY_TBL_LEN 1
+#define IONIC_GID_TBL_LEN 256
+
+#define IONIC_SPEC_HIGH 8
 #define IONIC_MAX_PD 1024
 #define IONIC_SPEC_HIGH 8
 #define IONIC_SQCMB_ORDER 5
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
index 8d0d209227e9..f3cd281c3a2f 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
@@ -99,3 +99,13 @@ struct net_device *ionic_lif_netdev(struct ionic_lif *lif)
 	dev_hold(netdev);
 	return netdev;
 }
+
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len)
+{
+	strscpy(str, lif->ionic->idev.dev_info.fw_version, len);
+}
+
+u8 ionic_lif_asic_rev(struct ionic_lif *lif)
+{
+	return lif->ionic->idev.dev_info.asic_rev;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
index 5b04b8a9937e..20853429f623 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
@@ -60,5 +60,7 @@ struct ionic_lif_cfg {

 void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
 struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len);
+u8 ionic_lif_asic_rev(struct ionic_lif *lif);

 #endif /* _IONIC_LIF_CFG_H_ */
-- 
2.43.0

From nobody Fri Oct 3 08:51:12 2025
From: Abhijit Gangurde
To: , , , , , , , ,
CC: , , , , , , , Abhijit Gangurde
Subject: [PATCH v6 13/14] RDMA/ionic: Implement device stats ops
Date: Wed, 3 Sep 2025 11:46:05 +0530
Message-ID: <20250903061606.4139957-14-abhijit.gangurde@amd.com>
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement device stats operations for hw stats and qp stats.

Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v2->v3
- Fixed sparse checks

 drivers/infiniband/hw/ionic/ionic_fw.h       |  43 ++
 drivers/infiniband/hw/ionic/ionic_hw_stats.c | 484 +++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.c    |   4 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h    |  23 +
 4 files changed, 554 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_hw_stats.c

diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index d48ee000f334..8575a374808d 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -659,6 +659,17 @@ static inline int ionic_v1_use_spec_sge(int min_sge, int spec)
 	return spec;
 }

+struct ionic_admin_stats_hdr {
+	__le64 dma_addr;
+	__le32 length;
+	__le32 id_ver;
+	__u8 type_state;
+} __packed;
+
+#define IONIC_ADMIN_STATS_HDRS_IN_V1_LEN 17
+static_assert(sizeof(struct ionic_admin_stats_hdr) ==
+	      IONIC_ADMIN_STATS_HDRS_IN_V1_LEN);
+
 struct ionic_admin_create_ah {
 	__le64 dma_addr;
 	__le32 length;
@@ -837,6 +848,7 @@ struct ionic_v1_admin_wqe {
 	__le16 len;

 	union {
+		struct ionic_admin_stats_hdr stats;
 		struct ionic_admin_create_ah create_ah;
 		struct ionic_admin_destroy_ah destroy_ah;
 		struct ionic_admin_query_ah query_ah;
@@ -983,4 +995,35 @@ static inline u32 ionic_v1_eqe_evt_qid(u32 evt)
 	return evt >> IONIC_V1_EQE_QID_SHIFT;
 }

+enum ionic_v1_stat_bits {
+	IONIC_V1_STAT_TYPE_SHIFT = 28,
+	IONIC_V1_STAT_TYPE_NONE = 0,
+	IONIC_V1_STAT_TYPE_8 = 1,
+	IONIC_V1_STAT_TYPE_LE16 = 2,
+	IONIC_V1_STAT_TYPE_LE32 = 3,
+	IONIC_V1_STAT_TYPE_LE64 = 4,
+	IONIC_V1_STAT_TYPE_BE16 = 5,
+	IONIC_V1_STAT_TYPE_BE32 = 6,
+	IONIC_V1_STAT_TYPE_BE64 = 7,
+	IONIC_V1_STAT_OFF_MASK = BIT(IONIC_V1_STAT_TYPE_SHIFT) - 1,
+};
+
+struct ionic_v1_stat {
+	union {
+		__be32 be_type_off;
+		u32 type_off;
+	};
+	char name[28];
+};
+
+static inline int ionic_v1_stat_type(struct ionic_v1_stat *hdr)
+{
+	return hdr->type_off >> IONIC_V1_STAT_TYPE_SHIFT;
+}
+
+static inline unsigned int ionic_v1_stat_off(struct ionic_v1_stat *hdr)
+{
+	return hdr->type_off & IONIC_V1_STAT_OFF_MASK;
+}
+
 #endif /* _IONIC_FW_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_hw_stats.c b/drivers/infiniband/hw/ionic/ionic_hw_stats.c
new file mode 100644
index 000000000000..244a80dde08f
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_hw_stats.c
@@ -0,0 +1,484 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+static int ionic_v1_stat_normalize(struct ionic_v1_stat *hw_stats,
+				   int hw_stats_count)
+{
+	int hw_stat_i;
+
+	for (hw_stat_i = 0; hw_stat_i < hw_stats_count; ++hw_stat_i) {
+		struct ionic_v1_stat *stat = &hw_stats[hw_stat_i];
+
+		stat->type_off = be32_to_cpu(stat->be_type_off);
+		stat->name[sizeof(stat->name) - 1] = 0;
+		if (ionic_v1_stat_type(stat) == IONIC_V1_STAT_TYPE_NONE)
+			break;
+	}
+
+	return hw_stat_i;
+}
+
+static void ionic_fill_stats_desc(struct rdma_stat_desc *hw_stats_hdrs,
+				  struct ionic_v1_stat *hw_stats,
+				  int hw_stats_count)
+{
+	int hw_stat_i;
+
+	for (hw_stat_i = 0; hw_stat_i < hw_stats_count; ++hw_stat_i) {
+		struct ionic_v1_stat *stat = &hw_stats[hw_stat_i];
+
+		hw_stats_hdrs[hw_stat_i].name = stat->name;
+	}
+}
+
+static u64 ionic_v1_stat_val(struct ionic_v1_stat *stat,
+			     void *vals_buf, size_t vals_len)
+{
+	unsigned int off = ionic_v1_stat_off(stat);
+	int type = ionic_v1_stat_type(stat);
+
+#define __ionic_v1_stat_validate(__type)		\
+	((off + sizeof(__type) <= vals_len) &&		\
+	 (IS_ALIGNED(off, sizeof(__type))))
+
+	switch (type) {
+	case IONIC_V1_STAT_TYPE_8:
+		if (__ionic_v1_stat_validate(u8))
+			return *(u8 *)(vals_buf + off);
+		break;
+	case IONIC_V1_STAT_TYPE_LE16:
+		if (__ionic_v1_stat_validate(__le16))
+			return le16_to_cpu(*(__le16 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_LE32:
+		if (__ionic_v1_stat_validate(__le32))
+			return le32_to_cpu(*(__le32 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_LE64:
+		if (__ionic_v1_stat_validate(__le64))
+			return le64_to_cpu(*(__le64 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE16:
+		if (__ionic_v1_stat_validate(__be16))
+			return be16_to_cpu(*(__be16 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE32:
+		if (__ionic_v1_stat_validate(__be32))
+			return be32_to_cpu(*(__be32 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE64:
+		if (__ionic_v1_stat_validate(__be64))
+			return be64_to_cpu(*(__be64 *)(vals_buf + off));
+		break;
+	}
+
+	return ~0ull;
+#undef __ionic_v1_stat_validate
+}
+
+static int ionic_hw_stats_cmd(struct ionic_ibdev *dev,
+			      dma_addr_t dma, size_t len, int qid, int op)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = op,
+			.len = cpu_to_le16(IONIC_ADMIN_STATS_HDRS_IN_V1_LEN),
+			.cmd.stats = {
+				.dma_addr = cpu_to_le64(dma),
+				.length = cpu_to_le32(len),
+				.id_ver = cpu_to_le32(qid),
+			},
+		}
+	};
+
+	if (dev->lif_cfg.admin_opcodes <= op)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+
+	return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_INTERRUPT);
+}
+
+static int ionic_init_hw_stats(struct ionic_ibdev *dev)
+{
+	dma_addr_t hw_stats_dma;
+	int rc, hw_stats_count;
+
+	if (dev->hw_stats_hdrs)
+		return 0;
+
+	dev->hw_stats_count = 0;
+
+	/* buffer for current values from the device */
+	dev->hw_stats_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!dev->hw_stats_buf) {
+		rc = -ENOMEM;
+		goto err_buf;
+	}
+
+	/* buffer for names, sizes, offsets of values */
+	dev->hw_stats = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!dev->hw_stats) {
+		rc = -ENOMEM;
+		goto err_hw_stats;
+	}
+
+	/* request the names, sizes, offsets */
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE, 0,
+				IONIC_V1_ADMIN_STATS_HDRS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* normalize and count the number of hw_stats */
+	hw_stats_count =
+		ionic_v1_stat_normalize(dev->hw_stats,
+					PAGE_SIZE / sizeof(*dev->hw_stats));
+	if (!hw_stats_count) {
+		rc = -ENODATA;
+		goto err_dma;
+	}
+
+	dev->hw_stats_count = hw_stats_count;
+
+	/* alloc and init array of names, for alloc_hw_stats */
+	dev->hw_stats_hdrs = kcalloc(hw_stats_count,
+				     sizeof(*dev->hw_stats_hdrs),
+				     GFP_KERNEL);
+	if (!dev->hw_stats_hdrs) {
+		rc = -ENOMEM;
+		goto err_dma;
+	}
+
+	ionic_fill_stats_desc(dev->hw_stats_hdrs, dev->hw_stats,
+			      hw_stats_count);
+
+	return 0;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	kfree(dev->hw_stats);
+err_hw_stats:
+	kfree(dev->hw_stats_buf);
+err_buf:
+	dev->hw_stats_count = 0;
+	dev->hw_stats = NULL;
+	dev->hw_stats_buf = NULL;
+	dev->hw_stats_hdrs = NULL;
+	return rc;
+}
+
+static struct rdma_hw_stats *ionic_alloc_hw_stats(struct ib_device *ibdev,
+						  u32 port)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (port != 1)
+		return NULL;
+
+	return rdma_alloc_hw_stats_struct(dev->hw_stats_hdrs,
+					  dev->hw_stats_count,
+					  RDMA_HW_STATS_DEFAULT_LIFESPAN);
+}
+
+static int ionic_get_hw_stats(struct ib_device *ibdev,
+			      struct rdma_hw_stats *hw_stats,
+			      u32 port, int index)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	dma_addr_t hw_stats_dma;
+	int rc, hw_stat_i;
+
+	if (port != 1)
+		return -EINVAL;
+
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats_buf,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE,
+				0, IONIC_V1_ADMIN_STATS_VALS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+
+	for (hw_stat_i = 0; hw_stat_i < dev->hw_stats_count; ++hw_stat_i)
+		hw_stats->value[hw_stat_i] =
+			ionic_v1_stat_val(&dev->hw_stats[hw_stat_i],
+					  dev->hw_stats_buf, PAGE_SIZE);
+
+	return hw_stat_i;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	return rc;
+}
+
+static struct rdma_hw_stats *
+ionic_counter_alloc_stats(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+	int err;
+
+	cntr = kzalloc(sizeof(*cntr), GFP_KERNEL);
+	if (!cntr)
+		return NULL;
+
+	/* buffer for current values from the device */
+	cntr->vals = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!cntr->vals)
+		goto err_vals;
+
+	err = xa_alloc(&dev->counter_stats->xa_counters, &counter->id,
+		       cntr,
+		       XA_LIMIT(0, IONIC_MAX_QPID),
+		       GFP_KERNEL);
+	if (err)
+		goto err_xa;
+
+	INIT_LIST_HEAD(&cntr->qp_list);
+
+	return rdma_alloc_hw_stats_struct(dev->counter_stats->stats_hdrs,
+					  dev->counter_stats->queue_stats_count,
+					  RDMA_HW_STATS_DEFAULT_LIFESPAN);
+err_xa:
+	kfree(cntr->vals);
+err_vals:
+	kfree(cntr);
+
+	return NULL;
+}
+
+static int ionic_counter_dealloc(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+
+	cntr = xa_erase(&dev->counter_stats->xa_counters, counter->id);
+	if (!cntr)
+		return -EINVAL;
+
+	kfree(cntr->vals);
+	kfree(cntr);
+
+	return 0;
+}
+
+static int ionic_counter_bind_qp(struct rdma_counter *counter,
+				 struct ib_qp *ibqp,
+				 u32 port)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_counter *cntr;
+
+	cntr = xa_load(&dev->counter_stats->xa_counters, counter->id);
+	if (!cntr)
+		return -EINVAL;
+
+	list_add_tail(&qp->qp_list_counter, &cntr->qp_list);
+	ibqp->counter = counter;
+
+	return 0;
+}
+
+static int ionic_counter_unbind_qp(struct ib_qp *ibqp, u32 port)
+{
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+
+	if (ibqp->counter) {
+		list_del(&qp->qp_list_counter);
+		ibqp->counter = NULL;
+	}
+
+	return 0;
+}
+
+static int ionic_get_qp_stats(struct ib_device *ibdev,
+			      struct rdma_hw_stats *hw_stats,
+			      u32 counter_id)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	struct ionic_counter_stats *cs;
+	struct ionic_counter *cntr;
+	dma_addr_t hw_stats_dma;
+	struct ionic_qp *qp;
+	int rc, stat_i = 0;
+
+	cs = dev->counter_stats;
+	cntr = xa_load(&cs->xa_counters, counter_id);
+	if (!cntr)
+		return -EINVAL;
+
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, cntr->vals,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		return rc;
+
+	memset(hw_stats->value, 0, sizeof(u64) * hw_stats->num_counters);
+
+	list_for_each_entry(qp, &cntr->qp_list, qp_list_counter) {
+		rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE,
+					qp->qpid,
+					IONIC_V1_ADMIN_QP_STATS_VALS);
+		if (rc)
+			goto err_cmd;
+
+		for (stat_i = 0; stat_i < cs->queue_stats_count; ++stat_i)
+			hw_stats->value[stat_i] +=
+				ionic_v1_stat_val(&cs->hdr[stat_i],
+						  cntr->vals,
+						  PAGE_SIZE);
+	}
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+	return stat_i;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	return rc;
+}
+
+static int ionic_counter_update_stats(struct rdma_counter *counter)
+{
+	return ionic_get_qp_stats(counter->device, counter->stats, counter->id);
+}
+
+static int ionic_alloc_counters(struct ionic_ibdev *dev)
+{
+	struct ionic_counter_stats *cs = dev->counter_stats;
+	int rc, hw_stats_count;
+	dma_addr_t hdr_dma;
+
+	/* buffer for names, sizes, offsets of values */
+	cs->hdr = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!cs->hdr)
+		return -ENOMEM;
+
+	hdr_dma = dma_map_single(dev->lif_cfg.hwdev, cs->hdr,
+				 PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hdr_dma, PAGE_SIZE, 0,
+				IONIC_V1_ADMIN_QP_STATS_HDRS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* normalize and count the number of hw_stats */
+	hw_stats_count = ionic_v1_stat_normalize(cs->hdr,
+						 PAGE_SIZE / sizeof(*cs->hdr));
+	if (!hw_stats_count) {
+		rc = -ENODATA;
+		goto err_dma;
+	}
+
+	cs->queue_stats_count = hw_stats_count;
+
+	/* alloc and init array of names */
+	cs->stats_hdrs = kcalloc(hw_stats_count, sizeof(*cs->stats_hdrs),
+				 GFP_KERNEL);
+	if (!cs->stats_hdrs) {
+		rc = -ENOMEM;
+		goto err_dma;
+	}
+
+	ionic_fill_stats_desc(cs->stats_hdrs, cs->hdr, hw_stats_count);
+
+	return 0;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	kfree(cs->hdr);
+
+	return rc;
+}
+
+static const struct ib_device_ops ionic_hw_stats_ops = {
+	.driver_id = RDMA_DRIVER_IONIC,
+	.alloc_hw_port_stats = ionic_alloc_hw_stats,
+	.get_hw_stats = ionic_get_hw_stats,
+};
+
+static const struct ib_device_ops ionic_counter_stats_ops = {
+	.counter_alloc_stats = ionic_counter_alloc_stats,
+	.counter_dealloc = ionic_counter_dealloc,
+	.counter_bind_qp = ionic_counter_bind_qp,
+	.counter_unbind_qp = ionic_counter_unbind_qp,
+	.counter_update_stats = ionic_counter_update_stats,
+};
+
+void ionic_stats_init(struct ionic_ibdev *dev)
+{
+	u16 stats_type = dev->lif_cfg.stats_type;
+	int rc;
+
+	if (stats_type & IONIC_LIF_RDMA_STAT_GLOBAL) {
+		rc = ionic_init_hw_stats(dev);
+		if (rc)
+			ibdev_dbg(&dev->ibdev, "Failed to init hw stats\n");
+		else
+			ib_set_device_ops(&dev->ibdev, &ionic_hw_stats_ops);
+	}
+
+	if (stats_type & IONIC_LIF_RDMA_STAT_QP) {
+		dev->counter_stats = kzalloc(sizeof(*dev->counter_stats),
+					     GFP_KERNEL);
+		if (!dev->counter_stats)
+			return;
+
+		rc = ionic_alloc_counters(dev);
+		if (rc) {
+			ibdev_dbg(&dev->ibdev, "Failed to init counter stats\n");
+			kfree(dev->counter_stats);
+			dev->counter_stats = NULL;
+			return;
+		}
+
+		xa_init_flags(&dev->counter_stats->xa_counters, XA_FLAGS_ALLOC);
+
+		ib_set_device_ops(&dev->ibdev, &ionic_counter_stats_ops);
+	}
+}
+
+void ionic_stats_cleanup(struct ionic_ibdev *dev)
+{
+	if (dev->counter_stats) {
+		xa_destroy(&dev->counter_stats->xa_counters);
+		kfree(dev->counter_stats->hdr);
+		kfree(dev->counter_stats->stats_hdrs);
+		kfree(dev->counter_stats);
dev->counter_stats =3D NULL; + } + + kfree(dev->hw_stats); + kfree(dev->hw_stats_buf); + kfree(dev->hw_stats_hdrs); +} diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband= /hw/ionic/ionic_ibdev.c index 5f51873af350..164046d00e5d 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.c +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -289,6 +289,7 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev) { ionic_kill_rdma_admin(dev, false); ib_unregister_device(&dev->ibdev); + ionic_stats_cleanup(dev); ionic_destroy_rdma_admin(dev); ionic_destroy_resids(dev); WARN_ON(!xa_empty(&dev->qp_tbl)); @@ -346,6 +347,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct io= nic_aux_dev *ionic_adev) =20 ib_set_device_ops(&dev->ibdev, &ionic_dev_ops); =20 + ionic_stats_init(dev); + rc =3D ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent); if (rc) goto err_register; @@ -353,6 +356,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct io= nic_aux_dev *ionic_adev) return dev; =20 err_register: + ionic_stats_cleanup(dev); err_admin: ionic_kill_rdma_admin(dev, false); ionic_destroy_rdma_admin(dev); diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h index c750e049fecc..b7a1a57bae03 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.h +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -30,6 +30,7 @@ #define IONIC_PKEY_TBL_LEN 1 #define IONIC_GID_TBL_LEN 256 =20 +#define IONIC_MAX_QPID 0xffffff #define IONIC_SPEC_HIGH 8 #define IONIC_MAX_PD 1024 #define IONIC_SPEC_HIGH 8 @@ -109,6 +110,12 @@ struct ionic_ibdev { atomic_t admin_state; =20 struct ionic_eq **eq_vec; + + struct ionic_v1_stat *hw_stats; + void *hw_stats_buf; + struct rdma_stat_desc *hw_stats_hdrs; + struct ionic_counter_stats *counter_stats; + int hw_stats_count; }; =20 struct ionic_eq { @@ -320,6 +327,18 @@ struct ionic_mr { bool created; }; =20 +struct ionic_counter_stats { + int queue_stats_count; + struct ionic_v1_stat *hdr; + 
struct rdma_stat_desc *stats_hdrs; + struct xarray xa_counters; +}; + +struct ionic_counter { + void *vals; + struct list_head qp_list; +}; + static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev) { return container_of(ibdev, struct ionic_ibdev, ibdev); @@ -480,6 +499,10 @@ int ionic_post_recv(struct ib_qp *ibqp, const struct i= b_recv_wr *wr, int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc); int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags); =20 +/* ionic_hw_stats.c */ +void ionic_stats_init(struct ionic_ibdev *dev); +void ionic_stats_cleanup(struct ionic_ibdev *dev); + /* ionic_pgtbl.c */ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); __be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va); --=20 2.43.0 From nobody Fri Oct 3 08:51:12 2025 Received: from NAM11-DM6-obe.outbound.protection.outlook.com (mail-dm6nam11on2048.outbound.protection.outlook.com [40.107.223.48]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 296CA2E62C7; Wed, 3 Sep 2025 06:18:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.223.48 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880312; cv=fail; b=Asn+pPQaP+sY+CESPnUGzJsgyvF5ElJ0XTZIvFWE0OTpXzwv+LnUImuJl9XJaQflraU0FmV7WwV5+BC11tC56biAFMyF7tIM+c0u1taOja7sWn6J64CkY3MYuAd6lM7sLODjMfdtOPqxL4eT0PIO8RRgXR5Cc5dmrndcGkFSxsU= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756880312; c=relaxed/simple; bh=DJYK71nbmjk+FYAwap5Uz69BaQWZRWOVmqRK7/0h7Dc=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=KN03d7cJiW3mJGHWlTnf6/TMOBOrGm3k/E09M8advAwank8i8NoWZbBm6eFkuAIP5Pk6gmSx6HxwITsVNFRvqnPz1II4TaLa9aJxY8Fw22e2FK7SFy+bc8JI4gnFHRrNALypwXZKa7GEwQBPAPdls2WWSEpqRiqa7jI38Fe0mh4= ARC-Authentication-Results: i=2; 
From: Abhijit Gangurde
Subject: [PATCH v6 14/14] RDMA/ionic: Add Makefile/Kconfig to kernel build environment
Date: Wed, 3 Sep 2025 11:46:06 +0530
Message-ID: <20250903061606.4139957-15-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
References: <20250903061606.4139957-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add ionic to the kernel build environment.

Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v5->v6
- Updated documentation
v4->v5
- Updated documentation
v2->v3
- Removed select of ethernet driver
- Fixed make htmldocs error

 .../device_drivers/ethernet/index.rst         |  1 +
 .../ethernet/pensando/ionic_rdma.rst          | 52 +++++++++++++++++++
 MAINTAINERS                                   |  9 ++++
 drivers/infiniband/Kconfig                    |  1 +
 drivers/infiniband/hw/Makefile                |  1 +
 drivers/infiniband/hw/ionic/Kconfig           | 15 ++++++
 drivers/infiniband/hw/ionic/Makefile          |  9 ++++
 7 files changed, 88 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
 create mode 100644 drivers/infiniband/hw/ionic/Kconfig
 create mode 100644 drivers/infiniband/hw/ionic/Makefile

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 40ac552641a3..1fabfe02eb12 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -50,6 +50,7 @@ Contents:
    neterion/s2io
    netronome/nfp
    pensando/ionic
+   pensando/ionic_rdma
    smsc/smc9
    stmicro/stmmac
    ti/cpsw
diff --git a/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
new file mode 100644
index 000000000000..42eb461d5f85
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+===========================================================
+RDMA Driver for the AMD Pensando(R) Ethernet adapter family
+===========================================================
+
+AMD Pensando RDMA driver.
+Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+Overview
+========
+
+The ionic_rdma driver provides Remote Direct Memory Access functionality
+for AMD Pensando DSC (Distributed Services Card) devices. This driver
+implements RDMA capabilities as an auxiliary driver that operates in
+conjunction with the ionic ethernet driver.
+
+The ionic ethernet driver detects RDMA capability during device
+initialization and creates auxiliary devices that the ionic_rdma driver
+binds to, establishing the RDMA data path and control interfaces.
+
+Identifying the Adapter
+=======================
+
+See Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
+for more information on identifying the adapter.
+
+Enabling the driver
+===================
+
+The ionic_rdma driver depends on the ionic ethernet driver.
+See Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
+for detailed information on enabling and configuring the ionic driver.
+
+The ionic_rdma driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> InfiniBand support
+      -> AMD Pensando DSC RDMA/RoCE Support
+
+Support
+=======
+
+For general Linux RDMA support, please use the RDMA mailing
+list, which is monitored by AMD Pensando personnel::
+
+  linux-rdma@vger.kernel.org
diff --git a/MAINTAINERS b/MAINTAINERS
index fe168477caa4..088558cc8b18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1176,6 +1176,15 @@ F:	Documentation/networking/device_drivers/ethernet/amd/pds_core.rst
 F:	drivers/net/ethernet/amd/pds_core/
 F:	include/linux/pds/
 
+AMD PENSANDO RDMA DRIVER
+M:	Abhijit Gangurde
+M:	Allen Hubbe
+L:	linux-rdma@vger.kernel.org
+S:	Maintained
+F:	Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
+F:	drivers/infiniband/hw/ionic/
+F:	include/uapi/rdma/ionic-abi.h
+
 AMD PMC DRIVER
 M:	Shyam Sundar S K
 L:	platform-driver-x86@vger.kernel.org
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index 3a394cd772f6..f0323f1d6f01 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -85,6 +85,7 @@ source "drivers/infiniband/hw/efa/Kconfig"
 source "drivers/infiniband/hw/erdma/Kconfig"
 source "drivers/infiniband/hw/hfi1/Kconfig"
 source "drivers/infiniband/hw/hns/Kconfig"
+source "drivers/infiniband/hw/ionic/Kconfig"
 source "drivers/infiniband/hw/irdma/Kconfig"
 source "drivers/infiniband/hw/mana/Kconfig"
 source "drivers/infiniband/hw/mlx4/Kconfig"
diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
index df61b2299ec0..b706dc0d0263 100644
--- a/drivers/infiniband/hw/Makefile
+++ b/drivers/infiniband/hw/Makefile
@@ -14,3 +14,4 @@ obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns/
 obj-$(CONFIG_INFINIBAND_QEDR) += qedr/
 obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re/
 obj-$(CONFIG_INFINIBAND_ERDMA) += erdma/
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic/
diff --git a/drivers/infiniband/hw/ionic/Kconfig b/drivers/infiniband/hw/ionic/Kconfig
new file mode 100644
index 000000000000..de6f10e9b6e9
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+config INFINIBAND_IONIC
+	tristate "AMD Pensando DSC RDMA/RoCE Support"
+	depends on NETDEVICES && ETHERNET && PCI && INET && IONIC
+	help
+	  This enables RDMA/RoCE support for the AMD Pensando family of
+	  Distributed Services Cards (DSCs).
+
+	  To learn more, visit our website at
+	  .
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called ionic_rdma.
diff --git a/drivers/infiniband/hw/ionic/Makefile b/drivers/infiniband/hw/ionic/Makefile
new file mode 100644
index 000000000000..957973742820
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+
+ccflags-y := -I $(srctree)/drivers/net/ethernet/pensando/ionic
+
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic_rdma.o
+
+ionic_rdma-y := \
+	ionic_ibdev.o ionic_lif_cfg.o ionic_queue.o ionic_pgtbl.o ionic_admin.o \
+	ionic_controlpath.o ionic_datapath.o ionic_hw_stats.o
-- 
2.43.0
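[Editorial note: for readers trying out the Kconfig/Makefile wiring this patch adds, a sketch of typical build steps. These commands are illustrative, not part of the patch; they assume a kernel source tree with this series applied and an existing .config, and `CONFIG_INFINIBAND_IONIC` depends on the ionic ethernet driver (`CONFIG_IONIC`) being enabled as the Kconfig above requires.]

```shell
# Enable the option added by this patch; its menu path is
#   Device Drivers -> InfiniBand support -> AMD Pensando DSC RDMA/RoCE Support
scripts/config --module CONFIG_INFINIBAND_IONIC
make olddefconfig

# Build only this driver directory as a module.
make M=drivers/infiniband/hw/ionic modules

# Per the new Makefile, the resulting module is named ionic_rdma.ko.
ls drivers/infiniband/hw/ionic/ionic_rdma.ko
```

The `ccflags-y` line in the new Makefile is what lets these objects include headers from the ionic ethernet driver tree without copying them.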