From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v2 01/14] net: ionic: Create an auxiliary device for rdma driver
Date: Thu, 8 May 2025 10:29:44 +0530
Message-ID: <20250508045957.2823318-2-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

To support an RDMA-capable Ethernet device, create an auxiliary device in
the ionic Ethernet driver. The RDMA device is modeled as an auxiliary
device of the Ethernet device.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 drivers/net/ethernet/pensando/Kconfig         |  1 +
 drivers/net/ethernet/pensando/ionic/Makefile  |  2 +-
 .../net/ethernet/pensando/ionic/ionic_api.h   | 21 ++++
 .../net/ethernet/pensando/ionic/ionic_aux.c   | 95 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.h   | 10 ++
 .../ethernet/pensando/ionic/ionic_bus_pci.c   |  5 +
 .../net/ethernet/pensando/ionic/ionic_lif.c   |  7 ++
 .../net/ethernet/pensando/ionic/ionic_lif.h   |  3 +
 8 files changed, 143 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_api.h
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.c
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.h

diff --git a/drivers/net/ethernet/pensando/Kconfig b/drivers/net/ethernet/pensando/Kconfig
index 01fe76786f77..c99758adf3ad 100644
--- a/drivers/net/ethernet/pensando/Kconfig
+++ b/drivers/net/ethernet/pensando/Kconfig
@@ -24,6 +24,7 @@ config IONIC
        select NET_DEVLINK
        select DIMLIB
        select PAGE_POOL
+       select AUXILIARY_BUS
        help
          This enables the support for the Pensando family of Ethernet
          adapters.  More specific information on this driver can be
diff --git a/drivers/net/ethernet/pensando/ionic/Makefile b/drivers/net/ethernet/pensando/ionic/Makefile
index 4e7642a2d25f..a598972fef41 100644
--- a/drivers/net/ethernet/pensando/ionic/Makefile
+++ b/drivers/net/ethernet/pensando/ionic/Makefile
@@ -5,5 +5,5 @@ obj-$(CONFIG_IONIC) := ionic.o
 
 ionic-y := ionic_main.o ionic_bus_pci.o ionic_devlink.o ionic_dev.o \
           ionic_debugfs.o ionic_lif.o ionic_rx_filter.o ionic_ethtool.o \
-          ionic_txrx.o ionic_stats.o ionic_fw.o
+          ionic_txrx.o ionic_stats.o ionic_fw.o ionic_aux.o
 ionic-$(CONFIG_PTP_1588_CLOCK) += ionic_phc.o
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
new file mode 100644
index 000000000000..f9fcd1b67b35
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_API_H_
+#define _IONIC_API_H_
+
+#include 
+
+/**
+ * struct ionic_aux_dev - Auxiliary device information
+ * @lif:   Logical interface
+ * @idx:   Index identifier
+ * @adev:  Auxiliary device
+ */
+struct ionic_aux_dev {
+       struct ionic_lif *lif;
+       int idx;
+       struct auxiliary_device adev;
+};
+
+#endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
new file mode 100644
index 000000000000..ba29c456de00
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include 
+#include "ionic.h"
+#include "ionic_lif.h"
+#include "ionic_aux.h"
+
+static DEFINE_IDA(aux_ida);
+
+static void ionic_auxbus_release(struct device *dev)
+{
+       struct ionic_aux_dev *ionic_adev;
+
+       ionic_adev = container_of(dev, struct ionic_aux_dev, adev.dev);
+       kfree(ionic_adev);
+}
+
+int ionic_auxbus_register(struct ionic_lif *lif)
+{
+       struct ionic_aux_dev *ionic_adev;
+       struct auxiliary_device *aux_dev;
+       int err, id;
+
+       if (!(lif->ionic->ident.lif.capabilities & IONIC_LIF_CAP_RDMA))
+               return 0;
+
+       ionic_adev = kzalloc(sizeof(*ionic_adev), GFP_KERNEL);
+       if (!ionic_adev)
+               return -ENOMEM;
+
+       aux_dev = &ionic_adev->adev;
+
+       id = ida_alloc_range(&aux_ida, 0, INT_MAX, GFP_KERNEL);
+       if (id < 0) {
+               dev_err(lif->ionic->dev, "Failed to allocate aux id: %d\n",
+                       id);
+               err = id;
+               goto err_adev_free;
+       }
+
+       aux_dev->id = id;
+       aux_dev->name = "rdma";
+       aux_dev->dev.parent = &lif->ionic->pdev->dev;
+       aux_dev->dev.release = ionic_auxbus_release;
+       ionic_adev->lif = lif;
+       err = auxiliary_device_init(aux_dev);
+       if (err) {
+               dev_err(lif->ionic->dev, "Failed to initialize %s aux device: %d\n",
+                       aux_dev->name, err);
+               goto err_ida_free;
+       }
+
+       err = auxiliary_device_add(aux_dev);
+       if (err) {
+               dev_err(lif->ionic->dev, "Failed to add %s aux device: %d\n",
+                       aux_dev->name, err);
+               goto err_aux_uninit;
+       }
+
+       lif->ionic_adev = ionic_adev;
+
+       return 0;
+
+err_aux_uninit:
+       auxiliary_device_uninit(aux_dev);
+err_ida_free:
+       ida_free(&aux_ida, id);
+err_adev_free:
+       kfree(ionic_adev);
+
+       return err;
+}
+
+void ionic_auxbus_unregister(struct ionic_lif *lif)
+{
+       struct auxiliary_device *aux_dev;
+       int id;
+
+       mutex_lock(&lif->adev_lock);
+       if (!lif->ionic_adev)
+               goto out;
+
+       aux_dev = &lif->ionic_adev->adev;
+       id = aux_dev->id;
+
+       auxiliary_device_delete(aux_dev);
+       auxiliary_device_uninit(aux_dev);
+       ida_free(&aux_ida, id);
+
+       lif->ionic_adev = NULL;
+
+out:
+       mutex_unlock(&lif->adev_lock);
+}
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.h b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
new file mode 100644
index 000000000000..f5528a9f187d
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_AUX_H_
+#define _IONIC_AUX_H_
+
+int ionic_auxbus_register(struct ionic_lif *lif);
+void ionic_auxbus_unregister(struct ionic_lif *lif);
+
+#endif /* _IONIC_AUX_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index 4c377bdc62c8..bb75044dfb82 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -9,6 +9,7 @@
 #include "ionic.h"
 #include "ionic_bus.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_debugfs.h"
 
 /* Supported devices */
@@ -375,6 +376,8 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
                goto err_out_deregister_devlink;
        }
 
+       ionic_auxbus_register(ionic->lif);
+
        mod_timer(&ionic->watchdog_timer,
                  round_jiffies(jiffies + ionic->watchdog_period));
        ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);
@@ -415,6 +418,7 @@ static void ionic_remove(struct pci_dev *pdev)
 
        if (ionic->lif->doorbell_wa)
                cancel_delayed_work_sync(&ionic->doorbell_check_dwork);
+       ionic_auxbus_unregister(ionic->lif);
        ionic_lif_unregister(ionic->lif);
        ionic_devlink_unregister(ionic);
        ionic_lif_deinit(ionic->lif);
@@ -444,6 +448,7 @@ static void ionic_reset_prepare(struct pci_dev *pdev)
        timer_delete_sync(&ionic->watchdog_timer);
        cancel_work_sync(&lif->deferred.work);
 
+       ionic_auxbus_unregister(ionic->lif);
        mutex_lock(&lif->queue_lock);
        ionic_stop_queues_reconfig(lif);
        ionic_txrx_free(lif);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 7707a9e53c43..146659f6862a 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -19,6 +19,7 @@
 #include "ionic_bus.h"
 #include "ionic_dev.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_txrx.h"
 #include "ionic_ethtool.h"
 #include "ionic_debugfs.h"
@@ -3293,6 +3294,7 @@ int ionic_lif_alloc(struct ionic *ionic)
 
        mutex_init(&lif->queue_lock);
        mutex_init(&lif->config_lock);
+       mutex_init(&lif->adev_lock);
 
        spin_lock_init(&lif->adminq_lock);
 
@@ -3349,6 +3351,7 @@ int ionic_lif_alloc(struct ionic *ionic)
        lif->info = NULL;
        lif->info_pa = 0;
 err_out_free_mutex:
+       mutex_destroy(&lif->adev_lock);
        mutex_destroy(&lif->config_lock);
        mutex_destroy(&lif->queue_lock);
 err_out_free_netdev:
@@ -3384,6 +3387,7 @@ static void ionic_lif_handle_fw_down(struct ionic_lif *lif)
 
        netif_device_detach(lif->netdev);
 
+       ionic_auxbus_unregister(ionic->lif);
        mutex_lock(&lif->queue_lock);
        if (test_bit(IONIC_LIF_F_UP, lif->state)) {
                dev_info(ionic->dev, "Surprise FW stop, stopping queues\n");
@@ -3446,6 +3450,8 @@ int ionic_restart_lif(struct ionic_lif *lif)
        netif_device_attach(lif->netdev);
        ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);
 
+       ionic_auxbus_register(ionic->lif);
+
        return 0;
 
 err_txrx_free:
@@ -3532,6 +3538,7 @@ void ionic_lif_free(struct ionic_lif *lif)
 
        mutex_destroy(&lif->config_lock);
        mutex_destroy(&lif->queue_lock);
+       mutex_destroy(&lif->adev_lock);
 
        /* free netdev & lif */
        ionic_debugfs_del_lif(lif);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
index e01756fb7fdd..43bdd0fb8733 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include "ionic_rx_filter.h"
+#include "ionic_api.h"
 
 #define IONIC_ADMINQ_LENGTH     16      /* must be a power of two */
 #define IONIC_NOTIFYQ_LENGTH    64      /* must be a power of two */
@@ -225,6 +226,8 @@ struct ionic_lif {
        dma_addr_t info_pa;
        u32 info_sz;
        struct ionic_qtype_info qtype_info[IONIC_QTYPE_MAX];
+       struct ionic_aux_dev *ionic_adev;
+       struct mutex adev_lock;         /* lock for aux_dev actions */
 
        u8 rss_hash_key[IONIC_RSS_HASH_KEY_SIZE];
        u8 *rss_ind_tbl;
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v2 02/14] net: ionic: Update LIF identity with additional RDMA capabilities
Date: Thu, 8 May 2025 10:29:45 +0530
Message-ID: <20250508045957.2823318-3-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Firmware reports the RDMA capability in its response to the LIF_IDENTIFY
device command. Update the LIF identity with the additional RDMA
capabilities used by the driver and firmware.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_if.h    | 29 +++++++++++++++----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index 830c8adbfbee..02cda3536dcb 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -494,6 +494,16 @@ union ionic_lif_config {
        __le32 words[64];
 };
 
+/**
+ * enum ionic_lif_rdma_cap_stats - LIF stat type
+ * @IONIC_LIF_RDMA_STAT_GLOBAL: Global stats
+ * @IONIC_LIF_RDMA_STAT_QP:     Queue pair stats
+ */
+enum ionic_lif_rdma_cap_stats {
+       IONIC_LIF_RDMA_STAT_GLOBAL = BIT(0),
+       IONIC_LIF_RDMA_STAT_QP = BIT(1),
+};
+
 /**
  * struct ionic_lif_identity - LIF identity information (type-specific)
  *
@@ -513,10 +523,10 @@ union ionic_lif_config {
  * @eth.config:          LIF config struct with features, mtu, mac, q counts
  *
  * @rdma:                RDMA identify structure
- * @rdma.version:        RDMA version of opcodes and queue descriptors
+ * @rdma.version:        RDMA capability version
  * @rdma.qp_opcodes:     Number of RDMA queue pair opcodes supported
  * @rdma.admin_opcodes:  Number of RDMA admin opcodes supported
- * @rdma.rsvd:           reserved byte(s)
+ * @rdma.minor_version:  RDMA capability minor version
  * @rdma.npts_per_lif:   Page table size per LIF
  * @rdma.nmrs_per_lif:   Number of memory regions per LIF
  * @rdma.nahs_per_lif:   Number of address handles per LIF
@@ -526,12 +536,17 @@ union ionic_lif_config {
  * @rdma.rrq_stride:     Remote RQ work request stride
  * @rdma.rsq_stride:     Remote SQ work request stride
  * @rdma.dcqcn_profiles: Number of DCQCN profiles
- * @rdma.rsvd_dimensions: reserved byte(s)
+ * @rdma.udma_shift:     Log2 number of queues per queue group
+ * @rdma.rsvd_dimensions: Reserved byte
+ * @rdma.page_size_cap:  Supported page sizes
  * @rdma.aq_qtype:       RDMA Admin Qtype
  * @rdma.sq_qtype:       RDMA Send Qtype
  * @rdma.rq_qtype:       RDMA Receive Qtype
  * @rdma.cq_qtype:       RDMA Completion Qtype
  * @rdma.eq_qtype:       RDMA Event Qtype
+ * @rdma.stats_type:     Supported statistics type
+ *                       (enum ionic_lif_rdma_cap_stats)
+ * @rdma.rsvd1:          Reserved byte(s)
  * @words:               word access to struct contents
  */
 union ionic_lif_identity {
@@ -557,7 +572,7 @@ union ionic_lif_identity {
                        u8 version;
                        u8 qp_opcodes;
                        u8 admin_opcodes;
-                       u8 rsvd;
+                       u8 minor_version;
                        __le32 npts_per_lif;
                        __le32 nmrs_per_lif;
                        __le32 nahs_per_lif;
@@ -567,12 +582,16 @@ union ionic_lif_identity {
                        u8 rrq_stride;
                        u8 rsq_stride;
                        u8 dcqcn_profiles;
-                       u8 rsvd_dimensions[10];
+                       u8 udma_shift;
+                       u8 rsvd_dimensions;
+                       __le64 page_size_cap;
                        struct ionic_lif_logical_qtype aq_qtype;
                        struct ionic_lif_logical_qtype sq_qtype;
                        struct ionic_lif_logical_qtype rq_qtype;
                        struct ionic_lif_logical_qtype cq_qtype;
                        struct ionic_lif_logical_qtype eq_qtype;
+                       __le16 stats_type;
+                       u8 rsvd1[162];
                } __packed rdma;
        } __packed;
        __le32 words[478];
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde
Subject: [PATCH v2 03/14] net: ionic: Export the APIs from net driver to support device commands
Date: Thu, 8 May 2025 10:29:46 +0530
Message-ID: <20250508045957.2823318-4-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
RDMA driver needs to establish admin queues to support admin operations.
Export the APIs to send device commands for the RDMA driver.

Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 drivers/net/ethernet/pensando/ionic/ionic.h   |  7 ----
 .../net/ethernet/pensando/ionic/ionic_api.h   | 36 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h   |  1 +
 .../net/ethernet/pensando/ionic/ionic_main.c  |  4 ++-
 4 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic.h b/drivers/net/ethernet/pensando/ionic/ionic.h
index 04f00ea94230..85198e6a806e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic.h
@@ -65,16 +65,9 @@ struct ionic {
 	int watchdog_period;
 };
 
-struct ionic_admin_ctx {
-	struct completion work;
-	union ionic_adminq_cmd cmd;
-	union ionic_adminq_comp comp;
-};
-
 int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
		      const int err, const bool do_msg);
-int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 void ionic_adminq_netdev_err_print(struct ionic_lif *lif, u8 opcode,
				   u8 status, int err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index f9fcd1b67b35..d75902ca34af 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -5,6 +5,8 @@
 #define _IONIC_API_H_
 
 #include <linux/auxiliary_bus.h>
+#include "ionic_if.h"
+#include "ionic_regs.h"
 
 /**
  * struct ionic_aux_dev - Auxiliary device information
@@ -18,4 +20,38 @@ struct ionic_aux_dev {
 	struct auxiliary_device adev;
 };
 
+/**
+ * struct ionic_admin_ctx - Admin command context
+ * @work:	Work completion wait queue element
+ * @cmd:	Admin command (64B) to be copied to the queue
+ * @comp:	Admin completion (16B) copied from the queue
+ */
+struct ionic_admin_ctx {
+	struct completion work;
+	union ionic_adminq_cmd cmd;
+	union ionic_adminq_comp comp;
+};
+
+/**
+ * ionic_adminq_post_wait - Post an admin command and wait for response
+ * @lif:	Logical interface
+ * @ctx:	API admin command context
+ *
+ * Post the command to an admin queue in the ethernet driver. If this command
+ * succeeds, then the command has been posted, but that does not indicate a
+ * completion. If this command returns success, then the completion callback
+ * will eventually be called.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
+
+/**
+ * ionic_error_to_errno - Transform ionic_if errors to os errno
+ * @code:	Ionic error number
+ *
+ * Return: Negative OS error number or zero
+ */
+int ionic_error_to_errno(enum ionic_status_code code);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index c8c710cfe70c..bc26eb8f5779 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -12,6 +12,7 @@
 
 #include "ionic_if.h"
 #include "ionic_regs.h"
+#include "ionic_api.h"
 
 #define IONIC_MAX_TX_DESC		8192
 #define IONIC_MAX_RX_DESC		16384
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index daf1e82cb76b..60fc232338b9 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -72,7 +72,7 @@ static const char *ionic_error_to_str(enum ionic_status_code code)
 	}
 }
 
-static int ionic_error_to_errno(enum ionic_status_code code)
+int ionic_error_to_errno(enum ionic_status_code code)
 {
 	switch (code) {
 	case IONIC_RC_SUCCESS:
@@ -114,6 +114,7 @@ static int ionic_error_to_errno(enum ionic_status_code code)
 		return -EIO;
 	}
 }
+EXPORT_SYMBOL_NS(ionic_error_to_errno, "NET_IONIC");
 
 static const char *ionic_opcode_to_str(enum ionic_cmd_opcode opcode)
 {
@@ -480,6 +481,7 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
 	return __ionic_adminq_post_wait(lif, ctx, true);
 }
+EXPORT_SYMBOL_NS(ionic_adminq_post_wait, "NET_IONIC");
 
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde
Subject: [PATCH v2 04/14] net: ionic: Provide RDMA reset support for the RDMA driver
Date: Thu, 8 May 2025 10:29:47 +0530
Message-ID: <20250508045957.2823318-5-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
The Ethernet driver holds the privilege to execute the device commands.
Export the function to execute RDMA reset command for use by RDMA driver.

Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h   |  9 ++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.c   | 22 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index d75902ca34af..e0b766d1769f 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -54,4 +54,13 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
  */
 int ionic_error_to_errno(enum ionic_status_code code);
 
+/**
+ * ionic_request_rdma_reset - request reset or disable the device or lif
+ * @lif:	Logical interface
+ *
+ * The reset is triggered asynchronously. It will wait until reset request
+ * completes or times out.
+ */
+void ionic_request_rdma_reset(struct ionic_lif *lif);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
index ba29c456de00..bbad848d2dce 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_aux.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -93,3 +93,25 @@ void ionic_auxbus_unregister(struct ionic_lif *lif)
 out:
 	mutex_unlock(&lif->adev_lock);
 }
+
+void ionic_request_rdma_reset(struct ionic_lif *lif)
+{
+	struct ionic *ionic = lif->ionic;
+	int err;
+
+	union ionic_dev_cmd cmd = {
+		.cmd.opcode = IONIC_CMD_RDMA_RESET_LIF,
+		.cmd.lif_index = cpu_to_le16(lif->index),
+	};
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_go(&ionic->idev, &cmd);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err)
+		pr_warn("%s request_reset: error %d\n", __func__, err);
+}
+EXPORT_SYMBOL_NS(ionic_request_rdma_reset, "NET_IONIC");
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde
Subject: [PATCH v2 05/14] net: ionic: Provide interrupt allocation support for the RDMA driver
Date: Thu, 8 May 2025 10:29:48 +0530
Message-ID: <20250508045957.2823318-6-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
RDMA driver needs an interrupt for an event queue. Export function
from net driver to allocate an interrupt.

Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h   | 43 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h   | 13 ------
 .../net/ethernet/pensando/ionic/ionic_lif.c   | 38 ++++++++--------
 3 files changed, 62 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index e0b766d1769f..5fd23aa8c5a1 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -32,6 +32,29 @@ struct ionic_admin_ctx {
 	union ionic_adminq_comp comp;
 };
 
+#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
+#define IONIC_INTR_NAME_MAX_SZ		32
+
+/**
+ * struct ionic_intr_info - Interrupt information
+ * @name:		Name identifier
+ * @rearm_count:	Interrupt rearm count
+ * @index:		Interrupt index position
+ * @vector:		Interrupt number
+ * @dim_coal_hw:	Interrupt coalesce value in hardware units
+ * @affinity_mask:	CPU affinity mask
+ * @aff_notify:		context for notification of IRQ affinity changes
+ */
+struct ionic_intr_info {
+	char name[IONIC_INTR_NAME_MAX_SZ];
+	u64 rearm_count;
+	unsigned int index;
+	unsigned int vector;
+	u32 dim_coal_hw;
+	cpumask_var_t *affinity_mask;
+	struct irq_affinity_notify aff_notify;
+};
+
 /**
  * ionic_adminq_post_wait - Post an admin command and wait for response
  * @lif:	Logical interface
@@ -63,4 +86,24 @@ int ionic_error_to_errno(enum ionic_status_code code);
  */
 void ionic_request_rdma_reset(struct ionic_lif *lif);
 
+/**
+ * ionic_intr_alloc - Reserve a device interrupt
+ * @lif:	Logical interface
+ * @intr:	Reserved ionic interrupt structure
+ *
+ * Reserve an interrupt index and get irq number for that index.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
+
+/**
+ * ionic_intr_free - Release a device interrupt index
+ * @lif:	Logical interface
+ * @intr:	Interrupt index
+ *
+ * Mark the interrupt index unused so that it can be reserved again.
+ */
+void ionic_intr_free(struct ionic_lif *lif, int intr);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index bc26eb8f5779..68cf4da3c6b3 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -274,19 +274,6 @@ struct ionic_queue {
 	char name[IONIC_QUEUE_NAME_MAX_SZ];
 } ____cacheline_aligned_in_smp;
 
-#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
-#define IONIC_INTR_NAME_MAX_SZ		32
-
-struct ionic_intr_info {
-	char name[IONIC_INTR_NAME_MAX_SZ];
-	u64 rearm_count;
-	unsigned int index;
-	unsigned int vector;
-	u32 dim_coal_hw;
-	cpumask_var_t *affinity_mask;
-	struct irq_affinity_notify aff_notify;
-};
-
 struct ionic_cq {
 	struct ionic_lif *lif;
 	struct ionic_queue *bound_q;
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 146659f6862a..f89b458bd20a 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -244,29 +244,36 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq)
			       0, intr->name, &qcq->napi);
 }
 
-static int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
 {
 	struct ionic *ionic = lif->ionic;
-	int index;
+	int index, err;
 
 	index = find_first_zero_bit(ionic->intrs, ionic->nintrs);
-	if (index
=3D=3D ionic->nintrs) { - netdev_warn(lif->netdev, "%s: no intr, index=3D%d nintrs=3D%d\n", - __func__, index, ionic->nintrs); + if (index =3D=3D ionic->nintrs) return -ENOSPC; - } =20 set_bit(index, ionic->intrs); ionic_intr_init(&ionic->idev, intr, index); =20 + err =3D ionic_bus_get_irq(ionic, intr->index); + if (err < 0) { + clear_bit(index, ionic->intrs); + return err; + } + + intr->vector =3D err; + return 0; } +EXPORT_SYMBOL_NS(ionic_intr_alloc, "NET_IONIC"); =20 -static void ionic_intr_free(struct ionic *ionic, int index) +void ionic_intr_free(struct ionic_lif *lif, int index) { - if (index !=3D IONIC_INTR_INDEX_NOT_ASSIGNED && index < ionic->nintrs) - clear_bit(index, ionic->intrs); + if (index !=3D IONIC_INTR_INDEX_NOT_ASSIGNED && index < lif->ionic->nintr= s) + clear_bit(index, lif->ionic->intrs); } +EXPORT_SYMBOL_NS(ionic_intr_free, "NET_IONIC"); =20 static void ionic_irq_aff_notify(struct irq_affinity_notify *notify, const cpumask_t *mask) @@ -401,7 +408,7 @@ static void ionic_qcq_intr_free(struct ionic_lif *lif, = struct ionic_qcq *qcq) irq_set_affinity_hint(qcq->intr.vector, NULL); devm_free_irq(lif->ionic->dev, qcq->intr.vector, &qcq->napi); qcq->intr.vector =3D 0; - ionic_intr_free(lif->ionic, qcq->intr.index); + ionic_intr_free(lif, qcq->intr.index); qcq->intr.index =3D IONIC_INTR_INDEX_NOT_ASSIGNED; } =20 @@ -511,13 +518,6 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif = *lif, struct ionic_qcq *qc goto err_out; } =20 - err =3D ionic_bus_get_irq(lif->ionic, qcq->intr.index); - if (err < 0) { - netdev_warn(lif->netdev, "no vector for %s: %d\n", - qcq->q.name, err); - goto err_out_free_intr; - } - qcq->intr.vector =3D err; ionic_intr_mask_assert(lif->ionic->idev.intr_ctrl, qcq->intr.index, IONIC_INTR_MASK_SET); =20 @@ -546,7 +546,7 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif *= lif, struct ionic_qcq *qc return 0; =20 err_out_free_intr: - ionic_intr_free(lif->ionic, qcq->intr.index); + ionic_intr_free(lif, qcq->intr.index); 
err_out: return err; } @@ -741,7 +741,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsig= ned int type, err_out_free_irq: if (flags & IONIC_QCQ_F_INTR) { devm_free_irq(dev, new->intr.vector, &new->napi); - ionic_intr_free(lif->ionic, new->intr.index); + ionic_intr_free(lif, new->intr.index); } err_out_free_page_pool: page_pool_destroy(new->q.page_pool); --=20 2.34.1 From nobody Mon Dec 15 21:59:00 2025 Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2082.outbound.protection.outlook.com [40.107.237.82]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B8C0320E000; Thu, 8 May 2025 05:01:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.237.82 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746680478; cv=fail; b=SC5tfpCYiHdIfnvRaL+u8SgGcia3U0Efti0NX2fDPCSSYQ9wHYXAr5fdLLqgzrdWG5GGuTE3WjWFv3EPOM03t4XfU/Ye2HuZjQQczUoHVfwv97zQ7VNRccVZNPIkbJc0ekflnhq0HmZUuerpgd6XXR/U8Erqynx7urLjPGFRB3I= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746680478; c=relaxed/simple; bh=Pj7th5Kk1LbaOdfP8F1QmbaGJ9tUVaLO645VYNwWftY=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=TmImhuX4vVXK0z8q2+0aflwTkBW8hy/4okLVEDTG/C/ptFT8JbenyuyRrxirYkidMP37/A3sUcOY9WatIIzUwQhmeCAgaAkWVcUfJo6KAA6I1WwMbQ8iAuNb+rcMxxIEzfR0AbDbAP4t1/GJte4w4Gk6zmUk7KxILJGkHpOt1do= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=1DXAMwd2; arc=fail smtp.client-ip=40.107.237.82 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com 
From: Abhijit Gangurde
CC: Abhijit Gangurde, Pablo Cascón
Subject: [PATCH v2 06/14] net: ionic: Provide doorbell and CMB region information
Date: Thu, 8 May 2025 10:29:49 +0530
Message-ID: <20250508045957.2823318-7-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
The RDMA device needs information about the controller memory bar and
doorbell capability to share with user context. Discover CMB regions and
express doorbell capabilities on device init.

Reviewed-by: Shannon Nelson
Co-developed-by: Pablo Cascón
Signed-off-by: Pablo Cascón
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h   |  22 ++
 .../ethernet/pensando/ionic/ionic_bus_pci.c   |   2 +
 .../net/ethernet/pensando/ionic/ionic_dev.c   | 270 +++++++++++++++++-
 .../net/ethernet/pensando/ionic/ionic_dev.h   |  14 +-
 .../net/ethernet/pensando/ionic/ionic_if.h    |  89 ++++++
 .../net/ethernet/pensando/ionic/ionic_lif.c   |   2 +-
 6 files changed, 381 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index 5fd23aa8c5a1..bd88666836b8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -106,4 +106,26 @@ int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
  */
 void ionic_intr_free(struct ionic_lif *lif, int intr);
 
+/**
+ * ionic_get_cmb - Reserve cmb pages
+ * @lif:         Logical interface
+ * @pgid:        First page index
+ * @pgaddr:      First page bus addr (contiguous)
+ * @order:       Log base two number of pages (PAGE_SIZE)
+ * @stride_log2: Size of stride to determine CMB pool
+ * @expdb:       Will be set to true if this CMB region has expdb enabled
+ *
+ * Return: zero or negative error status
+ */
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb);
+
+/**
+ * ionic_put_cmb - Release cmb pages
+ * @lif:   Logical interface
+ * @pgid:  First page index
+ * @order: Log base two number of pages (PAGE_SIZE)
+ */
+void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index bb75044dfb82..4f13dc908ed8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -272,6 +272,8 @@ static int ionic_setup_one(struct ionic *ionic)
 	}
 	ionic_debugfs_add_ident(ionic);
 
+	ionic_map_cmb(ionic);
+
 	err = ionic_init(ionic);
 	if (err) {
 		dev_err(dev, "Cannot init device: %d, aborting\n", err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
index 57edcde9e6f8..c4d1ecb2d7e4 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
@@ -199,13 +199,201 @@ void ionic_init_devinfo(struct ionic *ionic)
 	dev_dbg(ionic->dev, "fw_version %s\n", idev->dev_info.fw_version);
 }
 
+static void ionic_map_disc_cmb(struct ionic *ionic)
+{
+	struct ionic_identity *ident = &ionic->ident;
+	u32 length_reg0, length, offset, num_regions;
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int err, sz, i;
+	u64 end;
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_discover_cmb(idev);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+	if (!err) {
+		sz = min(sizeof(ident->cmb_layout),
+			 sizeof(idev->dev_cmd_regs->data));
+		memcpy_fromio(&ident->cmb_layout,
+			      &idev->dev_cmd_regs->data, sz);
+	}
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err) {
+		dev_warn(dev, "Cannot discover CMB layout, disabling CMB\n");
+		return;
+	}
+
+	bar += 2;
+
+	num_regions = le32_to_cpu(ident->cmb_layout.num_regions);
+	if (!num_regions || num_regions > IONIC_MAX_CMB_REGIONS) {
+		dev_warn(dev, "Invalid number of CMB entries (%d)\n",
+			 num_regions);
+		return;
+	}
+
+	dev_dbg(dev, "ionic_cmb_layout_identity num_regions %d flags %x:\n",
+		num_regions, ident->cmb_layout.flags);
+
+	for (i = 0; i < num_regions; i++) {
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+		length = le32_to_cpu(ident->cmb_layout.region[i].length);
+		end = offset + length;
+
+		dev_dbg(dev, "CMB entry %d: bar_num %u cmb_type %u offset %x length %u\n",
+			i, ident->cmb_layout.region[i].bar_num,
+			ident->cmb_layout.region[i].cmb_type,
+			offset, length);
+
+		if (end > (bar->len >> IONIC_CMB_SHIFT_64K)) {
+			dev_warn(dev, "Out of bounds CMB region %d offset %x length %u\n",
+				 i, offset, length);
+			return;
+		}
+	}
+
+	/* if first entry matches PCI config, expdb is not supported */
+	if (ident->cmb_layout.region[0].bar_num == bar->res_index &&
+	    le32_to_cpu(ident->cmb_layout.region[0].length) == bar->len &&
+	    !ident->cmb_layout.region[0].offset) {
+		dev_warn(dev, "No CMB mapping discovered\n");
+		return;
+	}
+
+	/* process first entry for regular mapping */
+	length_reg0 = le32_to_cpu(ident->cmb_layout.region[0].length);
+	if (!length_reg0) {
+		dev_warn(dev, "region len = 0. No CMB mapping discovered\n");
+		return;
+	}
+
+	/* Verify first entry size matches expected 8MB size (in 64KB pages) */
+	if (length_reg0 != IONIC_BAR2_CMB_ENTRY_SIZE >> IONIC_CMB_SHIFT_64K) {
+		dev_warn(dev, "Unexpected CMB size in entry 0: %u pages\n",
+			 length_reg0);
+		return;
+	}
+
+	sz = BITS_TO_LONGS((length_reg0 << IONIC_CMB_SHIFT_64K) /
+			   PAGE_SIZE) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		dev_warn(dev, "No memory for CMB, disabling\n");
+		idev->phy_cmb_pages = 0;
+		idev->phy_cmb_expdb64_pages = 0;
+		idev->phy_cmb_expdb128_pages = 0;
+		idev->phy_cmb_expdb256_pages = 0;
+		idev->phy_cmb_expdb512_pages = 0;
+		idev->cmb_npages = 0;
+		return;
+	}
+
+	for (i = 0; i < num_regions; i++) {
+		/* check this region matches first region length as to
+		 * ease implementation
+		 */
+		if (le32_to_cpu(ident->cmb_layout.region[i].length) !=
+		    length_reg0)
+			continue;
+
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+
+		switch (ident->cmb_layout.region[i].cmb_type) {
+		case IONIC_CMB_TYPE_DEVMEM:
+			idev->phy_cmb_pages = bar->bus_addr + offset;
+			idev->cmb_npages =
+				(length_reg0 << IONIC_CMB_SHIFT_64K) / PAGE_SIZE;
+			dev_dbg(dev, "regular cmb mapping: bar->bus_addr %pa region[%d].length %u\n",
+				&bar->bus_addr, i, length);
+			dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+				&idev->phy_cmb_pages, idev->cmb_npages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB64:
+			idev->phy_cmb_expdb64_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb64_pages %pad\n",
+				&idev->phy_cmb_expdb64_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB128:
+			idev->phy_cmb_expdb128_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb128_pages %pad\n",
+				&idev->phy_cmb_expdb128_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB256:
+			idev->phy_cmb_expdb256_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb256_pages %pad\n",
+				&idev->phy_cmb_expdb256_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB512:
+			idev->phy_cmb_expdb512_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb512_pages %pad\n",
+				&idev->phy_cmb_expdb512_pages);
+			break;
+
+		default:
+			dev_warn(dev, "[%d] Invalid cmb_type (%d)\n",
+				 i, ident->cmb_layout.region[i].cmb_type);
+			break;
+		}
+	}
+}
+
+static void ionic_map_classic_cmb(struct ionic *ionic)
+{
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int sz;
+
+	bar += 2;
+	/* classic CMB mapping */
+	idev->phy_cmb_pages = bar->bus_addr;
+	idev->cmb_npages = bar->len / PAGE_SIZE;
+	dev_dbg(dev, "classic cmb mapping: bar->bus_addr %pa bar->len %lu\n",
+		&bar->bus_addr, bar->len);
+	dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+		&idev->phy_cmb_pages, idev->cmb_npages);
+
+	sz = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
+	}
+}
+
+void ionic_map_cmb(struct ionic *ionic)
+{
+	struct pci_dev *pdev = ionic->pdev;
+	struct device *dev = ionic->dev;
+
+	if (!(pci_resource_flags(pdev, 4) & IORESOURCE_MEM)) {
+		dev_dbg(dev, "No CMB, disabling\n");
+		return;
+	}
+
+	if (ionic->ident.dev.capabilities & cpu_to_le64(IONIC_DEV_CAP_DISC_CMB))
+		ionic_map_disc_cmb(ionic);
+	else
+		ionic_map_classic_cmb(ionic);
+}
+
 int ionic_dev_setup(struct ionic *ionic)
 {
 	struct ionic_dev_bar *bar = ionic->bars;
 	unsigned int num_bars = ionic->num_bars;
 	struct ionic_dev *idev = &ionic->idev;
 	struct device *dev = ionic->dev;
-	int size;
 	u32 sig;
 	int err;
 
@@ -255,16 +443,11 @@ int ionic_dev_setup(struct ionic *ionic)
 	mutex_init(&idev->cmb_inuse_lock);
 	if (num_bars < 3 || !ionic->bars[IONIC_PCI_BAR_CMB].len) {
 		idev->cmb_inuse = NULL;
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
 		return 0;
 	}
 
-	idev->phy_cmb_pages = bar->bus_addr;
-	idev->cmb_npages = bar->len / PAGE_SIZE;
-	size = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
-	idev->cmb_inuse = kzalloc(size, GFP_KERNEL);
-	if (!idev->cmb_inuse)
-		dev_warn(dev, "No memory for CMB, disabling\n");
-
 	return 0;
 }
 
@@ -277,6 +460,11 @@ void ionic_dev_teardown(struct ionic *ionic)
 	idev->phy_cmb_pages = 0;
 	idev->cmb_npages = 0;
 
+	idev->phy_cmb_expdb64_pages = 0;
+	idev->phy_cmb_expdb128_pages = 0;
+	idev->phy_cmb_expdb256_pages = 0;
+	idev->phy_cmb_expdb512_pages = 0;
+
 	if (ionic->wq) {
 		destroy_workqueue(ionic->wq);
 		ionic->wq = NULL;
@@ -698,28 +886,79 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev)
+{
+	union ionic_dev_cmd cmd = {
+		.discover_cmb.opcode = IONIC_CMD_DISCOVER_CMB,
+	};
+
+	ionic_dev_cmd_go(idev, &cmd);
+}
+
 int ionic_db_page_num(struct ionic_lif *lif, int pid)
 {
 	return (lif->hw_index * lif->dbid_count) + pid;
 }
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order)
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb)
 {
 	struct ionic_dev *idev = &lif->ionic->idev;
-	int ret;
+	void __iomem *nonexpdb_pgptr;
+	phys_addr_t nonexpdb_pgaddr;
+	int i, idx;
 
 	mutex_lock(&idev->cmb_inuse_lock);
-	ret = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
+	idx = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 
-	if (ret < 0)
-		return ret;
+	if (idx < 0)
+		return idx;
+
+	*pgid = (u32)idx;
+
+	if (idev->phy_cmb_expdb64_pages &&
+	    stride_log2 == IONIC_EXPDB_64B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb64_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb128_pages &&
+		   stride_log2 == IONIC_EXPDB_128B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb128_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb256_pages &&
+		   stride_log2 == IONIC_EXPDB_256B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb256_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb512_pages &&
+		   stride_log2 == IONIC_EXPDB_512B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb512_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else {
+		*pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = false;
+	}
 
-	*pgid = ret;
-	*pgaddr = idev->phy_cmb_pages + ret * PAGE_SIZE;
+	/* clear the requested CMB region, 1 PAGE_SIZE ioremap at a time */
+	nonexpdb_pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+	for (i = 0; i < (1 << order); i++) {
+		nonexpdb_pgptr =
+			ioremap_wc(nonexpdb_pgaddr + i * PAGE_SIZE, PAGE_SIZE);
+		if (!nonexpdb_pgptr) {
+			ionic_put_cmb(lif, *pgid, order);
+			return -ENOMEM;
+		}
+		memset_io(nonexpdb_pgptr, 0, PAGE_SIZE);
+		iounmap(nonexpdb_pgptr);
+	}
 
 	return 0;
 }
+EXPORT_SYMBOL_NS(ionic_get_cmb, "NET_IONIC");
 
 void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 {
@@ -729,6 +968,7 @@ void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 	bitmap_release_region(idev->cmb_inuse, pgid, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 }
+EXPORT_SYMBOL_NS(ionic_put_cmb, "NET_IONIC");
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index 68cf4da3c6b3..35566f97eaea 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -35,6 +35,11 @@
 #define IONIC_RX_MIN_DOORBELL_DEADLINE	(HZ / 100)	/* 10ms */
 #define IONIC_RX_MAX_DOORBELL_DEADLINE	(HZ * 4)	/* 4s */
 
+#define IONIC_EXPDB_64B_WQE_LG2		6
+#define IONIC_EXPDB_128B_WQE_LG2	7
+#define IONIC_EXPDB_256B_WQE_LG2	8
+#define IONIC_EXPDB_512B_WQE_LG2	9
+
 struct ionic_dev_bar {
 	void __iomem *vaddr;
 	phys_addr_t bus_addr;
@@ -171,6 +176,11 @@ struct ionic_dev {
 	dma_addr_t phy_cmb_pages;
 	u32 cmb_npages;
 
+	dma_addr_t phy_cmb_expdb64_pages;
+	dma_addr_t phy_cmb_expdb128_pages;
+	dma_addr_t phy_cmb_expdb256_pages;
+	dma_addr_t phy_cmb_expdb512_pages;
+
 	u32 port_info_sz;
 	struct ionic_port_info *port_info;
 	dma_addr_t port_info_pa;
@@ -351,8 +361,8 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 
 int ionic_db_page_num(struct ionic_lif *lif, int pid);
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order);
-void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev);
+void ionic_map_cmb(struct ionic *ionic);
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index 02cda3536dcb..2fe96b0cb0ae 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -56,6 +56,9 @@ enum ionic_cmd_opcode {
 	IONIC_CMD_VF_SETATTR			= 61,
 	IONIC_CMD_VF_CTRL			= 62,
 
+	/* CMB command */
+	IONIC_CMD_DISCOVER_CMB			= 80,
+
 	/* QoS commands */
 	IONIC_CMD_QOS_CLASS_IDENTIFY		= 240,
 	IONIC_CMD_QOS_CLASS_INIT		= 241,
@@ -269,9 +272,11 @@ union ionic_drv_identity {
 /**
  * enum ionic_dev_capability - Device capabilities
  * @IONIC_DEV_CAP_VF_CTRL:  Device supports VF ctrl operations
+ * @IONIC_DEV_CAP_DISC_CMB: Device supports CMB discovery operations
  */
 enum ionic_dev_capability {
 	IONIC_DEV_CAP_VF_CTRL  = BIT(0),
+	IONIC_DEV_CAP_DISC_CMB = BIT(1),
 };
 
 /**
@@ -395,6 +400,7 @@ enum ionic_logical_qtype {
  * @IONIC_Q_F_4X_DESC:     Quadruple main descriptor size
  * @IONIC_Q_F_4X_CQ_DESC:  Quadruple cq descriptor size
  * @IONIC_Q_F_4X_SG_DESC:  Quadruple sg descriptor size
+ * @IONIC_QIDENT_F_EXPDB:  Queue supports express doorbell
  */
 enum ionic_q_feature {
 	IONIC_QIDENT_F_CQ	= BIT_ULL(0),
@@ -407,6 +413,7 @@ enum ionic_q_feature {
 	IONIC_Q_F_4X_DESC	= BIT_ULL(7),
 	IONIC_Q_F_4X_CQ_DESC	= BIT_ULL(8),
 	IONIC_Q_F_4X_SG_DESC	= BIT_ULL(9),
+	IONIC_QIDENT_F_EXPDB	= BIT_ULL(10),
 };
 
 /**
@@ -2213,6 +2220,80 @@ struct ionic_vf_ctrl_comp {
 	u8	rsvd[15];
 };
 
+/**
+ * struct ionic_discover_cmb_cmd - CMB discovery command
+ * @opcode: Opcode for the command
+ * @rsvd:   Reserved bytes
+ */
+struct ionic_discover_cmb_cmd {
+	u8	opcode;
+	u8	rsvd[63];
+};
+
+/**
+ * struct ionic_discover_cmb_comp - CMB discover command completion.
+ * @status: Status of the command (enum ionic_status_code)
+ * @rsvd:   Reserved bytes
+ */
+struct ionic_discover_cmb_comp {
+	u8	status;
+	u8	rsvd[15];
+};
+
+#define IONIC_MAX_CMB_REGIONS	16
+#define IONIC_CMB_SHIFT_64K	16
+
+enum ionic_cmb_type {
+	IONIC_CMB_TYPE_DEVMEM	= 0,
+	IONIC_CMB_TYPE_EXPDB64	= 1,
+	IONIC_CMB_TYPE_EXPDB128	= 2,
+	IONIC_CMB_TYPE_EXPDB256	= 3,
+	IONIC_CMB_TYPE_EXPDB512	= 4,
+};
+
+/**
+ * union ionic_cmb_region - Configuration for CMB region
+ * @bar_num:  CMB mapping number from FW
+ * @cmb_type: Type of CMB this region describes (enum ionic_cmb_type)
+ * @rsvd:     Reserved
+ * @offset:   Offset within BAR in 64KB pages
+ * @length:   Length of the CMB region
+ * @words:    32-bit words for direct access to the entire region
+ */
+union ionic_cmb_region {
+	struct {
+		u8	bar_num;
+		u8	cmb_type;
+		u8	rsvd[6];
+		__le32	offset;
+		__le32	length;
+	} __packed;
+	__le32	words[4];
+};
+
+/**
+ * union ionic_discover_cmb_identity - CMB layout identity structure
+ * @num_regions: Number of CMB regions, up to 16
+ * @flags:       Feature and capability bits (0 for express
+ *               doorbell, 1 for 4K alignment indicator,
+ *               31-24 for version information)
+ * @region:      CMB mappings region, entry 0 for regular
+ *               mapping, entries 1-7 for WQE sizes 64,
+ *               128, 256, 512, 1024, 2048 and 4096 bytes
+ * @words:       Full union buffer size
+ */
+union ionic_discover_cmb_identity {
+	struct {
+		__le32 num_regions;
+#define IONIC_CMB_FLAG_EXPDB	BIT(0)
+#define IONIC_CMB_FLAG_4KALIGN	BIT(1)
+#define IONIC_CMB_FLAG_VERSION	0xff000000
+		__le32 flags;
+		union ionic_cmb_region region[IONIC_MAX_CMB_REGIONS];
+	};
+	__le32 words[478];
+};
+
 /**
  * struct ionic_qos_identify_cmd - QoS identify command
  * @opcode: opcode
@@ -3060,6 +3141,8 @@ union ionic_dev_cmd {
 	struct ionic_vf_getattr_cmd vf_getattr;
 	struct ionic_vf_ctrl_cmd vf_ctrl;
 
+	struct ionic_discover_cmb_cmd discover_cmb;
+
 	struct ionic_lif_identify_cmd lif_identify;
 	struct ionic_lif_init_cmd lif_init;
 	struct ionic_lif_reset_cmd lif_reset;
@@ -3099,6 +3182,8 @@ union ionic_dev_cmd_comp {
 	struct ionic_vf_getattr_comp vf_getattr;
 	struct ionic_vf_ctrl_comp vf_ctrl;
 
+	struct ionic_discover_cmb_comp discover_cmb;
+
 	struct ionic_lif_identify_comp lif_identify;
 	struct ionic_lif_init_comp lif_init;
 	ionic_lif_reset_comp lif_reset;
@@ -3240,6 +3325,9 @@ union ionic_adminq_comp {
 #define IONIC_BAR0_DEV_CMD_DATA_REGS_OFFSET	0x0c00
 #define IONIC_BAR0_INTR_STATUS_OFFSET		0x1000
 #define IONIC_BAR0_INTR_CTRL_OFFSET		0x2000
+
+/* BAR2 */
+#define IONIC_BAR2_CMB_ENTRY_SIZE		0x800000
 #define IONIC_DEV_CMD_DONE			0x00000001
 
 #define IONIC_ASIC_TYPE_NONE			0
@@ -3293,6 +3381,7 @@ struct ionic_identity {
 	union ionic_port_identity port;
 	union ionic_qos_identity qos;
 	union ionic_q_identity txq;
+	union ionic_discover_cmb_identity cmb_layout;
 };
 
 #endif /* _IONIC_IF_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index f89b458bd20a..1bd2202f263a 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -673,7 +673,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
 		new->cmb_order = order_base_2(new->cmb_q_size / PAGE_SIZE);
 
 		err = ionic_get_cmb(lif, &new->cmb_pgid, &new->cmb_q_base_pa,
-				    new->cmb_order);
+				    new->cmb_order, 0, NULL);
 		if (err) {
 			netdev_err(lif->netdev,
 				   "Cannot allocate queue order %d from cmb: err %d\n",
-- 
2.34.1
From: Abhijit Gangurde
CC: Abhijit Gangurde
Subject: [PATCH v2 07/14] RDMA: Add IONIC to rdma_driver_id definition
Date: Thu, 8 May 2025 10:29:50 +0530
Message-ID: <20250508045957.2823318-8-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Define RDMA_DRIVER_IONIC in enum rdma_driver_id.
Signed-off-by: Abhijit Gangurde
---
 include/uapi/rdma/ib_user_ioctl_verbs.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index fe15bc7e9f70..89e6a3f13191 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -255,6 +255,7 @@ enum rdma_driver_id {
 	RDMA_DRIVER_SIW,
 	RDMA_DRIVER_ERDMA,
 	RDMA_DRIVER_MANA,
+	RDMA_DRIVER_IONIC,
 };
 
 enum ib_uverbs_gid_type {
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
From: Abhijit Gangurde
CC: Abhijit Gangurde, Andrew Boyer
Subject: [PATCH v2 08/14] RDMA/ionic: Register auxiliary module for ionic ethernet adapter
Date: Thu, 8 May 2025 10:29:51 +0530
Message-ID: <20250508045957.2823318-9-abhijit.gangurde@amd.com>
In-Reply-To:
 <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
Register auxiliary module to create ibdevice for ionic ethernet adapter.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v1->v2
- Removed netdev references from ionic RDMA driver
- Moved to ionic_lif* instead of void* to convey information between
  aux devices and drivers.

 drivers/infiniband/hw/ionic/ionic_ibdev.c   | 135 ++++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.h   |  21 +++
 drivers/infiniband/hw/ionic/ionic_lif_cfg.c | 121 ++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_lif_cfg.h |  65 ++++++++++
 4 files changed, 342 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.h

diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
new file mode 100644
index 000000000000..ca047a789378
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+#include
+
+#include "ionic_ibdev.h"
+
+#define DRIVER_DESCRIPTION "AMD Pensando RoCE HCA driver"
+#define DEVICE_DESCRIPTION "AMD Pensando RoCE HCA"
+
+MODULE_AUTHOR("Allen Hubbe ");
+MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS("NET_IONIC");
+
+static const struct auxiliary_device_id ionic_aux_id_table[] = {
+	{ .name = "ionic.rdma", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(auxiliary, ionic_aux_id_table);
+
+static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
+{
+	ib_unregister_device(&dev->ibdev);
+	ib_dealloc_device(&dev->ibdev);
+}
+
+static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
+{
+	struct ib_device *ibdev;
+	struct ionic_ibdev *dev;
+	int rc;
+
+	rc = ionic_version_check(&ionic_adev->adev.dev, ionic_adev->lif);
+	if (rc)
+		goto err_dev;
+
+	dev = ib_alloc_device(ionic_ibdev, ibdev);
+	if (!dev) {
+		rc = -ENOMEM;
+		goto err_dev;
+	}
+
+	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
+
+	ibdev = &dev->ibdev;
+	ibdev->dev.parent = dev->lif_cfg.hwdev;
+
+	strscpy(ibdev->name, "ionic_%d", IB_DEVICE_NAME_MAX);
+	strscpy(ibdev->node_desc, DEVICE_DESCRIPTION, IB_DEVICE_NODE_DESC_MAX);
+
+	ibdev->node_type = RDMA_NODE_IB_CA;
+	ibdev->phys_port_cnt = 1;
+
+	/* the first two eq are reserved for async events */
+	ibdev->num_comp_vectors = dev->lif_cfg.eq_count - 2;
+
+	addrconf_ifid_eui48((u8 *)&ibdev->node_guid,
+			    ionic_lif_netdev(ionic_adev->lif));
+
+	rc = ib_device_set_netdev(ibdev, ionic_lif_netdev(ionic_adev->lif), 1);
+	if (rc)
+		goto err_admin;
+
+	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
+	if (rc)
+		goto err_register;
+
+	return dev;
+
+err_register:
+err_admin:
+	ib_dealloc_device(&dev->ibdev);
+err_dev:
+	return ERR_PTR(rc);
+}
+
+static int ionic_aux_probe(struct auxiliary_device *adev,
+			   const struct auxiliary_device_id *id)
+{
+	struct ionic_aux_dev *ionic_adev;
+	struct ionic_ibdev *dev;
+
+	ionic_adev = container_of(adev, struct ionic_aux_dev, adev);
+	dev = ionic_create_ibdev(ionic_adev);
+	if (IS_ERR(dev))
+		return dev_err_probe(&adev->dev, PTR_ERR(dev),
+				     "Failed to register ibdev\n");
+
+	auxiliary_set_drvdata(adev, dev);
+	ibdev_dbg(&dev->ibdev, "registered\n");
+
+	return 0;
+}
+
+static void ionic_aux_remove(struct auxiliary_device *adev)
+{
+	struct ionic_ibdev *dev = auxiliary_get_drvdata(adev);
+
+	dev_dbg(&adev->dev, "unregister ibdev\n");
+	ionic_destroy_ibdev(dev);
+	dev_dbg(&adev->dev, "unregistered\n");
+}
+
+static struct auxiliary_driver ionic_aux_r_driver = {
+	.name = "rdma",
+	.probe = ionic_aux_probe,
+	.remove = ionic_aux_remove,
+	.id_table = ionic_aux_id_table,
+};
+
+static int __init ionic_mod_init(void)
+{
+	int rc;
+
+	rc = auxiliary_driver_register(&ionic_aux_r_driver);
+	if (rc)
+		goto err_aux;
+
+	return 0;
+
+err_aux:
+	return rc;
+}
+
+static void __exit ionic_mod_exit(void)
+{
+	auxiliary_driver_unregister(&ionic_aux_r_driver);
+}
+
+module_init(ionic_mod_init);
+module_exit(ionic_mod_exit);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
new file mode 100644
index 000000000000..e13adff390d7
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_IBDEV_H_
+#define _IONIC_IBDEV_H_
+
+#include
+#include
+
+#include "ionic_lif_cfg.h"
+
+#define IONIC_MIN_RDMA_VERSION 0
+#define IONIC_MAX_RDMA_VERSION 2
+
+struct ionic_ibdev {
+	struct ib_device ibdev;
+
+	struct ionic_lif_cfg lif_cfg;
+};
+
+#endif /* _IONIC_IBDEV_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
new file mode 100644
index 000000000000..a02eb2f5bd45
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+
+#include
+#include
+
+#include "ionic_lif_cfg.h"
+
+#define IONIC_MIN_RDMA_VERSION 0
+#define IONIC_MAX_RDMA_VERSION 2
+
+static u8 ionic_get_expdb(struct ionic_lif *lif)
+{
+	u8 expdb_support = 0;
+
+	if (lif->ionic->idev.phy_cmb_expdb64_pages)
+		expdb_support |= IONIC_EXPDB_64B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb128_pages)
+		expdb_support |= IONIC_EXPDB_128B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb256_pages)
+		expdb_support |= IONIC_EXPDB_256B_WQE;
+	if (lif->ionic->idev.phy_cmb_expdb512_pages)
+		expdb_support |= IONIC_EXPDB_512B_WQE;
+
+	return expdb_support;
+}
+
+void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg)
+{
+	union ionic_lif_identity *ident = &lif->ionic->ident.lif;
+
+	cfg->lif = lif;
+	cfg->hwdev = &lif->ionic->pdev->dev;
+	cfg->lif_index = lif->index;
+	cfg->lif_hw_index = lif->hw_index;
+
+	cfg->dbid = lif->kern_pid;
+	cfg->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+	cfg->dbpage = lif->kern_dbpage;
+	cfg->intr_ctrl = lif->ionic->idev.intr_ctrl;
+
+	cfg->db_phys = lif->ionic->bars[IONIC_PCI_BAR_DBELL].bus_addr;
+
+	if (IONIC_VERSION(ident->rdma.version, ident->rdma.minor_version) >=
+	    IONIC_VERSION(2, 1))
+		cfg->page_size_supported =
+			cpu_to_le64(ident->rdma.page_size_cap);
+	else
+		cfg->page_size_supported = IONIC_PAGE_SIZE_SUPPORTED;
+
+	cfg->rdma_version = ident->rdma.version;
+	cfg->qp_opcodes = ident->rdma.qp_opcodes;
+	cfg->admin_opcodes = ident->rdma.admin_opcodes;
+
+	cfg->stats_type = cpu_to_le16(ident->rdma.stats_type);
+	cfg->npts_per_lif = le32_to_cpu(ident->rdma.npts_per_lif);
+	cfg->nmrs_per_lif = le32_to_cpu(ident->rdma.nmrs_per_lif);
+	cfg->nahs_per_lif = le32_to_cpu(ident->rdma.nahs_per_lif);
+
+	cfg->aq_base = le32_to_cpu(ident->rdma.aq_qtype.qid_base);
+	cfg->cq_base = le32_to_cpu(ident->rdma.cq_qtype.qid_base);
+	cfg->eq_base = le32_to_cpu(ident->rdma.eq_qtype.qid_base);
+
+	/*
+	 * ionic_create_rdma_admin() may reduce aq_count or eq_count if
+	 * it is unable to allocate all that were requested.
+	 * aq_count is tunable; see ionic_aq_count
+	 * eq_count is tunable; see ionic_eq_count
+	 */
+	cfg->aq_count = le32_to_cpu(ident->rdma.aq_qtype.qid_count);
+	cfg->eq_count = le32_to_cpu(ident->rdma.eq_qtype.qid_count);
+	cfg->cq_count = le32_to_cpu(ident->rdma.cq_qtype.qid_count);
+	cfg->qp_count = le32_to_cpu(ident->rdma.sq_qtype.qid_count);
+	cfg->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+
+	cfg->aq_qtype = ident->rdma.aq_qtype.qtype;
+	cfg->sq_qtype = ident->rdma.sq_qtype.qtype;
+	cfg->rq_qtype = ident->rdma.rq_qtype.qtype;
+	cfg->cq_qtype = ident->rdma.cq_qtype.qtype;
+	cfg->eq_qtype = ident->rdma.eq_qtype.qtype;
+	cfg->udma_qgrp_shift = ident->rdma.udma_shift;
+	cfg->udma_count = 2;
+
+	cfg->max_stride = ident->rdma.max_stride;
+	cfg->expdb_mask = ionic_get_expdb(lif);
+
+	cfg->sq_expdb =
+		!!(lif->qtype_info[IONIC_QTYPE_TXQ].features & IONIC_QIDENT_F_EXPDB);
+	cfg->rq_expdb =
+		!!(lif->qtype_info[IONIC_QTYPE_RXQ].features & IONIC_QIDENT_F_EXPDB);
+}
+
+struct net_device *ionic_lif_netdev(struct ionic_lif *lif)
+{
+	return lif->netdev;
+}
+
+int ionic_version_check(const struct device *dev, struct ionic_lif *lif)
+{
+	union ionic_lif_identity *ident = &lif->ionic->ident.lif;
+	int rc;
+
+	if (ident->rdma.version < IONIC_MIN_RDMA_VERSION ||
+	    ident->rdma.version > IONIC_MAX_RDMA_VERSION) {
+		rc = -EINVAL;
+		dev_err_probe(dev, rc,
+			      "ionic_rdma: incompatible version, fw ver %u\n",
+			      ident->rdma.version);
+		dev_err_probe(dev, rc,
+			      "ionic_rdma: Driver Min Version %u\n",
+			      IONIC_MIN_RDMA_VERSION);
+		dev_err_probe(dev, rc,
+			      "ionic_rdma: Driver Max Version %u\n",
+			      IONIC_MAX_RDMA_VERSION);
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
new file mode 100644
index 000000000000..b095637c54cf
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_LIF_CFG_H_
+
+#define IONIC_VERSION(a, b) (((a) << 16) + ((b) << 8))
+#define IONIC_PAGE_SIZE_SUPPORTED 0x40201000 /* 4kb, 2Mb, 1Gb */
+
+#define IONIC_EXPDB_64B_WQE	BIT(0)
+#define IONIC_EXPDB_128B_WQE	BIT(1)
+#define IONIC_EXPDB_256B_WQE	BIT(2)
+#define IONIC_EXPDB_512B_WQE	BIT(3)
+
+struct ionic_lif_cfg {
+	struct device *hwdev;
+	struct ionic_lif *lif;
+
+	int lif_index;
+	int lif_hw_index;
+
+	u32 dbid;
+	int dbid_count;
+	u64 __iomem *dbpage;
+	struct ionic_intr __iomem *intr_ctrl;
+	phys_addr_t db_phys;
+
+	u64 page_size_supported;
+	u32 npts_per_lif;
+	u32 nmrs_per_lif;
+	u32 nahs_per_lif;
+
+	u32 aq_base;
+	u32 cq_base;
+	u32 eq_base;
+
+	int aq_count;
+	int eq_count;
+	int cq_count;
+	int qp_count;
+
+	u16 stats_type;
+	u8 aq_qtype;
+	u8 sq_qtype;
+	u8 rq_qtype;
+	u8 cq_qtype;
+	u8 eq_qtype;
+
+	u8 udma_count;
+	u8 udma_qgrp_shift;
+
+	u8 rdma_version;
+	u8 qp_opcodes;
+	u8 admin_opcodes;
+
+	u8 max_stride;
+	bool sq_expdb;
+	bool rq_expdb;
+	u8 expdb_mask;
+};
+
+int ionic_version_check(const struct device *dev, struct ionic_lif *lif);
+void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
+struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+
+#endif /* _IONIC_LIF_CFG_H_ */
-- 
2.34.1

From nobody Mon Dec 15 21:59:00 2025
BL6PEPF0001AB78.namprd02.prod.outlook.com (2603:10b6:208:256:cafe::78) by BL1PR13CA0011.outlook.office365.com (2603:10b6:208:256::16) with Microsoft SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.8722.19 via Frontend Transport; Thu, 8 May 2025 05:01:15 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C Received: from SATLEXMB03.amd.com (165.204.84.17) by BL6PEPF0001AB78.mail.protection.outlook.com (10.167.242.171) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.8722.18 via Frontend Transport; Thu, 8 May 2025 05:01:15 +0000 Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 8 May 2025 00:01:14 -0500 Received: from xhdipdslab61.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend Transport; Thu, 8 May 2025 00:01:10 -0500 From: Abhijit Gangurde To: , , , , , , , , , CC: , , , , , , Abhijit Gangurde , Andrew Boyer Subject: [PATCH v2 09/14] RDMA/ionic: Create device queues to support admin operations Date: Thu, 8 May 2025 10:29:52 +0530 Message-ID: <20250508045957.2823318-10-abhijit.gangurde@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com> References: <20250508045957.2823318-1-abhijit.gangurde@amd.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: None 
(SATLEXMB03.amd.com: abhijit.gangurde@amd.com does not designate permitted sender hosts) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BL6PEPF0001AB78:EE_|SA1PR12MB7412:EE_ X-MS-Office365-Filtering-Correlation-Id: 6cbfbd55-d45f-4122-01df-08dd8ded5c6f X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|82310400026|376014|1800799024|7416014|36860700013|921020; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?a0l3Q+HA719NGWHgucaR6c7jSuT2BuvqaVyZhAGIXFGvl25LgPPLYiGtKaM1?= =?us-ascii?Q?HUJ/RmwdERt0LUjrGOJ6I/MYvOINi8ZzbZ65KXB5k6C/TkPU/1MASZLkK8xl?= =?us-ascii?Q?4xiwglwkEWukFKd2J9NSG4hdnUDukUxiykbft7HMv6axJ+z4T+xegg5J4Mif?= =?us-ascii?Q?ZJFQa8ncSALM8EKMJ7nOjmpgL0GtsedvNkWhSo4SNzMcwnaZmZ0iNUPnqEUp?= =?us-ascii?Q?+yK4sL1k0IB22KFNCareFqUtEEBeIOrvl74ARIapYATety6Fm+pFa+d6sDd/?= =?us-ascii?Q?3onTyt40IybS5A810PTyjBZEmFUym5KlctHhEprPXfn2rFzevEJvZM61RfIe?= =?us-ascii?Q?LPvCVROa81ODytEbgKN/z2W4t0HY5ajR1D5TGfU5dd9+TWGN9ksFlZvxj3uI?= =?us-ascii?Q?d7P0Ywwp+ouqu2s9oYwQM+K0C2b01nr1TFNX143wETtpWZa+yWheUb2cEKkG?= =?us-ascii?Q?eBzY4uoFCSTM0/Zn5Q/sMQOpFXObx8o/aopclIZCHcfAu8PxhIbepTbDy8wO?= =?us-ascii?Q?tIOoj2HcXW440fqY8N7qdNII8UEnzYa6iNnAw9u5qkVdHiMwulTbkNVM/sm/?= =?us-ascii?Q?s6QY3YM/gzjiHoxtAiRNdOR0fRAjt2S0H5qcNyvNWRS1DLEMO9W05i+lcUPd?= =?us-ascii?Q?I22tbOOJG+reYtuEPG7DmlaN8QLwDFL9UnDpZdPIMsTvzakbo5w/PecEOKk3?= =?us-ascii?Q?BPjWQ449rHmE6HXBMkB6OV2HnP2iRN/psSiO2n/KaBlEcLFOrnfDAPInQwh6?= =?us-ascii?Q?kHWhNtAuQlvSQRD64klHmTwS5agfIzTLndXhkX5SHrzsZd+24ScrSdA/JBun?= =?us-ascii?Q?sQde46TqQ/W2ZKBY/RQzzZjmBcaCGGztWygiEnAWQPYBRBhptg6SRafiWgIT?= =?us-ascii?Q?RT5MS3wuZ8P/jRRCMHE5JDMudGNO+Ch/sZjU13E+V7FRIH2zEXxtU+ePzarv?= =?us-ascii?Q?qBsHSm6U1RWvT7NAbGKWPCUyrBJnBgkzIM4SgJo2BqE7m6W9rCdKghq3Hyeg?= =?us-ascii?Q?cUlpTSWP7nrEQ7w3C0U7nLjClxVlNOkEI/ILJRc9YgEL6SPQo+iOC60TY7Es?= =?us-ascii?Q?TsTBK9J1chQsVF1KxBMr9ldIMddZwB8bBZWl/hqoLqjwZlCWtA5iRHO0M+fE?= 
=?us-ascii?Q?klNyX35kYYvHLhDFGIAb0dSgHUPZDex+FGqtfYe9s4TejSW/TeWVRvYBoDvl?= =?us-ascii?Q?JC1bM/pxQPNIGr+I90HGrnJb2Mp9jeo4MqnpHXzSUaCKxt5BZyqHpuxe6DLV?= =?us-ascii?Q?iw1J69uaSa3eR/RiEEqeMdpPCTHXFuAfo7Slpb8Eq9WQD0B3UFM/0u8ZEpmw?= =?us-ascii?Q?FNw+EAKq8NaIFzvtboq2pXRaFoOagj7QvNXm5YuunstEFwr9+m2trn+omDQ5?= =?us-ascii?Q?Kjg5k2ql/s5g1frJ5XuNKhAlXnVvvgMjHwV7jcVa7zWLqkFeSL5jFxSuf1Qk?= =?us-ascii?Q?swsUbGnDnnyINWVgdGcERM3PAEkMI5JZtfWH4wdrQFsm3i7cnTtfAdae4H+n?= =?us-ascii?Q?PdGwZYIQClbcVKu5K2UPz2jHH9/p9RMC59PMZRTNZPPI3Y7PHacC/m5ZgQ?= =?us-ascii?Q?=3D=3D?= X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(82310400026)(376014)(1800799024)(7416014)(36860700013)(921020);DIR:OUT;SFP:1101; X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2025 05:01:15.3410 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 6cbfbd55-d45f-4122-01df-08dd8ded5c6f X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com] X-MS-Exchange-CrossTenant-AuthSource: BL6PEPF0001AB78.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7412 Content-Type: text/plain; charset="utf-8" Setup RDMA admin queues using device command exposed over auxiliary device and manage these queues using bit-map tracking structures. 
Co-developed-by: Andrew Boyer Signed-off-by: Andrew Boyer Co-developed-by: Allen Hubbe Signed-off-by: Allen Hubbe Signed-off-by: Abhijit Gangurde --- drivers/infiniband/hw/ionic/ionic_admin.c | 1160 +++++++++++++++++ .../infiniband/hw/ionic/ionic_controlpath.c | 191 +++ drivers/infiniband/hw/ionic/ionic_fw.h | 164 +++ drivers/infiniband/hw/ionic/ionic_ibdev.c | 118 ++ drivers/infiniband/hw/ionic/ionic_ibdev.h | 245 ++++ drivers/infiniband/hw/ionic/ionic_pgtbl.c | 113 ++ drivers/infiniband/hw/ionic/ionic_queue.c | 52 + drivers/infiniband/hw/ionic/ionic_queue.h | 234 ++++ drivers/infiniband/hw/ionic/ionic_res.c | 42 + drivers/infiniband/hw/ionic/ionic_res.h | 182 +++ 10 files changed, 2501 insertions(+) create mode 100644 drivers/infiniband/hw/ionic/ionic_admin.c create mode 100644 drivers/infiniband/hw/ionic/ionic_controlpath.c create mode 100644 drivers/infiniband/hw/ionic/ionic_fw.h create mode 100644 drivers/infiniband/hw/ionic/ionic_pgtbl.c create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.c create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.h create mode 100644 drivers/infiniband/hw/ionic/ionic_res.c create mode 100644 drivers/infiniband/hw/ionic/ionic_res.h diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband= /hw/ionic/ionic_admin.c new file mode 100644 index 000000000000..5a414c0600a1 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_admin.c @@ -0,0 +1,1160 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
*/ + +#include +#include +#include + +#include "ionic_fw.h" +#include "ionic_ibdev.h" + +#define IONIC_EQ_COUNT_MIN 4 +#define IONIC_AQ_COUNT_MIN 1 + +/* not a valid queue position or negative error status */ +#define IONIC_ADMIN_POSTED 0x10000 + +/* cpu can be held with irq disabled for COUNT * MS (for create/destroy_a= h) */ +#define IONIC_ADMIN_BUSY_RETRY_COUNT 2000 +#define IONIC_ADMIN_BUSY_RETRY_MS 1 + +/* admin queue will be considered failed if a command takes longer */ +#define IONIC_ADMIN_TIMEOUT (HZ * 2) +#define IONIC_ADMIN_WARN (HZ / 8) + +/* will poll for admin cq to tolerate and report from missed event */ +#define IONIC_ADMIN_DELAY (HZ / 8) + +/* work queue for polling the event queue and admin cq */ +struct workqueue_struct *ionic_evt_workq; + +static void ionic_admin_timedout(struct ionic_aq *aq) +{ + struct ionic_cq *cq =3D &aq->vcq->cq[0]; + struct ionic_ibdev *dev =3D aq->dev; + unsigned long irqflags; + u16 pos; + + spin_lock_irqsave(&aq->lock, irqflags); + if (ionic_queue_empty(&aq->q)) + goto out; + + /* Reset ALL adminq if any one times out */ + queue_work(ionic_evt_workq, &dev->reset_work); + + ibdev_err(&dev->ibdev, "admin command timed out, aq %d\n", aq->aqid); + + ibdev_warn(&dev->ibdev, "admin timeout was set for %ums\n", + (u32)jiffies_to_msecs(IONIC_ADMIN_TIMEOUT)); + ibdev_warn(&dev->ibdev, "admin inactivity for %ums\n", + (u32)jiffies_to_msecs(jiffies - aq->stamp)); + + ibdev_warn(&dev->ibdev, "admin commands outstanding %u\n", + ionic_queue_length(&aq->q)); + ibdev_warn(&dev->ibdev, "%s more commands pending\n", + list_empty(&aq->wr_post) ? 
"no" : "some"); + + pos =3D cq->q.prod; + + ibdev_warn(&dev->ibdev, "admin cq pos %u (next to complete)\n", pos); + print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1, + ionic_queue_at(&cq->q, pos), + BIT(cq->q.stride_log2), true); + + pos =3D (pos - 1) & cq->q.mask; + + ibdev_warn(&dev->ibdev, "admin cq pos %u (last completed)\n", pos); + print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1, + ionic_queue_at(&cq->q, pos), + BIT(cq->q.stride_log2), true); + + pos =3D aq->q.cons; + + ibdev_warn(&dev->ibdev, "admin pos %u (next to complete)\n", pos); + print_hex_dump(KERN_WARNING, "cmd ", DUMP_PREFIX_OFFSET, 16, 1, + ionic_queue_at(&aq->q, pos), + BIT(aq->q.stride_log2), true); + + pos =3D (aq->q.prod - 1) & aq->q.mask; + if (pos =3D=3D aq->q.cons) + goto out; + + ibdev_warn(&dev->ibdev, "admin pos %u (last posted)\n", pos); + print_hex_dump(KERN_WARNING, "cmd ", DUMP_PREFIX_OFFSET, 16, 1, + ionic_queue_at(&aq->q, pos), + BIT(aq->q.stride_log2), true); + +out: + spin_unlock_irqrestore(&aq->lock, irqflags); +} + +static void ionic_admin_reset_dwork(struct ionic_ibdev *dev) +{ + if (dev->admin_state < IONIC_ADMIN_KILLED) + queue_delayed_work(ionic_evt_workq, &dev->admin_dwork, + IONIC_ADMIN_DELAY); +} + +static void ionic_admin_reset_wdog(struct ionic_aq *aq) +{ + aq->stamp =3D jiffies; + ionic_admin_reset_dwork(aq->dev); +} + +static bool ionic_admin_next_cqe(struct ionic_ibdev *dev, struct ionic_cq = *cq, + struct ionic_v1_cqe **cqe) +{ + struct ionic_v1_cqe *qcqe =3D ionic_queue_at_prod(&cq->q); + + if (unlikely(cq->color !=3D ionic_v1_cqe_color(qcqe))) + return false; + + /* Prevent out-of-order reads of the CQE */ + rmb(); + + ibdev_dbg(&dev->ibdev, "poll admin cq %u prod %u\n", + cq->cqid, cq->q.prod); + print_hex_dump_debug("cqe ", DUMP_PREFIX_OFFSET, 16, 1, + qcqe, BIT(cq->q.stride_log2), true); + *cqe =3D qcqe; + + return true; +} + +static void ionic_admin_poll_locked(struct ionic_aq *aq) +{ + struct ionic_cq *cq =3D &aq->vcq->cq[0]; + 
struct ionic_admin_wr *wr, *wr_next; + struct ionic_ibdev *dev =3D aq->dev; + u32 wr_strides, avlbl_strides; + struct ionic_v1_cqe *cqe; + u32 qtf, qid; + u16 old_prod; + u8 type; + + if (dev->admin_state >=3D IONIC_ADMIN_KILLED) { + list_for_each_entry_safe(wr, wr_next, &aq->wr_prod, aq_ent) { + INIT_LIST_HEAD(&wr->aq_ent); + aq->q_wr[wr->status].wr =3D NULL; + wr->status =3D dev->admin_state; + complete_all(&wr->work); + } + INIT_LIST_HEAD(&aq->wr_prod); + + list_for_each_entry_safe(wr, wr_next, &aq->wr_post, aq_ent) { + INIT_LIST_HEAD(&wr->aq_ent); + wr->status =3D dev->admin_state; + complete_all(&wr->work); + } + INIT_LIST_HEAD(&aq->wr_post); + + return; + } + + old_prod =3D cq->q.prod; + + while (ionic_admin_next_cqe(dev, cq, &cqe)) { + qtf =3D ionic_v1_cqe_qtf(cqe); + qid =3D ionic_v1_cqe_qtf_qid(qtf); + type =3D ionic_v1_cqe_qtf_type(qtf); + + if (unlikely(type !=3D IONIC_V1_CQE_TYPE_ADMIN)) { + ibdev_warn_ratelimited(&dev->ibdev, + "bad cqe type %u\n", type); + goto cq_next; + } + + if (unlikely(qid !=3D aq->aqid)) { + ibdev_warn_ratelimited(&dev->ibdev, + "bad cqe qid %u\n", qid); + goto cq_next; + } + + if (unlikely(be16_to_cpu(cqe->admin.cmd_idx) !=3D aq->q.cons)) { + ibdev_warn_ratelimited(&dev->ibdev, + "bad idx %u cons %u qid %u\n", + be16_to_cpu(cqe->admin.cmd_idx), + aq->q.cons, qid); + goto cq_next; + } + + if (unlikely(ionic_queue_empty(&aq->q))) { + ibdev_warn_ratelimited(&dev->ibdev, + "bad cqe for empty adminq\n"); + goto cq_next; + } + + wr =3D aq->q_wr[aq->q.cons].wr; + if (wr) { + aq->q_wr[aq->q.cons].wr =3D NULL; + list_del_init(&wr->aq_ent); + + wr->cqe =3D *cqe; + wr->status =3D dev->admin_state; + complete_all(&wr->work); + } + + ionic_queue_consume_entries(&aq->q, + aq->q_wr[aq->q.cons].wqe_strides); + +cq_next: + ionic_queue_produce(&cq->q); + cq->color =3D ionic_color_wrap(cq->q.prod, cq->color); + } + + if (old_prod !=3D cq->q.prod) { + ionic_admin_reset_wdog(aq); + cq->q.cons =3D cq->q.prod; + ionic_dbell_ring(dev->lif_cfg.dbpage, 
dev->lif_cfg.cq_qtype, + ionic_queue_dbell_val(&cq->q)); + queue_work(ionic_evt_workq, &aq->work); + } else if (!aq->armed) { + aq->armed =3D true; + cq->arm_any_prod =3D ionic_queue_next(&cq->q, cq->arm_any_prod); + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype, + cq->q.dbell | IONIC_CQ_RING_ARM | + cq->arm_any_prod); + queue_work(ionic_evt_workq, &aq->work); + } + + if (dev->admin_state !=3D IONIC_ADMIN_ACTIVE) + return; + + old_prod =3D aq->q.prod; + + if (ionic_queue_empty(&aq->q) && !list_empty(&aq->wr_post)) + ionic_admin_reset_wdog(aq); + + if (list_empty(&aq->wr_post)) + return; + + do { + u8 *src; + int i, src_len; + size_t stride_len; + + wr =3D list_first_entry(&aq->wr_post, struct ionic_admin_wr, + aq_ent); + wr_strides =3D (wr->wqe.len + ADMIN_WQE_HDR_LEN + + (ADMIN_WQE_STRIDE - 1)) >> aq->q.stride_log2; + avlbl_strides =3D ionic_queue_length_remaining(&aq->q); + + if (wr_strides > avlbl_strides) + break; + + list_move(&wr->aq_ent, &aq->wr_prod); + wr->status =3D aq->q.prod; + aq->q_wr[aq->q.prod].wr =3D wr; + aq->q_wr[aq->q.prod].wqe_strides =3D wr_strides; + + src_len =3D wr->wqe.len; + src =3D (uint8_t *)&wr->wqe.cmd; + + /* First stride */ + memcpy(ionic_queue_at_prod(&aq->q), &wr->wqe, + ADMIN_WQE_HDR_LEN); + stride_len =3D ADMIN_WQE_STRIDE - ADMIN_WQE_HDR_LEN; + if (stride_len > src_len) + stride_len =3D src_len; + memcpy(ionic_queue_at_prod(&aq->q) + ADMIN_WQE_HDR_LEN, + src, stride_len); + ibdev_dbg(&dev->ibdev, "post admin prod %u (%u strides)\n", + aq->q.prod, wr_strides); + print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1, + ionic_queue_at_prod(&aq->q), + BIT(aq->q.stride_log2), true); + ionic_queue_produce(&aq->q); + + /* Remaining strides */ + for (i =3D stride_len; i < src_len; i +=3D stride_len) { + stride_len =3D ADMIN_WQE_STRIDE; + + if (i + stride_len > src_len) + stride_len =3D src_len - i; + + memcpy(ionic_queue_at_prod(&aq->q), src + i, + stride_len); + print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1, + 
ionic_queue_at_prod(&aq->q), + BIT(aq->q.stride_log2), true); + ionic_queue_produce(&aq->q); + } + } while (!list_empty(&aq->wr_post)); + + if (old_prod !=3D aq->q.prod) + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.aq_qtype, + ionic_queue_dbell_val(&aq->q)); +} + +static void ionic_admin_dwork(struct work_struct *ws) +{ + struct ionic_ibdev *dev =3D + container_of(ws, struct ionic_ibdev, admin_dwork.work); + struct ionic_aq *aq, *bad_aq =3D NULL; + bool do_reschedule =3D false; + unsigned long irqflags; + bool do_reset =3D false; + u16 pos; + int i; + + for (i =3D 0; i < dev->lif_cfg.aq_count; i++) { + aq =3D dev->aq_vec[i]; + + spin_lock_irqsave(&aq->lock, irqflags); + + if (ionic_queue_empty(&aq->q)) + goto next_aq; + + /* Reschedule if any queue has outstanding work */ + do_reschedule =3D true; + + if (time_is_after_eq_jiffies(aq->stamp + IONIC_ADMIN_WARN)) + /* Warning threshold not met, nothing to do */ + goto next_aq; + + /* See if polling now makes some progress */ + pos =3D aq->q.cons; + ionic_admin_poll_locked(aq); + if (pos !=3D aq->q.cons) { + ibdev_dbg(&dev->ibdev, + "missed event for acq %d\n", aq->cqid); + goto next_aq; + } + + if (time_is_after_eq_jiffies(aq->stamp + + IONIC_ADMIN_TIMEOUT)) { + /* Timeout threshold not met */ + ibdev_dbg(&dev->ibdev, "no progress after %ums\n", + (u32)jiffies_to_msecs(jiffies - aq->stamp)); + goto next_aq; + } + + /* Queue timed out */ + bad_aq =3D aq; + do_reset =3D true; +next_aq: + spin_unlock_irqrestore(&aq->lock, irqflags); + } + + if (do_reset) + /* Reset device on a timeout */ + ionic_admin_timedout(bad_aq); + else if (do_reschedule) + /* Try to poll again later */ + ionic_admin_reset_dwork(dev); +} + +static void ionic_admin_work(struct work_struct *ws) +{ + struct ionic_aq *aq =3D container_of(ws, struct ionic_aq, work); + unsigned long irqflags; + + spin_lock_irqsave(&aq->lock, irqflags); + ionic_admin_poll_locked(aq); + spin_unlock_irqrestore(&aq->lock, irqflags); +} + +static void 
ionic_admin_post_aq(struct ionic_aq *aq, struct ionic_admin_wr= *wr) +{ + unsigned long irqflags; + bool poll; + + wr->status =3D IONIC_ADMIN_POSTED; + wr->aq =3D aq; + + spin_lock_irqsave(&aq->lock, irqflags); + poll =3D list_empty(&aq->wr_post); + list_add(&wr->aq_ent, &aq->wr_post); + if (poll) + ionic_admin_poll_locked(aq); + spin_unlock_irqrestore(&aq->lock, irqflags); +} + +void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr) +{ + int aq_idx; + + aq_idx =3D raw_smp_processor_id() % dev->lif_cfg.aq_count; + ionic_admin_post_aq(dev->aq_vec[aq_idx], wr); +} + +static void ionic_admin_cancel(struct ionic_admin_wr *wr) +{ + struct ionic_aq *aq =3D wr->aq; + unsigned long irqflags; + + spin_lock_irqsave(&aq->lock, irqflags); + + if (!list_empty(&wr->aq_ent)) { + list_del(&wr->aq_ent); + if (wr->status !=3D IONIC_ADMIN_POSTED) + aq->q_wr[wr->status].wr =3D NULL; + } + + spin_unlock_irqrestore(&aq->lock, irqflags); +} + +static int ionic_admin_busy_wait(struct ionic_admin_wr *wr) +{ + struct ionic_aq *aq =3D wr->aq; + unsigned long irqflags; + int try_i; + + for (try_i =3D 0; try_i < IONIC_ADMIN_BUSY_RETRY_COUNT; ++try_i) { + if (completion_done(&wr->work)) + return 0; + + mdelay(IONIC_ADMIN_BUSY_RETRY_MS); + + spin_lock_irqsave(&aq->lock, irqflags); + ionic_admin_poll_locked(aq); + spin_unlock_irqrestore(&aq->lock, irqflags); + } + + /* + * we timed out. Initiate RDMA LIF reset and indicate + * error to caller. + */ + ionic_admin_timedout(aq); + return -ETIMEDOUT; +} + +int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr, + enum ionic_admin_flags flags) +{ + int rc, timo; + + if (flags & IONIC_ADMIN_F_BUSYWAIT) { + /* Spin */ + rc =3D ionic_admin_busy_wait(wr); + } else if (flags & IONIC_ADMIN_F_INTERRUPT) { + /* + * Interruptible sleep, 1s timeout + * This is used for commands which are safe for the caller + * to clean up without killing and resetting the adminq. 
+ */ + timo =3D wait_for_completion_interruptible_timeout(&wr->work, + HZ); + if (timo > 0) + rc =3D 0; + else if (timo =3D=3D 0) + rc =3D -ETIMEDOUT; + else + rc =3D timo; + } else { + /* + * Uninterruptible sleep + * This is used for commands which are NOT safe for the + * caller to clean up. Cleanup must be handled by the + * adminq kill and reset process so that host memory is + * not corrupted by the device. + */ + wait_for_completion(&wr->work); + rc =3D 0; + } + + if (rc) { + ibdev_warn(&dev->ibdev, "wait status %d\n", rc); + ionic_admin_cancel(wr); + } else if (wr->status =3D=3D IONIC_ADMIN_KILLED) { + ibdev_dbg(&dev->ibdev, "admin killed\n"); + + /* No error if admin already killed during teardown */ + rc =3D (flags & IONIC_ADMIN_F_TEARDOWN) ? 0 : -ENODEV; + } else if (ionic_v1_cqe_error(&wr->cqe)) { + ibdev_warn(&dev->ibdev, "opcode %u error %u\n", + wr->wqe.op, + be32_to_cpu(wr->cqe.status_length)); + rc =3D -EINVAL; + } + return rc; +} + +static int ionic_rdma_devcmd(struct ionic_ibdev *dev, + struct ionic_admin_ctx *admin) +{ + int rc; + + rc =3D ionic_adminq_post_wait(dev->lif_cfg.lif, admin); + if (rc) + return rc; + + return ionic_error_to_errno(admin->comp.comp.status); +} + +int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev) +{ + struct ionic_admin_ctx admin =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(admin.work), + .cmd.rdma_reset =3D { + .opcode =3D IONIC_CMD_RDMA_RESET_LIF, + .lif_index =3D cpu_to_le16(dev->lif_cfg.lif_index), + }, + }; + + return ionic_rdma_devcmd(dev, &admin); +} + +static int ionic_rdma_queue_devcmd(struct ionic_ibdev *dev, + struct ionic_queue *q, + u32 qid, u32 cid, u16 opcode) +{ + struct ionic_admin_ctx admin =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(admin.work), + .cmd.rdma_queue =3D { + .opcode =3D opcode, + .lif_index =3D cpu_to_le16(dev->lif_cfg.lif_index), + .qid_ver =3D cpu_to_le32(qid), + .cid =3D cpu_to_le32(cid), + .dbid =3D cpu_to_le16(dev->lif_cfg.dbid), + .depth_log2 =3D q->depth_log2, + 
.stride_log2 =3D q->stride_log2, + .dma_addr =3D cpu_to_le64(q->dma), + }, + }; + + return ionic_rdma_devcmd(dev, &admin); +} + +static void ionic_rdma_admincq_comp(struct ib_cq *ibcq, void *cq_context) +{ + struct ionic_aq *aq =3D cq_context; + unsigned long irqflags; + + spin_lock_irqsave(&aq->lock, irqflags); + aq->armed =3D false; + if (aq->dev->admin_state < IONIC_ADMIN_KILLED) + queue_work(ionic_evt_workq, &aq->work); + spin_unlock_irqrestore(&aq->lock, irqflags); +} + +static void ionic_rdma_admincq_event(struct ib_event *event, void *cq_cont= ext) +{ + struct ionic_aq *aq =3D cq_context; + + ibdev_err(&aq->dev->ibdev, "admincq event %d\n", event->event); +} + +static struct ionic_vcq *ionic_create_rdma_admincq(struct ionic_ibdev *dev, + int comp_vector) +{ + struct ib_cq_init_attr attr =3D { + .cqe =3D IONIC_AQ_DEPTH, + .comp_vector =3D comp_vector, + }; + struct ionic_tbl_buf buf =3D {}; + struct ionic_vcq *vcq; + struct ionic_cq *cq; + int rc; + + vcq =3D kzalloc(sizeof(*vcq), GFP_KERNEL); + if (!vcq) { + rc =3D -ENOMEM; + goto err_alloc; + } + + vcq->ibcq.device =3D &dev->ibdev; + vcq->ibcq.uobject =3D NULL; + vcq->ibcq.comp_handler =3D ionic_rdma_admincq_comp; + vcq->ibcq.event_handler =3D ionic_rdma_admincq_event; + vcq->ibcq.cq_context =3D NULL; + atomic_set(&vcq->ibcq.usecnt, 0); + + vcq->udma_mask =3D 1; + cq =3D &vcq->cq[0]; + + rc =3D ionic_create_cq_common(vcq, &buf, &attr, NULL, NULL, + NULL, NULL, 0); + if (rc) + goto err_init; + + rc =3D ionic_rdma_queue_devcmd(dev, &cq->q, cq->cqid, cq->eqid, + IONIC_CMD_RDMA_CREATE_CQ); + if (rc) + goto err_cmd; + + return vcq; + +err_cmd: + ionic_destroy_cq_common(dev, cq); +err_init: + kfree(vcq); +err_alloc: + return ERR_PTR(rc); +} + +static struct ionic_aq *__ionic_create_rdma_adminq(struct ionic_ibdev *dev, + u32 aqid, u32 cqid) +{ + struct ionic_aq *aq; + int rc; + + aq =3D kmalloc(sizeof(*aq), GFP_KERNEL); + if (!aq) { + rc =3D -ENOMEM; + goto err_aq; + } + + aq->dev =3D dev; + aq->aqid =3D aqid; + 
aq->cqid =3D cqid; + spin_lock_init(&aq->lock); + + rc =3D ionic_queue_init(&aq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH, + ADMIN_WQE_STRIDE); + if (rc) + goto err_q; + + ionic_queue_dbell_init(&aq->q, aq->aqid); + + aq->q_wr =3D kcalloc((u32)aq->q.mask + 1, sizeof(*aq->q_wr), GFP_KERNEL); + if (!aq->q_wr) { + rc =3D -ENOMEM; + goto err_wr; + } + + INIT_LIST_HEAD(&aq->wr_prod); + INIT_LIST_HEAD(&aq->wr_post); + + INIT_WORK(&aq->work, ionic_admin_work); + aq->armed =3D false; + + return aq; + +err_wr: + ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev); +err_q: + kfree(aq); +err_aq: + return ERR_PTR(rc); +} + +static void __ionic_destroy_rdma_adminq(struct ionic_ibdev *dev, + struct ionic_aq *aq) +{ + ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev); + kfree(aq); +} + +static struct ionic_aq *ionic_create_rdma_adminq(struct ionic_ibdev *dev, + u32 aqid, u32 cqid) +{ + struct ionic_aq *aq; + int rc; + + aq =3D __ionic_create_rdma_adminq(dev, aqid, cqid); + if (IS_ERR(aq)) { + rc =3D PTR_ERR(aq); + goto err_aq; + } + + rc =3D ionic_rdma_queue_devcmd(dev, &aq->q, aq->aqid, aq->cqid, + IONIC_CMD_RDMA_CREATE_ADMINQ); + if (rc) + goto err_cmd; + + return aq; + +err_cmd: + __ionic_destroy_rdma_adminq(dev, aq); +err_aq: + return ERR_PTR(rc); +} + +static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path) +{ + unsigned long irqflags; + bool do_flush =3D false; + int i; + + local_irq_save(irqflags); + + /* Mark the admin queue, flushing at most once */ + for (i =3D 0; i < dev->lif_cfg.aq_count; i++) + spin_lock(&dev->aq_vec[i]->lock); + + if (dev->admin_state !=3D IONIC_ADMIN_KILLED) { + dev->admin_state =3D IONIC_ADMIN_KILLED; + do_flush =3D true; + } + + for (i =3D dev->lif_cfg.aq_count - 1; i >=3D 0; i--) { + /* Flush incomplete admin commands */ + if (do_flush) + ionic_admin_poll_locked(dev->aq_vec[i]); + spin_unlock(&dev->aq_vec[i]->lock); + } + + local_irq_restore(irqflags); + + /* Post a fatal event if requested */ + if (fatal_path) + ionic_port_event(dev, 
IB_EVENT_DEVICE_FATAL); +} + +void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path) +{ + unsigned long irqflags =3D 0; + bool do_reset =3D false; + int i, rc; + + if (!dev->aq_vec) + return; + + local_irq_save(irqflags); + for (i =3D 0; i < dev->lif_cfg.aq_count; i++) + spin_lock(&dev->aq_vec[i]->lock); + + /* pause rdma admin queues to reset device */ + if (dev->admin_state =3D=3D IONIC_ADMIN_ACTIVE) { + dev->admin_state =3D IONIC_ADMIN_PAUSED; + do_reset =3D true; + } + + while (i-- > 0) + spin_unlock(&dev->aq_vec[i]->lock); + local_irq_restore(irqflags); + + if (!do_reset) + return; + + /* After resetting the device, it will be safe to resume the rdma admin + * queues in the killed state. Commands will not be issued to the + * device, but will complete locally with status IONIC_ADMIN_KILLED. + * Handling completion will ensure that creating or modifying resources + * fails, but destroying resources succeeds. + * + * If there was a failure resetting the device using this strategy, + * then the state of the device is unknown. The rdma admin queue is + * left here in the paused state. No new commands are issued to the + * device, nor are any completed locally. The eth driver will use a + * different strategy to reset the device. A callback from the eth + * driver will indicate that the reset is done and it is safe to + * continue. Then, the rdma admin queue will be transitioned to the + * killed state and new and outstanding commands will complete locally. 
+ */ + + rc =3D ionic_rdma_reset_devcmd(dev); + if (unlikely(rc)) { + ibdev_err(&dev->ibdev, "failed to reset rdma %d\n", rc); + ionic_request_rdma_reset(dev->lif_cfg.lif); + } + + ionic_kill_ibdev(dev, fatal_path); +} + +static void ionic_reset_work(struct work_struct *ws) +{ + struct ionic_ibdev *dev =3D + container_of(ws, struct ionic_ibdev, reset_work); + + ionic_kill_rdma_admin(dev, true); +} + +static bool ionic_next_eqe(struct ionic_eq *eq, struct ionic_v1_eqe *eqe) +{ + struct ionic_v1_eqe *qeqe; + bool color; + + qeqe =3D ionic_queue_at_prod(&eq->q); + color =3D ionic_v1_eqe_color(qeqe); + + /* cons is color for eq */ + if (eq->q.cons !=3D color) + return false; + + /* Prevent out-of-order reads of the EQE */ + rmb(); + + ibdev_dbg(&eq->dev->ibdev, "poll eq prod %u\n", eq->q.prod); + print_hex_dump_debug("eqe ", DUMP_PREFIX_OFFSET, 16, 1, + qeqe, BIT(eq->q.stride_log2), true); + *eqe =3D *qeqe; + + return true; +} + +static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code) +{ + unsigned long irqflags; + struct ib_event ibev; + struct ionic_cq *cq; + + read_lock_irqsave(&dev->cq_tbl_rw, irqflags); + cq =3D xa_load(&dev->cq_tbl, cqid); + if (cq) + kref_get(&cq->cq_kref); + read_unlock_irqrestore(&dev->cq_tbl_rw, irqflags); + + if (!cq) { + ibdev_dbg(&dev->ibdev, + "missing cqid %#x code %u\n", cqid, code); + return; + } + + switch (code) { + case IONIC_V1_EQE_CQ_NOTIFY: + if (cq->vcq->ibcq.comp_handler) + cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq, + cq->vcq->ibcq.cq_context); + break; + + case IONIC_V1_EQE_CQ_ERR: + if (cq->vcq->ibcq.event_handler) { + ibev.event =3D IB_EVENT_CQ_ERR; + ibev.device =3D &dev->ibdev; + ibev.element.cq =3D &cq->vcq->ibcq; + + cq->vcq->ibcq.event_handler(&ibev, + cq->vcq->ibcq.cq_context); + } + break; + + default: + ibdev_dbg(&dev->ibdev, + "unrecognized cqid %#x code %u\n", cqid, code); + break; + } + + kref_put(&cq->cq_kref, ionic_cq_complete); +} + +static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget) +{ + 
struct ionic_ibdev *dev =3D eq->dev; + struct ionic_v1_eqe eqe; + u16 npolled =3D 0; + u8 type, code; + u32 evt, qid; + + while (npolled < budget) { + if (!ionic_next_eqe(eq, &eqe)) + break; + + ionic_queue_produce(&eq->q); + + /* cons is color for eq */ + eq->q.cons =3D ionic_color_wrap(eq->q.prod, eq->q.cons); + + ++npolled; + + evt =3D ionic_v1_eqe_evt(&eqe); + type =3D ionic_v1_eqe_evt_type(evt); + code =3D ionic_v1_eqe_evt_code(evt); + qid =3D ionic_v1_eqe_evt_qid(evt); + + switch (type) { + case IONIC_V1_EQE_TYPE_CQ: + ionic_cq_event(dev, qid, code); + break; + + default: + ibdev_dbg(&dev->ibdev, + "unknown event %#x type %u\n", evt, type); + } + } + + return npolled; +} + +static void ionic_poll_eq_work(struct work_struct *work) +{ + struct ionic_eq *eq =3D container_of(work, struct ionic_eq, work); + u32 npolled; + + if (unlikely(!eq->enable) || WARN_ON(eq->armed)) + return; + + npolled =3D ionic_poll_eq(eq, IONIC_EQ_WORK_BUDGET); + eq->poll_wq +=3D npolled; + if (npolled =3D=3D 1) + eq->poll_wq_single++; + + if (npolled =3D=3D IONIC_EQ_WORK_BUDGET) { + eq->poll_wq_full++; + ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr, + npolled, 0); + queue_work(ionic_evt_workq, &eq->work); + } else { + xchg(&eq->armed, true); + ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr, + 0, IONIC_INTR_CRED_UNMASK); + } +} + +static irqreturn_t ionic_poll_eq_isr(int irq, void *eqptr) +{ + struct ionic_eq *eq =3D eqptr; + bool was_armed; + u32 npolled; + + was_armed =3D xchg(&eq->armed, false); + + if (unlikely(!eq->enable) || !was_armed) + return IRQ_HANDLED; + + npolled =3D ionic_poll_eq(eq, IONIC_EQ_ISR_BUDGET); + eq->poll_isr +=3D npolled; + if (npolled =3D=3D 1) + eq->poll_isr_single++; + + if (npolled =3D=3D IONIC_EQ_ISR_BUDGET) { + eq->poll_isr_full++; + ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr, + npolled, 0); + queue_work(ionic_evt_workq, &eq->work); + } else { + xchg(&eq->armed, true); + ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, 
eq->intr, + 0, IONIC_INTR_CRED_UNMASK); + } + + return IRQ_HANDLED; +} + +static struct ionic_eq *ionic_create_eq(struct ionic_ibdev *dev, int eqid) +{ + struct ionic_intr_info intr_obj =3D { }; + struct ionic_eq *eq; + int rc; + + eq =3D kzalloc(sizeof(*eq), GFP_KERNEL); + if (!eq) { + rc =3D -ENOMEM; + goto err_eq; + } + + eq->dev =3D dev; + + rc =3D ionic_queue_init(&eq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH, + sizeof(struct ionic_v1_eqe)); + if (rc) + goto err_q; + + eq->eqid =3D eqid; + + eq->armed =3D true; + eq->enable =3D false; + INIT_WORK(&eq->work, ionic_poll_eq_work); + + rc =3D ionic_intr_alloc(dev->lif_cfg.lif, &intr_obj); + if (rc < 0) + goto err_intr; + + eq->irq =3D intr_obj.vector; + eq->intr =3D intr_obj.index; + + ionic_queue_dbell_init(&eq->q, eq->eqid); + + /* cons is color for eq */ + eq->q.cons =3D true; + + snprintf(eq->name, sizeof(eq->name), "%s-%d-%d-eq", + DRIVER_SHORTNAME, dev->lif_cfg.lif_index, eq->eqid); + + ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_SET); + ionic_intr_mask_assert(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_= SET); + ionic_intr_coal_init(dev->lif_cfg.intr_ctrl, eq->intr, 0); + ionic_intr_clean(dev->lif_cfg.intr_ctrl, eq->intr); + + eq->enable =3D true; + + rc =3D request_irq(eq->irq, ionic_poll_eq_isr, 0, eq->name, eq); + if (rc) + goto err_irq; + + rc =3D ionic_rdma_queue_devcmd(dev, &eq->q, eq->eqid, eq->intr, + IONIC_CMD_RDMA_CREATE_EQ); + if (rc) + goto err_cmd; + + ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_CLEAR); + + return eq; + +err_cmd: + eq->enable =3D false; + flush_work(&eq->work); + free_irq(eq->irq, eq); +err_irq: + ionic_intr_free(dev->lif_cfg.lif, eq->intr); +err_intr: + ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev); +err_q: + kfree(eq); +err_eq: + return ERR_PTR(rc); +} + +static void ionic_destroy_eq(struct ionic_eq *eq) +{ + struct ionic_ibdev *dev =3D eq->dev; + + eq->enable =3D false; + flush_work(&eq->work); + free_irq(eq->irq, eq); + + 
+	ionic_intr_free(dev->lif_cfg.lif, eq->intr);
+	ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev);
+	kfree(eq);
+}
+
+int ionic_create_rdma_admin(struct ionic_ibdev *dev)
+{
+	int eq_i = 0, aq_i = 0, rc = 0;
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	dev->eq_vec = NULL;
+	dev->aq_vec = NULL;
+
+	INIT_WORK(&dev->reset_work, ionic_reset_work);
+	INIT_DELAYED_WORK(&dev->admin_dwork, ionic_admin_dwork);
+	dev->admin_state = IONIC_ADMIN_KILLED;
+
+	if (dev->lif_cfg.aq_count > IONIC_AQ_COUNT) {
+		ibdev_dbg(&dev->ibdev, "limiting adminq count to %d\n",
+			  IONIC_AQ_COUNT);
+		dev->lif_cfg.aq_count = IONIC_AQ_COUNT;
+	}
+
+	if (dev->lif_cfg.eq_count > IONIC_EQ_COUNT) {
+		dev_dbg(&dev->ibdev.dev, "limiting eventq count to %d\n",
+			IONIC_EQ_COUNT);
+		dev->lif_cfg.eq_count = IONIC_EQ_COUNT;
+	}
+
+	/* need at least two eq and one aq */
+	if (dev->lif_cfg.eq_count < IONIC_EQ_COUNT_MIN ||
+	    dev->lif_cfg.aq_count < IONIC_AQ_COUNT_MIN) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	dev->eq_vec = kmalloc_array(dev->lif_cfg.eq_count, sizeof(*dev->eq_vec),
+				    GFP_KERNEL);
+	if (!dev->eq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	for (eq_i = 0; eq_i < dev->lif_cfg.eq_count; ++eq_i) {
+		eq = ionic_create_eq(dev, eq_i + dev->lif_cfg.eq_base);
+		if (IS_ERR(eq)) {
+			rc = PTR_ERR(eq);
+
+			if (eq_i < IONIC_EQ_COUNT_MIN) {
+				ibdev_err(&dev->ibdev,
+					  "fail create eq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer eq than device supports */
+			ibdev_dbg(&dev->ibdev, "eq count %d want %d rc %d\n",
+				  eq_i, dev->lif_cfg.eq_count, rc);
+
+			rc = 0;
+			break;
+		}
+
+		dev->eq_vec[eq_i] = eq;
+	}
+
+	dev->lif_cfg.eq_count = eq_i;
+
+	dev->aq_vec = kmalloc_array(dev->lif_cfg.aq_count, sizeof(*dev->aq_vec),
+				    GFP_KERNEL);
+	if (!dev->aq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* Create one CQ per AQ */
+	for (aq_i = 0; aq_i < dev->lif_cfg.aq_count; ++aq_i) {
+		vcq = ionic_create_rdma_admincq(dev, aq_i % eq_i);
+		if (IS_ERR(vcq)) {
+			rc =
PTR_ERR(vcq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create acq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "acq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		aq = ionic_create_rdma_adminq(dev, aq_i + dev->lif_cfg.aq_base,
+					      vcq->cq[0].cqid);
+		if (IS_ERR(aq)) {
+			/* Clean up the dangling CQ */
+			ionic_destroy_cq_common(dev, &vcq->cq[0]);
+			kfree(vcq);
+
+			rc = PTR_ERR(aq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create aq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "aq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		vcq->ibcq.cq_context = aq;
+		aq->vcq = vcq;
+
+		dev->aq_vec[aq_i] = aq;
+	}
+
+	dev->admin_state = IONIC_ADMIN_ACTIVE;
+out:
+	dev->lif_cfg.eq_count = eq_i;
+	dev->lif_cfg.aq_count = aq_i;
+
+	return rc;
+}
+
+void ionic_destroy_rdma_admin(struct ionic_ibdev *dev)
+{
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	cancel_delayed_work_sync(&dev->admin_dwork);
+	cancel_work_sync(&dev->reset_work);
+
+	if (dev->aq_vec) {
+		while (dev->lif_cfg.aq_count > 0) {
+			aq = dev->aq_vec[--dev->lif_cfg.aq_count];
+			vcq = aq->vcq;
+
+			cancel_work_sync(&aq->work);
+
+			__ionic_destroy_rdma_adminq(dev, aq);
+			if (vcq) {
+				ionic_destroy_cq_common(dev, &vcq->cq[0]);
+				kfree(vcq);
+			}
+		}
+
+		kfree(dev->aq_vec);
+	}
+
+	if (dev->eq_vec) {
+		while (dev->lif_cfg.eq_count > 0) {
+			eq = dev->eq_vec[--dev->lif_cfg.eq_count];
+			ionic_destroy_eq(eq);
+		}
+
+		kfree(dev->eq_vec);
+	}
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
new file mode 100644
index 000000000000..1004659c7554
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro
Devices, Inc. */
+
+#include "ionic_ibdev.h"
+
+static int ionic_validate_qdesc(struct ionic_qdesc *q)
+{
+	if (!q->addr || !q->size || !q->mask ||
+	    !q->depth_log2 || !q->stride_log2)
+		return -EINVAL;
+
+	if (q->addr & (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	if (q->mask != BIT(q->depth_log2) - 1)
+		return -EINVAL;
+
+	if (q->size < BIT_ULL(q->depth_log2 + q->stride_log2))
+		return -EINVAL;
+
+	return 0;
+}
+
+static u32 ionic_get_eqid(struct ionic_ibdev *dev, u32 comp_vector, u8 udma_idx)
+{
+	/* EQ per vector per udma, and the first eqs reserved for async events.
+	 * The rest of the vectors can be requested for completions.
+	 */
+	u32 comp_vec_count = dev->lif_cfg.eq_count / dev->lif_cfg.udma_count - 1;
+
+	return (comp_vector % comp_vec_count + 1) * dev->lif_cfg.udma_count + udma_idx;
+}
+
+static int ionic_get_cqid(struct ionic_ibdev *dev, u32 *cqid, u8 udma_idx)
+{
+	int rc, size, base, bound, next;
+
+	size = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	base = size * udma_idx;
+	bound = base + size;
+
+	mutex_lock(&dev->inuse_lock);
+	next = dev->next_cqid[udma_idx];
+	rc = ionic_resid_get_shared(&dev->inuse_cqid, base, next, bound);
+	if (rc >= 0)
+		dev->next_cqid[udma_idx] = rc + 1;
+	mutex_unlock(&dev->inuse_lock);
+
+	if (rc >= 0) {
+		/* cq_base is zero or a multiple of two queue groups */
+		*cqid = dev->lif_cfg.cq_base +
+			ionic_bitid_to_qid(rc, dev->lif_cfg.udma_qgrp_shift,
+					   dev->half_cqid_udma_shift);
+
+		rc = 0;
+	}
+
+	return rc;
+}
+
+static void ionic_put_cqid(struct ionic_ibdev *dev, u32 cqid)
+{
+	u32 bitid = ionic_qid_to_bitid(cqid - dev->lif_cfg.cq_base,
+				       dev->lif_cfg.udma_qgrp_shift,
+				       dev->half_cqid_udma_shift);
+
+	ionic_resid_put(&dev->inuse_cqid, bitid);
+}
+
+int ionic_create_cq_common(struct ionic_vcq *vcq,
+			   struct ionic_tbl_buf *buf,
+			   const struct ib_cq_init_attr *attr,
+			   struct ionic_ctx *ctx,
+			   struct ib_udata *udata,
+			   struct ionic_qdesc *req_cq,
+			   __u32 *resp_cqid,
+			   int udma_idx)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(vcq->ibcq.device);
+	struct ionic_cq *cq = &vcq->cq[udma_idx];
+	unsigned long irqflags;
+	int rc;
+
+	cq->vcq = vcq;
+
+	if (attr->cqe < 1 || attr->cqe + IONIC_CQ_GRACE > 0xffff) {
+		rc = -EINVAL;
+		goto err_args;
+	}
+
+	rc = ionic_get_cqid(dev, &cq->cqid, udma_idx);
+	if (rc)
+		goto err_cqid;
+
+	cq->eqid = ionic_get_eqid(dev, attr->comp_vector, udma_idx);
+
+	spin_lock_init(&cq->lock);
+	INIT_LIST_HEAD(&cq->poll_sq);
+	INIT_LIST_HEAD(&cq->flush_sq);
+	INIT_LIST_HEAD(&cq->flush_rq);
+
+	if (udata) {
+		rc = ionic_validate_qdesc(req_cq);
+		if (rc)
+			goto err_qdesc;
+
+		cq->umem = ib_umem_get(&dev->ibdev, req_cq->addr, req_cq->size,
+				       IB_ACCESS_LOCAL_WRITE);
+		if (IS_ERR(cq->umem)) {
+			rc = PTR_ERR(cq->umem);
+			goto err_umem;
+		}
+
+		cq->q.ptr = NULL;
+		cq->q.size = req_cq->size;
+		cq->q.mask = req_cq->mask;
+		cq->q.depth_log2 = req_cq->depth_log2;
+		cq->q.stride_log2 = req_cq->stride_log2;
+
+		*resp_cqid = cq->cqid;
+	} else {
+		rc = ionic_queue_init(&cq->q, dev->lif_cfg.hwdev,
+				      attr->cqe + IONIC_CQ_GRACE,
+				      sizeof(struct ionic_v1_cqe));
+		if (rc)
+			goto err_q_init;
+
+		ionic_queue_dbell_init(&cq->q, cq->cqid);
+		cq->color = true;
+		cq->reserve = cq->q.mask;
+	}
+
+	rc = ionic_pgtbl_init(dev, buf, cq->umem, cq->q.dma, 1, PAGE_SIZE);
+	if (rc) {
+		ibdev_dbg(&dev->ibdev,
+			  "create cq %u pgtbl_init error %d\n", cq->cqid, rc);
+		goto err_pgtbl_init;
+	}
+
+	init_completion(&cq->cq_rel_comp);
+	kref_init(&cq->cq_kref);
+
+	write_lock_irqsave(&dev->cq_tbl_rw, irqflags);
+	rc = xa_err(xa_store(&dev->cq_tbl, cq->cqid, cq, GFP_KERNEL));
+	write_unlock_irqrestore(&dev->cq_tbl_rw, irqflags);
+	if (rc)
+		goto err_xa;
+
+	return 0;
+
+err_xa:
+	ionic_pgtbl_unbuf(dev, buf);
+err_pgtbl_init:
+	if (!udata)
+		ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev);
+err_q_init:
+	if (cq->umem)
+		ib_umem_release(cq->umem);
+err_umem:
+err_qdesc:
+	ionic_put_cqid(dev, cq->cqid);
+err_cqid:
+err_args:
+	cq->vcq
= NULL;
+
+	return rc;
+}
+
+void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq)
+{
+	unsigned long irqflags;
+
+	if (!cq->vcq)
+		return;
+
+	write_lock_irqsave(&dev->cq_tbl_rw, irqflags);
+	xa_erase(&dev->cq_tbl, cq->cqid);
+	write_unlock_irqrestore(&dev->cq_tbl_rw, irqflags);
+
+	kref_put(&cq->cq_kref, ionic_cq_complete);
+	wait_for_completion(&cq->cq_rel_comp);
+
+	if (cq->umem)
+		ib_umem_release(cq->umem);
+	else
+		ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev);
+
+	ionic_put_cqid(dev, cq->cqid);
+
+	cq->vcq = NULL;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
new file mode 100644
index 000000000000..b4f029dde3a9
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_FW_H_
+#define _IONIC_FW_H_
+
+#include
+
+/* completion queue v1 cqe */
+struct ionic_v1_cqe {
+	union {
+		struct {
+			__be16 cmd_idx;
+			__u8 cmd_op;
+			__u8 rsvd[17];
+			__le16 old_sq_cindex;
+			__le16 old_rq_cq_cindex;
+		} admin;
+		struct {
+			__u64 wqe_id;
+			__be32 src_qpn_op;
+			__u8 src_mac[6];
+			__be16 vlan_tag;
+			__be32 imm_data_rkey;
+		} recv;
+		struct {
+			__u8 rsvd[4];
+			__be32 msg_msn;
+			__u8 rsvd2[8];
+			__u64 npg_wqe_id;
+		} send;
+	};
+	__be32 status_length;
+	__be32 qid_type_flags;
+};
+
+/* bits for cqe qid_type_flags */
+enum ionic_v1_cqe_qtf_bits {
+	IONIC_V1_CQE_COLOR = BIT(0),
+	IONIC_V1_CQE_ERROR = BIT(1),
+	IONIC_V1_CQE_TYPE_SHIFT = 5,
+	IONIC_V1_CQE_TYPE_MASK = 0x7,
+	IONIC_V1_CQE_QID_SHIFT = 8,
+
+	IONIC_V1_CQE_TYPE_ADMIN = 0,
+	IONIC_V1_CQE_TYPE_RECV = 1,
+	IONIC_V1_CQE_TYPE_SEND_MSN = 2,
+	IONIC_V1_CQE_TYPE_SEND_NPG = 3,
+};
+
+static inline bool ionic_v1_cqe_color(struct ionic_v1_cqe *cqe)
+{
+	return !!(cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_COLOR));
+}
+
+static inline bool ionic_v1_cqe_error(struct ionic_v1_cqe *cqe)
+{
+	return
!!(cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR));
+}
+
+static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe)
+{
+	cqe->qid_type_flags |= cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT);
+}
+
+static inline u32 ionic_v1_cqe_qtf(struct ionic_v1_cqe *cqe)
+{
+	return be32_to_cpu(cqe->qid_type_flags);
+}
+
+static inline u8 ionic_v1_cqe_qtf_type(u32 qtf)
+{
+	return (qtf >> IONIC_V1_CQE_TYPE_SHIFT) & IONIC_V1_CQE_TYPE_MASK;
+}
+
+static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf)
+{
+	return qtf >> IONIC_V1_CQE_QID_SHIFT;
+}
+
+#define ADMIN_WQE_STRIDE 64
+#define ADMIN_WQE_HDR_LEN 4
+
+/* admin queue v1 wqe */
+struct ionic_v1_admin_wqe {
+	__u8 op;
+	__u8 rsvd;
+	__le16 len;
+
+	union {
+	} cmd;
+};
+
+/* admin queue v1 cqe status */
+enum ionic_v1_admin_status {
+	IONIC_V1_ASTS_OK,
+	IONIC_V1_ASTS_BAD_CMD,
+	IONIC_V1_ASTS_BAD_INDEX,
+	IONIC_V1_ASTS_BAD_STATE,
+	IONIC_V1_ASTS_BAD_TYPE,
+	IONIC_V1_ASTS_BAD_ATTR,
+	IONIC_V1_ASTS_MSG_TOO_BIG,
+};
+
+/* event queue v1 eqe */
+struct ionic_v1_eqe {
+	__be32 evt;
+};
+
+/* bits for cqe queue_type_flags */
+enum ionic_v1_eqe_evt_bits {
+	IONIC_V1_EQE_COLOR = BIT(0),
+	IONIC_V1_EQE_TYPE_SHIFT = 1,
+	IONIC_V1_EQE_TYPE_MASK = 0x7,
+	IONIC_V1_EQE_CODE_SHIFT = 4,
+	IONIC_V1_EQE_CODE_MASK = 0xf,
+	IONIC_V1_EQE_QID_SHIFT = 8,
+
+	/* cq events */
+	IONIC_V1_EQE_TYPE_CQ = 0,
+	/* cq normal events */
+	IONIC_V1_EQE_CQ_NOTIFY = 0,
+	/* cq error events */
+	IONIC_V1_EQE_CQ_ERR = 8,
+
+	/* qp and srq events */
+	IONIC_V1_EQE_TYPE_QP = 1,
+	/* qp normal events */
+	IONIC_V1_EQE_SRQ_LEVEL = 0,
+	IONIC_V1_EQE_SQ_DRAIN = 1,
+	IONIC_V1_EQE_QP_COMM_EST = 2,
+	IONIC_V1_EQE_QP_LAST_WQE = 3,
+	/* qp error events */
+	IONIC_V1_EQE_QP_ERR = 8,
+	IONIC_V1_EQE_QP_ERR_REQUEST = 9,
+	IONIC_V1_EQE_QP_ERR_ACCESS = 10,
+};
+
+static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe)
+{
+	return !!(eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR));
+}
+
+static inline u32 ionic_v1_eqe_evt(struct ionic_v1_eqe
*eqe)
+{
+	return be32_to_cpu(eqe->evt);
+}
+
+static inline u8 ionic_v1_eqe_evt_type(u32 evt)
+{
+	return (evt >> IONIC_V1_EQE_TYPE_SHIFT) & IONIC_V1_EQE_TYPE_MASK;
+}
+
+static inline u8 ionic_v1_eqe_evt_code(u32 evt)
+{
+	return (evt >> IONIC_V1_EQE_CODE_SHIFT) & IONIC_V1_EQE_CODE_MASK;
+}
+
+static inline u32 ionic_v1_eqe_evt_qid(u32 evt)
+{
+	return evt >> IONIC_V1_EQE_QID_SHIFT;
+}
+
+#endif /* _IONIC_FW_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index ca047a789378..21d2820f36ce 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -22,9 +22,99 @@ static const struct auxiliary_device_id ionic_aux_id_table[] = {
 
 MODULE_DEVICE_TABLE(auxiliary, ionic_aux_id_table);
 
+void ionic_port_event(struct ionic_ibdev *dev, enum ib_event_type event)
+{
+	struct ib_event ev;
+
+	ev.device = &dev->ibdev;
+	ev.element.port_num = 1;
+	ev.event = event;
+
+	ib_dispatch_event(&ev);
+}
+
+static int ionic_init_resids(struct ionic_ibdev *dev)
+{
+	int rc;
+
+	rc = ionic_resid_init(&dev->inuse_cqid, dev->lif_cfg.cq_count);
+	if (rc)
+		return rc;
+
+	dev->next_cqid[0] = 0;
+	dev->next_cqid[1] = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	dev->half_cqid_udma_shift =
+		order_base_2(dev->lif_cfg.cq_count / dev->lif_cfg.udma_count);
+
+	rc = ionic_resid_init(&dev->inuse_pdid, IONIC_MAX_PD);
+	if (rc)
+		goto err_pdid;
+
+	rc = ionic_resid_init(&dev->inuse_ahid, dev->lif_cfg.nahs_per_lif);
+	if (rc)
+		goto err_ahid;
+
+	rc = ionic_resid_init(&dev->inuse_mrid, dev->lif_cfg.nmrs_per_lif);
+	if (rc)
+		goto err_mrid;
+
+	/* skip reserved lkey */
+	dev->inuse_mrid.next_id = 1;
+	dev->next_mrkey = 1;
+
+	rc = ionic_resid_init(&dev->inuse_qpid, dev->lif_cfg.qp_count);
+	if (rc)
+		goto err_qpid;
+
+	/* skip reserved SMI and GSI qpids */
+	dev->next_qpid[0] = 2;
+	dev->next_qpid[1] = dev->lif_cfg.qp_count / dev->lif_cfg.udma_count;
+	dev->half_qpid_udma_shift =
+		order_base_2(dev->lif_cfg.qp_count / dev->lif_cfg.udma_count);
+
+	rc = ionic_resid_init(&dev->inuse_dbid, dev->lif_cfg.dbid_count);
+	if (rc)
+		goto err_dbid;
+
+	/* Reserve dbid zero for kernel pid */
+	dev->inuse_dbid.next_id = 1;
+
+	mutex_init(&dev->inuse_lock);
+	spin_lock_init(&dev->inuse_splock);
+
+	return 0;
+
+err_dbid:
+	ionic_resid_destroy(&dev->inuse_qpid);
+err_qpid:
+	ionic_resid_destroy(&dev->inuse_mrid);
+err_mrid:
+	ionic_resid_destroy(&dev->inuse_ahid);
+err_ahid:
+	ionic_resid_destroy(&dev->inuse_pdid);
+err_pdid:
+	ionic_resid_destroy(&dev->inuse_cqid);
+
+	return rc;
+}
+
+static void ionic_destroy_resids(struct ionic_ibdev *dev)
+{
+	ionic_resid_destroy(&dev->inuse_cqid);
+	ionic_resid_destroy(&dev->inuse_pdid);
+	ionic_resid_destroy(&dev->inuse_ahid);
+	ionic_resid_destroy(&dev->inuse_mrid);
+	ionic_resid_destroy(&dev->inuse_qpid);
+	ionic_resid_destroy(&dev->inuse_dbid);
+}
+
 static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 {
+	ionic_kill_rdma_admin(dev, false);
 	ib_unregister_device(&dev->ibdev);
+	ionic_destroy_rdma_admin(dev);
+	ionic_destroy_resids(dev);
+	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 }
 
@@ -46,6 +136,21 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
 
+	xa_init_flags(&dev->cq_tbl, GFP_ATOMIC);
+	rwlock_init(&dev->cq_tbl_rw);
+
+	rc = ionic_init_resids(dev);
+	if (rc)
+		goto err_resids;
+
+	rc = ionic_rdma_reset_devcmd(dev);
+	if (rc)
+		goto err_reset;
+
+	rc = ionic_create_rdma_admin(dev);
+	if (rc)
+		goto err_admin;
+
 	ibdev = &dev->ibdev;
 	ibdev->dev.parent = dev->lif_cfg.hwdev;
 
@@ -73,6 +178,12 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 err_register:
 err_admin:
+	ionic_kill_rdma_admin(dev, false);
+	ionic_destroy_rdma_admin(dev);
+err_reset:
+	ionic_destroy_resids(dev);
+err_resids:
+	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 err_dev:
 	return ERR_PTR(rc);
@@ -116,6 +227,10 @@ static int __init ionic_mod_init(void)
 {
 	int rc;
 
+	ionic_evt_workq = create_workqueue(DRIVER_NAME "-evt");
+	if (!ionic_evt_workq)
+		return -ENOMEM;
+
 	rc = auxiliary_driver_register(&ionic_aux_r_driver);
 	if (rc)
 		goto err_aux;
@@ -123,12 +238,15 @@ static int __init ionic_mod_init(void)
 	return 0;
 
 err_aux:
+	destroy_workqueue(ionic_evt_workq);
+
 	return rc;
 }
 
 static void __exit ionic_mod_exit(void)
 {
 	auxiliary_driver_unregister(&ionic_aux_r_driver);
+	destroy_workqueue(ionic_evt_workq);
 }
 
 module_init(ionic_mod_init);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index e13adff390d7..bd10bae2cf72 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -4,18 +4,263 @@
 #ifndef _IONIC_IBDEV_H_
 #define _IONIC_IBDEV_H_
 
+#include
 #include
+
 #include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_queue.h"
+#include "ionic_res.h"
 
 #include "ionic_lif_cfg.h"
 
+#define DRIVER_NAME "ionic_rdma"
+#define DRIVER_SHORTNAME "ionr"
+
 #define IONIC_MIN_RDMA_VERSION 0
 #define IONIC_MAX_RDMA_VERSION 2
 
+/* Config knobs */
+#define IONIC_EQ_DEPTH 511
+#define IONIC_EQ_COUNT 32
+#define IONIC_AQ_DEPTH 63
+#define IONIC_AQ_COUNT 4
+#define IONIC_EQ_ISR_BUDGET 10
+#define IONIC_EQ_WORK_BUDGET 1000
+#define IONIC_MAX_PD 1024
+
+#define IONIC_CQ_GRACE 100
+
+struct ionic_aq;
+struct ionic_cq;
+struct ionic_eq;
+struct ionic_vcq;
+
+enum ionic_admin_state {
+	IONIC_ADMIN_ACTIVE,	/* submitting admin commands to queue */
+	IONIC_ADMIN_PAUSED,	/* not submitting, but may complete normally */
+	IONIC_ADMIN_KILLED,	/* not submitting, locally completed */
+};
+
+enum ionic_admin_flags {
+	IONIC_ADMIN_F_BUSYWAIT = BIT(0),	/* Don't sleep */
+	IONIC_ADMIN_F_TEARDOWN = BIT(1),	/* In destroy path */
+	IONIC_ADMIN_F_INTERRUPT = BIT(2),	/* Interruptible
w/timeout */
+};
+
+struct ionic_qdesc {
+	__aligned_u64 addr;
+	__u32 size;
+	__u16 mask;
+	__u8 depth_log2;
+	__u8 stride_log2;
+};
+
+struct ionic_mmap_info {
+	struct list_head ctx_ent;
+	unsigned long offset;
+	unsigned long size;
+	unsigned long pfn;
+	bool writecombine;
+};
+
 struct ionic_ibdev {
 	struct ib_device ibdev;
 
 	struct ionic_lif_cfg lif_cfg;
+
+	/* These tables are used in the fast path.
+	 * They are protected by rw locks.
+	 */
+	struct xarray qp_tbl;
+	struct xarray cq_tbl;
+	rwlock_t qp_tbl_rw;
+	rwlock_t cq_tbl_rw;
+	struct mutex inuse_lock;	/* for id reservation */
+	spinlock_t inuse_splock;	/* for ahid reservation */
+
+	struct ionic_resid_bits inuse_dbid;
+	struct ionic_resid_bits inuse_pdid;
+	struct ionic_resid_bits inuse_ahid;
+	struct ionic_resid_bits inuse_mrid;
+	struct ionic_resid_bits inuse_qpid;
+	struct ionic_resid_bits inuse_cqid;
+
+	int next_cqid[2];
+	int next_qpid[2];
+	u8 half_cqid_udma_shift;
+	u8 half_qpid_udma_shift;
+	u8 next_qpid_udma_idx;
+	u8 next_mrkey;
+
+	struct work_struct reset_work;
+	bool reset_posted;
+	u32 reset_cnt;
+
+	struct delayed_work admin_dwork;
+	struct ionic_aq **aq_vec;
+	enum ionic_admin_state admin_state;
+
+	struct ionic_eq **eq_vec;
 };
 
+struct ionic_eq {
+	struct ionic_ibdev *dev;
+
+	u32 eqid;
+	u32 intr;
+
+	struct ionic_queue q;
+
+	int armed;
+	bool enable;
+
+	struct work_struct work;
+
+	int irq;
+	char name[32];
+
+	u64 poll_isr;
+	u64 poll_isr_single;
+	u64 poll_isr_full;
+	u64 poll_wq;
+	u64 poll_wq_single;
+	u64 poll_wq_full;
+};
+
+struct ionic_admin_wr {
+	struct completion work;
+	struct list_head aq_ent;
+	struct ionic_v1_admin_wqe wqe;
+	struct ionic_v1_cqe cqe;
+	struct ionic_aq *aq;
+	int status;
+};
+
+struct ionic_admin_wr_q {
+	struct ionic_admin_wr *wr;
+	int wqe_strides;
+};
+
+struct ionic_aq {
+	struct ionic_ibdev *dev;
+	struct ionic_vcq *vcq;
+
+	struct work_struct work;
+
+	unsigned long stamp;
+	bool armed;
+
+	u32 aqid;
+	u32 cqid;
+
+	spinlock_t lock;	/* for
posting */
+	struct ionic_queue q;
+	struct ionic_admin_wr_q *q_wr;
+	struct list_head wr_prod;
+	struct list_head wr_post;
+};
+
+struct ionic_ctx {
+	struct ib_ucontext ibctx;
+
+	u32 dbid;
+
+	struct mutex mmap_mut;	/* for mmap_list */
+	unsigned long long mmap_off;
+	struct list_head mmap_list;
+	struct ionic_mmap_info mmap_dbell;
+};
+
+struct ionic_tbl_buf {
+	u32 tbl_limit;
+	u32 tbl_pages;
+	size_t tbl_size;
+	__le64 *tbl_buf;
+	dma_addr_t tbl_dma;
+	u8 page_size_log2;
+};
+
+struct ionic_cq {
+	struct ionic_vcq *vcq;
+
+	u32 cqid;
+	u32 eqid;
+
+	spinlock_t lock;	/* for polling */
+	struct list_head poll_sq;
+	bool flush;
+	struct list_head flush_sq;
+	struct list_head flush_rq;
+	struct list_head cq_list_ent;
+
+	struct ionic_queue q;
+	bool color;
+	int reserve;
+	u16 arm_any_prod;
+	u16 arm_sol_prod;
+
+	struct kref cq_kref;
+	struct completion cq_rel_comp;
+
+	/* infrequently accessed, keep at end */
+	struct ib_umem *umem;
+};
+
+struct ionic_vcq {
+	struct ib_cq ibcq;
+	struct ionic_cq cq[2];
+	u8 udma_mask;
+	u8 poll_idx;
+};
+
+static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev)
+{
+	return container_of(ibdev, struct ionic_ibdev, ibdev);
+}
+
+static inline void ionic_cq_complete(struct kref *kref)
+{
+	struct ionic_cq *cq = container_of(kref, struct ionic_cq, cq_kref);
+
+	complete(&cq->cq_rel_comp);
+}
+
+/* ionic_admin.c */
+extern struct workqueue_struct *ionic_evt_workq;
+void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr);
+int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr,
+		     enum ionic_admin_flags);
+
+int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev);
+
+int ionic_create_rdma_admin(struct ionic_ibdev *dev);
+void ionic_destroy_rdma_admin(struct ionic_ibdev *dev);
+void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path);
+
+/* ionic_controlpath.c */
+int ionic_create_cq_common(struct ionic_vcq *vcq,
+			   struct ionic_tbl_buf *buf,
+			   const struct ib_cq_init_attr
*attr,
+			   struct ionic_ctx *ctx,
+			   struct ib_udata *udata,
+			   struct ionic_qdesc *req_cq,
+			   __u32 *resp_cqid,
+			   int udma_idx);
+void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq);
+
+/* ionic_pgtbl.c */
+int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma);
+int ionic_pgtbl_init(struct ionic_ibdev *dev,
+		     struct ionic_tbl_buf *buf,
+		     struct ib_umem *umem,
+		     dma_addr_t dma,
+		     int limit,
+		     u64 page_size);
+void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf);
+
+/* ionic_ibdev.c */
+void ionic_port_event(struct ionic_ibdev *dev, enum ib_event_type event);
 #endif /* _IONIC_IBDEV_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
new file mode 100644
index 000000000000..11461f7642bc
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma)
+{
+	if (unlikely(buf->tbl_pages == buf->tbl_limit))
+		return -ENOMEM;
+
+	if (buf->tbl_buf)
+		buf->tbl_buf[buf->tbl_pages] = cpu_to_le64(dma);
+	else
+		buf->tbl_dma = dma;
+
+	++buf->tbl_pages;
+
+	return 0;
+}
+
+static int ionic_tbl_buf_alloc(struct ionic_ibdev *dev,
+			       struct ionic_tbl_buf *buf)
+{
+	int rc;
+
+	buf->tbl_size = buf->tbl_limit * sizeof(*buf->tbl_buf);
+	buf->tbl_buf = kmalloc(buf->tbl_size, GFP_KERNEL);
+	if (!buf->tbl_buf)
+		return -ENOMEM;
+
+	buf->tbl_dma = dma_map_single(dev->lif_cfg.hwdev, buf->tbl_buf,
+				      buf->tbl_size, DMA_TO_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, buf->tbl_dma);
+	if (rc) {
+		kfree(buf->tbl_buf);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int ionic_pgtbl_umem(struct ionic_tbl_buf *buf, struct ib_umem *umem)
+{
+	struct ib_block_iter biter;
+	u64 page_dma;
+	int rc;
+
+	rdma_umem_for_each_dma_block(umem,
&biter, BIT_ULL(buf->page_size_log2)) {
+		page_dma = rdma_block_iter_dma_address(&biter);
+		rc = ionic_pgtbl_page(buf, page_dma);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf)
+{
+	if (buf->tbl_buf)
+		dma_unmap_single(dev->lif_cfg.hwdev, buf->tbl_dma,
+				 buf->tbl_size, DMA_TO_DEVICE);
+
+	kfree(buf->tbl_buf);
+	memset(buf, 0, sizeof(*buf));
+}
+
+int ionic_pgtbl_init(struct ionic_ibdev *dev,
+		     struct ionic_tbl_buf *buf,
+		     struct ib_umem *umem,
+		     dma_addr_t dma,
+		     int limit,
+		     u64 page_size)
+{
+	int rc;
+
+	memset(buf, 0, sizeof(*buf));
+
+	if (umem) {
+		limit = ib_umem_num_dma_blocks(umem, page_size);
+		buf->page_size_log2 = order_base_2(page_size);
+	}
+
+	if (limit < 1)
+		return -EINVAL;
+
+	buf->tbl_limit = limit;
+
+	/* skip pgtbl if contiguous / direct translation */
+	if (limit > 1) {
+		rc = ionic_tbl_buf_alloc(dev, buf);
+		if (rc)
+			return rc;
+	}
+
+	if (umem)
+		rc = ionic_pgtbl_umem(buf, umem);
+	else
+		rc = ionic_pgtbl_page(buf, dma);
+
+	if (rc)
+		goto err_unbuf;
+
+	return 0;
+
+err_unbuf:
+	ionic_pgtbl_unbuf(dev, buf);
+	return rc;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_queue.c b/drivers/infiniband/hw/ionic/ionic_queue.c
new file mode 100644
index 000000000000..aa897ed2a412
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_queue.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
*/
+
+#include
+
+#include "ionic_queue.h"
+
+int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev,
+		     int depth, size_t stride)
+{
+	if (depth < 0 || depth > 0xffff)
+		return -EINVAL;
+
+	if (stride == 0 || stride > 0x10000)
+		return -EINVAL;
+
+	if (depth == 0)
+		depth = 1;
+
+	q->depth_log2 = order_base_2(depth + 1);
+	q->stride_log2 = order_base_2(stride);
+
+	if (q->depth_log2 + q->stride_log2 < PAGE_SHIFT)
+		q->depth_log2 = PAGE_SHIFT - q->stride_log2;
+
+	if (q->depth_log2 > 16 || q->stride_log2 > 16)
+		return -EINVAL;
+
+	q->size = BIT_ULL(q->depth_log2 + q->stride_log2);
+	q->mask = BIT(q->depth_log2) - 1;
+
+	q->ptr = dma_alloc_coherent(dma_dev, q->size, &q->dma, GFP_KERNEL);
+	if (!q->ptr)
+		return -ENOMEM;
+
+	/* it will always be page aligned, but just to be sure... */
+	if (!PAGE_ALIGNED(q->ptr)) {
+		dma_free_coherent(dma_dev, q->size, q->ptr, q->dma);
+		return -ENOMEM;
+	}
+
+	q->prod = 0;
+	q->cons = 0;
+	q->dbell = 0;
+
+	return 0;
+}
+
+void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev)
+{
+	dma_free_coherent(dma_dev, q->size, q->ptr, q->dma);
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_queue.h b/drivers/infiniband/hw/ionic/ionic_queue.h
new file mode 100644
index 000000000000..d18020d4cad5
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_queue.h
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
*/
+
+#ifndef _IONIC_QUEUE_H_
+#define _IONIC_QUEUE_H_
+
+#include
+#include
+
+#define IONIC_MAX_DEPTH 0xffff
+#define IONIC_MAX_CQ_DEPTH 0xffff
+#define IONIC_CQ_RING_ARM IONIC_DBELL_RING_1
+#define IONIC_CQ_RING_SOL IONIC_DBELL_RING_2
+
+/**
+ * struct ionic_queue - Ring buffer used between device and driver
+ * @size: Size of the buffer, in bytes
+ * @dma: Dma address of the buffer
+ * @ptr: Buffer virtual address
+ * @prod: Driver position in the queue
+ * @cons: Device position in the queue
+ * @mask: Capacity of the queue, subtracting the hole
+ *        This value is equal to ((1 << depth_log2) - 1)
+ * @depth_log2: Log base two size depth of the queue
+ * @stride_log2: Log base two size of an element in the queue
+ * @dbell: Doorbell identifying bits
+ */
+struct ionic_queue {
+	size_t size;
+	dma_addr_t dma;
+	void *ptr;
+	u16 prod;
+	u16 cons;
+	u16 mask;
+	u8 depth_log2;
+	u8 stride_log2;
+	u64 dbell;
+};
+
+/**
+ * ionic_queue_init() - Initialize user space queue
+ * @q: Uninitialized queue structure
+ * @dma_dev: DMA device for mapping
+ * @depth: Depth of the queue
+ * @stride: Size of each element of the queue
+ *
+ * Return: status code
+ */
+int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev,
+		     int depth, size_t stride);
+
+/**
+ * ionic_queue_destroy() - Destroy user space queue
+ * @q: Queue structure
+ * @dma_dev: DMA device for mapping
+ */
+void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev);
+
+/**
+ * ionic_queue_empty() - Test if queue is empty
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: is empty
+ */
+static inline bool ionic_queue_empty(struct ionic_queue *q)
+{
+	return q->prod == q->cons;
+}
+
+/**
+ * ionic_queue_length() - Get the current length of the queue
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: length
+ */
+static inline u16 ionic_queue_length(struct ionic_queue *q)
+{
+	return (q->prod - q->cons) & q->mask;
+}
+
+/**
+ * ionic_queue_length_remaining() - Get the remaining length of the queue
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: length remaining
+ */
+static inline u16 ionic_queue_length_remaining(struct ionic_queue *q)
+{
+	return q->mask - ionic_queue_length(q);
+}
+
+/**
+ * ionic_queue_full() - Test if queue is full
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: is full
+ */
+static inline bool ionic_queue_full(struct ionic_queue *q)
+{
+	return q->mask == ionic_queue_length(q);
+}
+
+/**
+ * ionic_color_wrap() - Flip the color if prod is wrapped
+ * @prod: Queue index just after advancing
+ * @color: Queue color just prior to advancing the index
+ *
+ * Return: color after advancing the index
+ */
+static inline bool ionic_color_wrap(u16 prod, bool color)
+{
+	/* logical xor color with (prod == 0) */
+	return color != (prod == 0);
+}
+
+/**
+ * ionic_queue_at() - Get the element at the given index
+ * @q: Queue structure
+ * @idx: Index in the queue
+ *
+ * The index must be within the bounds of the queue. It is not checked here.
+ *
+ * Return: pointer to element at index
+ */
+static inline void *ionic_queue_at(struct ionic_queue *q, u16 idx)
+{
+	return q->ptr + ((unsigned long)idx << q->stride_log2);
+}
+
+/**
+ * ionic_queue_at_prod() - Get the element at the producer index
+ * @q: Queue structure
+ *
+ * Return: pointer to element at producer index
+ */
+static inline void *ionic_queue_at_prod(struct ionic_queue *q)
+{
+	return ionic_queue_at(q, q->prod);
+}
+
+/**
+ * ionic_queue_at_cons() - Get the element at the consumer index
+ * @q: Queue structure
+ *
+ * Return: pointer to element at consumer index
+ */
+static inline void *ionic_queue_at_cons(struct ionic_queue *q)
+{
+	return ionic_queue_at(q, q->cons);
+}
+
+/**
+ * ionic_queue_next() - Compute the next index
+ * @q: Queue structure
+ * @idx: Index
+ *
+ * Return: next index after idx
+ */
+static inline u16 ionic_queue_next(struct ionic_queue *q, u16 idx)
+{
+	return (idx + 1) & q->mask;
+}
+
+/**
+ * ionic_queue_produce() - Increase the producer index
+ * @q: Queue structure
+ *
+ * Caller must ensure that the queue is not full. It is not checked here.
+ */
+static inline void ionic_queue_produce(struct ionic_queue *q)
+{
+	q->prod = ionic_queue_next(q, q->prod);
+}
+
+/**
+ * ionic_queue_consume() - Increase the consumer index
+ * @q: Queue structure
+ *
+ * Caller must ensure that the queue is not empty. It is not checked here.
+ *
+ * This is only valid for to-device queues.
+ */
+static inline void ionic_queue_consume(struct ionic_queue *q)
+{
+	q->cons = ionic_queue_next(q, q->cons);
+}
+
+/**
+ * ionic_queue_consume_entries() - Increase the consumer index by entries
+ * @q: Queue structure
+ * @entries: Number of entries to increment
+ *
+ * Caller must ensure that the queue is not empty. It is not checked here.
+ *
+ * This is only valid for to-device queues.
+ */
+static inline void ionic_queue_consume_entries(struct ionic_queue *q,
+					       u16 entries)
+{
+	q->cons = (q->cons + entries) & q->mask;
+}
+
+/**
+ * ionic_queue_dbell_init() - Initialize doorbell bits for queue id
+ * @q: Queue structure
+ * @qid: Queue identifying number
+ */
+static inline void ionic_queue_dbell_init(struct ionic_queue *q, u32 qid)
+{
+	q->dbell = IONIC_DBELL_QID(qid);
+}
+
+/**
+ * ionic_queue_dbell_val() - Get current doorbell update value
+ * @q: Queue structure
+ *
+ * Return: current doorbell update value
+ */
+static inline u64 ionic_queue_dbell_val(struct ionic_queue *q)
+{
+	return q->dbell | q->prod;
+}
+
+#endif /* _IONIC_QUEUE_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_res.c b/drivers/infiniband/hw/ionic/ionic_res.c
new file mode 100644
index 000000000000..a3b4f10aa4c8
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_res.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+#include
+
+#include "ionic_res.h"
+
+int ionic_resid_init(struct ionic_resid_bits *resid, int size)
+{
+	int size_bytes = sizeof(long) * BITS_TO_LONGS(size);
+
+	resid->next_id = 0;
+	resid->inuse_size = size;
+
+	resid->inuse = kzalloc(size_bytes, GFP_KERNEL);
+	if (!resid->inuse)
+		return -ENOMEM;
+
+	return 0;
+}
+
+int ionic_resid_get_shared(struct ionic_resid_bits *resid, int wrap_id,
+			   int next_id, int size)
+{
+	int id;
+
+	id = find_next_zero_bit(resid->inuse, size, next_id);
+	if (id != size) {
+		set_bit(id, resid->inuse);
+		return id;
+	}
+
+	id = find_next_zero_bit(resid->inuse, next_id, wrap_id);
+	if (id != next_id) {
+		set_bit(id, resid->inuse);
+		return id;
+	}
+
+	return -ENOMEM;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_res.h b/drivers/infiniband/hw/ionic/ionic_res.h
new file mode 100644
index 000000000000..e833ced1466e
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_res.h
@@ -0,0 +1,182 @@
+/*
SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_RES_H_
+#define _IONIC_RES_H_
+
+/**
+ * struct ionic_resid_bits - Number allocator based on find_first_zero_bit
+ *
+ * @next_id: The bit number to start searching at
+ * @inuse_size: The bitmap size in bits
+ * @inuse: The address to base the search on
+ *
+ * The allocator find_first_zero_bit suffers O(N^2) search time complexity,
+ * for N allocations. This is because it starts from the beginning of the
+ * bitmap each time. To find a free bit in the bitmap, the search time
+ * increases for each allocation as the beginning of the bitmap is filled. On
+ * the other hand, O(1) memory size complexity is desirable, assuming
+ * the capacity is constant.
+ *
+ * This allocator is intended to keep the desired memory size complexity, but
+ * improve the search time complexity for typical workloads. The search time
+ * complexity is expected to be closer to O(N), for N allocations, although it
+ * remains bounded by O(N^2) in the worst case.
+ */
+struct ionic_resid_bits {
+	int next_id;
+	int inuse_size;
+	unsigned long *inuse;
+};
+
+/**
+ * ionic_resid_init() - Initialize a resid allocator
+ * @resid: Uninitialized resid allocator
+ * @size: Capacity of the allocator
+ *
+ * Return: Zero on success, or negative error number
+ */
+int ionic_resid_init(struct ionic_resid_bits *resid, int size);
+
+/**
+ * ionic_resid_destroy() - Destroy a resid allocator
+ * @resid: Resid allocator
+ */
+static inline void ionic_resid_destroy(struct ionic_resid_bits *resid)
+{
+	kfree(resid->inuse);
+}
+
+/**
+ * ionic_resid_get_shared() - Allocate an available shared resource id
+ * @resid: Resid allocator
+ * @wrap_id: Smallest valid resource id
+ * @next_id: Start the search at resource id
+ * @size: One after the largest valid resource id
+ *
+ * This does not update the next_id.
Caller should update the next_id for
+ * the resource that shares the id space, and/or the shared resid->next_id as
+ * appropriate.
+ *
+ * Return: Resource id, or negative error number
+ */
+int ionic_resid_get_shared(struct ionic_resid_bits *resid, int wrap_id,
+			   int next_id, int size);
+
+/**
+ * ionic_resid_get_wrap() - Allocate an available resource id, wrap to nonzero
+ * @resid: Resid allocator
+ * @wrap_id: Smallest valid resource id
+ *
+ * Return: Resource id, or negative error number
+ */
+static inline int ionic_resid_get_wrap(struct ionic_resid_bits *resid,
+				       int wrap_id)
+{
+	int rc;
+
+	rc = ionic_resid_get_shared(resid, wrap_id,
+				    resid->next_id,
+				    resid->inuse_size);
+	if (rc >= 0)
+		resid->next_id = rc + 1;
+
+	return rc;
+}
+
+/**
+ * ionic_resid_get() - Allocate an available resource id
+ * @resid: Resid allocator
+ *
+ * Return: Resource id, or negative error number
+ */
+static inline int ionic_resid_get(struct ionic_resid_bits *resid)
+{
+	return ionic_resid_get_wrap(resid, 0);
+}
+
+/**
+ * ionic_resid_put() - Free a resource id
+ * @resid: Resid allocator
+ * @id: Resource id
+ */
+static inline void ionic_resid_put(struct ionic_resid_bits *resid, int id)
+{
+	clear_bit(id, resid->inuse);
+}
+
+/**
+ * ionic_bitid_to_qid() - Transform a resource bit index into a queue id
+ * @bitid: Bit index
+ * @qgrp_shift: Log2 number of queues per queue group
+ * @half_qid_shift: Log2 of half the total number of queues
+ *
+ * Return: Queue id
+ *
+ * Udma-constrained queues (QPs and CQs) are associated with their udma by
+ * queue group. Even queue groups are associated with udma0, and odd queue
+ * groups with udma1.
+ *
+ * For allocating queue ids, we want to arrange the bits into two halves,
+ * with the even queue groups of udma0 in the lower half of the bitset,
+ * and the odd queue groups of udma1 in the upper half of the bitset.
+ * Then, one or two calls of find_next_zero_bit can examine all the bits
+ * for queues of an entire udma.
+ *
+ * For example, assuming eight queue groups with qgrp qids per group:
+ *
+ * bitid 0*qgrp..1*qgrp-1 : qid 0*qgrp..1*qgrp-1
+ * bitid 1*qgrp..2*qgrp-1 : qid 2*qgrp..3*qgrp-1
+ * bitid 2*qgrp..3*qgrp-1 : qid 4*qgrp..5*qgrp-1
+ * bitid 3*qgrp..4*qgrp-1 : qid 6*qgrp..7*qgrp-1
+ * bitid 4*qgrp..5*qgrp-1 : qid 1*qgrp..2*qgrp-1
+ * bitid 5*qgrp..6*qgrp-1 : qid 3*qgrp..4*qgrp-1
+ * bitid 6*qgrp..7*qgrp-1 : qid 5*qgrp..6*qgrp-1
+ * bitid 7*qgrp..8*qgrp-1 : qid 7*qgrp..8*qgrp-1
+ *
+ * There are three important ranges of bits in the qid. There is the udma
+ * bit "U" at qgrp_shift, which is the least significant bit of the group
+ * index, and determines which udma a queue is associated with.
+ * The bits of lesser significance we can call the idx bits "I", which are
+ * the index of the queue within the group. The bits of greater significance
+ * we can call the grp bits "G", which are other bits of the group index that
+ * do not determine the udma. Those bits are just rearranged in the bit index
+ * in the bitset. A bitid has the udma bit in the most significant place,
+ * then the grp bits, then the idx bits.
+ *
+ * bitid: 00000000000000 U GGG IIIIII
+ * qid:   00000000000000 GGG U IIIIII
+ *
+ * Transforming from bit index to qid, or from qid to bit index, can be
+ * accomplished by rearranging the bits by masking and shifting.
+ */
+static inline u32 ionic_bitid_to_qid(u32 bitid, u8 qgrp_shift,
+				     u8 half_qid_shift)
+{
+	u32 udma_bit =
+		(bitid & BIT(half_qid_shift)) >> (half_qid_shift - qgrp_shift);
+	u32 grp_bits = (bitid & GENMASK(half_qid_shift - 1, qgrp_shift)) << 1;
+	u32 idx_bits = bitid & (BIT(qgrp_shift) - 1);
+
+	return grp_bits | udma_bit | idx_bits;
+}
+
+/**
+ * ionic_qid_to_bitid() - Transform a queue id into a resource bit index
+ * @qid: queue index
+ * @qgrp_shift: Log2 number of queues per queue group
+ * @half_qid_shift: Log2 of half the total number of queues
+ *
+ * Return: Resource bit index
+ *
+ * This is the inverse of ionic_bitid_to_qid().
+ */
+static inline u32 ionic_qid_to_bitid(u32 qid, u8 qgrp_shift, u8 half_qid_shift)
+{
+	u32 udma_bit = (qid & BIT(qgrp_shift)) << (half_qid_shift - qgrp_shift);
+	u32 grp_bits = (qid & GENMASK(half_qid_shift, qgrp_shift + 1)) >> 1;
+	u32 idx_bits = qid & (BIT(qgrp_shift) - 1);
+
+	return udma_bit | grp_bits | idx_bits;
+}
+#endif /* _IONIC_RES_H_ */
-- 
2.34.1
From: Abhijit Gangurde
CC: Abhijit Gangurde, Andrew Boyer
Subject: [PATCH v2 10/14] RDMA/ionic: Register device ops for control path
Date: Thu, 8 May 2025 10:29:53 +0530
Message-ID: <20250508045957.2823318-11-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement device supported verb APIs for control path.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 drivers/infiniband/hw/ionic/ionic_admin.c       |   81 +
 .../infiniband/hw/ionic/ionic_controlpath.c     | 2738 +++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_fw.h          |  717 +++++
 drivers/infiniband/hw/ionic/ionic_ibdev.c       |    7 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h       |  219 +-
 drivers/infiniband/hw/ionic/ionic_pgtbl.c       |   19 +
 include/uapi/rdma/ionic-abi.h                   |  115 +
 7 files changed, 3888 insertions(+), 8 deletions(-)
 create mode 100644 include/uapi/rdma/ionic-abi.h

diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband/hw/ionic/ionic_admin.c
index 5a414c0600a1..72daf03dc418 100644
--- a/drivers/infiniband/hw/ionic/ionic_admin.c
+++ b/drivers/infiniband/hw/ionic/ionic_admin.c
@@ -685,6 +685,24 @@ static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
 		spin_unlock(&dev->aq_vec[i]->lock);
 	}
 
+	if (do_flush) {
+		struct ionic_qp *qp;
+		struct ionic_cq *cq;
+		unsigned long index;
+
+		/* Flush qp send and recv */
+		read_lock(&dev->qp_tbl_rw);
+		xa_for_each(&dev->qp_tbl, index, qp)
+			ionic_flush_qp(dev, qp);
+		read_unlock(&dev->qp_tbl_rw);
+
+		/* Notify completions */
+		read_lock(&dev->cq_tbl_rw);
+		xa_for_each(&dev->cq_tbl, index, cq)
+			ionic_notify_flush_cq(cq);
+		read_unlock(&dev->cq_tbl_rw);
+	}
+
 	local_irq_restore(irqflags);
 
 	/* Post a fatal event if requested */
@@ -819,6 +837,65 @@ static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code)
 	kref_put(&cq->cq_kref, ionic_cq_complete);
 }
 
+static void ionic_qp_event(struct ionic_ibdev
*dev, u32 qpid, u8 code)
+{
+	unsigned long irqflags;
+	struct ib_event ibev;
+	struct ionic_qp *qp;
+
+	read_lock_irqsave(&dev->qp_tbl_rw, irqflags);
+	qp = xa_load(&dev->qp_tbl, qpid);
+	if (qp)
+		kref_get(&qp->qp_kref);
+	read_unlock_irqrestore(&dev->qp_tbl_rw, irqflags);
+
+	if (!qp) {
+		ibdev_dbg(&dev->ibdev,
+			  "missing qpid %#x code %u\n", qpid, code);
+		return;
+	}
+
+	ibev.device = &dev->ibdev;
+	ibev.element.qp = &qp->ibqp;
+
+	switch (code) {
+	case IONIC_V1_EQE_SQ_DRAIN:
+		ibev.event = IB_EVENT_SQ_DRAINED;
+		break;
+
+	case IONIC_V1_EQE_QP_COMM_EST:
+		ibev.event = IB_EVENT_COMM_EST;
+		break;
+
+	case IONIC_V1_EQE_QP_LAST_WQE:
+		ibev.event = IB_EVENT_QP_LAST_WQE_REACHED;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR:
+		ibev.event = IB_EVENT_QP_FATAL;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_REQUEST:
+		ibev.event = IB_EVENT_QP_REQ_ERR;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_ACCESS:
+		ibev.event = IB_EVENT_QP_ACCESS_ERR;
+		break;
+
+	default:
+		ibdev_dbg(&dev->ibdev,
+			  "unrecognized qpid %#x code %u\n", qpid, code);
+		goto out;
+	}
+
+	if (qp->ibqp.event_handler)
+		qp->ibqp.event_handler(&ibev, qp->ibqp.qp_context);
+
+out:
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+}
+
 static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 {
 	struct ionic_ibdev *dev = eq->dev;
@@ -848,6 +925,10 @@ static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 		ionic_cq_event(dev, qid, code);
 		break;
 
+	case IONIC_V1_EQE_TYPE_QP:
+		ionic_qp_event(dev, qid, code);
+		break;
+
 	default:
 		ibdev_dbg(&dev->ibdev,
 			  "unknown event %#x type %u\n",
 			  evt, type);
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
index 1004659c7554..cd2929e80335 100644
--- a/drivers/infiniband/hw/ionic/ionic_controlpath.c
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -1,8 +1,19 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
 */
 
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ionic_fw.h"
 #include "ionic_ibdev.h"
 
+#define ionic_set_ecn(tos)   (((tos) | 2u) & ~1u)
+#define ionic_clear_ecn(tos) ((tos) & ~3u)
+
 static int ionic_validate_qdesc(struct ionic_qdesc *q)
 {
 	if (!q->addr || !q->size || !q->mask ||
@@ -189,3 +200,2730 @@ void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq)
 
 	cq->vcq = NULL;
 }
+
+static int ionic_validate_qdesc_zero(struct ionic_qdesc *q)
+{
+	if (q->addr || q->size || q->mask || q->depth_log2 || q->stride_log2)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int ionic_get_pdid(struct ionic_ibdev *dev, u32 *pdid)
+{
+	int rc;
+
+	mutex_lock(&dev->inuse_lock);
+	rc = ionic_resid_get(&dev->inuse_pdid);
+	mutex_unlock(&dev->inuse_lock);
+
+	if (rc >= 0) {
+		*pdid = rc;
+		rc = 0;
+	}
+
+	return rc;
+}
+
+static int ionic_get_ahid(struct ionic_ibdev *dev, u32 *ahid)
+{
+	unsigned long irqflags;
+	int rc;
+
+	spin_lock_irqsave(&dev->inuse_splock, irqflags);
+	rc = ionic_resid_get(&dev->inuse_ahid);
+	spin_unlock_irqrestore(&dev->inuse_splock, irqflags);
+
+	if (rc >= 0) {
+		*ahid = rc;
+		rc = 0;
+	}
+
+	return rc;
+}
+
+static int ionic_get_mrid(struct ionic_ibdev *dev, u32 *mrid)
+{
+	int rc;
+
+	mutex_lock(&dev->inuse_lock);
+	/* wrap to 1, skip reserved lkey */
+	rc = ionic_resid_get_wrap(&dev->inuse_mrid, 1);
+	if (rc >= 0) {
+		*mrid = ionic_mrid(rc, dev->next_mrkey++);
+		rc = 0;
+	}
+	mutex_unlock(&dev->inuse_lock);
+
+	return rc;
+}
+
+static int ionic_get_gsi_qpid(struct ionic_ibdev *dev, u32 *qpid)
+{
+	int rc = 0;
+
+	mutex_lock(&dev->inuse_lock);
+	if (test_bit(IB_QPT_GSI, dev->inuse_qpid.inuse)) {
+		rc = -EINVAL;
+	} else {
+		set_bit(IB_QPT_GSI, dev->inuse_qpid.inuse);
+		*qpid = IB_QPT_GSI;
+	}
+	mutex_unlock(&dev->inuse_lock);
+
+	return rc;
+}
+
+static int ionic_get_qpid(struct ionic_ibdev *dev, u32 *qpid,
+			  u8 *udma_idx, u8 udma_mask)
+{
+	int udma_i, udma_x, udma_ix;
+	int
size, base, bound, next;
+	int rc = -EINVAL;
+
+	udma_x = dev->next_qpid_udma_idx;
+
+	dev->next_qpid_udma_idx ^= dev->lif_cfg.udma_count - 1;
+
+	for (udma_i = 0; udma_i < dev->lif_cfg.udma_count; ++udma_i) {
+		udma_ix = udma_i ^ udma_x;
+
+		if (!(udma_mask & BIT(udma_ix)))
+			continue;
+
+		size = dev->lif_cfg.qp_count / dev->lif_cfg.udma_count;
+		base = size * udma_ix;
+		bound = base + size;
+		next = dev->next_qpid[udma_ix];
+
+		/* skip the reserved qpids in group zero */
+		if (!base)
+			base = 2;
+
+		mutex_lock(&dev->inuse_lock);
+		rc = ionic_resid_get_shared(&dev->inuse_qpid, base, next,
+					    bound);
+		if (rc >= 0)
+			dev->next_qpid[udma_ix] = rc + 1;
+		mutex_unlock(&dev->inuse_lock);
+
+		if (rc >= 0) {
+			*qpid = ionic_bitid_to_qid(rc,
+						   dev->lif_cfg.udma_qgrp_shift,
+						   dev->half_qpid_udma_shift);
+			*udma_idx = udma_ix;
+
+			rc = 0;
+			break;
+		}
+	}
+
+	return rc;
+}
+
+static int ionic_get_dbid(struct ionic_ibdev *dev, u32 *dbid, phys_addr_t *addr)
+{
+	int rc, dbpage_num;
+
+	mutex_lock(&dev->inuse_lock);
+	/* wrap to 1, skip kernel reserved */
+	rc = ionic_resid_get_wrap(&dev->inuse_dbid, 1);
+	mutex_unlock(&dev->inuse_lock);
+
+	if (rc < 0)
+		return rc;
+
+	dbpage_num = (dev->lif_cfg.lif_hw_index * dev->lif_cfg.dbid_count) + rc;
+	*addr = dev->lif_cfg.db_phys + ((phys_addr_t)dbpage_num << PAGE_SHIFT);
+
+	*dbid = rc;
+
+	return 0;
+}
+
+static void ionic_put_pdid(struct ionic_ibdev *dev, u32 pdid)
+{
+	ionic_resid_put(&dev->inuse_pdid, pdid);
+}
+
+static void ionic_put_ahid(struct ionic_ibdev *dev, u32 ahid)
+{
+	ionic_resid_put(&dev->inuse_ahid, ahid);
+}
+
+static void ionic_put_mrid(struct ionic_ibdev *dev, u32 mrid)
+{
+	ionic_resid_put(&dev->inuse_mrid, ionic_mrid_index(mrid));
+}
+
+static void ionic_put_qpid(struct ionic_ibdev *dev, u32 qpid)
+{
+	u32 bitid = ionic_qid_to_bitid(qpid,
+				       dev->lif_cfg.udma_qgrp_shift,
+				       dev->half_qpid_udma_shift);
+
+	ionic_resid_put(&dev->inuse_qpid, bitid);
+}
+
+static void
ionic_put_dbid(struct ionic_ibdev *dev, u32 dbid)
+{
+	ionic_resid_put(&dev->inuse_dbid, dbid);
+}
+
+static int ionic_alloc_ucontext(struct ib_ucontext *ibctx,
+				struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct ionic_ctx_resp resp = {};
+	struct ionic_ctx_req req;
+	phys_addr_t db_phys = 0;
+	int rc;
+
+	rc = ib_copy_from_udata(&req, udata, sizeof(req));
+	if (rc)
+		goto err_ctx;
+
+	/* try to allocate dbid for user ctx */
+	rc = ionic_get_dbid(dev, &ctx->dbid, &db_phys);
+	if (rc < 0)
+		goto err_dbid;
+
+	ibdev_dbg(&dev->ibdev, "user space dbid %u\n", ctx->dbid);
+
+	mutex_init(&ctx->mmap_mut);
+	ctx->mmap_off = PAGE_SIZE;
+	INIT_LIST_HEAD(&ctx->mmap_list);
+
+	ctx->mmap_dbell.offset = 0;
+	ctx->mmap_dbell.size = PAGE_SIZE;
+	ctx->mmap_dbell.pfn = PHYS_PFN(db_phys);
+	ctx->mmap_dbell.writecombine = false;
+	list_add(&ctx->mmap_dbell.ctx_ent, &ctx->mmap_list);
+
+	resp.page_shift = PAGE_SHIFT;
+
+	resp.dbell_offset = db_phys & ~PAGE_MASK;
+
+	resp.version = dev->lif_cfg.rdma_version;
+	resp.qp_opcodes = dev->lif_cfg.qp_opcodes;
+	resp.admin_opcodes = dev->lif_cfg.admin_opcodes;
+
+	resp.sq_qtype = dev->lif_cfg.sq_qtype;
+	resp.rq_qtype = dev->lif_cfg.rq_qtype;
+	resp.cq_qtype = dev->lif_cfg.cq_qtype;
+	resp.admin_qtype = dev->lif_cfg.aq_qtype;
+	resp.max_stride = dev->lif_cfg.max_stride;
+	resp.max_spec = IONIC_SPEC_HIGH;
+
+	resp.udma_count = dev->lif_cfg.udma_count;
+	resp.expdb_mask = dev->lif_cfg.expdb_mask;
+
+	if (dev->lif_cfg.sq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_SQ;
+	if (dev->lif_cfg.rq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_RQ;
+
+	rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+	if (rc)
+		goto err_resp;
+
+	return 0;
+
+err_resp:
+	ionic_put_dbid(dev, ctx->dbid);
+err_dbid:
+err_ctx:
+	return rc;
+}
+
+static void ionic_dealloc_ucontext(struct ib_ucontext *ibctx)
+{
+	struct ionic_ibdev *dev =
to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+
+	list_del(&ctx->mmap_dbell.ctx_ent);
+
+	if (WARN_ON(!list_empty(&ctx->mmap_list)))
+		list_del(&ctx->mmap_list);
+
+	ionic_put_dbid(dev, ctx->dbid);
+}
+
+static int ionic_mmap(struct ib_ucontext *ibctx, struct vm_area_struct *vma)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+	unsigned long size = vma->vm_end - vma->vm_start;
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct ionic_mmap_info *info;
+	int rc = 0;
+
+	mutex_lock(&ctx->mmap_mut);
+
+	list_for_each_entry(info, &ctx->mmap_list, ctx_ent)
+		if (info->offset == offset)
+			goto found;
+
+	mutex_unlock(&ctx->mmap_mut);
+
+	/* not found */
+	ibdev_dbg(&dev->ibdev, "not found %#lx\n", offset);
+	rc = -EINVAL;
+	goto out;
+
+found:
+	list_del_init(&info->ctx_ent);
+	mutex_unlock(&ctx->mmap_mut);
+
+	if (info->size != size) {
+		ibdev_dbg(&dev->ibdev, "invalid size %#lx (%#lx)\n",
+			  size, info->size);
+		rc = -EINVAL;
+		goto out;
+	}
+
+	ibdev_dbg(&dev->ibdev, "writecombine?
%d\n", info->writecombine);
+	if (info->writecombine)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	ibdev_dbg(&dev->ibdev, "remap st %#lx pf %#lx sz %#lx\n",
+		  vma->vm_start, info->pfn, size);
+	rc = rdma_user_mmap_io(&ctx->ibctx, vma, info->pfn, size,
+			       vma->vm_page_prot, NULL);
+	if (rc)
+		ibdev_dbg(&dev->ibdev, "remap failed %d\n", rc);
+
+out:
+	return rc;
+}
+
+static int ionic_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	int rc;
+
+	rc = ionic_get_pdid(dev, &pd->pdid);
+	if (rc)
+		goto err_pdid;
+
+	return 0;
+
+err_pdid:
+	return rc;
+}
+
+static int ionic_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+
+	ionic_put_pdid(dev, pd->pdid);
+
+	return 0;
+}
+
+static int ionic_build_hdr(struct ionic_ibdev *dev,
+			   struct ib_ud_header *hdr,
+			   const struct rdma_ah_attr *attr,
+			   u16 sport, bool want_ecn)
+{
+	const struct ib_global_route *grh;
+	enum rdma_network_type net;
+	u16 vlan;
+	int rc;
+
+	if (attr->ah_flags != IB_AH_GRH)
+		return -EINVAL;
+	if (attr->type != RDMA_AH_ATTR_TYPE_ROCE)
+		return -EINVAL;
+
+	grh = rdma_ah_read_grh(attr);
+
+	vlan = rdma_vlan_dev_vlan_id(grh->sgid_attr->ndev);
+	net = rdma_gid_attr_network_type(grh->sgid_attr);
+
+	rc = ib_ud_header_init(0,	/* no payload */
+			       0,	/* no lrh */
+			       1,	/* yes eth */
+			       vlan != 0xffff,
+			       0,	/* no grh */
+			       net == RDMA_NETWORK_IPV4 ?
4 : 6,
+			       1,	/* yes udp */
+			       0,	/* no imm */
+			       hdr);
+	if (rc)
+		return rc;
+
+	ether_addr_copy(hdr->eth.smac_h, grh->sgid_attr->ndev->dev_addr);
+	ether_addr_copy(hdr->eth.dmac_h, attr->roce.dmac);
+
+	if (net == RDMA_NETWORK_IPV4) {
+		hdr->eth.type = cpu_to_be16(ETH_P_IP);
+		hdr->ip4.frag_off = cpu_to_be16(0x4000); /* don't fragment */
+		hdr->ip4.ttl = grh->hop_limit;
+		hdr->ip4.tot_len = cpu_to_be16(0xffff);
+		hdr->ip4.saddr =
+			*(const __be32 *)(grh->sgid_attr->gid.raw + 12);
+		hdr->ip4.daddr = *(const __be32 *)(grh->dgid.raw + 12);
+
+		if (want_ecn)
+			hdr->ip4.tos = ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->ip4.tos = ionic_clear_ecn(grh->traffic_class);
+	} else {
+		hdr->eth.type = cpu_to_be16(ETH_P_IPV6);
+		hdr->grh.flow_label = cpu_to_be32(grh->flow_label);
+		hdr->grh.hop_limit = grh->hop_limit;
+		hdr->grh.source_gid = grh->sgid_attr->gid;
+		hdr->grh.destination_gid = grh->dgid;
+
+		if (want_ecn)
+			hdr->grh.traffic_class =
+				ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->grh.traffic_class =
+				ionic_clear_ecn(grh->traffic_class);
+	}
+
+	if (vlan != 0xffff) {
+		vlan |= rdma_ah_get_sl(attr) << VLAN_PRIO_SHIFT;
+		hdr->vlan.tag = cpu_to_be16(vlan);
+		hdr->vlan.type = hdr->eth.type;
+		hdr->eth.type = cpu_to_be16(ETH_P_8021Q);
+	}
+
+	hdr->udp.sport = cpu_to_be16(sport);
+	hdr->udp.dport = cpu_to_be16(ROCE_V2_UDP_DPORT);
+
+	return 0;
+}
+
+static void ionic_set_ah_attr(struct ionic_ibdev *dev,
+			      struct rdma_ah_attr *ah_attr,
+			      struct ib_ud_header *hdr,
+			      int sgid_index)
+{
+	u32 flow_label;
+	u16 vlan = 0;
+	u8 tos, ttl;
+
+	if (hdr->vlan_present)
+		vlan = be16_to_cpu(hdr->vlan.tag);
+
+	if (hdr->ipv4_present) {
+		flow_label = 0;
+		ttl = hdr->ip4.ttl;
+		tos = hdr->ip4.tos;
+		*(__be16 *)(hdr->grh.destination_gid.raw + 10) = 0xffff;
+		*(__be32 *)(hdr->grh.destination_gid.raw + 12) =
+			hdr->ip4.daddr;
+	} else {
+		flow_label = be32_to_cpu(hdr->grh.flow_label);
+		ttl = hdr->grh.hop_limit;
+		tos
= hdr->grh.traffic_class;
+	}
+
+	memset(ah_attr, 0, sizeof(*ah_attr));
+	ah_attr->type = RDMA_AH_ATTR_TYPE_ROCE;
+	if (hdr->eth_present)
+		memcpy(&ah_attr->roce.dmac, &hdr->eth.dmac_h, ETH_ALEN);
+	rdma_ah_set_sl(ah_attr, vlan >> VLAN_PRIO_SHIFT);
+	rdma_ah_set_port_num(ah_attr, 1);
+	rdma_ah_set_grh(ah_attr, NULL, flow_label, sgid_index, ttl, tos);
+	rdma_ah_set_dgid_raw(ah_attr, &hdr->grh.destination_gid);
+}
+
+static int ionic_create_ah_cmd(struct ionic_ibdev *dev,
+			       struct ionic_ah *ah,
+			       struct ionic_pd *pd,
+			       struct rdma_ah_attr *attr,
+			       u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_CREATE_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_CREATE_AH_IN_V1_LEN),
+			.cmd.create_ah = {
+				.pd_id = cpu_to_le32(pd->pdid),
+				.dbid_flags = cpu_to_le16(dev->lif_cfg.dbid),
+				.id_ver = cpu_to_le32(ah->ahid),
+			}
+		}
+	};
+	enum ionic_admin_flags admin_flags = 0;
+	dma_addr_t hdr_dma = 0;
+	void *hdr_buf;
+	gfp_t gfp = GFP_ATOMIC;
+	int rc, hdr_len = 0;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_CREATE_AH)
+		return -EBADRQC;
+
+	if (flags & RDMA_CREATE_AH_SLEEPABLE)
+		gfp = GFP_KERNEL;
+	else
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	rc = ionic_build_hdr(dev, &ah->hdr, attr, IONIC_ROCE_UDP_SPORT, false);
+	if (rc)
+		goto err_hdr;
+
+	if (ah->hdr.eth.type == cpu_to_be16(ETH_P_8021Q)) {
+		if (ah->hdr.vlan.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP;
+	} else {
+		if (ah->hdr.eth.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP;
+	}
+
+	ah->sgid_index = rdma_ah_read_grh(attr)->sgid_index;
+
+	hdr_buf = kmalloc(PAGE_SIZE, gfp);
+	if (!hdr_buf) {
+		rc
= -ENOMEM;
+		goto err_buf;
+	}
+
+	hdr_len = ib_ud_header_pack(&ah->hdr, hdr_buf);
+	hdr_len -= IB_BTH_BYTES;
+	hdr_len -= IB_DETH_BYTES;
+	ibdev_dbg(&dev->ibdev, "roce packet header template\n");
+	print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1,
+			     hdr_buf, hdr_len, true);
+
+	hdr_dma = dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len,
+				 DMA_TO_DEVICE);
+
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma);
+	if (rc)
+		goto err_dma;
+
+	wr.wqe.cmd.create_ah.dma_addr = cpu_to_le64(hdr_dma);
+	wr.wqe.cmd.create_ah.length = cpu_to_le32(hdr_len);
+
+	ionic_admin_post(dev, &wr);
+	rc = ionic_admin_wait(dev, &wr, admin_flags);
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len,
+			 DMA_TO_DEVICE);
+err_dma:
+	kfree(hdr_buf);
+err_buf:
+err_hdr:
+	return rc;
+}
+
+static int ionic_destroy_ah_cmd(struct ionic_ibdev *dev, u32 ahid, u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_DESTROY_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_DESTROY_AH_IN_V1_LEN),
+			.cmd.destroy_ah = {
+				.ah_id = cpu_to_le32(ahid),
+			},
+		}
+	};
+	enum ionic_admin_flags admin_flags = IONIC_ADMIN_F_TEARDOWN;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_DESTROY_AH)
+		return -EBADRQC;
+
+	if (!(flags & RDMA_CREATE_AH_SLEEPABLE))
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	ionic_admin_post(dev, &wr);
+	ionic_admin_wait(dev, &wr, admin_flags);
+
+	/* No host-memory resource is associated with ah, so it is ok
+	 * to "succeed" and complete this destroy ah on the host.
+ */ + return 0; +} + +static int ionic_create_ah(struct ib_ah *ibah, + struct rdma_ah_init_attr *init_attr, + struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibah->device); + struct rdma_ah_attr *attr =3D init_attr->ah_attr; + struct ionic_pd *pd =3D to_ionic_pd(ibah->pd); + struct ionic_ah *ah =3D to_ionic_ah(ibah); + struct ionic_ah_resp resp =3D {}; + u32 flags =3D init_attr->flags; + int rc; + + rc =3D ionic_get_ahid(dev, &ah->ahid); + if (rc) + goto err_ahid; + + rc =3D ionic_create_ah_cmd(dev, ah, pd, attr, flags); + if (rc) + goto err_cmd; + + if (udata) { + resp.ahid =3D ah->ahid; + + rc =3D ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (rc) + goto err_resp; + } + + return 0; + +err_resp: + ionic_destroy_ah_cmd(dev, ah->ahid, flags); +err_cmd: + ionic_put_ahid(dev, ah->ahid); +err_ahid: + return rc; +} + +static int ionic_query_ah(struct ib_ah *ibah, + struct rdma_ah_attr *ah_attr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibah->device); + struct ionic_ah *ah =3D to_ionic_ah(ibah); + + ionic_set_ah_attr(dev, ah_attr, &ah->hdr, ah->sgid_index); + + return 0; +} + +static int ionic_destroy_ah(struct ib_ah *ibah, u32 flags) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibah->device); + struct ionic_ah *ah =3D to_ionic_ah(ibah); + int rc; + + rc =3D ionic_destroy_ah_cmd(dev, ah->ahid, flags); + if (rc) { + ibdev_warn(&dev->ibdev, "destroy_ah error %d\n", rc); + return rc; + } + + ionic_put_ahid(dev, ah->ahid); + + return 0; +} + +static int ionic_create_mr_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_mr *mr, + u64 addr, + u64 length) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_MR, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_MR_IN_V1_LEN), + .cmd.create_mr =3D { + .va =3D cpu_to_le64(addr), + .length =3D cpu_to_le64(length), + .pd_id =3D cpu_to_le32(pd->pdid), + .page_size_log2 =3D mr->buf.page_size_log2, + 
.tbl_index =3D ~0, + .map_count =3D cpu_to_le32(mr->buf.tbl_pages), + .dma_addr =3D ionic_pgtbl_dma(&mr->buf, addr), + .dbid_flags =3D cpu_to_le16(mr->flags), + .id_ver =3D cpu_to_le32(mr->mrid), + } + } + }; + int rc; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_MR) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + rc =3D ionic_admin_wait(dev, &wr, 0); + if (!rc) + mr->created =3D true; + + return rc; +} + +static int ionic_destroy_mr_cmd(struct ionic_ibdev *dev, u32 mrid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_MR, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_MR_IN_V1_LEN), + .cmd.destroy_mr =3D { + .mr_id =3D cpu_to_le32(mrid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_MR) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +static struct ib_mr *ionic_get_dma_mr(struct ib_pd *ibpd, int access) +{ + struct ionic_mr *mr; + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + return &mr->ibmr; +} + +static struct ib_mr *ionic_reg_user_mr(struct ib_pd *ibpd, u64 start, + u64 length, u64 addr, int access, + struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ionic_mr *mr; + unsigned long pg_sz; + int rc; + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + rc =3D -ENOMEM; + goto err_mr; + } + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + mr->ibmr.iova =3D addr; + mr->ibmr.length =3D length; + + mr->flags =3D IONIC_MRF_USER_MR | to_ionic_mr_flags(access); + + mr->umem =3D ib_umem_get(&dev->ibdev, start, length, access); + if (IS_ERR(mr->umem)) { + rc =3D PTR_ERR(mr->umem); + goto err_umem; + } + + pg_sz =3D ib_umem_find_best_pgsz(mr->umem, + 
dev->lif_cfg.page_size_supported, + addr); + if (!pg_sz) { + ibdev_err(&dev->ibdev, "%s umem page size unsupported!", + __func__); + rc =3D -EINVAL; + goto err_pgtbl; + } + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) { + ibdev_dbg(&dev->ibdev, + "create user_mr pgtbl_init error %d\n", rc); + goto err_pgtbl; + } + + rc =3D ionic_create_mr_cmd(dev, pd, mr, addr, length); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &mr->buf); + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); +err_umem: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); +err_mr: + return ERR_PTR(rc); +} + +static struct ib_mr *ionic_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 offs= et, + u64 length, u64 addr, int fd, + int access, + struct uverbs_attr_bundle *attrs) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ib_umem_dmabuf *umem_dmabuf; + struct ionic_mr *mr; + u64 pg_sz; + int rc; + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + rc =3D -ENOMEM; + goto err_mr; + } + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + mr->ibmr.iova =3D addr; + mr->ibmr.length =3D length; + + mr->flags =3D IONIC_MRF_USER_MR | to_ionic_mr_flags(access); + + umem_dmabuf =3D ib_umem_dmabuf_get_pinned(&dev->ibdev, offset, length, + fd, access); + if (IS_ERR(umem_dmabuf)) { + rc =3D PTR_ERR(umem_dmabuf); + goto err_umem; + } + + mr->umem =3D &umem_dmabuf->umem; + + pg_sz =3D ib_umem_find_best_pgsz(mr->umem, + dev->lif_cfg.page_size_supported, + addr); + if (!pg_sz) { + ibdev_err(&dev->ibdev, "%s umem page size unsupported!", + __func__); + rc =3D -EINVAL; + goto err_pgtbl; + } + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) { + ibdev_dbg(&dev->ibdev, + "create user_mr_dmabuf pgtbl_init error %d\n", rc); + goto err_pgtbl; + } + + rc 
=3D ionic_create_mr_cmd(dev, pd, mr, addr, length); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &mr->buf); + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); +err_umem: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); +err_mr: + return ERR_PTR(rc); +} + +static struct ib_mr *ionic_rereg_user_mr(struct ib_mr *ibmr, int flags, + u64 start, u64 length, u64 addr, + int access, struct ib_pd *ibpd, + struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + struct ionic_pd *pd; + u64 pg_sz; + int rc; + + if (!mr->ibmr.lkey) { + rc =3D -EINVAL; + goto err_out; + } + + if (!mr->created) { + /* must set translation if not already on device */ + if (~flags & IB_MR_REREG_TRANS) { + rc =3D -EINVAL; + goto err_out; + } + } else { + /* destroy on device first if already on device */ + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + goto err_out; + + mr->created =3D false; + } + + if (~flags & IB_MR_REREG_PD) + ibpd =3D mr->ibmr.pd; + pd =3D to_ionic_pd(ibpd); + + mr->mrid =3D ib_inc_rkey(mr->mrid); + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + + if (flags & IB_MR_REREG_ACCESS) + mr->flags =3D IONIC_MRF_USER_MR | to_ionic_mr_flags(access); + + if (flags & IB_MR_REREG_TRANS) { + ionic_pgtbl_unbuf(dev, &mr->buf); + + if (mr->umem) + ib_umem_release(mr->umem); + + mr->ibmr.iova =3D addr; + mr->ibmr.length =3D length; + + mr->umem =3D ib_umem_get(&dev->ibdev, start, length, access); + if (IS_ERR(mr->umem)) { + rc =3D PTR_ERR(mr->umem); + goto err_out; + } + + pg_sz =3D ib_umem_find_best_pgsz(mr->umem, + dev->lif_cfg.page_size_supported, + addr); + if (!pg_sz) { + ibdev_err(&dev->ibdev, "%s umem page size unsupported!", + __func__); + rc =3D -EINVAL; + goto err_pgtbl; + } + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) { + ibdev_dbg(&dev->ibdev, + "rereg user_mr pgtbl_init error 
%d\n", rc); + goto err_pgtbl; + } + } + + rc =3D ionic_create_mr_cmd(dev, pd, mr, addr, length); + if (rc) + goto err_cmd; + + /* + * Container object 'ibmr' was not recreated. Indicate + * this to ib_uverbs_rereg_mr() by returning NULL here. + */ + return NULL; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); + mr->umem =3D NULL; +err_out: + return ERR_PTR(rc); +} + +static int ionic_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + if (!mr->ibmr.lkey) + goto out; + + if (mr->created) { + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + } + + ionic_pgtbl_unbuf(dev, &mr->buf); + + if (mr->umem) + ib_umem_release(mr->umem); + + ionic_put_mrid(dev, mr->mrid); + +out: + kfree(mr); + + return 0; +} + +static struct ib_mr *ionic_alloc_mr(struct ib_pd *ibpd, + enum ib_mr_type type, + u32 max_sg) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ionic_mr *mr; + int rc; + + if (type !=3D IB_MR_TYPE_MEM_REG) { + rc =3D -EINVAL; + goto err_mr; + } + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + rc =3D -ENOMEM; + goto err_mr; + } + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + + mr->flags =3D IONIC_MRF_PHYS_MR; + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, max_sg, PAGE_SIZE); + if (rc) { + ibdev_dbg(&dev->ibdev, + "create mr pgtbl_init error %d\n", rc); + goto err_pgtbl; + } + + mr->buf.tbl_pages =3D 0; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); +err_mr: + return ERR_PTR(rc); +} + +static int ionic_map_mr_page(struct ib_mr *ibmr, u64 dma) +{ + struct 
ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + + ibdev_dbg(&dev->ibdev, "dma %p\n", (void *)dma); + return ionic_pgtbl_page(&mr->buf, dma); +} + +static int ionic_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, + int sg_nents, unsigned int *sg_offset) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + /* mr must be allocated using ib_alloc_mr() */ + if (unlikely(!mr->buf.tbl_limit)) + return -EINVAL; + + mr->buf.tbl_pages =3D 0; + + if (mr->buf.tbl_buf) + dma_sync_single_for_cpu(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + ibdev_dbg(&dev->ibdev, "sg %p nent %d\n", sg, sg_nents); + rc =3D ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, ionic_map_mr_page); + + mr->buf.page_size_log2 =3D order_base_2(ibmr->page_size); + + if (mr->buf.tbl_buf) + dma_sync_single_for_device(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + return rc; +} + +static int ionic_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_pd *pd =3D to_ionic_pd(ibmw->pd); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmw.rkey =3D mr->mrid; + + if (mr->ibmw.type =3D=3D IB_MW_TYPE_1) + mr->flags =3D IONIC_MRF_MW_1; + else + mr->flags =3D IONIC_MRF_MW_2; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return 0; + +err_cmd: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + return rc; +} + +static int ionic_dealloc_mw(struct ib_mw *ibmw) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + + ionic_put_mrid(dev, mr->mrid); + + return 0; +} + +static int ionic_create_cq_cmd(struct ionic_ibdev *dev, + 
struct ionic_ctx *ctx, + struct ionic_cq *cq, + struct ionic_tbl_buf *buf) +{ + const u16 dbid =3D ionic_ctx_dbid(dev, ctx); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_CQ, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_CQ_IN_V1_LEN), + .cmd.create_cq =3D { + .eq_id =3D cpu_to_le32(cq->eqid), + .depth_log2 =3D cq->q.depth_log2, + .stride_log2 =3D cq->q.stride_log2, + .page_size_log2 =3D buf->page_size_log2, + .tbl_index =3D ~0, + .map_count =3D cpu_to_le32(buf->tbl_pages), + .dma_addr =3D ionic_pgtbl_dma(buf, 0), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(cq->cqid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_destroy_cq_cmd(struct ionic_ibdev *dev, u32 cqid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_CQ, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN), + .cmd.destroy_cq =3D { + .cq_id =3D cpu_to_le32(cqid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +static int ionic_create_cq(struct ib_cq *ibcq, + const struct ib_cq_init_attr *attr, + struct uverbs_attr_bundle *attrs) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ib_udata *udata =3D &attrs->driver_udata; + struct ionic_ctx *ctx =3D + rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + struct ionic_tbl_buf buf =3D {}; + struct ionic_cq_resp resp; + struct ionic_cq_req req; + int udma_idx =3D 0, rc; + + if (udata) { + rc =3D ib_copy_from_udata(&req, udata, sizeof(req)); + if (rc) + goto err_req; + } + + vcq->udma_mask =3D 
BIT(dev->lif_cfg.udma_count) - 1; + + if (udata) + vcq->udma_mask &=3D req.udma_mask; + + if (!vcq->udma_mask) { + rc =3D -EINVAL; + goto err_init; + } + + for (; udma_idx < dev->lif_cfg.udma_count; ++udma_idx) { + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc =3D ionic_create_cq_common(vcq, &buf, attr, ctx, udata, + &req.cq[udma_idx], + &resp.cqid[udma_idx], + udma_idx); + if (rc) + goto err_init; + + rc =3D ionic_create_cq_cmd(dev, ctx, &vcq->cq[udma_idx], &buf); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &buf); + } + + vcq->ibcq.cqe =3D attr->cqe; + + if (udata) { + resp.udma_mask =3D vcq->udma_mask; + + rc =3D ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (rc) + goto err_resp; + } + + return 0; + +err_resp: + while (udma_idx) { + --udma_idx; + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); +err_cmd: + ionic_pgtbl_unbuf(dev, &buf); + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); +err_init: + ; + } +err_req: + return rc; +} + +static int ionic_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int udma_idx, rc_tmp, rc =3D 0; + + for (udma_idx =3D dev->lif_cfg.udma_count; udma_idx; ) { + --udma_idx; + + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc_tmp =3D ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); + if (rc_tmp) { + if (!rc) + rc =3D rc_tmp; + + ibdev_warn(&dev->ibdev, "destroy_cq error %d\n", + rc_tmp); + continue; + } + + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); + } + + return rc; +} + +static bool pd_local_privileged(struct ib_pd *pd) +{ + return !pd->uobject; +} + +static bool pd_remote_privileged(struct ib_pd *pd) +{ + return pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; +} + +static int ionic_create_qp_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_cq *send_cq, + struct ionic_cq *recv_cq, + struct ionic_qp *qp, + 
struct ionic_tbl_buf *sq_buf, + struct ionic_tbl_buf *rq_buf, + struct ib_qp_init_attr *attr) +{ + const u16 dbid =3D ionic_obj_dbid(dev, pd->ibpd.uobject); + const u32 flags =3D to_ionic_qp_flags(0, 0, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + pd_local_privileged(&pd->ibpd), + pd_remote_privileged(&pd->ibpd)); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_QP_IN_V1_LEN), + .cmd.create_qp =3D { + .pd_id =3D cpu_to_le32(pd->pdid), + .priv_flags =3D cpu_to_be32(flags), + .type_state =3D to_ionic_qp_type(attr->qp_type), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_QP) + return -EBADRQC; + + if (qp->has_sq) { + wr.wqe.cmd.create_qp.sq_cq_id =3D cpu_to_le32(send_cq->cqid); + wr.wqe.cmd.create_qp.sq_depth_log2 =3D qp->sq.depth_log2; + wr.wqe.cmd.create_qp.sq_stride_log2 =3D qp->sq.stride_log2; + wr.wqe.cmd.create_qp.sq_page_size_log2 =3D sq_buf->page_size_log2; + wr.wqe.cmd.create_qp.sq_tbl_index_xrcd_id =3D ~0; + wr.wqe.cmd.create_qp.sq_map_count =3D + cpu_to_le32(sq_buf->tbl_pages); + wr.wqe.cmd.create_qp.sq_dma_addr =3D ionic_pgtbl_dma(sq_buf, 0); + } + + if (qp->has_rq) { + wr.wqe.cmd.create_qp.rq_cq_id =3D cpu_to_le32(recv_cq->cqid); + wr.wqe.cmd.create_qp.rq_depth_log2 =3D qp->rq.depth_log2; + wr.wqe.cmd.create_qp.rq_stride_log2 =3D qp->rq.stride_log2; + wr.wqe.cmd.create_qp.rq_page_size_log2 =3D rq_buf->page_size_log2; + wr.wqe.cmd.create_qp.rq_tbl_index_srq_id =3D ~0; + wr.wqe.cmd.create_qp.rq_map_count =3D + cpu_to_le32(rq_buf->tbl_pages); + wr.wqe.cmd.create_qp.rq_dma_addr =3D ionic_pgtbl_dma(rq_buf, 0); + } + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_modify_qp_cmd(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct 
ib_qp_attr *attr, + int mask) +{ + const u32 flags =3D to_ionic_qp_flags(attr->qp_access_flags, + attr->en_sqd_async_notify, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + pd_local_privileged(qp->ibqp.pd), + pd_remote_privileged(qp->ibqp.pd)); + const u8 state =3D to_ionic_qp_modify_state(attr->qp_state, + attr->cur_qp_state); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_MODIFY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_MODIFY_QP_IN_V1_LEN), + .cmd.mod_qp =3D { + .attr_mask =3D cpu_to_be32(mask), + .access_flags =3D cpu_to_be16(flags), + .rq_psn =3D cpu_to_le32(attr->rq_psn), + .sq_psn =3D cpu_to_le32(attr->sq_psn), + .rate_limit_kbps =3D + cpu_to_le32(attr->rate_limit), + .pmtu =3D (attr->path_mtu + 7), + .retry =3D (attr->retry_cnt | + (attr->rnr_retry << 4)), + .rnr_timer =3D attr->min_rnr_timer, + .retry_timeout =3D attr->timeout, + .type_state =3D state, + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + const struct ib_global_route *grh =3D rdma_ah_read_grh(&attr->ah_attr); + void *hdr_buf =3D NULL; + dma_addr_t hdr_dma =3D 0; + int rc, hdr_len =3D 0; + u16 sport; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_MODIFY_QP) + return -EBADRQC; + + if ((mask & IB_QP_MAX_DEST_RD_ATOMIC) && attr->max_dest_rd_atomic) { + /* Note, round up/down was already done for allocating + * resources on the device. The allocation order is in cache + * line size. We can't use the order of the resource + * allocation to determine the order wqes here, because for + * queue length <=3D one cache line it is not distinct. + * + * Therefore, order wqes is computed again here. + * + * Account for hole and round up to the next order. 
+ */ + wr.wqe.cmd.mod_qp.rsq_depth =3D + order_base_2(attr->max_dest_rd_atomic + 1); + wr.wqe.cmd.mod_qp.rsq_index =3D ~0; + } + + if ((mask & IB_QP_MAX_QP_RD_ATOMIC) && attr->max_rd_atomic) { + /* Account for hole and round down to the next order */ + wr.wqe.cmd.mod_qp.rrq_depth =3D + order_base_2(attr->max_rd_atomic + 2) - 1; + wr.wqe.cmd.mod_qp.rrq_index =3D ~0; + } + + if (qp->ibqp.qp_type =3D=3D IB_QPT_RC || qp->ibqp.qp_type =3D=3D IB_QPT_U= C) + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D + cpu_to_le32(attr->dest_qp_num); + else + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D cpu_to_le32(attr->qkey); + + if (mask & IB_QP_AV) { + if (!qp->hdr) { + rc =3D -ENOMEM; + goto err_hdr; + } + + sport =3D rdma_get_udp_sport(grh->flow_label, + qp->qpid, + attr->dest_qp_num); + + rc =3D ionic_build_hdr(dev, qp->hdr, &attr->ah_attr, sport, true); + if (rc) + goto err_hdr; + + qp->sgid_index =3D grh->sgid_index; + + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) { + rc =3D -ENOMEM; + goto err_buf; + } + + hdr_len =3D ib_ud_header_pack(qp->hdr, hdr_buf); + hdr_len -=3D IB_BTH_BYTES; + hdr_len -=3D IB_DETH_BYTES; + ibdev_dbg(&dev->ibdev, "roce packet header template\n"); + print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1, + hdr_buf, hdr_len, true); + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len, + DMA_TO_DEVICE); + + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_dma; + + if (qp->hdr->ipv4_present) { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? + IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP; + } else { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? 
+ IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP; + } + + wr.wqe.cmd.mod_qp.ah_id_len =3D + cpu_to_le32(qp->ahid | (hdr_len << 24)); + wr.wqe.cmd.mod_qp.dma_addr =3D cpu_to_le64(hdr_dma); + + wr.wqe.cmd.mod_qp.en_pcp =3D attr->ah_attr.sl; + wr.wqe.cmd.mod_qp.ip_dscp =3D grh->traffic_class >> 2; + } + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (mask & IB_QP_AV) + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len, + DMA_TO_DEVICE); +err_dma: + if (mask & IB_QP_AV) + kfree(hdr_buf); +err_buf: +err_hdr: + return rc; +} + +static int ionic_query_qp_cmd(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_qp_attr *attr, + int mask) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_QUERY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_QUERY_QP_IN_V1_LEN), + .cmd.query_qp =3D { + .id_ver =3D cpu_to_le32(qp->qpid), + }, + } + }; + struct ionic_v1_admin_query_qp_sq *query_sqbuf; + struct ionic_v1_admin_query_qp_rq *query_rqbuf; + dma_addr_t query_sqdma; + dma_addr_t query_rqdma; + dma_addr_t hdr_dma =3D 0; + void *hdr_buf =3D NULL; + int flags, rc; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_QUERY_QP) + return -EBADRQC; + + if (qp->has_sq) { + bool expdb =3D !!(qp->sq_cmb & IONIC_CMB_EXPDB); + + attr->cap.max_send_sge =3D + ionic_v1_send_wqe_max_sge(qp->sq.stride_log2, + qp->sq_spec, + expdb); + attr->cap.max_inline_data =3D + ionic_v1_send_wqe_max_data(qp->sq.stride_log2, expdb); + } + + if (qp->has_rq) { + attr->cap.max_recv_sge =3D + ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, + qp->rq_spec, + qp->rq_cmb & IONIC_CMB_EXPDB); + } + + query_sqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_sqbuf) { + rc =3D -ENOMEM; + goto err_sqbuf; + } + query_rqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_rqbuf) { + rc =3D -ENOMEM; + goto err_rqbuf; + } + + query_sqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_sqbuf, 
PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, query_sqdma); + if (rc) + goto err_sqdma; + + query_rqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_rqbuf, PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, query_rqdma); + if (rc) + goto err_rqdma; + + if (mask & IB_QP_AV) { + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) { + rc =3D -ENOMEM; + goto err_hdrbuf; + } + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_hdrdma; + } + + wr.wqe.cmd.query_qp.sq_dma_addr =3D cpu_to_le64(query_sqdma); + wr.wqe.cmd.query_qp.rq_dma_addr =3D cpu_to_le64(query_rqdma); + wr.wqe.cmd.query_qp.hdr_dma_addr =3D cpu_to_le64(hdr_dma); + wr.wqe.cmd.query_qp.ah_id =3D cpu_to_le32(qp->ahid); + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (rc) + goto err_hdrdma; + + flags =3D be16_to_cpu(query_sqbuf->access_perms_flags | + query_rqbuf->access_perms_flags); + + print_hex_dump_debug("sqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_sqbuf, sizeof(*query_sqbuf), true); + print_hex_dump_debug("rqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_rqbuf, sizeof(*query_rqbuf), true); + ibdev_dbg(&dev->ibdev, "query qp %u state_pmtu %#x flags %#x", + qp->qpid, query_rqbuf->state_pmtu, flags); + + attr->qp_state =3D from_ionic_qp_state(query_rqbuf->state_pmtu >> 4); + attr->cur_qp_state =3D attr->qp_state; + attr->path_mtu =3D (query_rqbuf->state_pmtu & 0xf) - 7; + attr->path_mig_state =3D IB_MIG_MIGRATED; + attr->qkey =3D be32_to_cpu(query_sqbuf->qkey_dest_qpn); + attr->rq_psn =3D be32_to_cpu(query_sqbuf->rq_psn); + attr->sq_psn =3D be32_to_cpu(query_rqbuf->sq_psn); + attr->dest_qp_num =3D attr->qkey; + attr->qp_access_flags =3D from_ionic_qp_flags(flags); + attr->pkey_index =3D 0; + attr->alt_pkey_index =3D 0; + attr->en_sqd_async_notify =3D !!(flags & IONIC_QPF_SQD_NOTIFY); + 
attr->sq_draining =3D !!(flags & IONIC_QPF_SQ_DRAINING); + attr->max_rd_atomic =3D BIT(query_rqbuf->rrq_depth) - 1; + attr->max_dest_rd_atomic =3D BIT(query_rqbuf->rsq_depth) - 1; + attr->min_rnr_timer =3D query_sqbuf->rnr_timer; + attr->port_num =3D 0; + attr->timeout =3D query_sqbuf->retry_timeout; + attr->retry_cnt =3D query_rqbuf->retry_rnrtry & 0xf; + attr->rnr_retry =3D query_rqbuf->retry_rnrtry >> 4; + attr->alt_port_num =3D 0; + attr->alt_timeout =3D 0; + attr->rate_limit =3D be32_to_cpu(query_sqbuf->rate_limit_kbps); + + if (mask & IB_QP_AV) + ionic_set_ah_attr(dev, &attr->ah_attr, + qp->hdr, qp->sgid_index); + +err_hdrdma: + if (mask & IB_QP_AV) { + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, + PAGE_SIZE, DMA_FROM_DEVICE); + kfree(hdr_buf); + } +err_hdrbuf: + dma_unmap_single(dev->lif_cfg.hwdev, query_rqdma, sizeof(*query_rqbuf), + DMA_FROM_DEVICE); +err_rqdma: + dma_unmap_single(dev->lif_cfg.hwdev, query_sqdma, sizeof(*query_sqbuf), + DMA_FROM_DEVICE); +err_sqdma: + kfree(query_rqbuf); +err_rqbuf: + kfree(query_sqbuf); +err_sqbuf: + return rc; +} + +static int ionic_destroy_qp_cmd(struct ionic_ibdev *dev, u32 qpid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_QP_IN_V1_LEN), + .cmd.destroy_qp =3D { + .qp_id =3D cpu_to_le32(qpid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_QP) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +static bool ionic_expdb_wqe_size_supported(struct ionic_ibdev *dev, + uint32_t wqe_size) +{ + switch (wqe_size) { + case 64: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_64; + case 128: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_128; + case 256: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_256; + case 512: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_512; + } + + return false; +} + +static void 
ionic_qp_sq_init_cmb(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_udata *udata, + int max_data) +{ + u8 expdb_stride_log2 =3D 0; + bool expdb; + int rc; + + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + goto not_in_cmb; + + if (qp->sq_cmb & ~IONIC_CMB_SUPPORTED) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D IONIC_CMB_SUPPORTED; + } + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.sq_expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + qp->sq_cmb_order =3D order_base_2(qp->sq.size / PAGE_SIZE); + + if (qp->sq_cmb_order >=3D IONIC_SQCMB_ORDER) + goto not_in_cmb; + + if (qp->sq_cmb & IONIC_CMB_EXPDB) + expdb_stride_log2 =3D qp->sq.stride_log2; + + rc =3D ionic_get_cmb(dev->lif_cfg.lif, &qp->sq_cmb_pgid, + &qp->sq_cmb_addr, qp->sq_cmb_order, + expdb_stride_log2, &expdb); + if (rc) + goto not_in_cmb; + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto err_map; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + return; + +err_map: + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +not_in_cmb: + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + ibdev_warn(&dev->ibdev, "could not place sq in cmb as required\n"); + + qp->sq_cmb =3D 0; + qp->sq_cmb_order =3D IONIC_RES_INVALID; + qp->sq_cmb_pgid =3D 0; + qp->sq_cmb_addr =3D 0; + + qp->sq_cmb_mmap.offset =3D 0; + qp->sq_cmb_mmap.size =3D 0; + qp->sq_cmb_mmap.pfn =3D 0; +} + +static void ionic_qp_sq_destroy_cmb(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_qp *qp) +{ + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + return; + + if (ctx) { + mutex_lock(&ctx->mmap_mut); + list_del(&qp->sq_cmb_mmap.ctx_ent); + mutex_unlock(&ctx->mmap_mut); + } + + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +} + +static int ionic_qp_sq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx, + struct ionic_qp *qp, struct ionic_qdesc *sq, + struct ionic_tbl_buf 
*buf, int max_wr, int max_sge, + int max_data, int sq_spec, struct ib_udata *udata) +{ + u32 wqe_size; + int rc =3D 0; + + qp->sq_msn_prod =3D 0; + qp->sq_msn_cons =3D 0; + + INIT_LIST_HEAD(&qp->sq_cmb_mmap.ctx_ent); + + if (!qp->has_sq) { + if (buf) { + buf->tbl_buf =3D NULL; + buf->tbl_limit =3D 0; + buf->tbl_pages =3D 0; + } + if (udata) + rc =3D ionic_validate_qdesc_zero(sq); + + return rc; + } + + rc =3D -EINVAL; + + if (max_wr < 0 || max_wr > 0xffff) + goto err_sq; + + if (max_sge < 1) + goto err_sq; + + if (max_sge > min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, + qp->sq_cmb & + IONIC_CMB_EXPDB), + IONIC_SPEC_HIGH)) + goto err_sq; + + if (max_data < 0) + goto err_sq; + + if (max_data > ionic_v1_send_wqe_max_data(dev->lif_cfg.max_stride, + qp->sq_cmb & IONIC_CMB_EXPDB)) + goto err_sq; + + if (udata) { + rc =3D ionic_validate_qdesc(sq); + if (rc) + goto err_sq; + + qp->sq_spec =3D sq_spec; + + qp->sq.ptr =3D NULL; + qp->sq.size =3D sq->size; + qp->sq.mask =3D sq->mask; + qp->sq.depth_log2 =3D sq->depth_log2; + qp->sq.stride_log2 =3D sq->stride_log2; + + qp->sq_meta =3D NULL; + qp->sq_msn_idx =3D NULL; + + qp->sq_umem =3D ib_umem_get(&dev->ibdev, sq->addr, sq->size, 0); + if (IS_ERR(qp->sq_umem)) { + rc =3D PTR_ERR(qp->sq_umem); + goto err_sq; + } + } else { + qp->sq_umem =3D NULL; + + qp->sq_spec =3D ionic_v1_use_spec_sge(max_sge, sq_spec); + if (sq_spec && !qp->sq_spec) + ibdev_dbg(&dev->ibdev, + "init sq: max_sge %u disables spec\n", + max_sge); + + if (qp->sq_cmb & IONIC_CMB_EXPDB) { + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + true); + + if (!ionic_expdb_wqe_size_supported(dev, wqe_size)) + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + if (!(qp->sq_cmb & IONIC_CMB_EXPDB)) + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + false); + + rc =3D ionic_queue_init(&qp->sq, dev->lif_cfg.hwdev, + max_wr, wqe_size); + if (rc) + goto err_sq; + + ionic_queue_dbell_init(&qp->sq, qp->qpid); + + 
qp->sq_meta =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_meta), + GFP_KERNEL); + if (!qp->sq_meta) { + rc =3D -ENOMEM; + goto err_sq_meta; + } + + qp->sq_msn_idx =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_msn_idx), + GFP_KERNEL); + if (!qp->sq_msn_idx) { + rc =3D -ENOMEM; + goto err_sq_msn; + } + } + + ionic_qp_sq_init_cmb(dev, qp, udata, max_data); + + if (qp->sq_cmb & IONIC_CMB_ENABLE) + rc =3D ionic_pgtbl_init(dev, buf, NULL, + (u64)qp->sq_cmb_pgid << PAGE_SHIFT, + 1, PAGE_SIZE); + else + rc =3D ionic_pgtbl_init(dev, buf, + qp->sq_umem, qp->sq.dma, 1, PAGE_SIZE); + if (rc) { + ibdev_dbg(&dev->ibdev, + "create sq %u pgtbl_init error %d\n", qp->qpid, rc); + goto err_sq_tbl; + } + + return 0; + +err_sq_tbl: + ionic_qp_sq_destroy_cmb(dev, ctx, qp); + kfree(qp->sq_msn_idx); +err_sq_msn: + kfree(qp->sq_meta); +err_sq_meta: + if (qp->sq_umem) + ib_umem_release(qp->sq_umem); + else + ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev); +err_sq: + return rc; +} + +static void ionic_qp_sq_destroy(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_qp *qp) +{ + if (!qp->has_sq) + return; + + ionic_qp_sq_destroy_cmb(dev, ctx, qp); + + kfree(qp->sq_msn_idx); + kfree(qp->sq_meta); + + if (qp->sq_umem) + ib_umem_release(qp->sq_umem); + else + ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev); +} + +static void ionic_qp_rq_init_cmb(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_udata *udata) +{ + u8 expdb_stride_log2 =3D 0; + bool expdb; + int rc; + + if (!(qp->rq_cmb & IONIC_CMB_ENABLE)) + goto not_in_cmb; + + if (qp->rq_cmb & ~IONIC_CMB_SUPPORTED) { + if (qp->rq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->rq_cmb &=3D IONIC_CMB_SUPPORTED; + } + + if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.rq_expdb) { + if (qp->rq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->rq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + qp->rq_cmb_order =3D order_base_2(qp->rq.size / PAGE_SIZE); + + if (qp->rq_cmb_order >=3D IONIC_RQCMB_ORDER) + 
+		goto not_in_cmb;
+
+	if (qp->rq_cmb & IONIC_CMB_EXPDB)
+		expdb_stride_log2 = qp->rq.stride_log2;
+
+	rc = ionic_get_cmb(dev->lif_cfg.lif, &qp->rq_cmb_pgid,
+			   &qp->rq_cmb_addr, qp->rq_cmb_order,
+			   expdb_stride_log2, &expdb);
+	if (rc)
+		goto not_in_cmb;
+
+	if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !expdb) {
+		if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+			goto err_map;
+
+		qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+	}
+
+	return;
+
+err_map:
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+not_in_cmb:
+	if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+		ibdev_warn(&dev->ibdev, "could not place rq in cmb as required\n");
+
+	qp->rq_cmb = 0;
+	qp->rq_cmb_order = IONIC_RES_INVALID;
+	qp->rq_cmb_pgid = 0;
+	qp->rq_cmb_addr = 0;
+
+	qp->rq_cmb_mmap.offset = 0;
+	qp->rq_cmb_mmap.size = 0;
+	qp->rq_cmb_mmap.pfn = 0;
+}
+
+static void ionic_qp_rq_destroy_cmb(struct ionic_ibdev *dev,
+				    struct ionic_ctx *ctx,
+				    struct ionic_qp *qp)
+{
+	if (!(qp->rq_cmb & IONIC_CMB_ENABLE))
+		return;
+
+	if (ctx) {
+		mutex_lock(&ctx->mmap_mut);
+		list_del(&qp->rq_cmb_mmap.ctx_ent);
+		mutex_unlock(&ctx->mmap_mut);
+	}
+
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+}
+
+static int ionic_qp_rq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx,
+			    struct ionic_qp *qp, struct ionic_qdesc *rq,
+			    struct ionic_tbl_buf *buf, int max_wr, int max_sge,
+			    int rq_spec, struct ib_udata *udata)
+{
+	int rc = 0, i;
+	u32 wqe_size;
+
+	INIT_LIST_HEAD(&qp->rq_cmb_mmap.ctx_ent);
+
+	if (!qp->has_rq) {
+		if (buf) {
+			buf->tbl_buf = NULL;
+			buf->tbl_limit = 0;
+			buf->tbl_pages = 0;
+		}
+		if (udata)
+			rc = ionic_validate_qdesc_zero(rq);
+
+		return rc;
+	}
+
+	rc = -EINVAL;
+
+	if (max_wr < 0 || max_wr > 0xffff)
+		goto err_rq;
+
+	if (max_sge < 1)
+		goto err_rq;
+
+	if (max_sge > min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+			  IONIC_SPEC_HIGH))
+		goto err_rq;
+
+	if (udata) {
+		rc = ionic_validate_qdesc(rq);
+		if (rc)
+			goto err_rq;
+
+		qp->rq_spec = rq_spec;
+
+		qp->rq.ptr = NULL;
+		qp->rq.size = rq->size;
+		qp->rq.mask = rq->mask;
+		qp->rq.depth_log2 = rq->depth_log2;
+		qp->rq.stride_log2 = rq->stride_log2;
+
+		qp->rq_meta = NULL;
+
+		qp->rq_umem = ib_umem_get(&dev->ibdev, rq->addr, rq->size, 0);
+		if (IS_ERR(qp->rq_umem)) {
+			rc = PTR_ERR(qp->rq_umem);
+			goto err_rq;
+		}
+	} else {
+		qp->rq_umem = NULL;
+
+		qp->rq_spec = ionic_v1_use_spec_sge(max_sge, rq_spec);
+		if (rq_spec && !qp->rq_spec)
+			ibdev_dbg(&dev->ibdev,
+				  "init rq: max_sge %u disables spec\n",
+				  max_sge);
+
+		if (qp->rq_cmb & IONIC_CMB_EXPDB) {
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      true);
+
+			if (!ionic_expdb_wqe_size_supported(dev, wqe_size))
+				qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+		}
+
+		if (!(qp->rq_cmb & IONIC_CMB_EXPDB))
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      false);
+
+		rc = ionic_queue_init(&qp->rq, dev->lif_cfg.hwdev,
+				      max_wr, wqe_size);
+		if (rc)
+			goto err_rq;
+
+		ionic_queue_dbell_init(&qp->rq, qp->qpid);
+
+		qp->rq_meta = kmalloc_array((u32)qp->rq.mask + 1,
+					    sizeof(*qp->rq_meta),
+					    GFP_KERNEL);
+		if (!qp->rq_meta) {
+			rc = -ENOMEM;
+			goto err_rq_meta;
+		}
+
+		for (i = 0; i < qp->rq.mask; ++i)
+			qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+		qp->rq_meta[i].next = IONIC_META_LAST;
+		qp->rq_meta_head = &qp->rq_meta[0];
+	}
+
+	ionic_qp_rq_init_cmb(dev, qp, udata);
+
+	if (qp->rq_cmb & IONIC_CMB_ENABLE)
+		rc = ionic_pgtbl_init(dev, buf, NULL,
+				      (u64)qp->rq_cmb_pgid << PAGE_SHIFT,
+				      1, PAGE_SIZE);
+	else
+		rc = ionic_pgtbl_init(dev, buf,
+				      qp->rq_umem, qp->rq.dma, 1, PAGE_SIZE);
+	if (rc) {
+		ibdev_dbg(&dev->ibdev,
+			  "create rq %u pgtbl_init error %d\n", qp->qpid, rc);
+		goto err_rq_tbl;
+	}
+
+	return 0;
+
+err_rq_tbl:
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+	kfree(qp->rq_meta);
+err_rq_meta:
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+err_rq:
+
+	return rc;
+}
+
+static void ionic_qp_rq_destroy(struct ionic_ibdev *dev,
+				struct ionic_ctx *ctx,
+				struct ionic_qp *qp)
+{
+	if (!qp->has_rq)
+		return;
+
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+
+	kfree(qp->rq_meta);
+
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+}
+
+static int ionic_create_qp(struct ib_qp *ibqp,
+			   struct ib_qp_init_attr *attr,
+			   struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_tbl_buf sq_buf = {}, rq_buf = {};
+	struct ionic_pd *pd = to_ionic_pd(ibqp->pd);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_qp_resp resp = {};
+	struct ionic_qp_req req = {};
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	u8 udma_mask;
+	int rc;
+
+	if (udata) {
+		rc = ib_copy_from_udata(&req, udata, sizeof(req));
+		if (rc)
+			goto err_req;
+	} else {
+		req.sq_spec = IONIC_SPEC_HIGH;
+		req.rq_spec = IONIC_SPEC_HIGH;
+	}
+
+	if (attr->qp_type == IB_QPT_SMI || attr->qp_type > IB_QPT_UD) {
+		rc = -EOPNOTSUPP;
+		goto err_qp;
+	}
+
+	qp->state = IB_QPS_RESET;
+
+	INIT_LIST_HEAD(&qp->cq_poll_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_rq);
+
+	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
+
+	qp->has_sq = true;
+	qp->has_rq = true;
+
+	if (attr->qp_type == IB_QPT_GSI) {
+		rc = ionic_get_gsi_qpid(dev, &qp->qpid);
+	} else {
+		udma_mask = BIT(dev->lif_cfg.udma_count) - 1;
+
+		if (qp->has_sq)
+			udma_mask &= to_ionic_vcq(attr->send_cq)->udma_mask;
+
+		if (qp->has_rq)
+			udma_mask &= to_ionic_vcq(attr->recv_cq)->udma_mask;
+
+		if (udata && req.udma_mask)
+			udma_mask &= req.udma_mask;
+
+		if (!udma_mask) {
+			rc = -EINVAL;
+			goto err_qpid;
+		}
+
+		rc = ionic_get_qpid(dev, &qp->qpid, &qp->udma_idx, udma_mask);
+	}
+	if (rc)
+		goto err_qpid;
+
+	qp->sig_all = attr->sq_sig_type ==
+		      IB_SIGNAL_ALL_WR;
+	qp->has_ah = attr->qp_type == IB_QPT_RC;
+
+	if (qp->has_ah) {
+		qp->hdr = kzalloc(sizeof(*qp->hdr), GFP_KERNEL);
+		if (!qp->hdr) {
+			rc = -ENOMEM;
+			goto err_ah_alloc;
+		}
+
+		rc = ionic_get_ahid(dev, &qp->ahid);
+		if (rc)
+			goto err_ahid;
+	}
+
+	if (udata) {
+		if (req.rq_cmb & IONIC_CMB_ENABLE)
+			qp->rq_cmb = req.rq_cmb;
+
+		if (req.sq_cmb & IONIC_CMB_ENABLE)
+			qp->sq_cmb = req.sq_cmb;
+	}
+
+	rc = ionic_qp_sq_init(dev, ctx, qp, &req.sq, &sq_buf,
+			      attr->cap.max_send_wr, attr->cap.max_send_sge,
+			      attr->cap.max_inline_data, req.sq_spec, udata);
+	if (rc)
+		goto err_sq;
+
+	rc = ionic_qp_rq_init(dev, ctx, qp, &req.rq, &rq_buf,
+			      attr->cap.max_recv_wr, attr->cap.max_recv_sge,
+			      req.rq_spec, udata);
+	if (rc)
+		goto err_rq;
+
+	rc = ionic_create_qp_cmd(dev, pd,
+				 to_ionic_vcq_cq(attr->send_cq, qp->udma_idx),
+				 to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx),
+				 qp, &sq_buf, &rq_buf, attr);
+	if (rc)
+		goto err_cmd;
+
+	if (udata) {
+		resp.qpid = qp->qpid;
+		resp.udma_idx = qp->udma_idx;
+
+		if (qp->sq_cmb & IONIC_CMB_ENABLE) {
+			qp->sq_cmb_mmap.size = qp->sq.size;
+			qp->sq_cmb_mmap.pfn = PHYS_PFN(qp->sq_cmb_addr);
+			if ((qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_warn(&dev->ibdev,
+					   "Both sq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->sq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			qp->sq_cmb_mmap.writecombine =
+				(qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+				!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (qp->sq_cmb_mmap.writecombine)
+				qp->sq_cmb |= IONIC_CMB_WC;
+			else
+				qp->sq_cmb |= IONIC_CMB_UC;
+
+			mutex_lock(&ctx->mmap_mut);
+			qp->sq_cmb_mmap.offset = ctx->mmap_off;
+			ctx->mmap_off += qp->sq.size;
+			list_add(&qp->sq_cmb_mmap.ctx_ent, &ctx->mmap_list);
+			mutex_unlock(&ctx->mmap_mut);
+
+			resp.sq_cmb = qp->sq_cmb;
+			resp.sq_cmb_offset = qp->sq_cmb_mmap.offset;
+		}
+
+		if (qp->rq_cmb
+		    & IONIC_CMB_ENABLE) {
+			qp->rq_cmb_mmap.size = qp->rq.size;
+			qp->rq_cmb_mmap.pfn = PHYS_PFN(qp->rq_cmb_addr);
+			if ((qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_warn(&dev->ibdev,
+					   "Both rq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->rq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			if (qp->rq_cmb & IONIC_CMB_EXPDB)
+				qp->rq_cmb_mmap.writecombine =
+					(qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					== IONIC_CMB_WC;
+			else
+				qp->rq_cmb_mmap.writecombine =
+					(qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (qp->rq_cmb_mmap.writecombine)
+				qp->rq_cmb |= IONIC_CMB_WC;
+			else
+				qp->rq_cmb |= IONIC_CMB_UC;
+
+			mutex_lock(&ctx->mmap_mut);
+			qp->rq_cmb_mmap.offset = ctx->mmap_off;
+			ctx->mmap_off += qp->rq.size;
+			list_add(&qp->rq_cmb_mmap.ctx_ent, &ctx->mmap_list);
+			mutex_unlock(&ctx->mmap_mut);
+
+			resp.rq_cmb = qp->rq_cmb;
+			resp.rq_cmb_offset = qp->rq_cmb_mmap.offset;
+		}
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc)
+			goto err_resp;
+	}
+
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+
+	qp->ibqp.qp_num = qp->qpid;
+
+	init_completion(&qp->qp_rel_comp);
+	kref_init(&qp->qp_kref);
+
+	write_lock_irqsave(&dev->qp_tbl_rw, irqflags);
+	rc = xa_err(xa_store(&dev->qp_tbl, qp->qpid, qp, GFP_KERNEL));
+	write_unlock_irqrestore(&dev->qp_tbl_rw, irqflags);
+	if (rc)
+		goto err_xa;
+
+	if (qp->has_sq) {
+		cq = to_ionic_vcq_cq(attr->send_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+
+		attr->cap.max_send_wr = qp->sq.mask;
+		attr->cap.max_send_sge =
+			ionic_v1_send_wqe_max_sge(qp->sq.stride_log2,
+						  qp->sq_spec,
+						  qp->sq_cmb & IONIC_CMB_EXPDB);
+		attr->cap.max_inline_data =
+			ionic_v1_send_wqe_max_data(qp->sq.stride_log2,
+						   qp->sq_cmb &
+						   IONIC_CMB_EXPDB);
+		qp->sq_cqid =
+			cq->cqid;
+	}
+
+	if (qp->has_rq) {
+		cq = to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+
+		attr->cap.max_recv_wr = qp->rq.mask;
+		attr->cap.max_recv_sge =
+			ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2,
+						  qp->rq_spec,
+						  qp->rq_cmb & IONIC_CMB_EXPDB);
+		qp->rq_cqid = cq->cqid;
+	}
+
+	return 0;
+
+err_xa:
+err_resp:
+	ionic_destroy_qp_cmd(dev, qp->qpid);
+err_cmd:
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_qp_rq_destroy(dev, ctx, qp);
+err_rq:
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+err_sq:
+	if (qp->has_ah)
+		ionic_put_ahid(dev, qp->ahid);
+err_ahid:
+	kfree(qp->hdr);
+err_ah_alloc:
+	ionic_put_qpid(dev, qp->qpid);
+err_qpid:
+err_qp:
+err_req:
+	return rc;
+}
+
+void ionic_notify_flush_cq(struct ionic_cq *cq)
+{
+	if (cq->flush && cq->vcq->ibcq.comp_handler)
+		cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq,
+					   cq->vcq->ibcq.cq_context);
+}
+
+static void ionic_notify_qp_cqs(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	if (qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.send_cq,
+						      qp->udma_idx));
+	if (qp->ibqp.recv_cq && qp->ibqp.recv_cq != qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.recv_cq,
+						      qp->udma_idx));
+}
+
+void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP sq_lock to set up flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = true;
+		if (!ionic_queue_empty(&qp->sq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+		}
+		spin_unlock(&qp->sq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP rq_lock to set up
+		 * flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = true;
+		if (!ionic_queue_empty(&qp->rq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+		}
+		spin_unlock(&qp->rq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+}
+
+static void ionic_clean_cq(struct ionic_cq *cq, u32 qpid)
+{
+	struct ionic_v1_cqe *qcqe;
+	int prod, qtf, qid, type;
+	bool color;
+
+	if (!cq->q.ptr)
+		return;
+
+	color = cq->color;
+	prod = cq->q.prod;
+	qcqe = ionic_queue_at(&cq->q, prod);
+
+	while (color == ionic_v1_cqe_color(qcqe)) {
+		qtf = ionic_v1_cqe_qtf(qcqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		if (qid == qpid && type != IONIC_V1_CQE_TYPE_ADMIN)
+			ionic_v1_cqe_clean(qcqe);
+
+		prod = ionic_queue_next(&cq->q, prod);
+		qcqe = ionic_queue_at(&cq->q, prod);
+		color = ionic_color_wrap(prod, color);
+	}
+}
+
+static void ionic_reset_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int i;
+
+	local_irq_save(irqflags);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->has_sq) {
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = false;
+		qp->sq_flush_rcvd = false;
+		qp->sq_msn_prod = 0;
+		qp->sq_msn_cons = 0;
+		qp->sq.prod = 0;
+		qp->sq.cons = 0;
+		spin_unlock(&qp->sq_lock);
+	}
+
+	if (qp->has_rq) {
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = false;
+		qp->rq.prod = 0;
+		qp->rq.cons = 0;
+		if (qp->rq_meta) {
+			for (i = 0; i < qp->rq.mask; ++i)
+				qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+			qp->rq_meta[i].next = IONIC_META_LAST;
+		}
+		qp->rq_meta_head = &qp->rq_meta[0];
+		spin_unlock(&qp->rq_lock);
+	}
+
+	local_irq_restore(irqflags);
+}
+
+static bool ionic_qp_cur_state_is_ok(enum ib_qp_state q_state,
+				     enum ib_qp_state attr_state)
+{
+	if (q_state == attr_state)
+		return true;
+
+	if (attr_state == IB_QPS_ERR)
+		return true;
+
+	if (attr_state == IB_QPS_SQE)
+		return q_state == IB_QPS_RTS || q_state == IB_QPS_SQD;
+
+	return false;
+}
+
+static int ionic_check_modify_qp(struct ionic_qp *qp, struct ib_qp_attr *attr,
+				 int mask)
+{
+	enum ib_qp_state cur_state = (mask & IB_QP_CUR_STATE) ?
+		attr->cur_qp_state : qp->state;
+	enum ib_qp_state next_state = (mask & IB_QP_STATE) ?
+		attr->qp_state : cur_state;
+
+	if ((mask & IB_QP_CUR_STATE) &&
+	    !ionic_qp_cur_state_is_ok(qp->state, attr->cur_qp_state))
+		return -EINVAL;
+
+	if (!ib_modify_qp_is_ok(cur_state, next_state, qp->ibqp.qp_type, mask))
+		return -EINVAL;
+
+	/* unprivileged qp not allowed privileged qkey */
+	if ((mask & IB_QP_QKEY) && (attr->qkey & 0x80000000) &&
+	    qp->ibqp.uobject)
+		return -EPERM;
+
+	return 0;
+}
+
+static int ionic_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+			   int mask, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	rc = ionic_check_modify_qp(qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_CAP)
+		return -EINVAL;
+
+	rc = ionic_modify_qp_cmd(dev, qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_STATE) {
+		qp->state = attr->qp_state;
+
+		if (attr->qp_state == IB_QPS_ERR) {
+			ionic_flush_qp(dev, qp);
+			ionic_notify_qp_cqs(dev, qp);
+		} else if (attr->qp_state == IB_QPS_RESET) {
+			ionic_reset_qp(dev, qp);
+		}
+	}
+
+	return 0;
+}
+
+static int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+			  int mask, struct ib_qp_init_attr *init_attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	memset(attr, 0,
+	       sizeof(*attr));
+	memset(init_attr, 0, sizeof(*init_attr));
+
+	rc = ionic_query_qp_cmd(dev, qp, attr, mask);
+	if (rc)
+		goto err_cmd;
+
+	if (qp->has_sq)
+		attr->cap.max_send_wr = qp->sq.mask;
+
+	if (qp->has_rq)
+		attr->cap.max_recv_wr = qp->rq.mask;
+
+	init_attr->event_handler = ibqp->event_handler;
+	init_attr->qp_context = ibqp->qp_context;
+	init_attr->send_cq = ibqp->send_cq;
+	init_attr->recv_cq = ibqp->recv_cq;
+	init_attr->srq = ibqp->srq;
+	init_attr->xrcd = ibqp->xrcd;
+	init_attr->cap = attr->cap;
+	init_attr->sq_sig_type = qp->sig_all ?
+		IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
+	init_attr->qp_type = ibqp->qp_type;
+	init_attr->create_flags = 0;
+	init_attr->port_num = 0;
+	init_attr->rwq_ind_tbl = ibqp->rwq_ind_tbl;
+	init_attr->source_qpn = 0;
+
+err_cmd:
+	return rc;
+}
+
+static int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+{
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int rc;
+
+	rc = ionic_destroy_qp_cmd(dev, qp->qpid);
+	if (rc)
+		return rc;
+
+	write_lock_irqsave(&dev->qp_tbl_rw, irqflags);
+	xa_erase(&dev->qp_tbl, qp->qpid);
+	write_unlock_irqrestore(&dev->qp_tbl_rw, irqflags);
+
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+	wait_for_completion(&qp->qp_rel_comp);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_poll_sq);
+		list_del(&qp->cq_flush_sq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_flush_rq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	ionic_qp_rq_destroy(dev,
+			    ctx, qp);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+	if (qp->has_ah) {
+		ionic_put_ahid(dev, qp->ahid);
+		kfree(qp->hdr);
+	}
+	ionic_put_qpid(dev, qp->qpid);
+
+	return 0;
+}
+
+static const struct ib_device_ops ionic_controlpath_ops = {
+	.driver_id = RDMA_DRIVER_IONIC,
+	.alloc_ucontext = ionic_alloc_ucontext,
+	.dealloc_ucontext = ionic_dealloc_ucontext,
+	.mmap = ionic_mmap,
+
+	.alloc_pd = ionic_alloc_pd,
+	.dealloc_pd = ionic_dealloc_pd,
+
+	.create_ah = ionic_create_ah,
+	.query_ah = ionic_query_ah,
+	.destroy_ah = ionic_destroy_ah,
+	.create_user_ah = ionic_create_ah,
+	.get_dma_mr = ionic_get_dma_mr,
+	.reg_user_mr = ionic_reg_user_mr,
+	.reg_user_mr_dmabuf = ionic_reg_user_mr_dmabuf,
+	.rereg_user_mr = ionic_rereg_user_mr,
+	.dereg_mr = ionic_dereg_mr,
+	.alloc_mr = ionic_alloc_mr,
+	.map_mr_sg = ionic_map_mr_sg,
+
+	.alloc_mw = ionic_alloc_mw,
+	.dealloc_mw = ionic_dealloc_mw,
+
+	.create_cq = ionic_create_cq,
+	.destroy_cq = ionic_destroy_cq,
+
+	.create_qp = ionic_create_qp,
+	.modify_qp = ionic_modify_qp,
+	.query_qp = ionic_query_qp,
+	.destroy_qp = ionic_destroy_qp,
+
+	INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx),
+	INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd),
+	INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah),
+	INIT_RDMA_OBJ_SIZE(ib_cq, ionic_vcq, ibcq),
+	INIT_RDMA_OBJ_SIZE(ib_qp, ionic_qp, ibqp),
+	INIT_RDMA_OBJ_SIZE(ib_mw, ionic_mr, ibmw),
+};
+
+void ionic_controlpath_setops(struct ionic_ibdev *dev)
+{
+	ib_set_device_ops(&dev->ibdev, &ionic_controlpath_ops);
+
+	dev->ibdev.uverbs_cmd_mask |=
+		BIT_ULL(IB_USER_VERBS_CMD_ALLOC_PD) |
+		BIT_ULL(IB_USER_VERBS_CMD_DEALLOC_PD) |
+		BIT_ULL(IB_USER_VERBS_CMD_CREATE_AH) |
+		BIT_ULL(IB_USER_VERBS_CMD_QUERY_AH) |
+		BIT_ULL(IB_USER_VERBS_CMD_DESTROY_AH) |
+		BIT_ULL(IB_USER_VERBS_CMD_REG_MR) |
+		BIT_ULL(IB_USER_VERBS_CMD_REREG_MR) |
+		BIT_ULL(IB_USER_VERBS_CMD_REG_SMR) |
+		BIT_ULL(IB_USER_VERBS_CMD_DEREG_MR) |
+		BIT_ULL(IB_USER_VERBS_CMD_ALLOC_MW) |
+		BIT_ULL(IB_USER_VERBS_CMD_BIND_MW) |
+		BIT_ULL(IB_USER_VERBS_CMD_DEALLOC_MW) |
+		BIT_ULL(IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL) |
+		BIT_ULL(IB_USER_VERBS_CMD_CREATE_CQ) |
+		BIT_ULL(IB_USER_VERBS_CMD_DESTROY_CQ) |
+		BIT_ULL(IB_USER_VERBS_CMD_CREATE_QP) |
+		BIT_ULL(IB_USER_VERBS_CMD_QUERY_QP) |
+		BIT_ULL(IB_USER_VERBS_CMD_MODIFY_QP) |
+		BIT_ULL(IB_USER_VERBS_CMD_DESTROY_QP) |
+		0;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index b4f029dde3a9..881948a57341 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -5,6 +5,266 @@
 #define _IONIC_FW_H_
 
 #include
+#include
+
+/* common for ib spec */
+
+#define IONIC_EXP_DBELL_SZ	8
+
+enum ionic_mrid_bits {
+	IONIC_MRID_INDEX_SHIFT = 8,
+};
+
+static inline u32 ionic_mrid(u32 index, u8 key)
+{
+	return (index << IONIC_MRID_INDEX_SHIFT) | key;
+}
+
+static inline u32 ionic_mrid_index(u32 lrkey)
+{
+	return lrkey >> IONIC_MRID_INDEX_SHIFT;
+}
+
+/* common to all versions */
+
+/* wqe scatter gather element */
+struct ionic_sge {
+	__be64 va;
+	__be32 len;
+	__be32 lkey;
+};
+
+/* admin queue mr type */
+enum ionic_mr_flags {
+	/* bits that determine mr access */
+	IONIC_MRF_LOCAL_WRITE = BIT(0),
+	IONIC_MRF_REMOTE_WRITE = BIT(1),
+	IONIC_MRF_REMOTE_READ = BIT(2),
+	IONIC_MRF_REMOTE_ATOMIC = BIT(3),
+	IONIC_MRF_MW_BIND = BIT(4),
+	IONIC_MRF_ZERO_BASED = BIT(5),
+	IONIC_MRF_ON_DEMAND = BIT(6),
+	IONIC_MRF_PB = BIT(7),
+	IONIC_MRF_ACCESS_MASK = BIT(12) - 1,
+
+	/* bits that determine mr type */
+	IONIC_MRF_UKEY_EN = BIT(13),
+	IONIC_MRF_IS_MW = BIT(14),
+	IONIC_MRF_INV_EN = BIT(15),
+
+	/* base flags combinations for mr types */
+	IONIC_MRF_USER_MR = 0,
+	IONIC_MRF_PHYS_MR = (IONIC_MRF_UKEY_EN |
+			     IONIC_MRF_INV_EN),
+	IONIC_MRF_MW_1 = (IONIC_MRF_UKEY_EN |
+			  IONIC_MRF_IS_MW),
+	IONIC_MRF_MW_2 = (IONIC_MRF_UKEY_EN |
+			  IONIC_MRF_IS_MW |
+			  IONIC_MRF_INV_EN),
+};
+
+static inline int
+to_ionic_mr_flags(int access)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_LOCAL_WRITE)
+		flags |= IONIC_MRF_LOCAL_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_MRF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_MRF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_MRF_REMOTE_ATOMIC;
+
+	if (access & IB_ACCESS_MW_BIND)
+		flags |= IONIC_MRF_MW_BIND;
+
+	if (access & IB_ZERO_BASED)
+		flags |= IONIC_MRF_ZERO_BASED;
+
+	return flags;
+}
+
+enum ionic_qp_flags {
+	/* bits that determine qp access */
+	IONIC_QPF_REMOTE_WRITE = BIT(0),
+	IONIC_QPF_REMOTE_READ = BIT(1),
+	IONIC_QPF_REMOTE_ATOMIC = BIT(2),
+
+	/* bits that determine other qp behavior */
+	IONIC_QPF_SQ_PB = BIT(6),
+	IONIC_QPF_RQ_PB = BIT(7),
+	IONIC_QPF_SQ_SPEC = BIT(8),
+	IONIC_QPF_RQ_SPEC = BIT(9),
+	IONIC_QPF_REMOTE_PRIVILEGED = BIT(10),
+	IONIC_QPF_SQ_DRAINING = BIT(11),
+	IONIC_QPF_SQD_NOTIFY = BIT(12),
+	IONIC_QPF_SQ_CMB = BIT(13),
+	IONIC_QPF_RQ_CMB = BIT(14),
+	IONIC_QPF_PRIVILEGED = BIT(15),
+};
+
+static inline int from_ionic_qp_flags(int flags)
+{
+	int access_flags = 0;
+
+	if (flags & IONIC_QPF_REMOTE_WRITE)
+		access_flags |= IB_ACCESS_REMOTE_WRITE;
+
+	if (flags & IONIC_QPF_REMOTE_READ)
+		access_flags |= IB_ACCESS_REMOTE_READ;
+
+	if (flags & IONIC_QPF_REMOTE_ATOMIC)
+		access_flags |= IB_ACCESS_REMOTE_ATOMIC;
+
+	return access_flags;
+}
+
+static inline int to_ionic_qp_flags(int access, bool sqd_notify,
+				    bool sq_is_cmb, bool rq_is_cmb,
+				    bool sq_spec, bool rq_spec,
+				    bool privileged, bool remote_privileged)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_QPF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_QPF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_QPF_REMOTE_ATOMIC;
+
+	if (sqd_notify)
+		flags |= IONIC_QPF_SQD_NOTIFY;
+
+	if (sq_is_cmb)
+		flags |= IONIC_QPF_SQ_CMB;
+
+	if (rq_is_cmb)
+		flags |= IONIC_QPF_RQ_CMB;
+
+	if (sq_spec)
+		flags |= IONIC_QPF_SQ_SPEC;
+
+	if (rq_spec)
+		flags |= IONIC_QPF_RQ_SPEC;
+
+	if (privileged)
+		flags |= IONIC_QPF_PRIVILEGED;
+
+	if (remote_privileged)
+		flags |= IONIC_QPF_REMOTE_PRIVILEGED;
+
+	return flags;
+}
+
+/* admin queue qp type */
+enum ionic_qp_type {
+	IONIC_QPT_RC,
+	IONIC_QPT_UC,
+	IONIC_QPT_RD,
+	IONIC_QPT_UD,
+	IONIC_QPT_SRQ,
+	IONIC_QPT_XRC_INI,
+	IONIC_QPT_XRC_TGT,
+	IONIC_QPT_XRC_SRQ,
+};
+
+static inline int to_ionic_qp_type(enum ib_qp_type type)
+{
+	switch (type) {
+	case IB_QPT_GSI:
+	case IB_QPT_UD:
+		return IONIC_QPT_UD;
+	case IB_QPT_RC:
+		return IONIC_QPT_RC;
+	case IB_QPT_UC:
+		return IONIC_QPT_UC;
+	case IB_QPT_XRC_INI:
+		return IONIC_QPT_XRC_INI;
+	case IB_QPT_XRC_TGT:
+		return IONIC_QPT_XRC_TGT;
+	default:
+		return -EINVAL;
+	}
+}
+
+/* admin queue qp state */
+enum ionic_qp_state {
+	IONIC_QPS_RESET,
+	IONIC_QPS_INIT,
+	IONIC_QPS_RTR,
+	IONIC_QPS_RTS,
+	IONIC_QPS_SQD,
+	IONIC_QPS_SQE,
+	IONIC_QPS_ERR,
+};
+
+static inline int from_ionic_qp_state(enum ionic_qp_state state)
+{
+	switch (state) {
+	case IONIC_QPS_RESET:
+		return IB_QPS_RESET;
+	case IONIC_QPS_INIT:
+		return IB_QPS_INIT;
+	case IONIC_QPS_RTR:
+		return IB_QPS_RTR;
+	case IONIC_QPS_RTS:
+		return IB_QPS_RTS;
+	case IONIC_QPS_SQD:
+		return IB_QPS_SQD;
+	case IONIC_QPS_SQE:
+		return IB_QPS_SQE;
+	case IONIC_QPS_ERR:
+		return IB_QPS_ERR;
+	default:
+		return -EINVAL;
+	}
+}
+
+static inline int to_ionic_qp_state(enum ib_qp_state state)
+{
+	switch (state) {
+	case IB_QPS_RESET:
+		return IONIC_QPS_RESET;
+	case IB_QPS_INIT:
+		return IONIC_QPS_INIT;
+	case IB_QPS_RTR:
+		return IONIC_QPS_RTR;
+	case IB_QPS_RTS:
+		return IONIC_QPS_RTS;
+	case IB_QPS_SQD:
+		return IONIC_QPS_SQD;
+	case IB_QPS_SQE:
+		return IONIC_QPS_SQE;
+	case IB_QPS_ERR:
+		return IONIC_QPS_ERR;
+	default:
+		return 0;
+	}
+}
+
+static inline int to_ionic_qp_modify_state(enum ib_qp_state to_state,
+					   enum ib_qp_state from_state)
+{
+	return
+		to_ionic_qp_state(to_state) |
+		(to_ionic_qp_state(from_state) << 4);
+}
+
+/* fw abi v1 */
+
+/* data payload part of v1 wqe */
+union ionic_v1_pld {
+	struct ionic_sge sgl[2];
+	__be32 spec32[8];
+	__be16 spec16[16];
+	__u8 data[32];
+};
 
 /* completion queue v1 cqe */
 struct ionic_v1_cqe {
@@ -78,6 +338,390 @@ static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf)
 	return qtf >> IONIC_V1_CQE_QID_SHIFT;
 }
 
+/* v1 base wqe header */
+struct ionic_v1_base_hdr {
+	__u64 wqe_id;
+	__u8 op;
+	__u8 num_sge_key;
+	__be16 flags;
+	__be32 imm_data_key;
+};
+
+/* v1 receive wqe body */
+struct ionic_v1_recv_bdy {
+	__u8 rsvd[16];
+	union ionic_v1_pld pld;
+};
+
+/* v1 send/rdma wqe body (common, has sgl) */
+struct ionic_v1_common_bdy {
+	union {
+		struct {
+			__be32 ah_id;
+			__be32 dest_qpn;
+			__be32 dest_qkey;
+		} send;
+		struct {
+			__be32 remote_va_high;
+			__be32 remote_va_low;
+			__be32 remote_rkey;
+		} rdma;
+	};
+	__be32 length;
+	union ionic_v1_pld pld;
+};
+
+/* v1 atomic wqe body */
+struct ionic_v1_atomic_bdy {
+	__be32 remote_va_high;
+	__be32 remote_va_low;
+	__be32 remote_rkey;
+	__be32 swap_add_high;
+	__be32 swap_add_low;
+	__be32 compare_high;
+	__be32 compare_low;
+	__u8 rsvd[4];
+	struct ionic_sge sge;
+};
+
+/* v1 reg mr wqe body */
+struct ionic_v1_reg_mr_bdy {
+	__be64 va;
+	__be64 length;
+	__be64 offset;
+	__le64 dma_addr;
+	__be32 map_count;
+	__be16 flags;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+	__u8 rsvd[8];
+};
+
+/* v1 bind mw wqe body */
+struct ionic_v1_bind_mw_bdy {
+	__be64 va;
+	__be64 length;
+	__be32 lkey;
+	__be16 flags;
+	__u8 rsvd[26];
+};
+
+/* v1 send/recv wqe */
+struct ionic_v1_wqe {
+	struct ionic_v1_base_hdr base;
+	union {
+		struct ionic_v1_recv_bdy recv;
+		struct ionic_v1_common_bdy common;
+		struct ionic_v1_atomic_bdy atomic;
+		struct ionic_v1_reg_mr_bdy reg_mr;
+		struct ionic_v1_bind_mw_bdy bind_mw;
+	};
+};
+
+/* queue pair v1 send opcodes */
+enum ionic_v1_op {
+	IONIC_V1_OP_SEND,
+	IONIC_V1_OP_SEND_INV,
+	IONIC_V1_OP_SEND_IMM,
+	IONIC_V1_OP_RDMA_READ,
+	IONIC_V1_OP_RDMA_WRITE,
+	IONIC_V1_OP_RDMA_WRITE_IMM,
+	IONIC_V1_OP_ATOMIC_CS,
+	IONIC_V1_OP_ATOMIC_FA,
+	IONIC_V1_OP_REG_MR,
+	IONIC_V1_OP_LOCAL_INV,
+	IONIC_V1_OP_BIND_MW,
+
+	/* flags */
+	IONIC_V1_FLAG_FENCE = BIT(0),
+	IONIC_V1_FLAG_SOL = BIT(1),
+	IONIC_V1_FLAG_INL = BIT(2),
+	IONIC_V1_FLAG_SIG = BIT(3),
+
+	/* flags last four bits for sgl spec format */
+	IONIC_V1_FLAG_SPEC32 = (1u << 12),
+	IONIC_V1_FLAG_SPEC16 = (2u << 12),
+	IONIC_V1_SPEC_FIRST_SGE = 2,
+};
+
+static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data,
+						int spec, bool expdb)
+{
+	size_t sz_wqe, sz_sgl, sz_data;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb) {
+		min_sge += 1;
+		min_data += IONIC_EXP_DBELL_SZ;
+	}
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, common.pld.sgl[min_sge]);
+	sz_data = offsetof(struct ionic_v1_wqe, common.pld.data[min_data]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	if (sz_data > sz_wqe)
+		sz_wqe = sz_data;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_send_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->common.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_send_wqe_max_data(u8 stride_log2, bool expdb)
+{
+	struct ionic_v1_wqe *wqe = (void *)0;
+	__u8 *data = (void *)(1ull << stride_log2);
+
+	if (expdb)
+		data -= IONIC_EXP_DBELL_SZ;
+
+	return data - wqe->common.pld.data;
+}
+
+static inline size_t ionic_v1_recv_wqe_min_size(int min_sge, int spec,
+						bool expdb)
+{
+	size_t sz_wqe, sz_sgl;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb)
+		min_sge += 1;
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, recv.pld.sgl[min_sge]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_recv_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->recv.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_use_spec_sge(int min_sge, int spec)
+{
+	if (!spec || min_sge > spec)
+		return 0;
+
+	if (min_sge <= IONIC_V1_SPEC_FIRST_SGE)
+		return IONIC_V1_SPEC_FIRST_SGE;
+
+	return spec;
+}
+
+struct ionic_admin_create_ah {
+	__le64 dma_addr;
+	__le32 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 csum_profile;
+	__u8 crypto;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_AH_IN_V1_LEN 24
+static_assert(sizeof(struct ionic_admin_create_ah) ==
+	      IONIC_ADMIN_CREATE_AH_IN_V1_LEN);
+
+struct ionic_admin_destroy_ah {
+	__le32 ah_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_AH_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_ah) ==
+	      IONIC_ADMIN_DESTROY_AH_IN_V1_LEN);
+
+struct ionic_admin_query_ah {
+	__le64 dma_addr;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_AH_IN_V1_LEN 8
+static_assert(sizeof(struct ionic_admin_query_ah) ==
+	      IONIC_ADMIN_QUERY_AH_IN_V1_LEN);
+
+struct ionic_admin_create_mr {
+	__le64 va;
+	__le64 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+	__u8 pt_type;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_MR_IN_V1_LEN 45
+static_assert(sizeof(struct ionic_admin_create_mr) ==
+	      IONIC_ADMIN_CREATE_MR_IN_V1_LEN);
+
+struct ionic_admin_destroy_mr {
+	__le32 mr_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_MR_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_mr) ==
+	      IONIC_ADMIN_DESTROY_MR_IN_V1_LEN);
+
+struct ionic_admin_create_cq {
+	__le32 eq_id;
+	__u8 depth_log2;
+	__u8 stride_log2;
+	__u8 dir_size_log2_rsvd;
+	__u8 page_size_log2;
+	__le32 cq_flags;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_CQ_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_create_cq) ==
+	      IONIC_ADMIN_CREATE_CQ_IN_V1_LEN);
+
+struct ionic_admin_destroy_cq {
+	__le32 cq_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_cq) ==
+	      IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN);
+
+struct ionic_admin_create_qp {
+	__le32 pd_id;
+	__be32 priv_flags;
+	__le32 sq_cq_id;
+	__u8 sq_depth_log2;
+	__u8 sq_stride_log2;
+	__u8 sq_dir_size_log2_rsvd;
+	__u8 sq_page_size_log2;
+	__le32 sq_tbl_index_xrcd_id;
+	__le32 sq_map_count;
+	__le64 sq_dma_addr;
+	__le32 rq_cq_id;
+	__u8 rq_depth_log2;
+	__u8 rq_stride_log2;
+	__u8 rq_dir_size_log2_rsvd;
+	__u8 rq_page_size_log2;
+	__le32 rq_tbl_index_srq_id;
+	__le32 rq_map_count;
+	__le64 rq_dma_addr;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 type_state;
+	__u8 rsvd;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_QP_IN_V1_LEN 64
+static_assert(sizeof(struct ionic_admin_create_qp) ==
+	      IONIC_ADMIN_CREATE_QP_IN_V1_LEN);
+
+struct ionic_admin_destroy_qp {
+	__le32 qp_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_QP_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_qp) ==
+	      IONIC_ADMIN_DESTROY_QP_IN_V1_LEN);
+
+struct ionic_admin_mod_qp {
+	__be32 attr_mask;
+	__u8 dcqcn_profile;
+	__u8 tfp_csum_profile;
+	__be16 access_flags;
+	__le32 rq_psn;
+	__le32 sq_psn;
+	__le32 qkey_dest_qpn;
+	__le32 rate_limit_kbps;
+	__u8 pmtu;
+	__u8 retry;
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__u8 rsq_depth;
+	__u8 rrq_depth;
+	__le16 pkey_id;
+	__le32 ah_id_len;
+	__u8 en_pcp;
+	__u8 ip_dscp;
+	__u8 rsvd2;
+	__u8 type_state;
+	union {
+		struct {
+			__le16 rsvd1;
+		};
+		__le32 rrq_index;
+	};
+	__le32 rsq_index;
+	__le64 dma_addr;
+	__le32 id_ver;
+} __packed;
+
+#define IONIC_ADMIN_MODIFY_QP_IN_V1_LEN 60
+static_assert(sizeof(struct ionic_admin_mod_qp) ==
+	      IONIC_ADMIN_MODIFY_QP_IN_V1_LEN);
+
+struct ionic_admin_query_qp {
+	__le64 hdr_dma_addr;
+	__le64 sq_dma_addr;
+	__le64 rq_dma_addr;
+	__le32 ah_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_QP_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_query_qp) ==
+	      IONIC_ADMIN_QUERY_QP_IN_V1_LEN);
+
 #define ADMIN_WQE_STRIDE 64
 #define ADMIN_WQE_HDR_LEN 4
 
@@ -88,9 +732,66 @@ struct ionic_v1_admin_wqe {
 	__le16 len;
 
 	union {
+		struct ionic_admin_create_ah create_ah;
+		struct ionic_admin_destroy_ah destroy_ah;
+		struct ionic_admin_query_ah query_ah;
+		struct ionic_admin_create_mr create_mr;
+		struct ionic_admin_destroy_mr destroy_mr;
+		struct ionic_admin_create_cq create_cq;
+		struct ionic_admin_destroy_cq destroy_cq;
+		struct ionic_admin_create_qp create_qp;
+		struct ionic_admin_destroy_qp destroy_qp;
+		struct ionic_admin_mod_qp mod_qp;
+		struct ionic_admin_query_qp query_qp;
 	} cmd;
 };
 
+/* side data for query qp */
+struct ionic_v1_admin_query_qp_sq {
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+	__be16 pkey_id;
+	__be32 qkey_dest_qpn;
+	__be32 rate_limit_kbps;
+	__be32 rq_psn;
+};
+
+struct ionic_v1_admin_query_qp_rq {
+	__u8 state_pmtu;
+	__u8 retry_rnrtry;
+	__u8 rrq_depth;
+	__u8 rsq_depth;
+	__be32 sq_psn;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+};
+
+/* admin queue v1 opcodes */
+enum ionic_v1_admin_op {
+	IONIC_V1_ADMIN_NOOP,
+	IONIC_V1_ADMIN_CREATE_CQ,
+	IONIC_V1_ADMIN_CREATE_QP,
+	IONIC_V1_ADMIN_CREATE_MR,
+	IONIC_V1_ADMIN_STATS_HDRS,
+	IONIC_V1_ADMIN_STATS_VALS,
IONIC_V1_ADMIN_DESTROY_MR,
+	IONIC_v1_ADMIN_RSVD_7,		/* RESIZE_CQ */
+	IONIC_V1_ADMIN_DESTROY_CQ,
+	IONIC_V1_ADMIN_MODIFY_QP,
+	IONIC_V1_ADMIN_QUERY_QP,
+	IONIC_V1_ADMIN_DESTROY_QP,
+	IONIC_V1_ADMIN_DEBUG,
+	IONIC_V1_ADMIN_CREATE_AH,
+	IONIC_V1_ADMIN_QUERY_AH,
+	IONIC_V1_ADMIN_MODIFY_DCQCN,
+	IONIC_V1_ADMIN_DESTROY_AH,
+	IONIC_V1_ADMIN_QP_STATS_HDRS,
+	IONIC_V1_ADMIN_QP_STATS_VALS,
+	IONIC_V1_ADMIN_OPCODES_MAX,
+};
+
 /* admin queue v1 cqe status */
 enum ionic_v1_admin_status {
 	IONIC_V1_ASTS_OK,
@@ -136,6 +837,22 @@ enum ionic_v1_eqe_evt_bits {
 	IONIC_V1_EQE_QP_ERR_ACCESS = 10,
 };
 
+enum ionic_tfp_csum_profiles {
+	IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP = 0,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP = 1,
+	IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP = 2,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP = 3,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP = 4,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP = 5,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP = 6,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP = 7,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_IPV4_UDP = 8,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_ESP_UDP = 9,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_UDP = 10,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_ESP_UDP = 11,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_CSUM = 12,
+};
+
 static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe)
 {
 	return !!(eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR));
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index 21d2820f36ce..358645881a24 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -114,6 +114,7 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 	ib_unregister_device(&dev->ibdev);
 	ionic_destroy_rdma_admin(dev);
 	ionic_destroy_resids(dev);
+	xa_destroy(&dev->qp_tbl);
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 }
@@ -136,6 +137,9 @@ static struct ionic_ibdev
*ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
 
+	xa_init_flags(&dev->qp_tbl, GFP_ATOMIC);
+	rwlock_init(&dev->qp_tbl_rw);
+
 	xa_init_flags(&dev->cq_tbl, GFP_ATOMIC);
 	rwlock_init(&dev->cq_tbl_rw);
 
@@ -170,6 +174,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	if (rc)
 		goto err_admin;
 
+	ionic_controlpath_setops(dev);
+
 	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
 	if (rc)
 		goto err_register;
@@ -183,6 +189,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 err_reset:
 	ionic_destroy_resids(dev);
 err_resids:
+	xa_destroy(&dev->qp_tbl);
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 err_dev:
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index bd10bae2cf72..2c730d0fbae8 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -6,7 +6,10 @@
 
 #include
 #include
+#include
+#include
 
+#include
 #include
 #include
 
@@ -30,9 +33,24 @@
 #define IONIC_EQ_ISR_BUDGET 10
 #define IONIC_EQ_WORK_BUDGET 1000
 #define IONIC_MAX_PD 1024
+#define IONIC_SPEC_HIGH 8
+#define IONIC_SQCMB_ORDER 5
+#define IONIC_RQCMB_ORDER 0
+
+#define IONIC_META_LAST ((void *)1ul)
+#define IONIC_META_POSTED ((void *)2ul)
 
 #define IONIC_CQ_GRACE 100
 
+#define IONIC_ROCE_UDP_SPORT 28272
+
+#define IONIC_CMB_SUPPORTED \
+	(IONIC_CMB_ENABLE | IONIC_CMB_REQUIRE | IONIC_CMB_EXPDB | \
+	 IONIC_CMB_WC | IONIC_CMB_UC)
+
+/* resource is not reserved on the device, indicated in tbl_order */
+#define IONIC_RES_INVALID -1
+
 struct ionic_aq;
 struct ionic_cq;
 struct ionic_eq;
@@ -50,14 +68,6 @@ enum ionic_admin_flags {
 	IONIC_ADMIN_F_INTERRUPT = BIT(2),	/* Interruptible w/timeout */
 };
 
-struct ionic_qdesc {
-	__aligned_u64 addr;
-	__u32 size;
-	__u16 mask;
-	__u8 depth_log2;
-	__u8 stride_log2;
-};
-
 struct ionic_mmap_info {
 	struct
list_head ctx_ent;
 	unsigned long offset;
@@ -78,6 +88,7 @@ struct ionic_ibdev {
 	struct xarray cq_tbl;
 	rwlock_t qp_tbl_rw;
 	rwlock_t cq_tbl_rw;
+	struct mutex inuse_lock;	/* for id reservation */
 	spinlock_t inuse_splock;	/* for ahid reservation */
 
@@ -183,6 +194,12 @@ struct ionic_tbl_buf {
 	u8 page_size_log2;
 };
 
+struct ionic_pd {
+	struct ib_pd ibpd;
+
+	u32 pdid;
+};
+
 struct ionic_cq {
 	struct ionic_vcq *vcq;
 
@@ -216,11 +233,193 @@ struct ionic_vcq {
 	u8 poll_idx;
 };
 
+struct ionic_sq_meta {
+	u64 wrid;
+	u32 len;
+	u16 seq;
+	u8 ibop;
+	u8 ibsts;
+	bool remote;
+	bool signal;
+	bool local_comp;
+};
+
+struct ionic_rq_meta {
+	struct ionic_rq_meta *next;
+	u64 wrid;
+};
+
+struct ionic_qp {
+	struct ib_qp ibqp;
+	enum ib_qp_state state;
+
+	u32 qpid;
+	u32 ahid;
+	u32 sq_cqid;
+	u32 rq_cqid;
+
+	u8 udma_idx;
+
+	bool has_ah;
+	bool has_sq;
+	bool has_rq;
+
+	bool sig_all;
+
+	struct list_head qp_list_ent;
+	struct list_head qp_list_counter;
+
+	struct list_head cq_poll_sq;
+	struct list_head cq_flush_sq;
+	struct list_head cq_flush_rq;
+
+	spinlock_t sq_lock;		/* for posting and polling */
+	bool sq_flush;
+	bool sq_flush_rcvd;
+	struct ionic_queue sq;
+	u8 sq_cmb;
+	struct ionic_sq_meta *sq_meta;
+	u16 *sq_msn_idx;
+
+	int sq_spec;
+	u16 sq_old_prod;
+	u16 sq_msn_prod;
+	u16 sq_msn_cons;
+
+	spinlock_t rq_lock;		/* for posting and polling */
+	bool rq_flush;
+	struct ionic_queue rq;
+	u8 rq_cmb;
+	struct ionic_rq_meta *rq_meta;
+	struct ionic_rq_meta *rq_meta_head;
+
+	int rq_spec;
+	u16 rq_old_prod;
+
+	struct kref qp_kref;
+	struct completion qp_rel_comp;
+
+	/* infrequently accessed, keep at end */
+	int sgid_index;
+	int sq_cmb_order;
+	u32 sq_cmb_pgid;
+	phys_addr_t sq_cmb_addr;
+	struct ionic_mmap_info sq_cmb_mmap;
+
+	struct ib_umem *sq_umem;
+
+	int rq_cmb_order;
+	u32 rq_cmb_pgid;
+	phys_addr_t rq_cmb_addr;
+	struct ionic_mmap_info rq_cmb_mmap;
+
+	struct ib_umem *rq_umem;
+
+	int dcqcn_profile;
+
+	struct ib_ud_header *hdr;
+};
+
+struct
ionic_ah {
+	struct ib_ah ibah;
+	u32 ahid;
+	int sgid_index;
+	struct ib_ud_header hdr;
+};
+
+struct ionic_mr {
+	union {
+		struct ib_mr ibmr;
+		struct ib_mw ibmw;
+	};
+
+	u32 mrid;
+	int flags;
+
+	struct ib_umem *umem;
+	struct ionic_tbl_buf buf;
+	bool created;
+};
+
 static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev)
 {
 	return container_of(ibdev, struct ionic_ibdev, ibdev);
 }
 
+static inline struct ionic_ctx *to_ionic_ctx(struct ib_ucontext *ibctx)
+{
+	return container_of(ibctx, struct ionic_ctx, ibctx);
+}
+
+static inline struct ionic_ctx *to_ionic_ctx_uobj(struct ib_uobject *uobj)
+{
+	if (!uobj)
+		return NULL;
+
+	if (!uobj->context)
+		return NULL;
+
+	return to_ionic_ctx(uobj->context);
+}
+
+static inline struct ionic_pd *to_ionic_pd(struct ib_pd *ibpd)
+{
+	return container_of(ibpd, struct ionic_pd, ibpd);
+}
+
+static inline struct ionic_mr *to_ionic_mr(struct ib_mr *ibmr)
+{
+	return container_of(ibmr, struct ionic_mr, ibmr);
+}
+
+static inline struct ionic_mr *to_ionic_mw(struct ib_mw *ibmw)
+{
+	return container_of(ibmw, struct ionic_mr, ibmw);
+}
+
+static inline struct ionic_vcq *to_ionic_vcq(struct ib_cq *ibcq)
+{
+	return container_of(ibcq, struct ionic_vcq, ibcq);
+}
+
+static inline struct ionic_cq *to_ionic_vcq_cq(struct ib_cq *ibcq,
+					       uint8_t udma_idx)
+{
+	return &to_ionic_vcq(ibcq)->cq[udma_idx];
+}
+
+static inline struct ionic_qp *to_ionic_qp(struct ib_qp *ibqp)
+{
+	return container_of(ibqp, struct ionic_qp, ibqp);
+}
+
+static inline struct ionic_ah *to_ionic_ah(struct ib_ah *ibah)
+{
+	return container_of(ibah, struct ionic_ah, ibah);
+}
+
+static inline u32 ionic_ctx_dbid(struct ionic_ibdev *dev,
+				 struct ionic_ctx *ctx)
+{
+	if (!ctx)
+		return dev->lif_cfg.dbid;
+
+	return ctx->dbid;
+}
+
+static inline u32 ionic_obj_dbid(struct ionic_ibdev *dev,
+				 struct ib_uobject *uobj)
+{
+	return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj));
+}
+
+static inline void ionic_qp_complete(struct kref *kref)
+{
+	struct
ionic_qp *qp = container_of(kref, struct ionic_qp, qp_kref);
+
+	complete(&qp->qp_rel_comp);
+}
+
 static inline void ionic_cq_complete(struct kref *kref)
 {
 	struct ionic_cq *cq = container_of(kref, struct ionic_cq, cq_kref);
@@ -241,6 +440,7 @@ void ionic_destroy_rdma_admin(struct ionic_ibdev *dev);
 void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path);
 
 /* ionic_controlpath.c */
+void ionic_controlpath_setops(struct ionic_ibdev *dev);
 int ionic_create_cq_common(struct ionic_vcq *vcq,
 			   struct ionic_tbl_buf *buf,
 			   const struct ib_cq_init_attr *attr,
@@ -250,8 +450,11 @@ int ionic_create_cq_common(struct ionic_vcq *vcq,
 			   __u32 *resp_cqid,
 			   int udma_idx);
 void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq);
+void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp);
+void ionic_notify_flush_cq(struct ionic_cq *cq);
 
 /* ionic_pgtbl.c */
+__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va);
 int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma);
 int ionic_pgtbl_init(struct ionic_ibdev *dev,
 		     struct ionic_tbl_buf *buf,
diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
index 11461f7642bc..a8eb73be6f86 100644
--- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c
+++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
@@ -7,6 +7,25 @@
 #include "ionic_fw.h"
 #include "ionic_ibdev.h"
 
+__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va)
+{
+	u64 pg_mask = BIT_ULL(buf->page_size_log2) - 1;
+	u64 dma;
+
+	if (!buf->tbl_pages)
+		return cpu_to_le64(0);
+
+	if (buf->tbl_pages > 1)
+		return cpu_to_le64(buf->tbl_dma);
+
+	if (buf->tbl_buf)
+		dma = le64_to_cpu(buf->tbl_buf[0]);
+	else
+		dma = buf->tbl_dma;
+
+	return cpu_to_le64(dma + (va & pg_mask));
+}
+
 int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma)
 {
 	if (unlikely(buf->tbl_pages == buf->tbl_limit))
diff --git a/include/uapi/rdma/ionic-abi.h b/include/uapi/rdma/ionic-abi.h
new file mode 100644
index
000000000000..a18388ab7a1d
--- /dev/null
+++ b/include/uapi/rdma/ionic-abi.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc */
+
+#ifndef IONIC_ABI_H
+#define IONIC_ABI_H
+
+#include
+
+#define IONIC_ABI_VERSION 4
+
+#define IONIC_EXPDB_64 1
+#define IONIC_EXPDB_128 2
+#define IONIC_EXPDB_256 4
+#define IONIC_EXPDB_512 8
+
+#define IONIC_EXPDB_SQ 1
+#define IONIC_EXPDB_RQ 2
+
+#define IONIC_CMB_ENABLE 1
+#define IONIC_CMB_REQUIRE 2
+#define IONIC_CMB_EXPDB 4
+#define IONIC_CMB_WC 8
+#define IONIC_CMB_UC 16
+
+struct ionic_ctx_req {
+	__u32 rsvd[2];
+};
+
+struct ionic_ctx_resp {
+	__u32 rsvd;
+	__u32 page_shift;
+
+	__aligned_u64 dbell_offset;
+
+	__u16 version;
+	__u8 qp_opcodes;
+	__u8 admin_opcodes;
+
+	__u8 sq_qtype;
+	__u8 rq_qtype;
+	__u8 cq_qtype;
+	__u8 admin_qtype;
+
+	__u8 max_stride;
+	__u8 max_spec;
+	__u8 udma_count;
+	__u8 expdb_mask;
+	__u8 expdb_qtypes;
+
+	__u8 rsvd2[3];
+};
+
+struct ionic_qdesc {
+	__aligned_u64 addr;
+	__u32 size;
+	__u16 mask;
+	__u8 depth_log2;
+	__u8 stride_log2;
+};
+
+struct ionic_ah_resp {
+	__u32 ahid;
+	__u32 pad;
+};
+
+struct ionic_cq_req {
+	struct ionic_qdesc cq[2];
+	__u8 udma_mask;
+	__u8 rsvd[7];
+};
+
+struct ionic_cq_resp {
+	__u32 cqid[2];
+	__u8 udma_mask;
+	__u8 rsvd[7];
+};
+
+struct ionic_qp_req {
+	struct ionic_qdesc sq;
+	struct ionic_qdesc rq;
+	__u8 sq_spec;
+	__u8 rq_spec;
+	__u8 sq_cmb;
+	__u8 rq_cmb;
+	__u8 udma_mask;
+	__u8 rsvd[3];
+};
+
+struct ionic_qp_resp {
+	__u32 qpid;
+	__u8 sq_cmb;
+	__u8 rq_cmb;
+	__u8 udma_idx;
+	__u8 rsvd[1];
+	__aligned_u64 sq_cmb_offset;
+	__aligned_u64 rq_cmb_offset;
+};
+
+struct ionic_srq_req {
+	struct ionic_qdesc rq;
+	__u8 rq_spec;
+	__u8 rq_cmb;
+	__u8 udma_mask;
+	__u8 rsvd[5];
+};
+
+struct ionic_srq_resp {
+	__u32 qpid;
+	__u8 rq_cmb;
+	__u8 udma_idx;
+	__u8 rsvd[2];
+	__aligned_u64 rq_cmb_offset;
+};
+
+#endif /* IONIC_ABI_H */
-- 
2.34.1

From nobody Mon Dec 15
21:59:00 2025
From: Abhijit Gangurde
To: , , , , , , , , ,
CC: , , , , , , Abhijit Gangurde , Andrew Boyer
Subject: [PATCH v2 11/14] RDMA/ionic: Register device ops for datapath
Date: Thu, 8 May 2025 10:29:54 +0530
Message-ID: <20250508045957.2823318-12-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement device supported verb APIs for datapath.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 drivers/infiniband/hw/ionic/ionic_datapath.c | 1422 ++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_fw.h       |  107 ++
 drivers/infiniband/hw/ionic/ionic_ibdev.c    |    1 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h    |    9 +
 drivers/infiniband/hw/ionic/ionic_pgtbl.c    |   11 +
 5 files changed, 1550 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_datapath.c

diff --git a/drivers/infiniband/hw/ionic/ionic_datapath.c b/drivers/infiniband/hw/ionic/ionic_datapath.c
new file mode 100644
index 000000000000..1262ba30172d
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_datapath.c
@@ -0,0 +1,1422 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+#define IONIC_OP(version, opname) \
+	((version) < 2 ?
IONIC_V1_OP_##opname : IONIC_V2_OP_##opname)
+
+static bool ionic_next_cqe(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_v1_cqe **cqe)
+{
+	struct ionic_v1_cqe *qcqe = ionic_queue_at_prod(&cq->q);
+
+	if (unlikely(cq->color != ionic_v1_cqe_color(qcqe)))
+		return false;
+
+	/* Prevent out-of-order reads of the CQE */
+	rmb();
+
+	*cqe = qcqe;
+
+	return true;
+}
+
+static int ionic_flush_recv(struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_rq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	if (!qp->rq_flush)
+		return 0;
+
+	if (ionic_queue_empty(&qp->rq))
+		return 0;
+
+	wqe = ionic_queue_at_cons(&qp->rq);
+
+	/* wqe_id must be a valid queue index */
+	if (unlikely(wqe->base.wqe_id >> qp->rq.depth_log2)) {
+		ibdev_warn(qp->ibqp.device,
+			   "flush qp %u recv index %llu invalid\n",
+			   qp->qpid, (unsigned long long)wqe->base.wqe_id);
+		return -EIO;
+	}
+
+	/* wqe_id must indicate a request that is outstanding */
+	meta = &qp->rq_meta[wqe->base.wqe_id];
+	if (unlikely(meta->next != IONIC_META_POSTED)) {
+		ibdev_warn(qp->ibqp.device,
+			   "flush qp %u recv index %llu not posted\n",
+			   qp->qpid, (unsigned long long)wqe->base.wqe_id);
+		return -EIO;
+	}
+
+	ionic_queue_consume(&qp->rq);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = IB_WC_WR_FLUSH_ERR;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	meta->next = qp->rq_meta_head;
+	qp->rq_meta_head = meta;
+
+	return 1;
+}
+
+static int ionic_flush_recv_many(struct ionic_qp *qp,
+				 struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_flush_recv(qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_flush_send(struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_sq_meta *meta;
+
+	if (!qp->sq_flush)
+		return 0;
+
+	if (ionic_queue_empty(&qp->sq))
+		return 0;
+
+	meta = &qp->sq_meta[qp->sq.cons];
+
+	ionic_queue_consume(&qp->sq);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = IB_WC_WR_FLUSH_ERR;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	return 1;
+}
+
+static int ionic_flush_send_many(struct ionic_qp *qp,
+				 struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_flush_send(qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_poll_recv(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_qp *cqe_qp, struct ionic_v1_cqe *cqe,
+			   struct ib_wc *wc)
+{
+	struct ionic_qp *qp = NULL;
+	struct ionic_rq_meta *meta;
+	u32 src_qpn, st_len;
+	u16 vlan_tag;
+	u8 op;
+
+	if (cqe_qp->rq_flush)
+		return 0;
+
+	qp = cqe_qp;
+
+	st_len = be32_to_cpu(cqe->status_length);
+
+	/* ignore wqe_id in case of flush error */
+	if (ionic_v1_cqe_error(cqe) && st_len == IONIC_STS_WQE_FLUSHED_ERR) {
+		cqe_qp->rq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+
+		/* posted recvs (if any) flushed by ionic_flush_recv */
+		return 0;
+	}
+
+	/* there had better be something in the recv queue to complete */
+	if (ionic_queue_empty(&qp->rq)) {
+		ibdev_warn(&dev->ibdev, "qp %u is empty\n", qp->qpid);
+		return -EIO;
+	}
+
+	/* wqe_id must be a valid queue index */
+	if (unlikely(cqe->recv.wqe_id >> qp->rq.depth_log2)) {
+		ibdev_warn(&dev->ibdev,
+			   "qp %u recv index %llu invalid\n",
+			   qp->qpid, (unsigned long long)cqe->recv.wqe_id);
+		return -EIO;
+	}
+
+	/* wqe_id must indicate a request that is outstanding */
+	meta = &qp->rq_meta[cqe->recv.wqe_id];
+	if (unlikely(meta->next != IONIC_META_POSTED)) {
+		ibdev_warn(&dev->ibdev,
+			   "qp %u recv index %llu not posted\n",
+			   qp->qpid, (unsigned long long)cqe->recv.wqe_id);
+		return -EIO;
+	}
+
+	meta->next = qp->rq_meta_head;
+	qp->rq_meta_head = meta;
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->wr_id = meta->wrid;
+
+	wc->qp = &cqe_qp->ibqp;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		wc->vendor_err = st_len;
+		wc->status =
ionic_to_ib_status(st_len);
+
+		cqe_qp->rq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+
+		ibdev_warn(&dev->ibdev,
+			   "qp %d recv cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, BIT(cq->q.stride_log2), true);
+		goto out;
+	}
+
+	wc->vendor_err = 0;
+	wc->status = IB_WC_SUCCESS;
+
+	src_qpn = be32_to_cpu(cqe->recv.src_qpn_op);
+	op = src_qpn >> IONIC_V1_CQE_RECV_OP_SHIFT;
+
+	src_qpn &= IONIC_V1_CQE_RECV_QPN_MASK;
+	op &= IONIC_V1_CQE_RECV_OP_MASK;
+
+	wc->opcode = IB_WC_RECV;
+	switch (op) {
+	case IONIC_V1_CQE_RECV_OP_RDMA_IMM:
+		wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		wc->ex.imm_data = cqe->recv.imm_data_rkey;	/* be32 in wc */
+		break;
+	case IONIC_V1_CQE_RECV_OP_SEND_IMM:
+		wc->wc_flags |= IB_WC_WITH_IMM;
+		wc->ex.imm_data = cqe->recv.imm_data_rkey;	/* be32 in wc */
+		break;
+	case IONIC_V1_CQE_RECV_OP_SEND_INV:
+		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
+		wc->ex.invalidate_rkey = be32_to_cpu(cqe->recv.imm_data_rkey);
+		break;
+	}
+
+	wc->byte_len = st_len;
+	wc->src_qp = src_qpn;
+
+	if (qp->ibqp.qp_type == IB_QPT_UD ||
+	    qp->ibqp.qp_type == IB_QPT_GSI) {
+		wc->wc_flags |= IB_WC_GRH | IB_WC_WITH_SMAC;
+		ether_addr_copy(wc->smac, cqe->recv.src_mac);
+
+		wc->wc_flags |= IB_WC_WITH_NETWORK_HDR_TYPE;
+		if (ionic_v1_cqe_recv_is_ipv4(cqe))
+			wc->network_hdr_type = RDMA_NETWORK_IPV4;
+		else
+			wc->network_hdr_type = RDMA_NETWORK_IPV6;
+
+		if (ionic_v1_cqe_recv_is_vlan(cqe))
+			wc->wc_flags |= IB_WC_WITH_VLAN;
+
+		/* vlan_tag in cqe will be valid from dpath even if no vlan */
+		vlan_tag = be16_to_cpu(cqe->recv.vlan_tag);
+		wc->vlan_id = vlan_tag & 0xfff;		/* 802.1q VID */
+		wc->sl = vlan_tag >> VLAN_PRIO_SHIFT;	/* 802.1q PCP */
+	}
+
+	wc->pkey_index = 0;
+	wc->port_num = 1;
+
+out:
+	ionic_queue_consume(&qp->rq);
+
+	return 1;
+}
+
+static bool ionic_peek_send(struct ionic_qp *qp)
+{
+	struct ionic_sq_meta *meta;
+
+	if (qp->sq_flush)
+		return false;
+
+	/* completed all send queue requests */
+	if (ionic_queue_empty(&qp->sq))
+		return false;
+
+	meta = &qp->sq_meta[qp->sq.cons];
+
+	/* waiting for remote completion */
+	if (meta->remote && meta->seq == qp->sq_msn_cons)
+		return false;
+
+	/* waiting for local completion */
+	if (!meta->remote && !meta->local_comp)
+		return false;
+
+	return true;
+}
+
+static int ionic_poll_send(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			   struct ionic_qp *qp, struct ib_wc *wc)
+{
+	struct ionic_sq_meta *meta;
+
+	if (qp->sq_flush)
+		return 0;
+
+	do {
+		/* completed all send queue requests */
+		if (ionic_queue_empty(&qp->sq))
+			goto out_empty;
+
+		meta = &qp->sq_meta[qp->sq.cons];
+
+		/* waiting for remote completion */
+		if (meta->remote && meta->seq == qp->sq_msn_cons)
+			goto out_empty;
+
+		/* waiting for local completion */
+		if (!meta->remote && !meta->local_comp)
+			goto out_empty;
+
+		ionic_queue_consume(&qp->sq);
+
+		/* produce wc only if signaled or error status */
+	} while (!meta->signal && meta->ibsts == IB_WC_SUCCESS);
+
+	memset(wc, 0, sizeof(*wc));
+
+	wc->status = meta->ibsts;
+	wc->wr_id = meta->wrid;
+	wc->qp = &qp->ibqp;
+
+	if (meta->ibsts == IB_WC_SUCCESS) {
+		wc->byte_len = meta->len;
+		wc->opcode = meta->ibop;
+	} else {
+		wc->vendor_err = meta->len;
+
+		qp->sq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+	}
+
+	return 1;
+
+out_empty:
+	if (qp->sq_flush_rcvd) {
+		qp->sq_flush = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+	}
+	return 0;
+}
+
+static int ionic_poll_send_many(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				struct ionic_qp *qp, struct ib_wc *wc, int nwc)
+{
+	int rc = 0, npolled = 0;
+
+	while (npolled < nwc) {
+		rc = ionic_poll_send(dev, cq, qp, wc + npolled);
+		if (rc <= 0)
+			break;
+
+		npolled += rc;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_validate_cons(u16 prod, u16 cons,
+			       u16 comp, u16 mask)
+{
+	if (((prod - cons) & mask) <= ((comp - cons) & mask))
+		return -EIO;
+
+	return 0;
+}
+
+static int ionic_comp_msn(struct ionic_qp *qp, struct ionic_v1_cqe *cqe)
+{
+	struct ionic_sq_meta *meta;
+	u16 cqe_seq, cqe_idx;
+	int rc;
+
+	if (qp->sq_flush)
+		return 0;
+
+	cqe_seq = be32_to_cpu(cqe->send.msg_msn) & qp->sq.mask;
+
+	rc = ionic_validate_cons(qp->sq_msn_prod,
+				 qp->sq_msn_cons,
+				 cqe_seq - 1,
+				 qp->sq.mask);
+	if (rc) {
+		ibdev_warn(qp->ibqp.device,
+			   "qp %u bad msn %#x seq %u for prod %u cons %u\n",
+			   qp->qpid, be32_to_cpu(cqe->send.msg_msn),
+			   cqe_seq, qp->sq_msn_prod, qp->sq_msn_cons);
+		return rc;
+	}
+
+	qp->sq_msn_cons = cqe_seq;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		cqe_idx = qp->sq_msn_idx[(cqe_seq - 1) & qp->sq.mask];
+
+		meta = &qp->sq_meta[cqe_idx];
+		meta->len = be32_to_cpu(cqe->status_length);
+		meta->ibsts = ionic_to_ib_status(meta->len);
+
+		ibdev_warn(qp->ibqp.device,
+			   "qp %d msn cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, sizeof(*cqe), true);
+	}
+
+	return 0;
+}
+
+static int ionic_comp_npg(struct ionic_qp *qp, struct ionic_v1_cqe *cqe)
+{
+	struct ionic_sq_meta *meta;
+	u16 cqe_idx;
+	u32 st_len;
+
+	if (qp->sq_flush)
+		return 0;
+
+	st_len = be32_to_cpu(cqe->status_length);
+
+	if (ionic_v1_cqe_error(cqe) && st_len == IONIC_STS_WQE_FLUSHED_ERR) {
+		/*
+		 * Flush cqe does not consume a wqe on the device, and maybe
+		 * no such work request is posted.
+		 *
+		 * The driver should begin flushing after the last indicated
+		 * normal or error completion. Here, only set a hint that the
+		 * flush request was indicated. In poll_send, if nothing more
+		 * can be polled normally, then begin flushing.
+		 */
+		qp->sq_flush_rcvd = true;
+		return 0;
+	}
+
+	cqe_idx = cqe->send.npg_wqe_id & qp->sq.mask;
+	meta = &qp->sq_meta[cqe_idx];
+	meta->local_comp = true;
+
+	if (ionic_v1_cqe_error(cqe)) {
+		meta->len = st_len;
+		meta->ibsts = ionic_to_ib_status(st_len);
+		meta->remote = false;
+		ibdev_warn(qp->ibqp.device,
+			   "qp %d npg cqe with error\n", qp->qpid);
+		print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			       cqe, sizeof(*cqe), true);
+	}
+
+	return 0;
+}
+
+static void ionic_reserve_sync_cq(struct ionic_ibdev *dev, struct ionic_cq *cq)
+{
+	if (!ionic_queue_empty(&cq->q)) {
+		cq->reserve += ionic_queue_length(&cq->q);
+		cq->q.cons = cq->q.prod;
+
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 ionic_queue_dbell_val(&cq->q));
+	}
+}
+
+static void ionic_reserve_cq(struct ionic_ibdev *dev, struct ionic_cq *cq,
+			     int spend)
+{
+	cq->reserve -= spend;
+
+	if (cq->reserve <= 0)
+		ionic_reserve_sync_cq(dev, cq);
+}
+
+static int ionic_poll_vcq_cq(struct ionic_ibdev *dev,
+			     struct ionic_cq *cq,
+			     int nwc, struct ib_wc *wc)
+{
+	struct ionic_qp *qp, *qp_next;
+	struct ionic_v1_cqe *cqe;
+	int rc = 0, npolled = 0;
+	unsigned long irqflags;
+	u32 qtf, qid;
+	bool peek;
+	u8 type;
+
+	if (nwc < 1)
+		return 0;
+
+	spin_lock_irqsave(&cq->lock, irqflags);
+
+	/* poll already indicated work completions for send queue */
+	list_for_each_entry_safe(qp, qp_next, &cq->poll_sq, cq_poll_sq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->sq_lock);
+		rc = ionic_poll_send_many(dev, cq, qp, wc + npolled,
+					  nwc - npolled);
+		spin_unlock(&qp->sq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_poll_sq);
+	}
+
+	/* poll for more work completions */
+	while (likely(ionic_next_cqe(dev, cq, &cqe))) {
+		if (npolled == nwc)
+			goto out;
+
+		qtf = ionic_v1_cqe_qtf(cqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		qp = xa_load(&dev->qp_tbl, qid);
+		if (unlikely(!qp)) {
+			ibdev_dbg(&dev->ibdev, "missing qp for qid %u\n", qid);
+			goto cq_next;
+		}
+
+		switch (type) {
+		case IONIC_V1_CQE_TYPE_RECV:
+			spin_lock(&qp->rq_lock);
+			rc = ionic_poll_recv(dev, cq, qp, cqe, wc + npolled);
+			spin_unlock(&qp->rq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			break;
+
+		case IONIC_V1_CQE_TYPE_SEND_MSN:
+			spin_lock(&qp->sq_lock);
+			rc = ionic_comp_msn(qp, cqe);
+			if (!rc) {
+				rc = ionic_poll_send_many(dev, cq, qp,
+							  wc + npolled,
+							  nwc - npolled);
+				peek = ionic_peek_send(qp);
+			}
+			spin_unlock(&qp->sq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			if (peek)
+				list_move_tail(&qp->cq_poll_sq, &cq->poll_sq);
+			break;
+
+		case IONIC_V1_CQE_TYPE_SEND_NPG:
+			spin_lock(&qp->sq_lock);
+			rc = ionic_comp_npg(qp, cqe);
+			if (!rc) {
+				rc = ionic_poll_send_many(dev, cq, qp,
+							  wc + npolled,
+							  nwc - npolled);
+				peek = ionic_peek_send(qp);
+			}
+			spin_unlock(&qp->sq_lock);
+
+			if (rc < 0)
+				goto out;
+
+			npolled += rc;
+
+			if (peek)
+				list_move_tail(&qp->cq_poll_sq, &cq->poll_sq);
+			break;
+
+		default:
+			ibdev_warn(&dev->ibdev,
+				   "unexpected cqe type %u\n", type);
+			rc = -EIO;
+			goto out;
+		}
+
+cq_next:
+		ionic_queue_produce(&cq->q);
+		cq->color = ionic_color_wrap(cq->q.prod, cq->color);
+	}
+
+	/* lastly, flush send and recv queues */
+	if (likely(!cq->flush))
+		goto out;
+
+	cq->flush = false;
+
+	list_for_each_entry_safe(qp, qp_next, &cq->flush_sq, cq_flush_sq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->sq_lock);
+		rc = ionic_flush_send_many(qp, wc + npolled, nwc - npolled);
+		spin_unlock(&qp->sq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_flush_sq);
+		else
+			cq->flush = true;
+	}
+
+	list_for_each_entry_safe(qp, qp_next, &cq->flush_rq, cq_flush_rq) {
+		if (npolled == nwc)
+			goto out;
+
+		spin_lock(&qp->rq_lock);
+		rc = ionic_flush_recv_many(qp, wc + npolled, nwc - npolled);
+		spin_unlock(&qp->rq_lock);
+
+		if (rc > 0)
+			npolled += rc;
+
+		if (npolled < nwc)
+			list_del_init(&qp->cq_flush_rq);
+		else
+			cq->flush = true;
+	}
+
+out:
+	/* in case reserve was depleted (more work posted than cq depth) */
+	if (cq->reserve <= 0)
+		ionic_reserve_sync_cq(dev, cq);
+
+	spin_unlock_irqrestore(&cq->lock, irqflags);
+
+	return npolled ?: rc;
+}
+
+static int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibcq->device);
+	struct ionic_vcq *vcq = to_ionic_vcq(ibcq);
+	int rc_tmp, rc = 0, npolled = 0;
+	int cq_i, cq_x, cq_ix;
+
+	/* poll_idx is not protected by a lock, but a race is benign */
+	cq_x = vcq->poll_idx;
+
+	vcq->poll_idx ^= dev->lif_cfg.udma_count - 1;
+
+	for (cq_i = 0; npolled < nwc && cq_i < dev->lif_cfg.udma_count; ++cq_i) {
+		cq_ix = cq_i ^ cq_x;
+
+		if (!(vcq->udma_mask & BIT(cq_ix)))
+			continue;
+
+		rc_tmp = ionic_poll_vcq_cq(dev, &vcq->cq[cq_ix],
+					   nwc - npolled,
+					   wc + npolled);
+
+		if (rc_tmp >= 0)
+			npolled += rc_tmp;
+		else if (!rc)
+			rc = rc_tmp;
+	}
+
+	return npolled ?: rc;
+}
+
+static int ionic_req_notify_vcq_cq(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				   enum ib_cq_notify_flags flags)
+{
+	u64 dbell_val = cq->q.dbell;
+
+	if (flags & IB_CQ_SOLICITED) {
+		cq->arm_sol_prod = ionic_queue_next(&cq->q, cq->arm_sol_prod);
+		dbell_val |= cq->arm_sol_prod | IONIC_CQ_RING_SOL;
+	} else {
+		cq->arm_any_prod = ionic_queue_next(&cq->q, cq->arm_any_prod);
+		dbell_val |= cq->arm_any_prod | IONIC_CQ_RING_ARM;
+	}
+
+	ionic_reserve_sync_cq(dev, cq);
+
+	ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype, dbell_val);
+
+	/*
+	 * IB_CQ_REPORT_MISSED_EVENTS:
+	 *
+	 * The queue index in ring zero guarantees no missed events.
+	 *
+	 * Here, we check if the color bit in the next cqe is flipped. If it
+	 * is flipped, then progress can be made by immediately polling the cq.
+	 * Still, the cq will be armed, and an event will be generated.
The cq
+	 * may be empty when polled after the event, because the next poll
+	 * after arming the cq can empty it.
+	 */
+	return (flags & IB_CQ_REPORT_MISSED_EVENTS) &&
+		cq->color == ionic_v1_cqe_color(ionic_queue_at_prod(&cq->q));
+}
+
+static int ionic_req_notify_cq(struct ib_cq *ibcq,
+			       enum ib_cq_notify_flags flags)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibcq->device);
+	struct ionic_vcq *vcq = to_ionic_vcq(ibcq);
+	int rc = 0, cq_i;
+
+	for (cq_i = 0; cq_i < dev->lif_cfg.udma_count; ++cq_i) {
+		if (!(vcq->udma_mask & BIT(cq_i)))
+			continue;
+
+		if (ionic_req_notify_vcq_cq(dev, &vcq->cq[cq_i], flags))
+			rc = 1;
+	}
+
+	return rc;
+}
+
+static s64 ionic_prep_inline(void *data, u32 max_data,
+			     const struct ib_sge *ib_sgl, int num_sge)
+{
+	static const s64 bit_31 = 1u << 31;
+	s64 len = 0, sg_len;
+	int sg_i;
+
+	for (sg_i = 0; sg_i < num_sge; ++sg_i) {
+		sg_len = ib_sgl[sg_i].length;
+
+		/* sge length zero means 2GB */
+		if (unlikely(sg_len == 0))
+			sg_len = bit_31;
+
+		/* greater than max inline data is invalid */
+		if (unlikely(len + sg_len > max_data))
+			return -EINVAL;
+
+		memcpy(data + len, (void *)ib_sgl[sg_i].addr, sg_len);
+
+		len += sg_len;
+	}
+
+	return len;
+}
+
+static s64 ionic_prep_pld(struct ionic_v1_wqe *wqe,
+			  union ionic_v1_pld *pld,
+			  int spec, u32 max_sge,
+			  const struct ib_sge *ib_sgl,
+			  int num_sge)
+{
+	static const s64 bit_31 = 1l << 31;
+	struct ionic_sge *sgl;
+	__be32 *spec32 = NULL;
+	__be16 *spec16 = NULL;
+	s64 len = 0, sg_len;
+	int sg_i = 0;
+
+	if (unlikely(num_sge < 0 || (u32)num_sge > max_sge))
+		return -EINVAL;
+
+	if (spec && num_sge > IONIC_V1_SPEC_FIRST_SGE) {
+		sg_i = IONIC_V1_SPEC_FIRST_SGE;
+
+		if (num_sge > 8) {
+			wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_SPEC16);
+			spec16 = pld->spec16;
+		} else {
+			wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_SPEC32);
+			spec32 = pld->spec32;
+		}
+	}
+
+	sgl = &pld->sgl[sg_i];
+
+	for (sg_i = 0; sg_i < num_sge; ++sg_i) {
+		sg_len = ib_sgl[sg_i].length;
+
+		/* sge length zero means 2GB */
+		if (unlikely(sg_len == 0))
+			sg_len = bit_31;
+
+		/* greater than 2GB data is invalid */
+		if (unlikely(len + sg_len > bit_31))
+			return -EINVAL;
+
+		sgl[sg_i].va = cpu_to_be64(ib_sgl[sg_i].addr);
+		sgl[sg_i].len = cpu_to_be32(sg_len);
+		sgl[sg_i].lkey = cpu_to_be32(ib_sgl[sg_i].lkey);
+
+		if (spec32) {
+			spec32[sg_i] = sgl[sg_i].len;
+		} else if (spec16) {
+			if (unlikely(sg_len > U16_MAX))
+				return -EINVAL;
+			spec16[sg_i] = cpu_to_be16(sg_len);
+		}
+
+		len += sg_len;
+	}
+
+	return len;
+}
+
+static void ionic_prep_base(struct ionic_qp *qp,
+			    const struct ib_send_wr *wr,
+			    struct ionic_sq_meta *meta,
+			    struct ionic_v1_wqe *wqe)
+{
+	meta->wrid = wr->wr_id;
+	meta->ibsts = IB_WC_SUCCESS;
+	meta->signal = false;
+	meta->local_comp = false;
+
+	wqe->base.wqe_id = qp->sq.prod;
+
+	if (wr->send_flags & IB_SEND_FENCE)
+		wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_FENCE);
+
+	if (wr->send_flags & IB_SEND_SOLICITED)
+		wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_SOL);
+
+	if (qp->sig_all || wr->send_flags & IB_SEND_SIGNALED) {
+		wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_SIG);
+		meta->signal = true;
+	}
+
+	meta->seq = qp->sq_msn_prod;
+	meta->remote =
+		qp->ibqp.qp_type != IB_QPT_UD &&
+		qp->ibqp.qp_type != IB_QPT_GSI &&
+		!ionic_ibop_is_local(wr->opcode);
+
+	if (meta->remote) {
+		qp->sq_msn_idx[meta->seq] = qp->sq.prod;
+		qp->sq_msn_prod = ionic_queue_next(&qp->sq, qp->sq_msn_prod);
+	}
+
+	ionic_queue_produce(&qp->sq);
+}
+
+static int ionic_prep_common(struct ionic_qp *qp,
+			     const struct ib_send_wr *wr,
+			     struct ionic_sq_meta *meta,
+			     struct ionic_v1_wqe *wqe)
+{
+	s64 signed_len;
+	u32 mval;
+
+	if (wr->send_flags & IB_SEND_INLINE) {
+		wqe->base.num_sge_key = 0;
+		wqe->base.flags |= cpu_to_be16(IONIC_V1_FLAG_INL);
+		mval = ionic_v1_send_wqe_max_data(qp->sq.stride_log2, false);
+		signed_len = ionic_prep_inline(wqe->common.pld.data, mval,
+					       wr->sg_list, wr->num_sge);
+	} else {
+		wqe->base.num_sge_key = wr->num_sge;
+		mval = ionic_v1_send_wqe_max_sge(qp->sq.stride_log2,
+						 qp->sq_spec,
+						 false);
+		signed_len = ionic_prep_pld(wqe, &wqe->common.pld,
+					    qp->sq_spec, mval,
+					    wr->sg_list, wr->num_sge);
+	}
+
+	if (unlikely(signed_len < 0))
+		return signed_len;
+
+	meta->len = signed_len;
+	wqe->common.length = cpu_to_be32(signed_len);
+
+	ionic_prep_base(qp, wr, meta, wqe);
+
+	return 0;
+}
+
+static void ionic_prep_sq_wqe(struct ionic_qp *qp, void *wqe)
+{
+	memset(wqe, 0, 1u << qp->sq.stride_log2);
+}
+
+static void ionic_prep_rq_wqe(struct ionic_qp *qp, void *wqe)
+{
+	memset(wqe, 0, 1u << qp->rq.stride_log2);
+}
+
+static int ionic_prep_send(struct ionic_qp *qp,
+			   const struct ib_send_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	meta->ibop = IB_WC_SEND;
+
+	switch (wr->opcode) {
+	case IB_WR_SEND:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, SEND);
+		break;
+	case IB_WR_SEND_WITH_IMM:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM);
+		wqe->base.imm_data_key = wr->ex.imm_data;
+		break;
+	case IB_WR_SEND_WITH_INV:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, SEND_INV);
+		wqe->base.imm_data_key =
+			cpu_to_be32(wr->ex.invalidate_rkey);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ionic_prep_common(qp, wr, meta, wqe);
+}
+
+static int ionic_prep_send_ud(struct ionic_qp *qp,
+			      const struct ib_ud_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+	struct ionic_ah *ah;
+
+	if (unlikely(!wr->ah))
+		return -EINVAL;
+
+	ah = to_ionic_ah(wr->ah);
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	wqe->common.send.ah_id = cpu_to_be32(ah->ahid);
+	wqe->common.send.dest_qpn = cpu_to_be32(wr->remote_qpn);
+	wqe->common.send.dest_qkey = cpu_to_be32(wr->remote_qkey);
+
+	meta->ibop = IB_WC_SEND;
+
+	switch (wr->wr.opcode) {
+	case IB_WR_SEND:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, SEND);
+		break;
+	case IB_WR_SEND_WITH_IMM:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM);
+		wqe->base.imm_data_key = wr->wr.ex.imm_data;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ionic_prep_common(qp, &wr->wr, meta, wqe);
+}
+
+static int ionic_prep_rdma(struct ionic_qp *qp,
+			   const struct ib_rdma_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	meta->ibop = IB_WC_RDMA_WRITE;
+
+	switch (wr->wr.opcode) {
+	case IB_WR_RDMA_READ:
+		if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE))
+			return -EINVAL;
+		meta->ibop = IB_WC_RDMA_READ;
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, RDMA_READ);
+		break;
+	case IB_WR_RDMA_WRITE:
+		if (wr->wr.send_flags & IB_SEND_SOLICITED)
+			return -EINVAL;
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE);
+		break;
+	case IB_WR_RDMA_WRITE_WITH_IMM:
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE_IMM);
+		wqe->base.imm_data_key = wr->wr.ex.imm_data;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	wqe->common.rdma.remote_va_high = cpu_to_be32(wr->remote_addr >> 32);
+	wqe->common.rdma.remote_va_low = cpu_to_be32(wr->remote_addr);
+	wqe->common.rdma.remote_rkey = cpu_to_be32(wr->rkey);
+
+	return ionic_prep_common(qp, &wr->wr, meta, wqe);
+}
+
+static int ionic_prep_atomic(struct ionic_qp *qp,
+			     const struct ib_atomic_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	if (wr->wr.num_sge != 1 || wr->wr.sg_list[0].length != 8)
+		return -EINVAL;
+
+	if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE))
+		return -EINVAL;
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	meta->ibop = IB_WC_RDMA_WRITE;
+
+	switch (wr->wr.opcode) {
+	case IB_WR_ATOMIC_CMP_AND_SWP:
+		meta->ibop = IB_WC_COMP_SWAP;
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_CS);
+		wqe->atomic.swap_add_high = cpu_to_be32(wr->swap >> 32);
+		wqe->atomic.swap_add_low = cpu_to_be32(wr->swap);
+		wqe->atomic.compare_high = cpu_to_be32(wr->compare_add >> 32);
+		wqe->atomic.compare_low = cpu_to_be32(wr->compare_add);
+		break;
+	case IB_WR_ATOMIC_FETCH_AND_ADD:
+		meta->ibop = IB_WC_FETCH_ADD;
+		wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_FA);
+		wqe->atomic.swap_add_high = cpu_to_be32(wr->compare_add >> 32);
+		wqe->atomic.swap_add_low = cpu_to_be32(wr->compare_add);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	wqe->atomic.remote_va_high = cpu_to_be32(wr->remote_addr >> 32);
+	wqe->atomic.remote_va_low = cpu_to_be32(wr->remote_addr);
+	wqe->atomic.remote_rkey = cpu_to_be32(wr->rkey);
+
+	wqe->base.num_sge_key = 1;
+	wqe->atomic.sge.va = cpu_to_be64(wr->wr.sg_list[0].addr);
+	wqe->atomic.sge.len = cpu_to_be32(8);
+	wqe->atomic.sge.lkey = cpu_to_be32(wr->wr.sg_list[0].lkey);
+
+	return ionic_prep_common(qp, &wr->wr, meta, wqe);
+}
+
+static int ionic_prep_inv(struct ionic_qp *qp,
+			  const struct ib_send_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+
+	if (wr->send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE))
+		return -EINVAL;
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, LOCAL_INV);
+	wqe->base.imm_data_key =
+		cpu_to_be32(wr->ex.invalidate_rkey);
+
+	meta->len = 0;
+	meta->ibop = IB_WC_LOCAL_INV;
+
+	ionic_prep_base(qp, wr, meta, wqe);
+
+	return 0;
+}
+
+static int ionic_prep_reg(struct ionic_qp *qp,
+			  const struct ib_reg_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	struct ionic_mr *mr = to_ionic_mr(wr->mr);
+	struct ionic_sq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+	__le64 dma_addr;
+	int flags;
+
+	if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE))
+		return -EINVAL;
+
+	/* must call ib_map_mr_sg before posting reg wr */
+	if (!mr->buf.tbl_pages)
+		return -EINVAL;
+
+	meta = &qp->sq_meta[qp->sq.prod];
+	wqe = ionic_queue_at_prod(&qp->sq);
+
+	ionic_prep_sq_wqe(qp, wqe);
+
+	flags = to_ionic_mr_flags(wr->access);
+
+	wqe->base.op = IONIC_OP(dev->lif_cfg.rdma_version, REG_MR);
+	wqe->base.num_sge_key = wr->key;
+	wqe->base.imm_data_key = cpu_to_be32(mr->ibmr.lkey);
+	wqe->reg_mr.va = cpu_to_be64(mr->ibmr.iova);
+	wqe->reg_mr.length = cpu_to_be64(mr->ibmr.length);
+	wqe->reg_mr.offset = ionic_pgtbl_off(&mr->buf, mr->ibmr.iova);
+	dma_addr = ionic_pgtbl_dma(&mr->buf, mr->ibmr.iova);
+	wqe->reg_mr.dma_addr = cpu_to_be64(le64_to_cpu(dma_addr));
+
+	wqe->reg_mr.map_count = cpu_to_be32(mr->buf.tbl_pages);
+	wqe->reg_mr.flags = cpu_to_be16(flags);
+	wqe->reg_mr.dir_size_log2 = 0;
+	wqe->reg_mr.page_size_log2 = order_base_2(mr->ibmr.page_size);
+
+	meta->len = 0;
+	meta->ibop = IB_WC_REG_MR;
+
+	ionic_prep_base(qp, &wr->wr, meta, wqe);
+
+	return 0;
+}
+
+static int ionic_prep_one_rc(struct ionic_qp *qp,
+			     const struct ib_send_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	int rc = 0;
+
+	switch (wr->opcode) {
+	case IB_WR_SEND:
+	case IB_WR_SEND_WITH_IMM:
+	case IB_WR_SEND_WITH_INV:
+		rc = ionic_prep_send(qp, wr);
+		break;
+	case IB_WR_RDMA_READ:
+	case IB_WR_RDMA_WRITE:
+	case IB_WR_RDMA_WRITE_WITH_IMM:
+		rc = ionic_prep_rdma(qp, rdma_wr(wr));
+		break;
+	case IB_WR_ATOMIC_CMP_AND_SWP:
+	case IB_WR_ATOMIC_FETCH_AND_ADD:
+		rc = ionic_prep_atomic(qp, atomic_wr(wr));
+		break;
+	case IB_WR_LOCAL_INV:
+		rc = ionic_prep_inv(qp, wr);
+		break;
+	case IB_WR_REG_MR:
+		rc = ionic_prep_reg(qp, reg_wr(wr));
+		break;
+	default:
+		ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode);
+		rc = -EINVAL;
+	}
+
+	return rc;
+}
+
+static int ionic_prep_one_ud(struct ionic_qp *qp,
+			     const struct ib_send_wr *wr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(qp->ibqp.device);
+	int rc = 0;
+
+	switch (wr->opcode) {
+	case IB_WR_SEND:
+	case IB_WR_SEND_WITH_IMM:
+		rc = ionic_prep_send_ud(qp, ud_wr(wr));
+		break;
+	default:
+		ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode);
+		rc = -EINVAL;
+	}
+
+	return rc;
+}
+
+static int ionic_prep_recv(struct ionic_qp *qp,
+			   const struct ib_recv_wr *wr)
+{
+	struct ionic_rq_meta *meta;
+	struct ionic_v1_wqe *wqe;
+	s64 signed_len;
+	u32 mval;
+
+	wqe = ionic_queue_at_prod(&qp->rq);
+
+	/* if wqe is owned by device, caller can try posting again soon */
+	if (wqe->base.flags & cpu_to_be16(IONIC_V1_FLAG_FENCE))
+		return -EAGAIN;
+
+	meta = qp->rq_meta_head;
+	if (unlikely(meta == IONIC_META_LAST) ||
+	    unlikely(meta == IONIC_META_POSTED))
+		return -EIO;
+
+	ionic_prep_rq_wqe(qp, wqe);
+
+	mval = ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, qp->rq_spec,
+					 false);
+	signed_len = ionic_prep_pld(wqe, &wqe->recv.pld,
+				    qp->rq_spec, mval,
+				    wr->sg_list, wr->num_sge);
+	if (signed_len < 0)
+		return signed_len;
+
+	meta->wrid = wr->wr_id;
+
+	wqe->base.wqe_id = meta - qp->rq_meta;
+	wqe->base.num_sge_key = wr->num_sge;
+
+	/* total length for recv goes in base imm_data_key */
+	wqe->base.imm_data_key = cpu_to_be32(signed_len);
+
+	ionic_queue_produce(&qp->rq);
+
+	qp->rq_meta_head = meta->next;
+	meta->next = IONIC_META_POSTED;
+
+	return 0;
+}
+
+static int ionic_post_send_common(struct ionic_ibdev *dev,
+				  struct ionic_vcq *vcq,
+				  struct ionic_cq *cq,
+				  struct ionic_qp *qp,
+				  const struct ib_send_wr *wr,
+				  const struct ib_send_wr **bad)
+{
+	unsigned long irqflags;
+	bool notify = false;
+	int spend, rc = 0;
+
+	if (!bad)
+		return -EINVAL;
+
+	if (!qp->has_sq) {
+		*bad = wr;
+		return -EINVAL;
+	}
+
+	if (qp->state < IB_QPS_RTS) {
+		*bad = wr;
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&qp->sq_lock, irqflags);
+
+	while (wr) {
+		if (ionic_queue_full(&qp->sq)) {
+			ibdev_dbg(&dev->ibdev, "queue full");
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		if (qp->ibqp.qp_type == IB_QPT_UD ||
+		    qp->ibqp.qp_type == IB_QPT_GSI)
+			rc = ionic_prep_one_ud(qp, wr);
+		else
+			rc = ionic_prep_one_rc(qp, wr);
+		if (rc)
+			goto out;
+
+		wr = wr->next;
+	}
+
+out:
+	/* irq remains saved here, not restored/saved again */
+	if (!spin_trylock(&cq->lock)) {
+		spin_unlock(&qp->sq_lock);
+		spin_lock(&cq->lock);
+		spin_lock(&qp->sq_lock);
+	}
+
+	if (likely(qp->sq.prod != qp->sq_old_prod)) {
+		/* ring cq doorbell just in time */
+		spend = (qp->sq.prod - qp->sq_old_prod) & qp->sq.mask;
+		ionic_reserve_cq(dev, cq, spend);
+
+		qp->sq_old_prod = qp->sq.prod;
+
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.sq_qtype,
+				 ionic_queue_dbell_val(&qp->sq));
+	}
+
+	if (qp->sq_flush) {
+		notify = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+	}
+
+	spin_unlock(&qp->sq_lock);
+	spin_unlock_irqrestore(&cq->lock, irqflags);
+
+	if (notify && vcq->ibcq.comp_handler)
+		vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context);
+
+	*bad = wr;
+	return rc;
+}
+
+static int ionic_post_recv_common(struct ionic_ibdev *dev,
+				  struct ionic_vcq *vcq,
+				  struct ionic_cq *cq,
+				  struct ionic_qp *qp,
+				  const struct ib_recv_wr *wr,
+				  const struct ib_recv_wr **bad)
+{
+	unsigned long irqflags;
+	bool notify = false;
+	int spend, rc = 0;
+
+	if (!bad)
+		return -EINVAL;
+
+	if (!qp->has_rq) {
+		*bad = wr;
+		return -EINVAL;
+	}
+
+	if (qp->state < IB_QPS_INIT) {
+		*bad = wr;
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&qp->rq_lock, irqflags);
+
+	while (wr) {
+		if (ionic_queue_full(&qp->rq)) {
+			ibdev_dbg(&dev->ibdev, "queue full");
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		rc = ionic_prep_recv(qp, wr);
+		if (rc)
+			goto out;
+
+		wr = wr->next;
+	}
+
+out:
+	if (!cq) {
+		spin_unlock_irqrestore(&qp->rq_lock, irqflags);
+		goto out_unlocked;
+	}
+
+	/* irq remains saved here, not restored/saved again */
+	if (!spin_trylock(&cq->lock)) {
+		spin_unlock(&qp->rq_lock);
+		spin_lock(&cq->lock);
+		spin_lock(&qp->rq_lock);
+	}
+
+	if (likely(qp->rq.prod != qp->rq_old_prod)) {
+		/* ring cq doorbell just in time */
+		spend = (qp->rq.prod - qp->rq_old_prod) & qp->rq.mask;
+		ionic_reserve_cq(dev, cq, spend);
+
+		qp->rq_old_prod = qp->rq.prod;
+
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.rq_qtype,
+				 ionic_queue_dbell_val(&qp->rq));
+	}
+
+	if (qp->rq_flush) {
+		notify = true;
+		cq->flush = true;
+		list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+	}
+
+	spin_unlock(&qp->rq_lock);
+	spin_unlock_irqrestore(&cq->lock, irqflags);
+
+	if (notify && vcq->ibcq.comp_handler)
+		vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context);
+
+out_unlocked:
+	*bad = wr;
+	return rc;
+}
+
+static int ionic_post_send(struct ib_qp *ibqp,
+			   const struct ib_send_wr *wr,
+			   const struct ib_send_wr **bad)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_vcq *vcq = to_ionic_vcq(ibqp->send_cq);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_cq *cq =
+		to_ionic_vcq_cq(ibqp->send_cq, qp->udma_idx);
+
+	return ionic_post_send_common(dev, vcq, cq, qp, wr, bad);
+}
+
+static int ionic_post_recv(struct ib_qp *ibqp,
+			   const struct ib_recv_wr *wr,
+			   const struct ib_recv_wr **bad)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_vcq *vcq = to_ionic_vcq(ibqp->recv_cq);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_cq *cq =
+		to_ionic_vcq_cq(ibqp->recv_cq, qp->udma_idx);
+
+	return ionic_post_recv_common(dev, vcq, cq, qp, wr, bad);
+}
+
+static const struct ib_device_ops ionic_datapath_ops = {
+	.driver_id = RDMA_DRIVER_IONIC,
+	.post_send = ionic_post_send,
+	.post_recv = ionic_post_recv,
+	.poll_cq = ionic_poll_cq,
+	.req_notify_cq = ionic_req_notify_cq,
+};
+
+void ionic_datapath_setops(struct ionic_ibdev *dev)
+{
+	ib_set_device_ops(&dev->ibdev, &ionic_datapath_ops);
+
+	dev->ibdev.uverbs_cmd_mask |=
+		BIT_ULL(IB_USER_VERBS_CMD_POST_SEND)		|
+		BIT_ULL(IB_USER_VERBS_CMD_POST_RECV)		|
+		BIT_ULL(IB_USER_VERBS_CMD_POLL_CQ)		|
+		BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ)	|
+		0;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index 881948a57341..9971e1ccf4ee 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -163,6 +163,61 @@ static inline int to_ionic_qp_flags(int access, bool sqd_notify,
 	return flags;
 }
 
+/* cqe non-admin status indicated in status_length field when err bit is set */
+enum ionic_status {
+	IONIC_STS_OK,
+	IONIC_STS_LOCAL_LEN_ERR,
+	IONIC_STS_LOCAL_QP_OPER_ERR,
+	IONIC_STS_LOCAL_PROT_ERR,
+	IONIC_STS_WQE_FLUSHED_ERR,
+	IONIC_STS_MEM_MGMT_OPER_ERR,
+	IONIC_STS_BAD_RESP_ERR,
+	IONIC_STS_LOCAL_ACC_ERR,
+	IONIC_STS_REMOTE_INV_REQ_ERR,
+	IONIC_STS_REMOTE_ACC_ERR,
+	IONIC_STS_REMOTE_OPER_ERR,
+	IONIC_STS_RETRY_EXCEEDED,
+	IONIC_STS_RNR_RETRY_EXCEEDED,
+	IONIC_STS_XRC_VIO_ERR,
+	IONIC_STS_LOCAL_SGL_INV_ERR,
+};
+
+static inline int ionic_to_ib_status(int sts)
+{
+	switch (sts) {
+	case IONIC_STS_OK:
+		return IB_WC_SUCCESS;
+	case IONIC_STS_LOCAL_LEN_ERR:
+		return IB_WC_LOC_LEN_ERR;
+	case IONIC_STS_LOCAL_QP_OPER_ERR:
+	case IONIC_STS_LOCAL_SGL_INV_ERR:
+		return IB_WC_LOC_QP_OP_ERR;
+	case IONIC_STS_LOCAL_PROT_ERR:
+		return IB_WC_LOC_PROT_ERR;
+	case IONIC_STS_WQE_FLUSHED_ERR:
+		return IB_WC_WR_FLUSH_ERR;
+	case IONIC_STS_MEM_MGMT_OPER_ERR:
+		return IB_WC_MW_BIND_ERR;
+	case IONIC_STS_BAD_RESP_ERR:
+		return IB_WC_BAD_RESP_ERR;
+	case IONIC_STS_LOCAL_ACC_ERR:
+		return IB_WC_LOC_ACCESS_ERR;
+	case IONIC_STS_REMOTE_INV_REQ_ERR:
+		return IB_WC_REM_INV_REQ_ERR;
+	case IONIC_STS_REMOTE_ACC_ERR:
+		return IB_WC_REM_ACCESS_ERR;
+	case IONIC_STS_REMOTE_OPER_ERR:
+		return IB_WC_REM_OP_ERR;
+	case IONIC_STS_RETRY_EXCEEDED:
+		return IB_WC_RETRY_EXC_ERR;
+	case IONIC_STS_RNR_RETRY_EXCEEDED:
+		return IB_WC_RNR_RETRY_EXC_ERR;
+	case IONIC_STS_XRC_VIO_ERR:
+	default:
+		return IB_WC_GENERAL_ERR;
+	}
+}
+
 /* admin queue qp type */
 enum ionic_qp_type {
 	IONIC_QPT_RC,
@@ -294,6 +349,24 @@ struct ionic_v1_cqe {
 	__be32 qid_type_flags;
 };
 
+/* bits for cqe recv */
+enum ionic_v1_cqe_src_qpn_bits {
+	IONIC_V1_CQE_RECV_QPN_MASK	= 0xffffff,
+	IONIC_V1_CQE_RECV_OP_SHIFT	= 24,
+
+	/* MASK could be 0x3, but need 0x1f for makeshift values:
+	 * OP_TYPE_RDMA_OPER_WITH_IMM, OP_TYPE_SEND_RCVD
+	 */
+	IONIC_V1_CQE_RECV_OP_MASK	= 0x1f,
+	IONIC_V1_CQE_RECV_OP_SEND	= 0,
+	IONIC_V1_CQE_RECV_OP_SEND_INV	= 1,
+	IONIC_V1_CQE_RECV_OP_SEND_IMM	= 2,
+	IONIC_V1_CQE_RECV_OP_RDMA_IMM	= 3,
+
+	IONIC_V1_CQE_RECV_IS_IPV4	= BIT(7 + IONIC_V1_CQE_RECV_OP_SHIFT),
+	IONIC_V1_CQE_RECV_IS_VLAN	= BIT(6 + IONIC_V1_CQE_RECV_OP_SHIFT),
+};
+
 /* bits for cqe qid_type_flags */
 enum ionic_v1_cqe_qtf_bits {
 	IONIC_V1_CQE_COLOR		= BIT(0),
@@ -318,6 +391,18 @@ static inline bool ionic_v1_cqe_error(struct ionic_v1_cqe *cqe)
 	return !!(cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR));
 }
 
+static inline bool ionic_v1_cqe_recv_is_ipv4(struct ionic_v1_cqe *cqe)
+{
+	return !!(cqe->recv.src_qpn_op &
+		  cpu_to_be32(IONIC_V1_CQE_RECV_IS_IPV4));
+}
+
+static inline bool ionic_v1_cqe_recv_is_vlan(struct ionic_v1_cqe *cqe)
+{
+	return !!(cqe->recv.src_qpn_op &
+		  cpu_to_be32(IONIC_V1_CQE_RECV_IS_VLAN));
+}
+
 static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe)
 {
 	cqe->qid_type_flags |= cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT);
@@ -444,6 +529,28 @@ enum ionic_v1_op {
 	IONIC_V1_SPEC_FIRST_SGE		= 2,
 };
 
+/* queue pair v2 send opcodes */
+enum ionic_v2_op {
+	IONIC_V2_OPSL_OUT	= 0x20,
+	IONIC_V2_OPSL_IMM	= 0x40,
+	IONIC_V2_OPSL_INV	= 0x80,
+
+	IONIC_V2_OP_SEND	= 0x0 | IONIC_V2_OPSL_OUT,
+	IONIC_V2_OP_SEND_IMM	= IONIC_V2_OP_SEND | IONIC_V2_OPSL_IMM,
+	IONIC_V2_OP_SEND_INV	= IONIC_V2_OP_SEND | IONIC_V2_OPSL_INV,
+
+	IONIC_V2_OP_RDMA_WRITE	= 0x1 | IONIC_V2_OPSL_OUT,
+	IONIC_V2_OP_RDMA_WRITE_IMM = IONIC_V2_OP_RDMA_WRITE | IONIC_V2_OPSL_IMM,
+
+	IONIC_V2_OP_RDMA_READ	= 0x2,
+
+	IONIC_V2_OP_ATOMIC_CS	= 0x4,
+	IONIC_V2_OP_ATOMIC_FA	= 0x5,
+	IONIC_V2_OP_REG_MR	= 0x6,
+	IONIC_V2_OP_LOCAL_INV	= 0x7,
+	IONIC_V2_OP_BIND_MW	= 0x8,
+};
+
 static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data,
 						int spec, bool expdb)
 {
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index 358645881a24..e7c5b15b27cf 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -174,6 +174,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	if (rc)
 		goto err_admin;
 
+	ionic_datapath_setops(dev);
 	ionic_controlpath_setops(dev);
 
 	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index 2c730d0fbae8..c476f3781090 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -413,6 +413,11 @@ static inline u32 ionic_obj_dbid(struct ionic_ibdev *dev,
 	return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj));
 }
 
+static inline bool ionic_ibop_is_local(enum ib_wr_opcode op)
+{
+	return op == IB_WR_LOCAL_INV || op == IB_WR_REG_MR;
+}
+
 static inline void ionic_qp_complete(struct kref *kref)
 {
 	struct ionic_qp *qp = container_of(kref, struct ionic_qp, qp_kref);
@@ -453,8 +458,12 @@ void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq);
 void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp);
 void ionic_notify_flush_cq(struct ionic_cq *cq);
 
+/* ionic_datapath.c */
+void ionic_datapath_setops(struct ionic_ibdev *dev);
+
 /* ionic_pgtbl.c */
 __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va);
+__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va);
 int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma);
 int ionic_pgtbl_init(struct ionic_ibdev *dev,
 		     struct ionic_tbl_buf *buf,
diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
index a8eb73be6f86..e74db73c9246 100644
--- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c
+++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
@@ -26,6 +26,17 @@ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va)
 	return cpu_to_le64(dma + (va & pg_mask));
 }
 
+__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va)
+{
+	if (buf->tbl_pages > 1) {
+		u64 pg_mask = BIT_ULL(buf->page_size_log2) - 1;
+
+		return cpu_to_be64(va & pg_mask);
+	}
+
+	return 0;
+}
+
 int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma)
 {
 	if (unlikely(buf->tbl_pages == buf->tbl_limit))
-- 
2.34.1
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v2 12/14] RDMA/ionic: Register device ops for miscellaneous functionality
Date: Thu, 8 May 2025 10:29:55 +0530
Message-ID: <20250508045957.2823318-13-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Implement ibdev ops for device and port information.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 drivers/infiniband/hw/ionic/ionic_ibdev.c   | 224 ++++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.h   |   5 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.c |  10 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.h |   2 +
 4 files changed, 241 insertions(+)

diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index e7c5b15b27cf..1fe58ca2238f 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -3,7 +3,11 @@
 
 #include
 #include
+#include
+#include
 #include
+#include
+#include
 
 #include "ionic_ibdev.h"
 
@@ -33,6 +37,219 @@ void ionic_port_event(struct ionic_ibdev *dev, enum ib_event_type event)
 	ib_dispatch_event(&ev);
 }
 
+static int ionic_query_device(struct ib_device *ibdev,
+			      struct ib_device_attr *attr,
+			      struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	addrconf_ifid_eui48((u8 *)&attr->sys_image_guid,
+			    ionic_lif_netdev(dev->lif_cfg.lif));
+	attr->max_mr_size = dev->lif_cfg.npts_per_lif * PAGE_SIZE / 2;
+	attr->page_size_cap = dev->lif_cfg.page_size_supported;
+
+	attr->vendor_id = to_pci_dev(dev->lif_cfg.hwdev)->vendor;
+	attr->vendor_part_id = to_pci_dev(dev->lif_cfg.hwdev)->device;
+
+	attr->hw_ver = ionic_lif_asic_rev(dev->lif_cfg.lif);
+	attr->fw_ver = 0;
+	attr->max_qp = dev->lif_cfg.qp_count;
+	attr->max_qp_wr = IONIC_MAX_DEPTH;
+	attr->device_cap_flags =
+		IB_DEVICE_MEM_WINDOW |
+		IB_DEVICE_MEM_MGT_EXTENSIONS |
+		IB_DEVICE_MEM_WINDOW_TYPE_2B |
+		0;
+	attr->kernel_cap_flags = IBK_LOCAL_DMA_LKEY;
+	attr->max_send_sge =
+		min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_recv_sge =
+		min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_sge_rd = attr->max_send_sge;
+	attr->max_cq = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	attr->max_cqe = IONIC_MAX_CQ_DEPTH - IONIC_CQ_GRACE;
+	attr->max_mr = dev->lif_cfg.nmrs_per_lif;
+	attr->max_pd = IONIC_MAX_PD;
+	attr->max_qp_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_rd_atom = 0;
+	attr->max_res_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_qp_init_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_init_rd_atom = 0;
+	attr->atomic_cap = IB_ATOMIC_GLOB;
+	attr->masked_atomic_cap = IB_ATOMIC_GLOB;
+	attr->max_mw = dev->lif_cfg.nmrs_per_lif;
+	attr->max_mcast_grp = 0;
+	attr->max_mcast_qp_attach = 0;
+	attr->max_ah = dev->lif_cfg.nahs_per_lif;
+	attr->max_fast_reg_page_list_len = dev->lif_cfg.npts_per_lif / 2;
+	attr->max_pkeys = IONIC_PKEY_TBL_LEN;
+
+	return 0;
+}
+
+static int ionic_query_port(struct ib_device *ibdev, u32 port,
+			    struct ib_port_attr *attr)
+{
+	struct net_device *ndev;
+
+	if (port != 1)
+		return -EINVAL;
+
+	ndev = ib_device_get_netdev(ibdev, port);
+
+	if (netif_running(ndev) && netif_carrier_ok(ndev)) {
+		attr->state = IB_PORT_ACTIVE;
+		attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
+	} else if (netif_running(ndev)) {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
+	} else {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
+	}
+
+	attr->max_mtu = iboe_get_mtu(ndev->max_mtu);
+	attr->active_mtu = min(attr->max_mtu, iboe_get_mtu(ndev->mtu));
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->ip_gids = true;
+	attr->port_cap_flags = 0;
+	attr->max_msg_sz = 0x80000000;
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->max_vl_num = 1;
+	attr->subnet_prefix = 0xfe80000000000000ull;
+
+	dev_put(ndev);
+
+	return ib_get_eth_speed(ibdev, port,
+				&attr->active_speed,
+				&attr->active_width);
+}
+
+static enum rdma_link_layer ionic_get_link_layer(struct ib_device *ibdev,
+						 u32 port)
+{
+	return IB_LINK_LAYER_ETHERNET;
+}
+
+static int ionic_query_pkey(struct ib_device *ibdev, u32 port, u16 index,
+			    u16 *pkey)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	if (index != 0)
+		return -EINVAL;
+
+	*pkey = IB_DEFAULT_PKEY_FULL;
+
+	return 0;
+}
+
+static int ionic_modify_device(struct ib_device *ibdev, int mask,
+			       struct ib_device_modify *attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (mask & ~IB_DEVICE_MODIFY_NODE_DESC)
+		return -EOPNOTSUPP;
+
+	if (mask & IB_DEVICE_MODIFY_NODE_DESC)
+		memcpy(dev->ibdev.node_desc, attr->node_desc,
+		       IB_DEVICE_NODE_DESC_MAX);
+
+	return 0;
+}
+
+static int ionic_get_port_immutable(struct ib_device *ibdev, u32 port,
+				    struct ib_port_immutable *attr)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	attr->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP;
+
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->max_mad_size = IB_MGMT_MAD_SIZE;
+
+	return 0;
+}
+
+static void ionic_get_dev_fw_str(struct ib_device *ibdev, char *str)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	ionic_lif_fw_version(dev->lif_cfg.lif, str, IB_FW_VERSION_NAME_MAX);
+}
+
+static const struct cpumask *ionic_get_vector_affinity(struct ib_device *ibdev,
+						       int comp_vector)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (comp_vector < 0 || comp_vector >= dev->lif_cfg.eq_count)
+		return NULL;
+
+	return irq_get_affinity_mask(dev->eq_vec[comp_vector]->irq);
+}
+
+static ssize_t hw_rev_show(struct device *device, struct device_attribute *attr,
+			   char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "0x%x\n", ionic_lif_asic_rev(dev->lif_cfg.lif));
+}
+static DEVICE_ATTR_RO(hw_rev);
+
+static ssize_t hca_type_show(struct device *device,
+			     struct device_attribute *attr, char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "%s\n", dev->ibdev.node_desc);
+}
+static DEVICE_ATTR_RO(hca_type);
+
+static struct attribute *ionic_rdma_attributes[] = {
+	&dev_attr_hw_rev.attr,
+	&dev_attr_hca_type.attr,
+	NULL
+};
+
+static const struct attribute_group ionic_rdma_attr_group = {
+	.attrs = ionic_rdma_attributes,
+};
+
+static void ionic_disassociate_ucontext(struct ib_ucontext *ibcontext)
+{
+	/*
+	 * Dummy define disassociate_ucontext so that it does not
+	 * wait for user context before cleaning up hw resources.
+	 */
+}
+
+static const struct ib_device_ops ionic_dev_ops = {
+	.owner = THIS_MODULE,
+	.driver_id = RDMA_DRIVER_IONIC,
+	.uverbs_abi_ver = IONIC_ABI_VERSION,
+	.query_device = ionic_query_device,
+	.query_port = ionic_query_port,
+	.get_link_layer = ionic_get_link_layer,
+	.query_pkey = ionic_query_pkey,
+	.modify_device = ionic_modify_device,
+
+	.get_port_immutable = ionic_get_port_immutable,
+	.get_dev_fw_str = ionic_get_dev_fw_str,
+	.get_vector_affinity = ionic_get_vector_affinity,
+	.device_group = &ionic_rdma_attr_group,
+	.disassociate_ucontext = ionic_disassociate_ucontext,
+};
+
 static int ionic_init_resids(struct ionic_ibdev *dev)
 {
 	int rc;
@@ -174,6 +391,13 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	if (rc)
 		goto err_admin;
 
+	ibdev->uverbs_cmd_mask =
+		BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT) |
+		BIT_ULL(IB_USER_VERBS_CMD_QUERY_DEVICE) |
+		BIT_ULL(IB_USER_VERBS_CMD_QUERY_PORT) |
+		0;
+
+	ib_set_device_ops(&dev->ibdev, &ionic_dev_ops);
 	ionic_datapath_setops(dev);
 	ionic_controlpath_setops(dev);
 
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index c476f3781090..446cb8d5e334 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -32,6 +32,11 @@
 #define IONIC_AQ_COUNT 4
 #define IONIC_EQ_ISR_BUDGET 10
 #define IONIC_EQ_WORK_BUDGET 1000
+#define IONIC_MAX_RD_ATOM 16
+#define IONIC_PKEY_TBL_LEN 1
+#define IONIC_GID_TBL_LEN 256
+
+#define IONIC_SPEC_HIGH 8
 #define IONIC_MAX_PD 1024
 #define IONIC_SPEC_HIGH 8
 #define IONIC_SQCMB_ORDER 5
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
index a02eb2f5bd45..a4246de26b9b 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
@@ -119,3 +119,13 @@ int ionic_version_check(const struct device *dev, struct ionic_lif *lif)
 
 	return 0;
 }
+
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len)
+{
+	strscpy(str, lif->ionic->idev.dev_info.fw_version, len);
+}
+
+u8 ionic_lif_asic_rev(struct ionic_lif *lif)
+{
+	return lif->ionic->idev.dev_info.asic_rev;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
index b095637c54cf..f92d8aee5af9 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
@@ -61,5 +61,7 @@ struct ionic_lif_cfg {
 int ionic_version_check(const struct device *dev, struct ionic_lif *lif);
 void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
 struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len);
+u8 ionic_lif_asic_rev(struct ionic_lif *lif);
 
 #endif /* _IONIC_LIF_CFG_H_ */
-- 
2.34.1
From: Abhijit Gangurde <abhijit.gangurde@amd.com>
Subject: [PATCH v2 13/14] RDMA/ionic: Implement device stats ops
Date: Thu, 8 May 2025 10:29:56 +0530
Message-ID: <20250508045957.2823318-14-abhijit.gangurde@amd.com>
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>

Implement device stats operations for hw stats and qp stats.

Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 drivers/infiniband/hw/ionic/ionic_fw.h       |  43 ++
 drivers/infiniband/hw/ionic/ionic_hw_stats.c | 484 +++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.c    |   4 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h    |  23 +
 4 files changed, 554 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_hw_stats.c

diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index 9971e1ccf4ee..2d9a2e9a0b60 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -661,6 +661,17 @@ static inline int ionic_v1_use_spec_sge(int min_sge, int spec)
 	return spec;
 }
 
+struct ionic_admin_stats_hdr {
+	__le64 dma_addr;
+	__le32 length;
+	__le32 id_ver;
+	__u8 type_state;
+} __packed;
+
+#define IONIC_ADMIN_STATS_HDRS_IN_V1_LEN 17
+static_assert(sizeof(struct ionic_admin_stats_hdr) ==
+	      IONIC_ADMIN_STATS_HDRS_IN_V1_LEN);
+
 struct ionic_admin_create_ah {
 	__le64 dma_addr;
 	__le32 length;
@@ -839,6 +850,7 @@ struct ionic_v1_admin_wqe {
 	__le16 len;
 
 	union {
+		struct ionic_admin_stats_hdr stats;
 		struct ionic_admin_create_ah create_ah;
 		struct ionic_admin_destroy_ah destroy_ah;
 		struct ionic_admin_query_ah query_ah;
@@ -985,4 +997,35 @@ static inline u32 ionic_v1_eqe_evt_qid(u32 evt)
 	return evt >> IONIC_V1_EQE_QID_SHIFT;
 }
 
+enum ionic_v1_stat_bits {
+	IONIC_V1_STAT_TYPE_SHIFT	= 28,
+	IONIC_V1_STAT_TYPE_NONE		= 0,
+	IONIC_V1_STAT_TYPE_8		= 1,
+	IONIC_V1_STAT_TYPE_LE16		= 2,
+	IONIC_V1_STAT_TYPE_LE32		= 3,
+	IONIC_V1_STAT_TYPE_LE64		= 4,
+	IONIC_V1_STAT_TYPE_BE16		= 5,
+	IONIC_V1_STAT_TYPE_BE32		= 6,
+	IONIC_V1_STAT_TYPE_BE64		= 7,
+	IONIC_V1_STAT_OFF_MASK		= BIT(IONIC_V1_STAT_TYPE_SHIFT) - 1,
+};
+
+struct ionic_v1_stat {
+	union {
+		__be32 be_type_off;
+		u32 type_off;
+	};
+	char name[28];
+};
+
+static inline int ionic_v1_stat_type(struct ionic_v1_stat *hdr)
+{
+	return hdr->type_off >> IONIC_V1_STAT_TYPE_SHIFT;
+}
+
+static inline unsigned int ionic_v1_stat_off(struct ionic_v1_stat *hdr)
+{
+	return hdr->type_off & IONIC_V1_STAT_OFF_MASK;
+}
+
 #endif /* _IONIC_FW_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_hw_stats.c b/drivers/infiniband/hw/ionic/ionic_hw_stats.c
new file mode 100644
index 000000000000..29f2571463ac
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_hw_stats.c
@@ -0,0 +1,484 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+static int ionic_v1_stat_normalize(struct ionic_v1_stat *hw_stats,
+				   int hw_stats_count)
+{
+	int hw_stat_i;
+
+	for (hw_stat_i = 0; hw_stat_i < hw_stats_count; ++hw_stat_i) {
+		struct ionic_v1_stat *stat = &hw_stats[hw_stat_i];
+
+		stat->type_off = be32_to_cpu(stat->be_type_off);
+		stat->name[sizeof(stat->name) - 1] = 0;
+		if (ionic_v1_stat_type(stat) == IONIC_V1_STAT_TYPE_NONE)
+			break;
+	}
+
+	return hw_stat_i;
+}
+
+static void ionic_fill_stats_desc(struct rdma_stat_desc *hw_stats_hdrs,
+				  struct ionic_v1_stat *hw_stats,
+				  int hw_stats_count)
+{
+	int hw_stat_i;
+
+	for (hw_stat_i = 0; hw_stat_i < hw_stats_count; ++hw_stat_i) {
+		struct ionic_v1_stat *stat = &hw_stats[hw_stat_i];
+
+		hw_stats_hdrs[hw_stat_i].name = stat->name;
+	}
+}
+
+static u64 ionic_v1_stat_val(struct ionic_v1_stat *stat,
+			     void *vals_buf, size_t vals_len)
+{
+	unsigned int off = ionic_v1_stat_off(stat);
+	int type = ionic_v1_stat_type(stat);
+
+#define __ionic_v1_stat_validate(__type)		\
+	((off + sizeof(__type) <= vals_len) &&		\
+	 (IS_ALIGNED(off, sizeof(__type))))
+
+	switch (type) {
+	case IONIC_V1_STAT_TYPE_8:
+		if (__ionic_v1_stat_validate(u8))
+			return *(u8 *)(vals_buf + off);
+		break;
+	case IONIC_V1_STAT_TYPE_LE16:
+		if (__ionic_v1_stat_validate(__le16))
+			return le16_to_cpu(*(__le16 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_LE32:
+		if (__ionic_v1_stat_validate(__le32))
+			return le32_to_cpu(*(__le32 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_LE64:
+		if (__ionic_v1_stat_validate(__le64))
+			return le64_to_cpu(*(__le64 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE16:
+		if (__ionic_v1_stat_validate(__be16))
+			return be16_to_cpu(*(__be16 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE32:
+		if (__ionic_v1_stat_validate(__be32))
+			return be32_to_cpu(*(__be32 *)(vals_buf + off));
+		break;
+	case IONIC_V1_STAT_TYPE_BE64:
+		if (__ionic_v1_stat_validate(__be64))
+			return be64_to_cpu(*(__be64 *)(vals_buf + off));
+		break;
+	}
+
+	return ~0ull;
+#undef __ionic_v1_stat_validate
+}
+
+static int ionic_hw_stats_cmd(struct ionic_ibdev *dev,
+			      dma_addr_t dma, size_t len, int qid, int op)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = op,
+			.len = IONIC_ADMIN_STATS_HDRS_IN_V1_LEN,
+			.cmd.stats = {
+				.dma_addr = cpu_to_le64(dma),
+				.length = cpu_to_le32(len),
+				.id_ver = cpu_to_le32(qid),
+			},
+		}
+	};
+
+	if (dev->lif_cfg.admin_opcodes <= op)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+
+	return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_INTERRUPT);
+}
+
+static int ionic_init_hw_stats(struct ionic_ibdev *dev)
+{
+	dma_addr_t hw_stats_dma;
+	int rc, hw_stats_count;
+
+	if (dev->hw_stats_hdrs)
+		return 0;
+
+	dev->hw_stats_count = 0;
+
+	/* buffer for current values from the device */
+	dev->hw_stats_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!dev->hw_stats_buf) {
+		rc = -ENOMEM;
+		goto err_buf;
+	}
+
+	/* buffer for names, sizes, offsets of values */
+	dev->hw_stats = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!dev->hw_stats) {
+		rc = -ENOMEM;
+		goto err_hw_stats;
+	}
+
+	/* request the names, sizes, offsets */
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE, 0,
+				IONIC_V1_ADMIN_STATS_HDRS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* normalize and count the number of hw_stats */
+	hw_stats_count =
+		ionic_v1_stat_normalize(dev->hw_stats,
+					PAGE_SIZE / sizeof(*dev->hw_stats));
+	if (!hw_stats_count) {
+		rc = -ENODATA;
+		goto err_dma;
+	}
+
+	dev->hw_stats_count = hw_stats_count;
+
+	/* alloc and init array of names, for alloc_hw_stats */
+	dev->hw_stats_hdrs = kcalloc(hw_stats_count,
+				     sizeof(*dev->hw_stats_hdrs),
+				     GFP_KERNEL);
+	if (!dev->hw_stats_hdrs) {
+		rc = -ENOMEM;
+		goto err_dma;
+	}
+
+	ionic_fill_stats_desc(dev->hw_stats_hdrs, dev->hw_stats,
+			      hw_stats_count);
+
+	return 0;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	kfree(dev->hw_stats);
+err_hw_stats:
+	kfree(dev->hw_stats_buf);
+err_buf:
+	dev->hw_stats_count = 0;
+	dev->hw_stats = NULL;
+	dev->hw_stats_buf = NULL;
+	dev->hw_stats_hdrs = NULL;
+	return rc;
+}
+
+static struct rdma_hw_stats *ionic_alloc_hw_stats(struct ib_device *ibdev,
+						  u32 port)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (port != 1)
+		return NULL;
+
+	return rdma_alloc_hw_stats_struct(dev->hw_stats_hdrs,
+					  dev->hw_stats_count,
+					  RDMA_HW_STATS_DEFAULT_LIFESPAN);
+}
+
+static int ionic_get_hw_stats(struct ib_device *ibdev,
+			      struct rdma_hw_stats *hw_stats,
+			      u32 port, int index)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	dma_addr_t hw_stats_dma;
+	int rc, hw_stat_i;
+
+	if (port != 1)
+		return -EINVAL;
+
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats_buf,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE,
+				0, IONIC_V1_ADMIN_STATS_VALS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+
+	for (hw_stat_i = 0; hw_stat_i < dev->hw_stats_count; ++hw_stat_i)
+		hw_stats->value[hw_stat_i] =
+			ionic_v1_stat_val(&dev->hw_stats[hw_stat_i],
+					  dev->hw_stats_buf, PAGE_SIZE);
+
+	return hw_stat_i;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	return rc;
+}
+
+static struct rdma_hw_stats *
+ionic_counter_alloc_stats(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+	int err;
+
+	cntr = kzalloc(sizeof(*cntr), GFP_KERNEL);
+	if (!cntr)
+		return NULL;
+
+	/* buffer for current values from the device */
+	cntr->vals = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!cntr->vals)
+		goto err_vals;
+
+	err = xa_alloc(&dev->counter_stats->xa_counters, &counter->id,
+		       cntr,
+		       XA_LIMIT(0, IONIC_MAX_QPID),
+		       GFP_KERNEL);
+	if (err)
+		goto err_xa;
+
+	INIT_LIST_HEAD(&cntr->qp_list);
+
+	return rdma_alloc_hw_stats_struct(dev->counter_stats->stats_hdrs,
+					  dev->counter_stats->queue_stats_count,
+					  RDMA_HW_STATS_DEFAULT_LIFESPAN);
+err_xa:
+	kfree(cntr->vals);
+err_vals:
+	kfree(cntr);
+
+	return NULL;
+}
+
+static int ionic_counter_dealloc(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+
+	cntr = xa_erase(&dev->counter_stats->xa_counters, counter->id);
+	if (!cntr)
+		return -EINVAL;
+
+	kfree(cntr->vals);
+	kfree(cntr);
+
+	return 0;
+}
+
+static int ionic_counter_bind_qp(struct rdma_counter *counter,
+				 struct ib_qp *ibqp,
+				 u32 port)
+{
+	struct ionic_ibdev
*dev =3D to_ionic_ibdev(counter->device); + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + struct ionic_counter *cntr; + + cntr =3D xa_load(&dev->counter_stats->xa_counters, counter->id); + if (!cntr) + return -EINVAL; + + list_add_tail(&qp->qp_list_counter, &cntr->qp_list); + ibqp->counter =3D counter; + + return 0; +} + +static int ionic_counter_unbind_qp(struct ib_qp *ibqp, u32 port) +{ + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + + if (ibqp->counter) { + list_del(&qp->qp_list_counter); + ibqp->counter =3D NULL; + } + + return 0; +} + +static int ionic_get_qp_stats(struct ib_device *ibdev, + struct rdma_hw_stats *hw_stats, + u32 counter_id) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibdev); + struct ionic_counter_stats *cs; + struct ionic_counter *cntr; + dma_addr_t hw_stats_dma; + struct ionic_qp *qp; + int rc, stat_i =3D 0; + + cs =3D dev->counter_stats; + cntr =3D xa_load(&cs->xa_counters, counter_id); + if (!cntr) + return -EINVAL; + + hw_stats_dma =3D dma_map_single(dev->lif_cfg.hwdev, cntr->vals, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma); + if (rc) + return rc; + + memset(hw_stats->value, 0, sizeof(u64) * hw_stats->num_counters); + + list_for_each_entry(qp, &cntr->qp_list, qp_list_counter) { + rc =3D ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE, + qp->qpid, + IONIC_V1_ADMIN_QP_STATS_VALS); + if (rc) + goto err_cmd; + + for (stat_i =3D 0; stat_i < cs->queue_stats_count; ++stat_i) + hw_stats->value[stat_i] +=3D + ionic_v1_stat_val(&cs->hdr[stat_i], + cntr->vals, + PAGE_SIZE); + } + + dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DE= VICE); + return stat_i; + +err_cmd: + dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DE= VICE); + + return rc; +} + +static int ionic_counter_update_stats(struct rdma_counter *counter) +{ + return ionic_get_qp_stats(counter->device, counter->stats, counter->id); +} + +static int ionic_alloc_counters(struct ionic_ibdev *dev) 
+{ + struct ionic_counter_stats *cs =3D dev->counter_stats; + int rc, hw_stats_count; + dma_addr_t hdr_dma; + + /* buffer for names, sizes, offsets of values */ + cs->hdr =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!cs->hdr) + return -ENOMEM; + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, cs->hdr, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_dma; + + rc =3D ionic_hw_stats_cmd(dev, hdr_dma, PAGE_SIZE, 0, + IONIC_V1_ADMIN_QP_STATS_HDRS); + if (rc) + goto err_cmd; + + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE); + + /* normalize and count the number of hw_stats */ + hw_stats_count =3D ionic_v1_stat_normalize(cs->hdr, + PAGE_SIZE / sizeof(*cs->hdr)); + if (!hw_stats_count) { + rc =3D -ENODATA; + goto err_dma; + } + + cs->queue_stats_count =3D hw_stats_count; + + /* alloc and init array of names */ + cs->stats_hdrs =3D kcalloc(hw_stats_count, sizeof(*cs->stats_hdrs), + GFP_KERNEL); + if (!cs->stats_hdrs) { + rc =3D -ENOMEM; + goto err_dma; + } + + ionic_fill_stats_desc(cs->stats_hdrs, cs->hdr, hw_stats_count); + + return 0; + +err_cmd: + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE); +err_dma: + kfree(cs->hdr); + + return rc; +} + +static const struct ib_device_ops ionic_hw_stats_ops =3D { + .driver_id =3D RDMA_DRIVER_IONIC, + .alloc_hw_port_stats =3D ionic_alloc_hw_stats, + .get_hw_stats =3D ionic_get_hw_stats, +}; + +static const struct ib_device_ops ionic_counter_stats_ops =3D { + .counter_alloc_stats =3D ionic_counter_alloc_stats, + .counter_dealloc =3D ionic_counter_dealloc, + .counter_bind_qp =3D ionic_counter_bind_qp, + .counter_unbind_qp =3D ionic_counter_unbind_qp, + .counter_update_stats =3D ionic_counter_update_stats, +}; + +void ionic_stats_init(struct ionic_ibdev *dev) +{ + u16 stats_type =3D dev->lif_cfg.stats_type; + int rc; + + if (stats_type & IONIC_LIF_RDMA_STAT_GLOBAL) { + rc =3D ionic_init_hw_stats(dev); + if (rc) + 
ibdev_dbg(&dev->ibdev, "Failed to init hw stats\n"); + else + ib_set_device_ops(&dev->ibdev, &ionic_hw_stats_ops); + } + + if (stats_type & IONIC_LIF_RDMA_STAT_QP) { + dev->counter_stats =3D kzalloc(sizeof(*dev->counter_stats), + GFP_KERNEL); + if (!dev->counter_stats) + return; + + rc =3D ionic_alloc_counters(dev); + if (rc) { + ibdev_dbg(&dev->ibdev, "Failed to init counter stats\n"); + kfree(dev->counter_stats); + dev->counter_stats =3D NULL; + return; + } + + xa_init_flags(&dev->counter_stats->xa_counters, XA_FLAGS_ALLOC); + + ib_set_device_ops(&dev->ibdev, &ionic_counter_stats_ops); + } +} + +void ionic_stats_cleanup(struct ionic_ibdev *dev) +{ + if (dev->counter_stats) { + xa_destroy(&dev->counter_stats->xa_counters); + kfree(dev->counter_stats->hdr); + kfree(dev->counter_stats->stats_hdrs); + kfree(dev->counter_stats); + dev->counter_stats =3D NULL; + } + + kfree(dev->hw_stats); + kfree(dev->hw_stats_buf); + kfree(dev->hw_stats_hdrs); +} diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband= /hw/ionic/ionic_ibdev.c index 1fe58ca2238f..e292278fcbba 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.c +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -329,6 +329,7 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev) { ionic_kill_rdma_admin(dev, false); ib_unregister_device(&dev->ibdev); + ionic_stats_cleanup(dev); ionic_destroy_rdma_admin(dev); ionic_destroy_resids(dev); xa_destroy(&dev->qp_tbl); @@ -401,6 +402,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct io= nic_aux_dev *ionic_adev) ionic_datapath_setops(dev); ionic_controlpath_setops(dev); =20 + ionic_stats_init(dev); + rc =3D ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent); if (rc) goto err_register; @@ -408,6 +411,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct io= nic_aux_dev *ionic_adev) return dev; =20 err_register: + ionic_stats_cleanup(dev); err_admin: ionic_kill_rdma_admin(dev, false); ionic_destroy_rdma_admin(dev); diff --git 
a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h index 446cb8d5e334..803127c625cc 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.h +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -36,6 +36,7 @@ #define IONIC_PKEY_TBL_LEN 1 #define IONIC_GID_TBL_LEN 256 =20 +#define IONIC_MAX_QPID 0xffffff #define IONIC_SPEC_HIGH 8 #define IONIC_MAX_PD 1024 #define IONIC_SPEC_HIGH 8 @@ -120,6 +121,12 @@ struct ionic_ibdev { enum ionic_admin_state admin_state; =20 struct ionic_eq **eq_vec; + + struct ionic_v1_stat *hw_stats; + void *hw_stats_buf; + struct rdma_stat_desc *hw_stats_hdrs; + struct ionic_counter_stats *counter_stats; + int hw_stats_count; }; =20 struct ionic_eq { @@ -346,6 +353,18 @@ struct ionic_mr { bool created; }; =20 +struct ionic_counter_stats { + int queue_stats_count; + struct ionic_v1_stat *hdr; + struct rdma_stat_desc *stats_hdrs; + struct xarray xa_counters; +}; + +struct ionic_counter { + void *vals; + struct list_head qp_list; +}; + static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev) { return container_of(ibdev, struct ionic_ibdev, ibdev); @@ -466,6 +485,10 @@ void ionic_notify_flush_cq(struct ionic_cq *cq); /* ionic_datapath.c */ void ionic_datapath_setops(struct ionic_ibdev *dev); =20 +/* ionic_hw_stats.c */ +void ionic_stats_init(struct ionic_ibdev *dev); +void ionic_stats_cleanup(struct ionic_ibdev *dev); + /* ionic_pgtbl.c */ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); __be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va); --=20 2.34.1 From nobody Mon Dec 15 21:59:00 2025 Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2076.outbound.protection.outlook.com [40.107.93.76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CC23E22C322; Thu, 8 May 2025 05:01:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail 
From: Abhijit Gangurde
Subject: [PATCH v2 14/14] RDMA/ionic: Add Makefile/Kconfig to kernel build environment
Date: Thu, 8 May 2025 10:29:57 +0530
Message-ID: <20250508045957.2823318-15-abhijit.gangurde@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
References: <20250508045957.2823318-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add ionic to the kernel build environment.

Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 .../ethernet/pensando/ionic_rdma.rst          | 43 +++++++++++++++++++
 MAINTAINERS                                   |  9 ++++
 drivers/infiniband/Kconfig                    |  1 +
 drivers/infiniband/hw/Makefile                |  1 +
 drivers/infiniband/hw/ionic/Kconfig           | 17 ++++++++
 drivers/infiniband/hw/ionic/Makefile          |  9 ++++
 6 files changed, 80 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
 create mode 100644 drivers/infiniband/hw/ionic/Kconfig
 create mode 100644 drivers/infiniband/hw/ionic/Makefile

diff --git a/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
new file mode 100644
index 000000000000..80c4d9876d3e
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
@@ -0,0 +1,43 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+============================================================
+Linux Driver for the AMD Pensando(R) Ethernet adapter family
+============================================================
+
+AMD Pensando RDMA driver.
+Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+Contents
+========
+
+- Identifying the Adapter
+- Enabling the driver
+- Support
+
+Identifying the Adapter
+=======================
+
+See Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
+for more information on identifying the adapter.
+
+Enabling the driver
+===================
+
+The driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> InfiniBand support
+      -> AMD Pensando DSC RDMA/RoCE Support
+
+Support
+=======
+
+For general Linux rdma support, please use the rdma mailing
+list, which is monitored by AMD Pensando personnel::
+
+  linux-rdma@vger.kernel.org
diff --git a/MAINTAINERS b/MAINTAINERS
index 96b827049501..9859d9690d31 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1154,6 +1154,15 @@ F:	Documentation/networking/device_drivers/ethernet/amd/pds_core.rst
 F:	drivers/net/ethernet/amd/pds_core/
 F:	include/linux/pds/
 
+AMD PENSANDO RDMA DRIVER
+M:	Abhijit Gangurde
+M:	Allen Hubbe
+L:	linux-rdma@vger.kernel.org
+S:	Maintained
+F:	Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
+F:	drivers/infiniband/hw/ionic/
+F:	include/uapi/rdma/ionic-abi.h
+
 AMD PMC DRIVER
 M:	Shyam Sundar S K
 L:	platform-driver-x86@vger.kernel.org
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index a5827d11e934..f3035edfb742 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -85,6 +85,7 @@ source "drivers/infiniband/hw/efa/Kconfig"
 source "drivers/infiniband/hw/erdma/Kconfig"
 source "drivers/infiniband/hw/hfi1/Kconfig"
 source "drivers/infiniband/hw/hns/Kconfig"
+source "drivers/infiniband/hw/ionic/Kconfig"
 source "drivers/infiniband/hw/irdma/Kconfig"
 source "drivers/infiniband/hw/mana/Kconfig"
 source "drivers/infiniband/hw/mlx4/Kconfig"
diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
index aba96ca9bce5..c30489902653 100644
--- a/drivers/infiniband/hw/Makefile
+++ b/drivers/infiniband/hw/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns/
 obj-$(CONFIG_INFINIBAND_QEDR) += qedr/
 obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re/
 obj-$(CONFIG_INFINIBAND_ERDMA) += erdma/
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic/
diff --git a/drivers/infiniband/hw/ionic/Kconfig b/drivers/infiniband/hw/ionic/Kconfig
new file mode 100644
index 000000000000..023a7fcdacb8
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+config INFINIBAND_IONIC
+	tristate "AMD Pensando DSC RDMA/RoCE Support"
+	depends on NETDEVICES && ETHERNET && PCI && INET && 64BIT
+	select NET_VENDOR_PENSANDO
+	select IONIC
+	help
+	  This enables RDMA/RoCE support for the AMD Pensando family of
+	  Distributed Services Cards (DSCs).
+
+	  To learn more, visit our website at
+	  .
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called ionic_rdma.
diff --git a/drivers/infiniband/hw/ionic/Makefile b/drivers/infiniband/hw/ionic/Makefile
new file mode 100644
index 000000000000..d1588d4cdb0f
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+
+ccflags-y := -I $(srctree)/drivers/net/ethernet/pensando/ionic
+
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic_rdma.o
+
+ionic_rdma-y := \
+	ionic_ibdev.o ionic_lif_cfg.o ionic_queue.o ionic_pgtbl.o ionic_res.o \
+	ionic_admin.o ionic_controlpath.o ionic_datapath.o ionic_hw_stats.o
-- 
2.34.1
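
[Editor's note: the following standalone sketch is not part of the patch series. It illustrates the stat-descriptor scheme used by `ionic_v1_stat_type()`, `ionic_v1_stat_off()`, and the `__ionic_v1_stat_validate()` bounds check in patch 13: the device returns an array of descriptors, each packing a value type into the high bits of `type_off` and a byte offset into the low bits, and the driver reads each counter out of a separate DMA'd values buffer at that offset. The shift value of 28 and the helper names here are assumptions for illustration; the real `IONIC_V1_STAT_TYPE_SHIFT` is defined earlier in `ionic_fw.h`.]

```c
/*
 * Hypothetical userspace sketch of the ionic stat descriptor scheme.
 * The shift value (28) is assumed for illustration only.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define STAT_TYPE_SHIFT 28u                      /* assumed, see note above */
#define STAT_OFF_MASK   ((1u << STAT_TYPE_SHIFT) - 1)

/* High bits of type_off select the value's type... */
static unsigned int stat_type(uint32_t type_off)
{
	return type_off >> STAT_TYPE_SHIFT;
}

/* ...and the low bits give its byte offset in the values buffer. */
static unsigned int stat_off(uint32_t type_off)
{
	return type_off & STAT_OFF_MASK;
}

/*
 * Bounds- and alignment-checked read of a 64-bit counter, mirroring what
 * __ionic_v1_stat_validate() guards against: an offset that is out of
 * range or misaligned yields the ~0 sentinel instead of a bad access.
 */
static uint64_t stat_val_u64(const void *buf, size_t len, unsigned int off)
{
	uint64_t v;

	if (off + sizeof(v) > len || off % sizeof(v) != 0)
		return ~0ull;
	memcpy(&v, (const char *)buf + off, sizeof(v));
	return v;   /* native-endian read; the driver also byte-swaps */
}
```

The sentinel-on-invalid design lets the driver fill every `hw_stats->value[]` slot unconditionally: a malformed descriptor from firmware degrades to an obviously-bogus counter rather than an out-of-bounds read of the PAGE_SIZE values buffer.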