From: Abhijit Gangurde
Subject: [PATCH v4 01/14] net: ionic: Create an auxiliary device for rdma driver
Date: Wed, 23 Jul 2025 23:01:36 +0530
Message-ID: <20250723173149.2568776-2-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
To support an RDMA-capable Ethernet device, create an auxiliary device in
the ionic Ethernet driver. The RDMA device is modeled as an auxiliary
device of the Ethernet device.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 drivers/net/ethernet/pensando/Kconfig         |  1 +
 drivers/net/ethernet/pensando/ionic/Makefile  |  2 +-
 .../net/ethernet/pensando/ionic/ionic_api.h   | 21 ++++
 .../net/ethernet/pensando/ionic/ionic_aux.c   | 95 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.h   | 10 ++
 .../ethernet/pensando/ionic/ionic_bus_pci.c   |  5 +
 .../net/ethernet/pensando/ionic/ionic_lif.c   |  7 ++
 .../net/ethernet/pensando/ionic/ionic_lif.h   |  3 +
 8 files changed, 143 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_api.h
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.c
 create mode 100644 drivers/net/ethernet/pensando/ionic/ionic_aux.h

diff --git a/drivers/net/ethernet/pensando/Kconfig b/drivers/net/ethernet/pensando/Kconfig
index 01fe76786f77..c99758adf3ad 100644
--- a/drivers/net/ethernet/pensando/Kconfig
+++ b/drivers/net/ethernet/pensando/Kconfig
@@ -24,6 +24,7 @@ config IONIC
 	select NET_DEVLINK
 	select DIMLIB
 	select PAGE_POOL
+	select AUXILIARY_BUS
 	help
 	  This enables the support for the Pensando family of Ethernet
 	  adapters.  More specific information on this driver can be

diff --git a/drivers/net/ethernet/pensando/ionic/Makefile b/drivers/net/ethernet/pensando/ionic/Makefile
index 4e7642a2d25f..a598972fef41 100644
--- a/drivers/net/ethernet/pensando/ionic/Makefile
+++ b/drivers/net/ethernet/pensando/ionic/Makefile
@@ -5,5 +5,5 @@ obj-$(CONFIG_IONIC) := ionic.o

 ionic-y := ionic_main.o ionic_bus_pci.o ionic_devlink.o ionic_dev.o \
 	   ionic_debugfs.o ionic_lif.o ionic_rx_filter.o ionic_ethtool.o \
-	   ionic_txrx.o ionic_stats.o ionic_fw.o
+	   ionic_txrx.o ionic_stats.o ionic_fw.o ionic_aux.o
 ionic-$(CONFIG_PTP_1588_CLOCK) += ionic_phc.o

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
new file mode 100644
index 000000000000..f9fcd1b67b35
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_API_H_
+#define _IONIC_API_H_
+
+#include
+
+/**
+ * struct ionic_aux_dev - Auxiliary device information
+ * @lif: Logical interface
+ * @idx: Index identifier
+ * @adev: Auxiliary device
+ */
+struct ionic_aux_dev {
+	struct ionic_lif *lif;
+	int idx;
+	struct auxiliary_device adev;
+};
+
+#endif /* _IONIC_API_H_ */

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
new file mode 100644
index 000000000000..781218c3feba
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include "ionic.h"
+#include "ionic_lif.h"
+#include "ionic_aux.h"
+
+static DEFINE_IDA(aux_ida);
+
+static void ionic_auxbus_release(struct device *dev)
+{
+	struct ionic_aux_dev *ionic_adev;
+
+	ionic_adev = container_of(dev, struct ionic_aux_dev, adev.dev);
+	kfree(ionic_adev);
+}
+
+int ionic_auxbus_register(struct ionic_lif *lif)
+{
+	struct ionic_aux_dev *ionic_adev;
+	struct auxiliary_device *aux_dev;
+	int err, id;
+
+	if (!(le64_to_cpu(lif->ionic->ident.lif.capabilities) & IONIC_LIF_CAP_RDMA))
+		return 0;
+
+	ionic_adev = kzalloc(sizeof(*ionic_adev), GFP_KERNEL);
+	if (!ionic_adev)
+		return -ENOMEM;
+
+	aux_dev = &ionic_adev->adev;
+
+	id = ida_alloc_range(&aux_ida, 0, INT_MAX, GFP_KERNEL);
+	if (id < 0) {
+		dev_err(lif->ionic->dev, "Failed to allocate aux id: %d\n",
+			id);
+		err = id;
+		goto err_adev_free;
+	}
+
+	aux_dev->id = id;
+	aux_dev->name = "rdma";
+	aux_dev->dev.parent = &lif->ionic->pdev->dev;
+	aux_dev->dev.release = ionic_auxbus_release;
+	ionic_adev->lif = lif;
+	err = auxiliary_device_init(aux_dev);
+	if (err) {
+		dev_err(lif->ionic->dev, "Failed to initialize %s aux device: %d\n",
+			aux_dev->name, err);
+		goto err_ida_free;
+	}
+
+	err = auxiliary_device_add(aux_dev);
+	if (err) {
+		dev_err(lif->ionic->dev, "Failed to add %s aux device: %d\n",
+			aux_dev->name, err);
+		goto err_aux_uninit;
+	}
+
+	lif->ionic_adev = ionic_adev;
+
+	return 0;
+
+err_aux_uninit:
+	auxiliary_device_uninit(aux_dev);
+err_ida_free:
+	ida_free(&aux_ida, id);
+err_adev_free:
+	kfree(ionic_adev);
+
+	return err;
+}
+
+void ionic_auxbus_unregister(struct ionic_lif *lif)
+{
+	struct auxiliary_device *aux_dev;
+	int id;
+
+	mutex_lock(&lif->adev_lock);
+	if (!lif->ionic_adev)
+		goto out;
+
+	aux_dev = &lif->ionic_adev->adev;
+	id = aux_dev->id;
+
+	auxiliary_device_delete(aux_dev);
+	auxiliary_device_uninit(aux_dev);
+	ida_free(&aux_ida, id);
+
+	lif->ionic_adev = NULL;
+
+out:
+	mutex_unlock(&lif->adev_lock);
+}

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.h b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
new file mode 100644
index 000000000000..f5528a9f187d
--- /dev/null
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_AUX_H_
+#define _IONIC_AUX_H_
+
+int ionic_auxbus_register(struct ionic_lif *lif);
+void ionic_auxbus_unregister(struct ionic_lif *lif);
+
+#endif /* _IONIC_AUX_H_ */

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index 4c377bdc62c8..bb75044dfb82 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -9,6 +9,7 @@
 #include "ionic.h"
 #include "ionic_bus.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_debugfs.h"

 /* Supported devices */
@@ -375,6 +376,8 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out_deregister_devlink;
 	}

+	ionic_auxbus_register(ionic->lif);
+
 	mod_timer(&ionic->watchdog_timer,
 		  round_jiffies(jiffies + ionic->watchdog_period));
 	ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);
@@ -415,6 +418,7 @@ static void ionic_remove(struct pci_dev *pdev)

 	if (ionic->lif->doorbell_wa)
 		cancel_delayed_work_sync(&ionic->doorbell_check_dwork);
+	ionic_auxbus_unregister(ionic->lif);
 	ionic_lif_unregister(ionic->lif);
 	ionic_devlink_unregister(ionic);
 	ionic_lif_deinit(ionic->lif);
@@ -444,6 +448,7 @@ static void ionic_reset_prepare(struct pci_dev *pdev)
 	timer_delete_sync(&ionic->watchdog_timer);
 	cancel_work_sync(&lif->deferred.work);

+	ionic_auxbus_unregister(ionic->lif);
 	mutex_lock(&lif->queue_lock);
 	ionic_stop_queues_reconfig(lif);
 	ionic_txrx_free(lif);

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 7707a9e53c43..146659f6862a 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -19,6 +19,7 @@
 #include "ionic_bus.h"
 #include "ionic_dev.h"
 #include "ionic_lif.h"
+#include "ionic_aux.h"
 #include "ionic_txrx.h"
 #include "ionic_ethtool.h"
 #include "ionic_debugfs.h"
@@ -3293,6 +3294,7 @@ int ionic_lif_alloc(struct ionic *ionic)

 	mutex_init(&lif->queue_lock);
 	mutex_init(&lif->config_lock);
+	mutex_init(&lif->adev_lock);

 	spin_lock_init(&lif->adminq_lock);

@@ -3349,6 +3351,7 @@ int ionic_lif_alloc(struct ionic *ionic)
 	lif->info = NULL;
 	lif->info_pa = 0;
 err_out_free_mutex:
+	mutex_destroy(&lif->adev_lock);
 	mutex_destroy(&lif->config_lock);
 	mutex_destroy(&lif->queue_lock);
 err_out_free_netdev:
@@ -3384,6 +3387,7 @@ static void ionic_lif_handle_fw_down(struct ionic_lif *lif)

 	netif_device_detach(lif->netdev);

+	ionic_auxbus_unregister(ionic->lif);
 	mutex_lock(&lif->queue_lock);
 	if (test_bit(IONIC_LIF_F_UP, lif->state)) {
 		dev_info(ionic->dev, "Surprise FW stop, stopping queues\n");
@@ -3446,6 +3450,8 @@ int ionic_restart_lif(struct ionic_lif *lif)
 	netif_device_attach(lif->netdev);
 	ionic_queue_doorbell_check(ionic, IONIC_NAPI_DEADLINE);

+	ionic_auxbus_register(ionic->lif);
+
 	return 0;

 err_txrx_free:
@@ -3532,6 +3538,7 @@ void ionic_lif_free(struct ionic_lif *lif)

 	mutex_destroy(&lif->config_lock);
 	mutex_destroy(&lif->queue_lock);
+	mutex_destroy(&lif->adev_lock);

 	/* free netdev & lif */
 	ionic_debugfs_del_lif(lif);

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
index e01756fb7fdd..43bdd0fb8733 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include "ionic_rx_filter.h"
+#include "ionic_api.h"

 #define IONIC_ADMINQ_LENGTH	16	/* must be a power of two */
 #define IONIC_NOTIFYQ_LENGTH	64	/* must be a power of two */
@@ -225,6 +226,8 @@ struct ionic_lif {
 	dma_addr_t info_pa;
 	u32 info_sz;
 	struct ionic_qtype_info qtype_info[IONIC_QTYPE_MAX];
+	struct ionic_aux_dev *ionic_adev;
+	struct mutex adev_lock;		/* lock for aux_dev actions */

 	u8 rss_hash_key[IONIC_RSS_HASH_KEY_SIZE];
 	u8 *rss_ind_tbl;
-- 
2.43.0
From: Abhijit Gangurde
Subject: [PATCH v4 02/14] net: ionic: Update LIF identity with additional RDMA capabilities
Date: Wed, 23 Jul 2025 23:01:37 +0530
Message-ID: <20250723173149.2568776-3-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>

Firmware sends the RDMA capability in a response to the LIF_IDENTIFY device
command. Update the LIF identity with the additional RDMA capabilities used
by the driver and firmware.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
Reviewed-by: Simon Horman
---
 .../net/ethernet/pensando/ionic/ionic_if.h    | 29 +++++++++++++++----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index f1ddbe9994a3..59d6e97b3986 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -494,6 +494,16 @@ union ionic_lif_config {
 	__le32 words[64];
 };

+/**
+ * enum ionic_lif_rdma_cap_stats - LIF stat type
+ * @IONIC_LIF_RDMA_STAT_GLOBAL: Global stats
+ * @IONIC_LIF_RDMA_STAT_QP: Queue pair stats
+ */
+enum ionic_lif_rdma_cap_stats {
+	IONIC_LIF_RDMA_STAT_GLOBAL = BIT(0),
+	IONIC_LIF_RDMA_STAT_QP = BIT(1),
+};
+
 /**
  * struct ionic_lif_identity - LIF identity information (type-specific)
  *
@@ -513,10 +523,10 @@ union ionic_lif_config {
  * @eth.config: LIF config struct with features, mtu, mac, q counts
  *
  * @rdma: RDMA identify structure
- * @rdma.version: RDMA version of opcodes and queue descriptors
+ * @rdma.version: RDMA capability version
  * @rdma.qp_opcodes: Number of RDMA queue pair opcodes supported
  * @rdma.admin_opcodes: Number of RDMA admin opcodes supported
- * @rdma.rsvd: reserved byte(s)
+ * @rdma.minor_version: RDMA capability minor version
  * @rdma.npts_per_lif: Page table size per LIF
  * @rdma.nmrs_per_lif: Number of memory regions per LIF
  * @rdma.nahs_per_lif: Number of address handles per LIF
@@ -526,12 +536,17 @@ union ionic_lif_config {
  * @rdma.rrq_stride: Remote RQ work request stride
  * @rdma.rsq_stride: Remote SQ work request stride
  * @rdma.dcqcn_profiles: Number of DCQCN profiles
- * @rdma.rsvd_dimensions: reserved byte(s)
+ * @rdma.udma_shift: Log2 number of queues per queue group
+ * @rdma.rsvd_dimensions: Reserved byte
+ * @rdma.page_size_cap: Supported page sizes
  * @rdma.aq_qtype: RDMA Admin Qtype
  * @rdma.sq_qtype: RDMA Send Qtype
  * @rdma.rq_qtype: RDMA Receive Qtype
  * @rdma.cq_qtype: RDMA Completion Qtype
  * @rdma.eq_qtype: RDMA Event Qtype
+ * @rdma.stats_type: Supported statistics type
+ *		     (enum ionic_lif_rdma_cap_stats)
+ * @rdma.rsvd1: Reserved byte(s)
  * @words: word access to struct contents
  */
 union ionic_lif_identity {
@@ -557,7 +572,7 @@
 			u8 version;
 			u8 qp_opcodes;
 			u8 admin_opcodes;
-			u8 rsvd;
+			u8 minor_version;
 			__le32 npts_per_lif;
 			__le32 nmrs_per_lif;
 			__le32 nahs_per_lif;
@@ -567,12 +582,16 @@ union ionic_lif_identity {
 			u8 rrq_stride;
 			u8 rsq_stride;
 			u8 dcqcn_profiles;
-			u8 rsvd_dimensions[10];
+			u8 udma_shift;
+			u8 rsvd_dimensions;
+			__le64 page_size_cap;
 			struct ionic_lif_logical_qtype aq_qtype;
 			struct ionic_lif_logical_qtype sq_qtype;
 			struct ionic_lif_logical_qtype rq_qtype;
 			struct ionic_lif_logical_qtype cq_qtype;
 			struct ionic_lif_logical_qtype eq_qtype;
+			__le16 stats_type;
+			u8 rsvd1[162];
 		} __packed rdma;
 	} __packed;
 	__le32 words[478];
-- 
2.43.0
From: Abhijit Gangurde
Subject: [PATCH v4 03/14] net: ionic: Export the APIs from net driver to support device commands
Date: Wed, 23 Jul 2025 23:01:38 +0530
Message-ID: <20250723173149.2568776-4-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
RDMA driver needs to establish admin queues to support admin operations.
Export the APIs to send device commands for the RDMA driver.

Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 drivers/net/ethernet/pensando/ionic/ionic.h   |  7 ----
 .../net/ethernet/pensando/ionic/ionic_api.h   | 36 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h   |  1 +
 .../net/ethernet/pensando/ionic/ionic_main.c  |  4 ++-
 4 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic.h b/drivers/net/ethernet/pensando/ionic/ionic.h
index 04f00ea94230..85198e6a806e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic.h
@@ -65,16 +65,9 @@ struct ionic {
 	int watchdog_period;
 };
 
-struct ionic_admin_ctx {
-	struct completion work;
-	union ionic_adminq_cmd cmd;
-	union ionic_adminq_comp comp;
-};
-
 int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
 		      const int err, const bool do_msg);
-int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
 void ionic_adminq_netdev_err_print(struct ionic_lif *lif, u8 opcode,
 				   u8 status, int err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index f9fcd1b67b35..d75902ca34af 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -5,6 +5,8 @@
 #define _IONIC_API_H_
 
 #include
+#include "ionic_if.h"
+#include "ionic_regs.h"
 
 /**
  * struct ionic_aux_dev - Auxiliary device information
@@ -18,4 +20,38 @@ struct ionic_aux_dev {
 	struct auxiliary_device adev;
 };
 
+/**
+ * struct ionic_admin_ctx - Admin command context
+ * @work: Work completion wait queue element
+ * @cmd: Admin command (64B) to be copied to the queue
+ * @comp: Admin completion (16B) copied from the queue
+ */
+struct ionic_admin_ctx {
+	struct completion work;
+	union ionic_adminq_cmd cmd;
+	union ionic_adminq_comp comp;
+};
+
+/**
+ * ionic_adminq_post_wait - Post an admin command and wait for response
+ * @lif: Logical interface
+ * @ctx: API admin command context
+ *
+ * Post the command to an admin queue in the ethernet driver. If this command
+ * succeeds, then the command has been posted, but that does not indicate a
+ * completion. If this command returns success, then the completion callback
+ * will eventually be called.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
+
+/**
+ * ionic_error_to_errno - Transform ionic_if errors to os errno
+ * @code: Ionic error number
+ *
+ * Return: Negative OS error number or zero
+ */
+int ionic_error_to_errno(enum ionic_status_code code);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index c8c710cfe70c..bc26eb8f5779 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -12,6 +12,7 @@
 
 #include "ionic_if.h"
 #include "ionic_regs.h"
+#include "ionic_api.h"
 
 #define IONIC_MAX_TX_DESC 8192
 #define IONIC_MAX_RX_DESC 16384
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index daf1e82cb76b..60fc232338b9 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -72,7 +72,7 @@ static const char *ionic_error_to_str(enum ionic_status_code code)
 	}
 }
 
-static int ionic_error_to_errno(enum ionic_status_code code)
+int ionic_error_to_errno(enum ionic_status_code code)
 {
 	switch (code) {
 	case IONIC_RC_SUCCESS:
@@ -114,6 +114,7 @@ static int ionic_error_to_errno(enum ionic_status_code code)
 		return -EIO;
 	}
 }
+EXPORT_SYMBOL_NS(ionic_error_to_errno, "NET_IONIC");
 
 static const char *ionic_opcode_to_str(enum ionic_cmd_opcode opcode)
 {
@@ -480,6 +481,7 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
 	return __ionic_adminq_post_wait(lif, ctx, true);
 }
+EXPORT_SYMBOL_NS(ionic_adminq_post_wait, "NET_IONIC");
 
 int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
 {
-- 
2.43.0

From nobody Mon Oct 6 06:32:11 2025
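[Editor's illustration] The patch above exports ionic_error_to_errno(), which translates device status codes into negative errno values through a switch statement. A standalone sketch of that translation pattern follows; the status-code names and the mapping shown here are simplified stand-ins, not the driver's actual table:

```c
#include <errno.h>

/* Illustrative status codes standing in for enum ionic_status_code. */
enum example_status_code {
	EXAMPLE_RC_SUCCESS = 0,
	EXAMPLE_RC_EVERSION = 1,	/* version mismatch */
	EXAMPLE_RC_EOPCODE = 2,		/* unknown opcode */
	EXAMPLE_RC_ERROR = 3,		/* generic device error */
};

/* Map a device status code onto a negative errno, the way
 * ionic_error_to_errno() does for the real ionic codes. */
static int example_error_to_errno(enum example_status_code code)
{
	switch (code) {
	case EXAMPLE_RC_SUCCESS:
		return 0;
	case EXAMPLE_RC_EVERSION:
	case EXAMPLE_RC_EOPCODE:
		return -EINVAL;
	default:
		return -EIO;
	}
}
```

Returning negative errno lets the RDMA driver propagate admin-command failures directly to callers without knowing the device's status-code space.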
From: Abhijit Gangurde
Subject: [PATCH v4 04/14] net: ionic: Provide RDMA reset support for the RDMA driver
Date: Wed, 23 Jul 2025 23:01:39 +0530
Message-ID: <20250723173149.2568776-5-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>

The Ethernet driver holds the privilege to execute the device commands.
Export the function to execute the RDMA reset command for use by the
RDMA driver.
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h |  9 ++++++++
 .../net/ethernet/pensando/ionic/ionic_aux.c | 22 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index d75902ca34af..e0b766d1769f 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -54,4 +54,13 @@ int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx);
  */
 int ionic_error_to_errno(enum ionic_status_code code);
 
+/**
+ * ionic_request_rdma_reset - request reset or disable the device or lif
+ * @lif: Logical interface
+ *
+ * The reset is triggered asynchronously. It will wait until reset request
+ * completes or times out.
+ */
+void ionic_request_rdma_reset(struct ionic_lif *lif);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_aux.c b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
index 781218c3feba..6cd4c718836c 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_aux.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_aux.c
@@ -93,3 +93,25 @@ void ionic_auxbus_unregister(struct ionic_lif *lif)
 out:
 	mutex_unlock(&lif->adev_lock);
 }
+
+void ionic_request_rdma_reset(struct ionic_lif *lif)
+{
+	struct ionic *ionic = lif->ionic;
+	int err;
+
+	union ionic_dev_cmd cmd = {
+		.cmd.opcode = IONIC_CMD_RDMA_RESET_LIF,
+		.cmd.lif_index = cpu_to_le16(lif->index),
+	};
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_go(&ionic->idev, &cmd);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err)
+		pr_warn("%s request_reset: error %d\n", __func__, err);
+}
+EXPORT_SYMBOL_NS(ionic_request_rdma_reset, "NET_IONIC");
-- 
2.43.0

From nobody Mon Oct 6 06:32:11 2025
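[Editor's illustration] ionic_request_rdma_reset() above follows the driver's device-command convention: build the command, take the shared ionic->dev_cmd_lock, post the command and wait for it with a timeout, then drop the lock. A minimal standalone sketch of that serialize-and-wait shape; all names here (fake_dev, fake_request_rdma_reset, ...) are hypothetical stand-ins, not the kernel API:

```c
#include <errno.h>
#include <stdbool.h>

enum { FAKE_CMD_RDMA_RESET_LIF = 42 };	/* stand-in opcode */

struct fake_dev {
	bool cmd_lock_held;	/* stands in for ionic->dev_cmd_lock */
	int last_opcode;	/* last command posted to the "device" */
};

/* Pretend to post the command and poll it to completion;
 * the real driver returns -ETIMEDOUT if the device stalls. */
static int fake_dev_cmd_go_and_wait(struct fake_dev *dev, int opcode)
{
	dev->last_opcode = opcode;
	return 0;
}

/* One command at a time: post the reset while holding the
 * device-command lock, exactly one lock/unlock pair per command. */
static int fake_request_rdma_reset(struct fake_dev *dev)
{
	int err;

	if (dev->cmd_lock_held)
		return -EBUSY;	/* real code blocks on the mutex instead */
	dev->cmd_lock_held = true;

	err = fake_dev_cmd_go_and_wait(dev, FAKE_CMD_RDMA_RESET_LIF);

	dev->cmd_lock_held = false;
	return err;
}
```

Holding one lock across post-and-wait keeps every device command (reset included) strictly serialized against the Ethernet driver's own commands, which is why the RDMA driver must go through this export rather than touching the device registers itself.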
From: Abhijit Gangurde
Subject: [PATCH v4 05/14] net: ionic: Provide interrupt allocation support for the RDMA driver
Date: Wed, 23 Jul 2025 23:01:40 +0530
Message-ID: <20250723173149.2568776-6-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

RDMA driver needs an interrupt for an event queue. Export function from
net driver to allocate an interrupt.
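As a sketch of the bookkeeping that ionic_intr_alloc() and ionic_intr_free() perform below, here is a minimal userspace model of the first-fit bitmap index allocator; the names NINTRS, intr_index_alloc() and intr_index_free() are hypothetical stand-ins for illustration, not part of the driver:

```c
#include <assert.h>

/* Hypothetical stand-in for ionic->nintrs. */
#define NINTRS 8

/* Hypothetical stand-in for the ionic->intrs bitmap, one bit per index. */
static unsigned long intrs_bitmap;

/* Models ionic_intr_alloc()'s reservation step: find the first clear
 * bit, set it, and return its index; -1 models the -ENOSPC case when
 * every index is in use.
 */
static int intr_index_alloc(void)
{
	int i;

	for (i = 0; i < NINTRS; i++) {
		if (!(intrs_bitmap & (1UL << i))) {
			intrs_bitmap |= 1UL << i;
			return i;
		}
	}
	return -1;
}

/* Models ionic_intr_free(): bounds-check the index, then clear its bit
 * so the index can be reserved again.
 */
static void intr_index_free(int index)
{
	if (index >= 0 && index < NINTRS)
		intrs_bitmap &= ~(1UL << index);
}
```

The real driver layers ionic_bus_get_irq() on top of this reservation, and clears the bit again if no IRQ vector is available for the reserved index.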
Reviewed-by: Shannon Nelson
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h   | 43 +++++++++++++++++++
 .../net/ethernet/pensando/ionic/ionic_dev.h   | 13 ------
 .../net/ethernet/pensando/ionic/ionic_lif.c   | 38 ++++++++--------
 3 files changed, 62 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index e0b766d1769f..5fd23aa8c5a1 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -32,6 +32,29 @@ struct ionic_admin_ctx {
 	union ionic_adminq_comp comp;
 };
 
+#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
+#define IONIC_INTR_NAME_MAX_SZ		32
+
+/**
+ * struct ionic_intr_info - Interrupt information
+ * @name: Name identifier
+ * @rearm_count: Interrupt rearm count
+ * @index: Interrupt index position
+ * @vector: Interrupt number
+ * @dim_coal_hw: Interrupt coalesce value in hardware units
+ * @affinity_mask: CPU affinity mask
+ * @aff_notify: context for notification of IRQ affinity changes
+ */
+struct ionic_intr_info {
+	char name[IONIC_INTR_NAME_MAX_SZ];
+	u64 rearm_count;
+	unsigned int index;
+	unsigned int vector;
+	u32 dim_coal_hw;
+	cpumask_var_t *affinity_mask;
+	struct irq_affinity_notify aff_notify;
+};
+
 /**
  * ionic_adminq_post_wait - Post an admin command and wait for response
  * @lif: Logical interface
@@ -63,4 +86,24 @@ int ionic_error_to_errno(enum ionic_status_code code);
  */
 void ionic_request_rdma_reset(struct ionic_lif *lif);
 
+/**
+ * ionic_intr_alloc - Reserve a device interrupt
+ * @lif: Logical interface
+ * @intr: Reserved ionic interrupt structure
+ *
+ * Reserve an interrupt index and get irq number for that index.
+ *
+ * Return: zero or negative error status
+ */
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
+
+/**
+ * ionic_intr_free - Release a device interrupt index
+ * @lif: Logical interface
+ * @intr: Interrupt index
+ *
+ * Mark the interrupt index unused so that it can be reserved again.
+ */
+void ionic_intr_free(struct ionic_lif *lif, int intr);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index bc26eb8f5779..68cf4da3c6b3 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -274,19 +274,6 @@ struct ionic_queue {
 	char name[IONIC_QUEUE_NAME_MAX_SZ];
 } ____cacheline_aligned_in_smp;
 
-#define IONIC_INTR_INDEX_NOT_ASSIGNED	-1
-#define IONIC_INTR_NAME_MAX_SZ		32
-
-struct ionic_intr_info {
-	char name[IONIC_INTR_NAME_MAX_SZ];
-	u64 rearm_count;
-	unsigned int index;
-	unsigned int vector;
-	u32 dim_coal_hw;
-	cpumask_var_t *affinity_mask;
-	struct irq_affinity_notify aff_notify;
-};
-
 struct ionic_cq {
 	struct ionic_lif *lif;
 	struct ionic_queue *bound_q;
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 146659f6862a..f89b458bd20a 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -244,29 +244,36 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq)
 			       0, intr->name, &qcq->napi);
 }
 
-static int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
+int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
 {
 	struct ionic *ionic = lif->ionic;
-	int index;
+	int index, err;
 
 	index = find_first_zero_bit(ionic->intrs, ionic->nintrs);
-	if (index == ionic->nintrs) {
-		netdev_warn(lif->netdev, "%s: no intr, index=%d nintrs=%d\n",
-			    __func__, index, ionic->nintrs);
+	if (index == ionic->nintrs)
 		return -ENOSPC;
-	}
 
 	set_bit(index, ionic->intrs);
 	ionic_intr_init(&ionic->idev, intr, index);
 
+	err = ionic_bus_get_irq(ionic, intr->index);
+	if (err < 0) {
+		clear_bit(index, ionic->intrs);
+		return err;
+	}
+
+	intr->vector = err;
+
 	return 0;
 }
+EXPORT_SYMBOL_NS(ionic_intr_alloc, "NET_IONIC");
 
-static void ionic_intr_free(struct ionic *ionic, int index)
+void ionic_intr_free(struct ionic_lif *lif, int index)
 {
-	if (index != IONIC_INTR_INDEX_NOT_ASSIGNED && index < ionic->nintrs)
-		clear_bit(index, ionic->intrs);
+	if (index != IONIC_INTR_INDEX_NOT_ASSIGNED && index < lif->ionic->nintrs)
+		clear_bit(index, lif->ionic->intrs);
 }
+EXPORT_SYMBOL_NS(ionic_intr_free, "NET_IONIC");
 
 static void ionic_irq_aff_notify(struct irq_affinity_notify *notify,
 				 const cpumask_t *mask)
@@ -401,7 +408,7 @@ static void ionic_qcq_intr_free(struct ionic_lif *lif, struct ionic_qcq *qcq)
 	irq_set_affinity_hint(qcq->intr.vector, NULL);
 	devm_free_irq(lif->ionic->dev, qcq->intr.vector, &qcq->napi);
 	qcq->intr.vector = 0;
-	ionic_intr_free(lif->ionic, qcq->intr.index);
+	ionic_intr_free(lif, qcq->intr.index);
 	qcq->intr.index = IONIC_INTR_INDEX_NOT_ASSIGNED;
 }
 
@@ -511,13 +518,6 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq
 		goto err_out;
 	}
 
-	err = ionic_bus_get_irq(lif->ionic, qcq->intr.index);
-	if (err < 0) {
-		netdev_warn(lif->netdev, "no vector for %s: %d\n",
-			    qcq->q.name, err);
-		goto err_out_free_intr;
-	}
-	qcq->intr.vector = err;
 	ionic_intr_mask_assert(lif->ionic->idev.intr_ctrl, qcq->intr.index,
 			       IONIC_INTR_MASK_SET);
 
@@ -546,7 +546,7 @@ static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq
 	return 0;
 
 err_out_free_intr:
-	ionic_intr_free(lif->ionic, qcq->intr.index);
+	ionic_intr_free(lif, qcq->intr.index);
 err_out:
 	return err;
 }
@@ -741,7 +741,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
 err_out_free_irq:
 	if (flags & IONIC_QCQ_F_INTR) {
 		devm_free_irq(dev, new->intr.vector, &new->napi);
-		ionic_intr_free(lif->ionic, new->intr.index);
+		ionic_intr_free(lif, new->intr.index);
 	}
 err_out_free_page_pool:
 	page_pool_destroy(new->q.page_pool);
-- 
2.43.0

From nobody Mon Oct 6 06:32:11 2025
From: Abhijit Gangurde
Subject: [PATCH v4 06/14] net: ionic: Provide doorbell and CMB region information
Date: Wed, 23 Jul 2025 23:01:41 +0530
Message-ID: <20250723173149.2568776-7-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The RDMA device needs information of controller memory bar and doorbell
capability to share with user context. Discover CMB regions and express
doorbell capabilities on device init.

Reviewed-by: Shannon Nelson
Co-developed-by: Pablo Cascón
Signed-off-by: Pablo Cascón
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
 .../net/ethernet/pensando/ionic/ionic_api.h   |  22 ++
 .../ethernet/pensando/ionic/ionic_bus_pci.c   |   2 +
 .../net/ethernet/pensando/ionic/ionic_dev.c   | 270 +++++++++++++++++-
 .../net/ethernet/pensando/ionic/ionic_dev.h   |  14 +-
 .../net/ethernet/pensando/ionic/ionic_if.h    |  89 ++++++
 .../net/ethernet/pensando/ionic/ionic_lif.c   |   2 +-
 6 files changed, 381 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/pensando/ionic/ionic_api.h b/drivers/net/ethernet/pensando/ionic/ionic_api.h
index 5fd23aa8c5a1..bd88666836b8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_api.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_api.h
@@ -106,4 +106,26 @@ int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
  */
 void ionic_intr_free(struct ionic_lif *lif, int intr);
 
+/**
+ * ionic_get_cmb - Reserve cmb pages
+ * @lif: Logical interface
+ * @pgid: First page index
+ * @pgaddr: First page bus addr (contiguous)
+ * @order: Log base two number of pages (PAGE_SIZE)
+ * @stride_log2: Size of stride to determine CMB pool
+ * @expdb: Will be set to true if this CMB region has expdb enabled
+ *
+ * Return: zero or negative error status
+ */
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb);
+
+/**
+ * ionic_put_cmb - Release cmb pages
+ * @lif: Logical interface
+ * @pgid: First page index
+ * @order: Log base two number of pages (PAGE_SIZE)
+ */
+void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+
 #endif /* _IONIC_API_H_ */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index bb75044dfb82..4f13dc908ed8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -272,6 +272,8 @@ static int ionic_setup_one(struct ionic *ionic)
 	}
 	ionic_debugfs_add_ident(ionic);
 
+	ionic_map_cmb(ionic);
+
 	err = ionic_init(ionic);
 	if (err) {
 		dev_err(dev, "Cannot init device: %d, aborting\n", err);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
index 18b9c8a810ae..60c3d3e69098 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
@@ -199,13 +199,201 @@ void ionic_init_devinfo(struct ionic *ionic)
 	dev_dbg(ionic->dev, "fw_version %s\n", idev->dev_info.fw_version);
 }
 
+static void ionic_map_disc_cmb(struct ionic *ionic)
+{
+	struct ionic_identity *ident = &ionic->ident;
+	u32 length_reg0, length, offset, num_regions;
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int err, sz, i;
+	u64 end;
+
+	mutex_lock(&ionic->dev_cmd_lock);
+
+	ionic_dev_cmd_discover_cmb(idev);
+	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+	if (!err) {
+		sz = min(sizeof(ident->cmb_layout),
+			 sizeof(idev->dev_cmd_regs->data));
+		memcpy_fromio(&ident->cmb_layout,
+			      &idev->dev_cmd_regs->data, sz);
+	}
+	mutex_unlock(&ionic->dev_cmd_lock);
+
+	if (err) {
+		dev_warn(dev, "Cannot discover CMB layout, disabling CMB\n");
+		return;
+	}
+
+	bar += 2;
+
+	num_regions = le32_to_cpu(ident->cmb_layout.num_regions);
+	if (!num_regions || num_regions > IONIC_MAX_CMB_REGIONS) {
+		dev_warn(dev, "Invalid number of CMB entries (%d)\n",
+			 num_regions);
+		return;
+	}
+
+	dev_dbg(dev, "ionic_cmb_layout_identity num_regions %d flags %x:\n",
+		num_regions, ident->cmb_layout.flags);
+
+	for (i = 0; i < num_regions; i++) {
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+		length = le32_to_cpu(ident->cmb_layout.region[i].length);
+		end = offset + length;
+
+		dev_dbg(dev, "CMB entry %d: bar_num %u cmb_type %u offset %x length %u\n",
+			i, ident->cmb_layout.region[i].bar_num,
+			ident->cmb_layout.region[i].cmb_type,
+			offset, length);
+
+		if (end > (bar->len >> IONIC_CMB_SHIFT_64K)) {
+			dev_warn(dev, "Out of bounds CMB region %d offset %x length %u\n",
+				 i, offset, length);
+			return;
+		}
+	}
+
+	/* if first entry matches PCI config, expdb is not supported */
+	if (ident->cmb_layout.region[0].bar_num == bar->res_index &&
+	    le32_to_cpu(ident->cmb_layout.region[0].length) == bar->len &&
+	    !ident->cmb_layout.region[0].offset) {
+		dev_warn(dev, "No CMB mapping discovered\n");
+		return;
+	}
+
+	/* process first entry for regular mapping */
+	length_reg0 = le32_to_cpu(ident->cmb_layout.region[0].length);
+	if (!length_reg0) {
+		dev_warn(dev, "region len = 0. No CMB mapping discovered\n");
+		return;
+	}
+
+	/* Verify first entry size matches expected 8MB size (in 64KB pages) */
+	if (length_reg0 != IONIC_BAR2_CMB_ENTRY_SIZE >> IONIC_CMB_SHIFT_64K) {
+		dev_warn(dev, "Unexpected CMB size in entry 0: %u pages\n",
+			 length_reg0);
+		return;
+	}
+
+	sz = BITS_TO_LONGS((length_reg0 << IONIC_CMB_SHIFT_64K) /
+			   PAGE_SIZE) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		dev_warn(dev, "No memory for CMB, disabling\n");
+		idev->phy_cmb_pages = 0;
+		idev->phy_cmb_expdb64_pages = 0;
+		idev->phy_cmb_expdb128_pages = 0;
+		idev->phy_cmb_expdb256_pages = 0;
+		idev->phy_cmb_expdb512_pages = 0;
+		idev->cmb_npages = 0;
+		return;
+	}
+
+	for (i = 0; i < num_regions; i++) {
+		/* check this region matches first region length as to
+		 * ease implementation
+		 */
+		if (le32_to_cpu(ident->cmb_layout.region[i].length) !=
+		    length_reg0)
+			continue;
+
+		offset = le32_to_cpu(ident->cmb_layout.region[i].offset);
+
+		switch (ident->cmb_layout.region[i].cmb_type) {
+		case IONIC_CMB_TYPE_DEVMEM:
+			idev->phy_cmb_pages = bar->bus_addr + offset;
+			idev->cmb_npages =
+				(length_reg0 << IONIC_CMB_SHIFT_64K) / PAGE_SIZE;
+			dev_dbg(dev, "regular cmb mapping: bar->bus_addr %pa region[%d].length %u\n",
+				&bar->bus_addr, i, length);
+			dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+				&idev->phy_cmb_pages, idev->cmb_npages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB64:
+			idev->phy_cmb_expdb64_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb64_pages %pad\n",
+				&idev->phy_cmb_expdb64_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB128:
+			idev->phy_cmb_expdb128_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb128_pages %pad\n",
+				&idev->phy_cmb_expdb128_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB256:
+			idev->phy_cmb_expdb256_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb256_pages %pad\n",
+				&idev->phy_cmb_expdb256_pages);
+			break;
+
+		case IONIC_CMB_TYPE_EXPDB512:
+			idev->phy_cmb_expdb512_pages =
+				bar->bus_addr + (offset << IONIC_CMB_SHIFT_64K);
+			dev_dbg(dev, "idev->phy_cmb_expdb512_pages %pad\n",
+				&idev->phy_cmb_expdb512_pages);
+			break;
+
+		default:
+			dev_warn(dev, "[%d] Invalid cmb_type (%d)\n",
+				 i, ident->cmb_layout.region[i].cmb_type);
+			break;
+		}
+	}
+}
+
+static void ionic_map_classic_cmb(struct ionic *ionic)
+{
+	struct ionic_dev_bar *bar = ionic->bars;
+	struct ionic_dev *idev = &ionic->idev;
+	struct device *dev = ionic->dev;
+	int sz;
+
+	bar += 2;
+	/* classic CMB mapping */
+	idev->phy_cmb_pages = bar->bus_addr;
+	idev->cmb_npages = bar->len / PAGE_SIZE;
+	dev_dbg(dev, "classic cmb mapping: bar->bus_addr %pa bar->len %lu\n",
+		&bar->bus_addr, bar->len);
+	dev_dbg(dev, "idev->phy_cmb_pages %pad, idev->cmb_npages %u\n",
+		&idev->phy_cmb_pages, idev->cmb_npages);
+
+	sz = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
+	idev->cmb_inuse = kzalloc(sz, GFP_KERNEL);
+	if (!idev->cmb_inuse) {
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
+	}
+}
+
+void ionic_map_cmb(struct ionic *ionic)
+{
+	struct pci_dev *pdev = ionic->pdev;
+	struct device *dev = ionic->dev;
+
+	if (!(pci_resource_flags(pdev, 4) & IORESOURCE_MEM)) {
+		dev_dbg(dev, "No CMB, disabling\n");
+		return;
+	}
+
+	if (ionic->ident.dev.capabilities & cpu_to_le64(IONIC_DEV_CAP_DISC_CMB))
+		ionic_map_disc_cmb(ionic);
+	else
+		ionic_map_classic_cmb(ionic);
+}
+
 int ionic_dev_setup(struct ionic *ionic)
 {
 	struct ionic_dev_bar *bar = ionic->bars;
 	unsigned int num_bars = ionic->num_bars;
 	struct ionic_dev *idev = &ionic->idev;
 	struct device *dev = ionic->dev;
-	int size;
 	u32 sig;
 	int err;
 
@@ -255,16 +443,11 @@ int ionic_dev_setup(struct ionic *ionic)
 	mutex_init(&idev->cmb_inuse_lock);
 	if (num_bars < 3 || !ionic->bars[IONIC_PCI_BAR_CMB].len) {
 		idev->cmb_inuse = NULL;
+		idev->phy_cmb_pages = 0;
+		idev->cmb_npages = 0;
 		return 0;
 	}
 
-	idev->phy_cmb_pages = bar->bus_addr;
-	idev->cmb_npages = bar->len / PAGE_SIZE;
-	size = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
-	idev->cmb_inuse = kzalloc(size, GFP_KERNEL);
-	if (!idev->cmb_inuse)
-		dev_warn(dev, "No memory for CMB, disabling\n");
-
 	return 0;
 }
 
@@ -277,6 +460,11 @@ void ionic_dev_teardown(struct ionic *ionic)
 	idev->phy_cmb_pages = 0;
 	idev->cmb_npages = 0;
 
+	idev->phy_cmb_expdb64_pages = 0;
+	idev->phy_cmb_expdb128_pages = 0;
+	idev->phy_cmb_expdb256_pages = 0;
+	idev->phy_cmb_expdb512_pages = 0;
+
 	if (ionic->wq) {
 		destroy_workqueue(ionic->wq);
 		ionic->wq = NULL;
@@ -698,28 +886,79 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev)
+{
+	union ionic_dev_cmd cmd = {
+		.discover_cmb.opcode = IONIC_CMD_DISCOVER_CMB,
+	};
+
+	ionic_dev_cmd_go(idev, &cmd);
+}
+
 int ionic_db_page_num(struct ionic_lif *lif, int pid)
 {
 	return (lif->hw_index * lif->dbid_count) + pid;
 }
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order)
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr,
+		  int order, u8 stride_log2, bool *expdb)
 {
 	struct ionic_dev *idev = &lif->ionic->idev;
-	int ret;
+	void __iomem *nonexpdb_pgptr;
+	phys_addr_t nonexpdb_pgaddr;
+	int i, idx;
 
 	mutex_lock(&idev->cmb_inuse_lock);
-	ret = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
+	idx = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 
-	if (ret < 0)
-		return ret;
+	if (idx < 0)
+		return idx;
+
+	*pgid = (u32)idx;
+
+	if (idev->phy_cmb_expdb64_pages &&
+	    stride_log2 == IONIC_EXPDB_64B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb64_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb128_pages &&
+		   stride_log2 == IONIC_EXPDB_128B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb128_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb256_pages &&
+		   stride_log2 == IONIC_EXPDB_256B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb256_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else if (idev->phy_cmb_expdb512_pages &&
+		   stride_log2 == IONIC_EXPDB_512B_WQE_LG2) {
+		*pgaddr = idev->phy_cmb_expdb512_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = true;
+	} else {
+		*pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+		if (expdb)
+			*expdb = false;
+	}
 
-	*pgid = ret;
-	*pgaddr = idev->phy_cmb_pages + ret * PAGE_SIZE;
+	/* clear the requested CMB region, 1 PAGE_SIZE ioremap at a time */
+	nonexpdb_pgaddr = idev->phy_cmb_pages + idx * PAGE_SIZE;
+	for (i = 0; i < (1 << order); i++) {
+		nonexpdb_pgptr =
+			ioremap_wc(nonexpdb_pgaddr + i * PAGE_SIZE, PAGE_SIZE);
+		if (!nonexpdb_pgptr) {
+			ionic_put_cmb(lif, *pgid, order);
+			return -ENOMEM;
+		}
+		memset_io(nonexpdb_pgptr, 0, PAGE_SIZE);
+		iounmap(nonexpdb_pgptr);
+	}
 
 	return 0;
 }
+EXPORT_SYMBOL_NS(ionic_get_cmb, "NET_IONIC");
 
 void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 {
@@ -729,6 +968,7 @@ void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
 	bitmap_release_region(idev->cmb_inuse, pgid, order);
 	mutex_unlock(&idev->cmb_inuse_lock);
 }
+EXPORT_SYMBOL_NS(ionic_put_cmb, "NET_IONIC");
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index 68cf4da3c6b3..35566f97eaea 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -35,6 +35,11 @@
 #define IONIC_RX_MIN_DOORBELL_DEADLINE	(HZ / 100)	/* 10ms */
 #define IONIC_RX_MAX_DOORBELL_DEADLINE	(HZ * 4)	/* 4s */
 
+#define IONIC_EXPDB_64B_WQE_LG2		6
+#define IONIC_EXPDB_128B_WQE_LG2	7
+#define IONIC_EXPDB_256B_WQE_LG2	8
+#define IONIC_EXPDB_512B_WQE_LG2	9
+
 struct ionic_dev_bar {
 	void __iomem *vaddr;
 	phys_addr_t bus_addr;
@@ -171,6 +176,11 @@ struct ionic_dev {
 	dma_addr_t phy_cmb_pages;
 	u32 cmb_npages;
 
+	dma_addr_t phy_cmb_expdb64_pages;
+	dma_addr_t phy_cmb_expdb128_pages;
+	dma_addr_t phy_cmb_expdb256_pages;
+	dma_addr_t phy_cmb_expdb512_pages;
+
 	u32 port_info_sz;
 	struct ionic_port_info *port_info;
 	dma_addr_t port_info_pa;
@@ -351,8 +361,8 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
 
 int ionic_db_page_num(struct ionic_lif *lif, int pid);
 
-int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order);
-void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+void ionic_dev_cmd_discover_cmb(struct ionic_dev *idev);
+void ionic_map_cmb(struct ionic *ionic);
 
 int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
 		  struct ionic_intr_info *intr,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index 59d6e97b3986..f9e653349d6e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -56,6 +56,9 @@ enum ionic_cmd_opcode {
 	IONIC_CMD_VF_SETATTR			= 61,
 	IONIC_CMD_VF_CTRL			= 62,
 
+	/* CMB command */
+	IONIC_CMD_DISCOVER_CMB			= 80,
+
 	/* QoS commands */
 	IONIC_CMD_QOS_CLASS_IDENTIFY		= 240,
 	IONIC_CMD_QOS_CLASS_INIT		= 241,
@@ -269,9 +272,11 @@ union ionic_drv_identity {
 /**
  * enum ionic_dev_capability - Device capabilities
  * @IONIC_DEV_CAP_VF_CTRL:	Device supports VF ctrl operations
+ * @IONIC_DEV_CAP_DISC_CMB:	Device supports CMB discovery operations
  */
 enum ionic_dev_capability {
 	IONIC_DEV_CAP_VF_CTRL	= BIT(0),
+	IONIC_DEV_CAP_DISC_CMB	= BIT(1),
 };
 
@@ -395,6 +400,7 @@ enum ionic_logical_qtype {
 * @IONIC_Q_F_4X_DESC:		Quadruple main descriptor size
 * @IONIC_Q_F_4X_CQ_DESC:	Quadruple cq descriptor size
 * @IONIC_Q_F_4X_SG_DESC:	Quadruple sg descriptor size
+ * @IONIC_QIDENT_F_EXPDB:	Queue supports express doorbell
 */
enum ionic_q_feature {
	IONIC_QIDENT_F_CQ			= BIT_ULL(0),
@@ -407,6 +413,7 @@
	IONIC_Q_F_4X_DESC			= BIT_ULL(7),
	IONIC_Q_F_4X_CQ_DESC			= BIT_ULL(8),
	IONIC_Q_F_4X_SG_DESC			= BIT_ULL(9),
+	IONIC_QIDENT_F_EXPDB			= BIT_ULL(10),
};
 
@@ -2213,6 +2220,80 @@ struct ionic_vf_ctrl_comp {
 	u8 rsvd[15];
 };
 
+/**
+ * struct ionic_discover_cmb_cmd - CMB discovery command
+ * @opcode:	Opcode for the command
+ * @rsvd:	Reserved bytes
+ */
+struct ionic_discover_cmb_cmd {
+	u8 opcode;
+	u8 rsvd[63];
+};
+
+/**
+ * struct ionic_discover_cmb_comp - CMB discover command completion.
+ * @status:	Status of the command (enum ionic_status_code)
+ * @rsvd:	Reserved bytes
+ */
+struct ionic_discover_cmb_comp {
+	u8 status;
+	u8 rsvd[15];
+};
+
+#define IONIC_MAX_CMB_REGIONS	16
+#define IONIC_CMB_SHIFT_64K	16
+
+enum ionic_cmb_type {
+	IONIC_CMB_TYPE_DEVMEM	= 0,
+	IONIC_CMB_TYPE_EXPDB64	= 1,
+	IONIC_CMB_TYPE_EXPDB128	= 2,
+	IONIC_CMB_TYPE_EXPDB256	= 3,
+	IONIC_CMB_TYPE_EXPDB512	= 4,
+};
+
+/**
+ * union ionic_cmb_region - Configuration for CMB region
+ * @bar_num:	CMB mapping number from FW
+ * @cmb_type:	Type of CMB this region describes (enum ionic_cmb_type)
+ * @rsvd:	Reserved
+ * @offset:	Offset within BAR in 64KB pages
+ * @length:	Length of the CMB region
+ * @words:	32-bit words for direct access to the entire region
+ */
+union ionic_cmb_region {
+	struct {
+		u8 bar_num;
+		u8 cmb_type;
+		u8 rsvd[6];
+		__le32 offset;
+		__le32 length;
+	} __packed;
+	__le32 words[4];
+};
+
+/**
+ * union ionic_discover_cmb_identity - CMB layout identity structure
+ * @num_regions:	Number of CMB regions, up to 16
+ * @flags:		Feature and capability bits (0 for express
+ *			doorbell, 1 for 4K alignment indicator,
+ *			31-24 for version information)
+ * @region:		CMB mappings region, entry 0 for regular
+ *			mapping, entries 1-7 for WQE sizes 64,
+ *			128,
256, 512, 1024, 2048 and 4096 bytes + * @words: Full union buffer size + */ +union ionic_discover_cmb_identity { + struct { + __le32 num_regions; +#define IONIC_CMB_FLAG_EXPDB BIT(0) +#define IONIC_CMB_FLAG_4KALIGN BIT(1) +#define IONIC_CMB_FLAG_VERSION 0xff000000 + __le32 flags; + union ionic_cmb_region region[IONIC_MAX_CMB_REGIONS]; + }; + __le32 words[478]; +}; + /** * struct ionic_qos_identify_cmd - QoS identify command * @opcode: opcode @@ -3073,6 +3154,8 @@ union ionic_dev_cmd { struct ionic_vf_getattr_cmd vf_getattr; struct ionic_vf_ctrl_cmd vf_ctrl; =20 + struct ionic_discover_cmb_cmd discover_cmb; + struct ionic_lif_identify_cmd lif_identify; struct ionic_lif_init_cmd lif_init; struct ionic_lif_reset_cmd lif_reset; @@ -3112,6 +3195,8 @@ union ionic_dev_cmd_comp { struct ionic_vf_getattr_comp vf_getattr; struct ionic_vf_ctrl_comp vf_ctrl; =20 + struct ionic_discover_cmb_comp discover_cmb; + struct ionic_lif_identify_comp lif_identify; struct ionic_lif_init_comp lif_init; ionic_lif_reset_comp lif_reset; @@ -3253,6 +3338,9 @@ union ionic_adminq_comp { #define IONIC_BAR0_DEV_CMD_DATA_REGS_OFFSET 0x0c00 #define IONIC_BAR0_INTR_STATUS_OFFSET 0x1000 #define IONIC_BAR0_INTR_CTRL_OFFSET 0x2000 + +/* BAR2 */ +#define IONIC_BAR2_CMB_ENTRY_SIZE 0x800000 #define IONIC_DEV_CMD_DONE 0x00000001 =20 #define IONIC_ASIC_TYPE_NONE 0 @@ -3306,6 +3394,7 @@ struct ionic_identity { union ionic_port_identity port; union ionic_qos_identity qos; union ionic_q_identity txq; + union ionic_discover_cmb_identity cmb_layout; }; =20 #endif /* _IONIC_IF_H_ */ diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/= ethernet/pensando/ionic/ionic_lif.c index f89b458bd20a..1bd2202f263a 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -673,7 +673,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsig= ned int type, new->cmb_order =3D order_base_2(new->cmb_q_size / PAGE_SIZE); =20 err =3D 
ionic_get_cmb(lif, &new->cmb_pgid, &new->cmb_q_base_pa, - new->cmb_order); + new->cmb_order, 0, NULL); if (err) { netdev_err(lif->netdev, "Cannot allocate queue order %d from cmb: err %d\n", --=20 2.43.0 From nobody Mon Oct 6 06:32:11 2025 Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2040.outbound.protection.outlook.com [40.107.94.40]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 64FCC223707; Wed, 23 Jul 2025 17:32:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.94.40 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291963; cv=fail; b=VtLmf7Ua++bxjs4oJPtbl6E7iK9tj93rp35Q6CBYiQQe2VNwgQNIdbc9g+hRQcuIbugQUQ1Q9QyxFstbFQqpMFO16j9xLsX/rJUWpE5ML61SwyTSYh2ewxq20YYotBChFfWhN4TpGMlaSCkyRSpFycRUE1glyiXrNi5W5lN9+1Q= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291963; c=relaxed/simple; bh=NfROQPYeGxHoicQQxDjEDs5h42zqtaKn9+xjzZTPuQY=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=XQ73w5a33RzU9Tme3wQSoX8URbwuY/zftfE/T1gNbdTObNgaCLJf8HYsmLsUMFPSnraxUaAPLJEA+NKMvXga+IkNVrJ5gkKik8KH/2KTpOE6uxAZ5vJ6W6z4SCd8AbLrdIKpRtxA4TfneM5yz2PnEpT89YEnSa3p4wC/LuQXfsY= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=18txZHeJ; arc=fail smtp.client-ip=40.107.94.40 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b="18txZHeJ" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; 
From: Abhijit Gangurde
Subject: [PATCH v4 07/14] RDMA: Add IONIC to rdma_driver_id definition
Date: Wed, 23 Jul 2025 23:01:42 +0530
Message-ID: <20250723173149.2568776-8-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>

Define RDMA_DRIVER_IONIC in enum rdma_driver_id.
Signed-off-by: Abhijit Gangurde
---
 include/uapi/rdma/ib_user_ioctl_verbs.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index fe15bc7e9f70..89e6a3f13191 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -255,6 +255,7 @@ enum rdma_driver_id {
 	RDMA_DRIVER_SIW,
 	RDMA_DRIVER_ERDMA,
 	RDMA_DRIVER_MANA,
+	RDMA_DRIVER_IONIC,
 };
 
 enum ib_uverbs_gid_type {
-- 
2.43.0

From nobody Mon Oct 6 06:32:12 2025
From: Abhijit Gangurde
Subject: [PATCH v4 08/14] RDMA/ionic: Register auxiliary module for ionic ethernet adapter
Date: Wed, 23 Jul 2025 23:01:43 +0530
Message-ID: <20250723173149.2568776-9-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>

Register auxiliary module to create ibdevice for ionic ethernet adapter.
Co-developed-by: Andrew Boyer Signed-off-by: Andrew Boyer Co-developed-by: Allen Hubbe Signed-off-by: Allen Hubbe Signed-off-by: Abhijit Gangurde --- v1->v2 - Removed netdev references from ionic RDMA driver - Moved to ionic_lif* instead of void* to convey information between aux devices and drivers. drivers/infiniband/hw/ionic/ionic_ibdev.c | 131 ++++++++++++++++++++ drivers/infiniband/hw/ionic/ionic_ibdev.h | 18 +++ drivers/infiniband/hw/ionic/ionic_lif_cfg.c | 101 +++++++++++++++ drivers/infiniband/hw/ionic/ionic_lif_cfg.h | 64 ++++++++++ 4 files changed, 314 insertions(+) create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.c create mode 100644 drivers/infiniband/hw/ionic/ionic_ibdev.h create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.c create mode 100644 drivers/infiniband/hw/ionic/ionic_lif_cfg.h diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband= /hw/ionic/ionic_ibdev.c new file mode 100644 index 000000000000..d79470dae13a --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -0,0 +1,131 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
*/ + +#include +#include +#include + +#include "ionic_ibdev.h" + +#define DRIVER_DESCRIPTION "AMD Pensando RoCE HCA driver" +#define DEVICE_DESCRIPTION "AMD Pensando RoCE HCA" + +MODULE_AUTHOR("Allen Hubbe "); +MODULE_DESCRIPTION(DRIVER_DESCRIPTION); +MODULE_LICENSE("GPL"); +MODULE_IMPORT_NS("NET_IONIC"); + +static void ionic_destroy_ibdev(struct ionic_ibdev *dev) +{ + ib_unregister_device(&dev->ibdev); + ib_dealloc_device(&dev->ibdev); +} + +static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_= adev) +{ + struct ib_device *ibdev; + struct ionic_ibdev *dev; + struct net_device *ndev; + int rc; + + dev =3D ib_alloc_device(ionic_ibdev, ibdev); + if (!dev) + return ERR_PTR(-EINVAL); + + ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg); + + ibdev =3D &dev->ibdev; + ibdev->dev.parent =3D dev->lif_cfg.hwdev; + + strscpy(ibdev->name, "ionic_%d", IB_DEVICE_NAME_MAX); + strscpy(ibdev->node_desc, DEVICE_DESCRIPTION, IB_DEVICE_NODE_DESC_MAX); + + ibdev->node_type =3D RDMA_NODE_IB_CA; + ibdev->phys_port_cnt =3D 1; + + /* the first two eq are reserved for async events */ + ibdev->num_comp_vectors =3D dev->lif_cfg.eq_count - 2; + + ndev =3D ionic_lif_netdev(ionic_adev->lif); + addrconf_ifid_eui48((u8 *)&ibdev->node_guid, ndev); + rc =3D ib_device_set_netdev(ibdev, ndev, 1); + /* ionic_lif_netdev() returns ndev with refcount held */ + dev_put(ndev); + if (rc) + goto err_admin; + + rc =3D ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent); + if (rc) + goto err_register; + + return dev; + +err_register: +err_admin: + ib_dealloc_device(&dev->ibdev); + + return ERR_PTR(rc); +} + +static int ionic_aux_probe(struct auxiliary_device *adev, + const struct auxiliary_device_id *id) +{ + struct ionic_aux_dev *ionic_adev; + struct ionic_ibdev *dev; + + ionic_adev =3D container_of(adev, struct ionic_aux_dev, adev); + dev =3D ionic_create_ibdev(ionic_adev); + if (IS_ERR(dev)) + return dev_err_probe(&adev->dev, PTR_ERR(dev), + "Failed to register ibdev\n"); + + 
auxiliary_set_drvdata(adev, dev); + ibdev_dbg(&dev->ibdev, "registered\n"); + + return 0; +} + +static void ionic_aux_remove(struct auxiliary_device *adev) +{ + struct ionic_ibdev *dev =3D auxiliary_get_drvdata(adev); + + dev_dbg(&adev->dev, "unregister ibdev\n"); + ionic_destroy_ibdev(dev); + dev_dbg(&adev->dev, "unregistered\n"); +} + +static const struct auxiliary_device_id ionic_aux_id_table[] =3D { + { .name =3D "ionic.rdma", }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, ionic_aux_id_table); + +static struct auxiliary_driver ionic_aux_r_driver =3D { + .name =3D "rdma", + .probe =3D ionic_aux_probe, + .remove =3D ionic_aux_remove, + .id_table =3D ionic_aux_id_table, +}; + +static int __init ionic_mod_init(void) +{ + int rc; + + rc =3D auxiliary_driver_register(&ionic_aux_r_driver); + if (rc) + goto err_aux; + + return 0; + +err_aux: + return rc; +} + +static void __exit ionic_mod_exit(void) +{ + auxiliary_driver_unregister(&ionic_aux_r_driver); +} + +module_init(ionic_mod_init); +module_exit(ionic_mod_exit); diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h new file mode 100644 index 000000000000..224e5e74056d --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#ifndef _IONIC_IBDEV_H_ +#define _IONIC_IBDEV_H_ + +#include +#include + +#include "ionic_lif_cfg.h" + +struct ionic_ibdev { + struct ib_device ibdev; + + struct ionic_lif_cfg lif_cfg; +}; + +#endif /* _IONIC_IBDEV_H_ */ diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniba= nd/hw/ionic/ionic_lif_cfg.c new file mode 100644 index 000000000000..8d0d209227e9 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
*/ + +#include + +#include +#include + +#include "ionic_lif_cfg.h" + +#define IONIC_MIN_RDMA_VERSION 0 +#define IONIC_MAX_RDMA_VERSION 2 + +static u8 ionic_get_expdb(struct ionic_lif *lif) +{ + u8 expdb_support =3D 0; + + if (lif->ionic->idev.phy_cmb_expdb64_pages) + expdb_support |=3D IONIC_EXPDB_64B_WQE; + if (lif->ionic->idev.phy_cmb_expdb128_pages) + expdb_support |=3D IONIC_EXPDB_128B_WQE; + if (lif->ionic->idev.phy_cmb_expdb256_pages) + expdb_support |=3D IONIC_EXPDB_256B_WQE; + if (lif->ionic->idev.phy_cmb_expdb512_pages) + expdb_support |=3D IONIC_EXPDB_512B_WQE; + + return expdb_support; +} + +void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg) +{ + union ionic_lif_identity *ident =3D &lif->ionic->ident.lif; + + cfg->lif =3D lif; + cfg->hwdev =3D &lif->ionic->pdev->dev; + cfg->lif_index =3D lif->index; + cfg->lif_hw_index =3D lif->hw_index; + + cfg->dbid =3D lif->kern_pid; + cfg->dbid_count =3D le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif); + cfg->dbpage =3D lif->kern_dbpage; + cfg->intr_ctrl =3D lif->ionic->idev.intr_ctrl; + + cfg->db_phys =3D lif->ionic->bars[IONIC_PCI_BAR_DBELL].bus_addr; + + if (IONIC_VERSION(ident->rdma.version, ident->rdma.minor_version) >=3D + IONIC_VERSION(2, 1)) + cfg->page_size_supported =3D + le64_to_cpu(ident->rdma.page_size_cap); + else + cfg->page_size_supported =3D IONIC_PAGE_SIZE_SUPPORTED; + + cfg->rdma_version =3D ident->rdma.version; + cfg->qp_opcodes =3D ident->rdma.qp_opcodes; + cfg->admin_opcodes =3D ident->rdma.admin_opcodes; + + cfg->stats_type =3D le16_to_cpu(ident->rdma.stats_type); + cfg->npts_per_lif =3D le32_to_cpu(ident->rdma.npts_per_lif); + cfg->nmrs_per_lif =3D le32_to_cpu(ident->rdma.nmrs_per_lif); + cfg->nahs_per_lif =3D le32_to_cpu(ident->rdma.nahs_per_lif); + + cfg->aq_base =3D le32_to_cpu(ident->rdma.aq_qtype.qid_base); + cfg->cq_base =3D le32_to_cpu(ident->rdma.cq_qtype.qid_base); + cfg->eq_base =3D le32_to_cpu(ident->rdma.eq_qtype.qid_base); + + /* + * 
ionic_create_rdma_admin() may reduce aq_count or eq_count if + * it is unable to allocate all that were requested. + * aq_count is tunable; see ionic_aq_count + * eq_count is tunable; see ionic_eq_count + */ + cfg->aq_count =3D le32_to_cpu(ident->rdma.aq_qtype.qid_count); + cfg->eq_count =3D le32_to_cpu(ident->rdma.eq_qtype.qid_count); + cfg->cq_count =3D le32_to_cpu(ident->rdma.cq_qtype.qid_count); + cfg->qp_count =3D le32_to_cpu(ident->rdma.sq_qtype.qid_count); + cfg->dbid_count =3D le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif); + + cfg->aq_qtype =3D ident->rdma.aq_qtype.qtype; + cfg->sq_qtype =3D ident->rdma.sq_qtype.qtype; + cfg->rq_qtype =3D ident->rdma.rq_qtype.qtype; + cfg->cq_qtype =3D ident->rdma.cq_qtype.qtype; + cfg->eq_qtype =3D ident->rdma.eq_qtype.qtype; + cfg->udma_qgrp_shift =3D ident->rdma.udma_shift; + cfg->udma_count =3D 2; + + cfg->max_stride =3D ident->rdma.max_stride; + cfg->expdb_mask =3D ionic_get_expdb(lif); + + cfg->sq_expdb =3D + !!(lif->qtype_info[IONIC_QTYPE_TXQ].features & IONIC_QIDENT_F_EXPDB); + cfg->rq_expdb =3D + !!(lif->qtype_info[IONIC_QTYPE_RXQ].features & IONIC_QIDENT_F_EXPDB); +} + +struct net_device *ionic_lif_netdev(struct ionic_lif *lif) +{ + struct net_device *netdev =3D lif->netdev; + + dev_hold(netdev); + return netdev; +} diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniba= nd/hw/ionic/ionic_lif_cfg.h new file mode 100644 index 000000000000..5b04b8a9937e --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
 */
+
+#ifndef _IONIC_LIF_CFG_H_
+#define _IONIC_LIF_CFG_H_
+
+#define IONIC_VERSION(a, b) (((a) << 16) + ((b) << 8))
+#define IONIC_PAGE_SIZE_SUPPORTED 0x40201000 /* 4kb, 2Mb, 1Gb */
+
+#define IONIC_EXPDB_64B_WQE BIT(0)
+#define IONIC_EXPDB_128B_WQE BIT(1)
+#define IONIC_EXPDB_256B_WQE BIT(2)
+#define IONIC_EXPDB_512B_WQE BIT(3)
+
+struct ionic_lif_cfg {
+	struct device *hwdev;
+	struct ionic_lif *lif;
+
+	int lif_index;
+	int lif_hw_index;
+
+	u32 dbid;
+	int dbid_count;
+	u64 __iomem *dbpage;
+	struct ionic_intr __iomem *intr_ctrl;
+	phys_addr_t db_phys;
+
+	u64 page_size_supported;
+	u32 npts_per_lif;
+	u32 nmrs_per_lif;
+	u32 nahs_per_lif;
+
+	u32 aq_base;
+	u32 cq_base;
+	u32 eq_base;
+
+	int aq_count;
+	int eq_count;
+	int cq_count;
+	int qp_count;
+
+	u16 stats_type;
+	u8 aq_qtype;
+	u8 sq_qtype;
+	u8 rq_qtype;
+	u8 cq_qtype;
+	u8 eq_qtype;
+
+	u8 udma_count;
+	u8 udma_qgrp_shift;
+
+	u8 rdma_version;
+	u8 qp_opcodes;
+	u8 admin_opcodes;
+
+	u8 max_stride;
+	bool sq_expdb;
+	bool rq_expdb;
+	u8 expdb_mask;
+};
+
+void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
+struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+
+#endif /* _IONIC_LIF_CFG_H_ */
-- 
2.43.0

From nobody Mon Oct 6 06:32:12 2025
From: Abhijit Gangurde
CC: Abhijit Gangurde, Andrew Boyer
Subject: [PATCH v4 09/14] RDMA/ionic: Create device queues to support admin operations
Date: Wed, 23 Jul 2025 23:01:44 +0530
Message-ID: <20250723173149.2568776-10-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Setup RDMA admin queues using device command exposed over auxiliary
device and manage these queues using ida.
Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v3->v4
- Used xa lock instead of rcu lock for qp and cq access to handle async events
- Improved comments
- Removed unwanted warning and error prints

v2->v3
- Fixed lockdep warning
- Used IDA for resource id allocation
- Removed rw locks around xarrays

 drivers/infiniband/hw/ionic/ionic_admin.c     | 1124 +++++++++++++++++
 .../infiniband/hw/ionic/ionic_controlpath.c   |  181 +++
 drivers/infiniband/hw/ionic/ionic_fw.h        |  164 +++
 drivers/infiniband/hw/ionic/ionic_ibdev.c     |   56 +
 drivers/infiniband/hw/ionic/ionic_ibdev.h     |  222 ++++
 drivers/infiniband/hw/ionic/ionic_pgtbl.c     |  113 ++
 drivers/infiniband/hw/ionic/ionic_queue.c     |   52 +
 drivers/infiniband/hw/ionic/ionic_queue.h     |  234 ++++
 drivers/infiniband/hw/ionic/ionic_res.h       |  154 +++
 9 files changed, 2300 insertions(+)
 create mode 100644 drivers/infiniband/hw/ionic/ionic_admin.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_controlpath.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_fw.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_pgtbl.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.c
 create mode 100644 drivers/infiniband/hw/ionic/ionic_queue.h
 create mode 100644 drivers/infiniband/hw/ionic/ionic_res.h

diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband/hw/ionic/ionic_admin.c
new file mode 100644
index 000000000000..845c03f6d9fb
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_admin.c
@@ -0,0 +1,1124 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
 */
+
+#include
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+#define IONIC_EQ_COUNT_MIN 4
+#define IONIC_AQ_COUNT_MIN 1
+
+/* not a valid queue position or negative error status */
+#define IONIC_ADMIN_POSTED 0x10000
+
+/* cpu can be held with irq disabled for COUNT * MS (for create/destroy_ah) */
+#define IONIC_ADMIN_BUSY_RETRY_COUNT 2000
+#define IONIC_ADMIN_BUSY_RETRY_MS 1
+
+/* admin queue will be considered failed if a command takes longer */
+#define IONIC_ADMIN_TIMEOUT (HZ * 2)
+#define IONIC_ADMIN_WARN (HZ / 8)
+
+/* will poll for admin cq to tolerate and report from missed event */
+#define IONIC_ADMIN_DELAY (HZ / 8)
+
+/* work queue for polling the event queue and admin cq */
+struct workqueue_struct *ionic_evt_workq;
+
+static void ionic_admin_timedout(struct ionic_aq *aq)
+{
+	struct ionic_ibdev *dev = aq->dev;
+	unsigned long irqflags;
+	u16 pos;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	if (ionic_queue_empty(&aq->q))
+		goto out;
+
+	/* Reset ALL adminq if any one times out */
+	if (atomic_read(&aq->admin_state) < IONIC_ADMIN_KILLED)
+		queue_work(ionic_evt_workq, &dev->reset_work);
+
+	ibdev_err(&dev->ibdev, "admin command timed out, aq %d after: %ums\n",
+		  aq->aqid, (u32)jiffies_to_msecs(jiffies - aq->stamp));
+
+	pos = (aq->q.prod - 1) & aq->q.mask;
+	if (pos == aq->q.cons)
+		goto out;
+
+	ibdev_warn(&dev->ibdev, "admin pos %u (last posted)\n", pos);
+	print_hex_dump(KERN_WARNING, "cmd ", DUMP_PREFIX_OFFSET, 16, 1,
+		       ionic_queue_at(&aq->q, pos),
+		       BIT(aq->q.stride_log2), true);
+
+out:
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_admin_reset_dwork(struct ionic_ibdev *dev)
+{
+	if (atomic_read(&dev->admin_state) == IONIC_ADMIN_KILLED)
+		return;
+
+	queue_delayed_work(ionic_evt_workq, &dev->admin_dwork,
+			   IONIC_ADMIN_DELAY);
+}
+
+static void ionic_admin_reset_wdog(struct ionic_aq *aq)
+{
+	if (atomic_read(&aq->admin_state) == IONIC_ADMIN_KILLED)
+		return;
+
+	aq->stamp =
jiffies;
+	ionic_admin_reset_dwork(aq->dev);
+}
+
+static bool ionic_admin_next_cqe(struct ionic_ibdev *dev, struct ionic_cq *cq,
+				 struct ionic_v1_cqe **cqe)
+{
+	struct ionic_v1_cqe *qcqe = ionic_queue_at_prod(&cq->q);
+
+	if (unlikely(cq->color != ionic_v1_cqe_color(qcqe)))
+		return false;
+
+	/* Prevent out-of-order reads of the CQE */
+	dma_rmb();
+	*cqe = qcqe;
+
+	return true;
+}
+
+static void ionic_admin_poll_locked(struct ionic_aq *aq)
+{
+	struct ionic_cq *cq = &aq->vcq->cq[0];
+	struct ionic_admin_wr *wr, *wr_next;
+	struct ionic_ibdev *dev = aq->dev;
+	u32 wr_strides, avlbl_strides;
+	struct ionic_v1_cqe *cqe;
+	u32 qtf, qid;
+	u16 old_prod;
+	u8 type;
+
+	lockdep_assert_held(&aq->lock);
+
+	if (atomic_read(&aq->admin_state) == IONIC_ADMIN_KILLED) {
+		list_for_each_entry_safe(wr, wr_next, &aq->wr_prod, aq_ent) {
+			INIT_LIST_HEAD(&wr->aq_ent);
+			aq->q_wr[wr->status].wr = NULL;
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+		INIT_LIST_HEAD(&aq->wr_prod);
+
+		list_for_each_entry_safe(wr, wr_next, &aq->wr_post, aq_ent) {
+			INIT_LIST_HEAD(&wr->aq_ent);
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+		INIT_LIST_HEAD(&aq->wr_post);
+
+		return;
+	}
+
+	old_prod = cq->q.prod;
+
+	while (ionic_admin_next_cqe(dev, cq, &cqe)) {
+		qtf = ionic_v1_cqe_qtf(cqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		if (unlikely(type != IONIC_V1_CQE_TYPE_ADMIN)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe type %u\n", type);
+			goto cq_next;
+		}
+
+		if (unlikely(qid != aq->aqid)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe qid %u\n", qid);
+			goto cq_next;
+		}
+
+		if (unlikely(be16_to_cpu(cqe->admin.cmd_idx) != aq->q.cons)) {
+			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad idx %u cons %u qid %u\n",
+					       be16_to_cpu(cqe->admin.cmd_idx),
+					       aq->q.cons, qid);
+			goto cq_next;
+		}
+
+		if (unlikely(ionic_queue_empty(&aq->q))) {
+
			ibdev_warn_ratelimited(&dev->ibdev,
+					       "bad cqe for empty adminq\n");
+			goto cq_next;
+		}
+
+		wr = aq->q_wr[aq->q.cons].wr;
+		if (wr) {
+			aq->q_wr[aq->q.cons].wr = NULL;
+			list_del_init(&wr->aq_ent);
+
+			wr->cqe = *cqe;
+			wr->status = atomic_read(&aq->admin_state);
+			complete_all(&wr->work);
+		}
+
+		ionic_queue_consume_entries(&aq->q,
+					    aq->q_wr[aq->q.cons].wqe_strides);
+
+cq_next:
+		ionic_queue_produce(&cq->q);
+		cq->color = ionic_color_wrap(cq->q.prod, cq->color);
+	}
+
+	if (old_prod != cq->q.prod) {
+		ionic_admin_reset_wdog(aq);
+		cq->q.cons = cq->q.prod;
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 ionic_queue_dbell_val(&cq->q));
+		queue_work(ionic_evt_workq, &aq->work);
+	} else if (!aq->armed) {
+		aq->armed = true;
+		cq->arm_any_prod = ionic_queue_next(&cq->q, cq->arm_any_prod);
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype,
+				 cq->q.dbell | IONIC_CQ_RING_ARM |
+				 cq->arm_any_prod);
+		queue_work(ionic_evt_workq, &aq->work);
+	}
+
+	if (atomic_read(&aq->admin_state) != IONIC_ADMIN_ACTIVE)
+		return;
+
+	old_prod = aq->q.prod;
+
+	if (ionic_queue_empty(&aq->q) && !list_empty(&aq->wr_post))
+		ionic_admin_reset_wdog(aq);
+
+	if (list_empty(&aq->wr_post))
+		return;
+
+	do {
+		u8 *src;
+		int i, src_len;
+		size_t stride_len;
+
+		wr = list_first_entry(&aq->wr_post, struct ionic_admin_wr,
+				      aq_ent);
+		wr_strides = (le16_to_cpu(wr->wqe.len) + ADMIN_WQE_HDR_LEN +
+			      (ADMIN_WQE_STRIDE - 1)) >> aq->q.stride_log2;
+		avlbl_strides = ionic_queue_length_remaining(&aq->q);
+
+		if (wr_strides > avlbl_strides)
+			break;
+
+		list_move(&wr->aq_ent, &aq->wr_prod);
+		wr->status = aq->q.prod;
+		aq->q_wr[aq->q.prod].wr = wr;
+		aq->q_wr[aq->q.prod].wqe_strides = wr_strides;
+
+		src_len = le16_to_cpu(wr->wqe.len);
+		src = (uint8_t *)&wr->wqe.cmd;
+
+		/* First stride */
+		memcpy(ionic_queue_at_prod(&aq->q), &wr->wqe,
+		       ADMIN_WQE_HDR_LEN);
+		stride_len = ADMIN_WQE_STRIDE - ADMIN_WQE_HDR_LEN;
+		if (stride_len >
src_len)
+			stride_len = src_len;
+		memcpy(ionic_queue_at_prod(&aq->q) + ADMIN_WQE_HDR_LEN,
+		       src, stride_len);
+		ibdev_dbg(&dev->ibdev, "post admin prod %u (%u strides)\n",
+			  aq->q.prod, wr_strides);
+		print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1,
+				     ionic_queue_at_prod(&aq->q),
+				     BIT(aq->q.stride_log2), true);
+		ionic_queue_produce(&aq->q);
+
+		/* Remaining strides */
+		for (i = stride_len; i < src_len; i += stride_len) {
+			stride_len = ADMIN_WQE_STRIDE;
+
+			if (i + stride_len > src_len)
+				stride_len = src_len - i;
+
+			memcpy(ionic_queue_at_prod(&aq->q), src + i,
+			       stride_len);
+			print_hex_dump_debug("wqe ", DUMP_PREFIX_OFFSET, 16, 1,
+					     ionic_queue_at_prod(&aq->q),
+					     BIT(aq->q.stride_log2), true);
+			ionic_queue_produce(&aq->q);
+		}
+	} while (!list_empty(&aq->wr_post));
+
+	if (old_prod != aq->q.prod)
+		ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.aq_qtype,
+				 ionic_queue_dbell_val(&aq->q));
+}
+
+static void ionic_admin_dwork(struct work_struct *ws)
+{
+	struct ionic_ibdev *dev =
+		container_of(ws, struct ionic_ibdev, admin_dwork.work);
+	struct ionic_aq *aq, *bad_aq = NULL;
+	bool do_reschedule = false;
+	unsigned long irqflags;
+	bool do_reset = false;
+	u16 pos;
+	int i;
+
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		aq = dev->aq_vec[i];
+
+		spin_lock_irqsave(&aq->lock, irqflags);
+
+		if (ionic_queue_empty(&aq->q))
+			goto next_aq;
+
+		/* Reschedule if any queue has outstanding work */
+		do_reschedule = true;
+
+		if (time_is_after_eq_jiffies(aq->stamp + IONIC_ADMIN_WARN))
+			/* Warning threshold not met, nothing to do */
+			goto next_aq;
+
+		/* See if polling now makes some progress */
+		pos = aq->q.cons;
+		ionic_admin_poll_locked(aq);
+		if (pos != aq->q.cons) {
+			ibdev_dbg(&dev->ibdev,
+				  "missed event for acq %d\n", aq->cqid);
+			goto next_aq;
+		}
+
+		if (time_is_after_eq_jiffies(aq->stamp +
+					     IONIC_ADMIN_TIMEOUT)) {
+			/* Timeout threshold not met */
+			ibdev_dbg(&dev->ibdev, "no progress after %ums\n",
+
				  (u32)jiffies_to_msecs(jiffies - aq->stamp));
+			goto next_aq;
+		}
+
+		/* Queue timed out */
+		bad_aq = aq;
+		do_reset = true;
+next_aq:
+		spin_unlock_irqrestore(&aq->lock, irqflags);
+	}
+
+	if (do_reset)
+		/* Reset RDMA lif on a timeout */
+		ionic_admin_timedout(bad_aq);
+	else if (do_reschedule)
+		/* Try to poll again later */
+		ionic_admin_reset_dwork(dev);
+}
+
+static void ionic_admin_work(struct work_struct *ws)
+{
+	struct ionic_aq *aq = container_of(ws, struct ionic_aq, work);
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	ionic_admin_poll_locked(aq);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_admin_post_aq(struct ionic_aq *aq, struct ionic_admin_wr *wr)
+{
+	unsigned long irqflags;
+	bool poll;
+
+	wr->status = IONIC_ADMIN_POSTED;
+	wr->aq = aq;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	poll = list_empty(&aq->wr_post);
+	list_add(&wr->aq_ent, &aq->wr_post);
+	if (poll)
+		ionic_admin_poll_locked(aq);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr)
+{
+	int aq_idx;
+
+	/* Use cpu id for the adminq selection */
+	aq_idx = raw_smp_processor_id() % dev->lif_cfg.aq_count;
+	ionic_admin_post_aq(dev->aq_vec[aq_idx], wr);
+}
+
+static void ionic_admin_cancel(struct ionic_admin_wr *wr)
+{
+	struct ionic_aq *aq = wr->aq;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+
+	if (!list_empty(&wr->aq_ent)) {
+		list_del(&wr->aq_ent);
+		if (wr->status != IONIC_ADMIN_POSTED)
+			aq->q_wr[wr->status].wr = NULL;
+	}
+
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static int ionic_admin_busy_wait(struct ionic_admin_wr *wr)
+{
+	struct ionic_aq *aq = wr->aq;
+	unsigned long irqflags;
+	int try_i;
+
+	for (try_i = 0; try_i < IONIC_ADMIN_BUSY_RETRY_COUNT; ++try_i) {
+		if (completion_done(&wr->work))
+			return 0;
+
+		mdelay(IONIC_ADMIN_BUSY_RETRY_MS);
+
+		spin_lock_irqsave(&aq->lock, irqflags);
+
		ionic_admin_poll_locked(aq);
+		spin_unlock_irqrestore(&aq->lock, irqflags);
+	}
+
+	/*
+	 * we timed out. Initiate RDMA LIF reset and indicate
+	 * error to caller.
+	 */
+	ionic_admin_timedout(aq);
+	return -ETIMEDOUT;
+}
+
+int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr,
+		     enum ionic_admin_flags flags)
+{
+	int rc, timo;
+
+	if (flags & IONIC_ADMIN_F_BUSYWAIT) {
+		/* Spin */
+		rc = ionic_admin_busy_wait(wr);
+	} else if (flags & IONIC_ADMIN_F_INTERRUPT) {
+		/*
+		 * Interruptible sleep, 1s timeout
+		 * This is used for commands which are safe for the caller
+		 * to clean up without killing and resetting the adminq.
+		 */
+		timo = wait_for_completion_interruptible_timeout(&wr->work,
+								 HZ);
+		if (timo > 0)
+			rc = 0;
+		else if (timo == 0)
+			rc = -ETIMEDOUT;
+		else
+			rc = timo;
+	} else {
+		/*
+		 * Uninterruptible sleep
+		 * This is used for commands which are NOT safe for the
+		 * caller to clean up. Cleanup must be handled by the
+		 * adminq kill and reset process so that host memory is
+		 * not corrupted by the device.
+		 */
+		wait_for_completion(&wr->work);
+		rc = 0;
+	}
+
+	if (rc) {
+		ibdev_warn(&dev->ibdev, "wait status %d\n", rc);
+		ionic_admin_cancel(wr);
+	} else if (wr->status == IONIC_ADMIN_KILLED) {
+		ibdev_dbg(&dev->ibdev, "admin killed\n");
+
+		/* No error if admin already killed during teardown */
+		rc = (flags & IONIC_ADMIN_F_TEARDOWN) ?
0 : -ENODEV;
+	} else if (ionic_v1_cqe_error(&wr->cqe)) {
+		ibdev_warn(&dev->ibdev, "opcode %u error %u\n",
+			   wr->wqe.op,
+			   be32_to_cpu(wr->cqe.status_length));
+		rc = -EINVAL;
+	}
+	return rc;
+}
+
+static int ionic_rdma_devcmd(struct ionic_ibdev *dev,
+			     struct ionic_admin_ctx *admin)
+{
+	int rc;
+
+	rc = ionic_adminq_post_wait(dev->lif_cfg.lif, admin);
+	if (rc)
+		return rc;
+
+	return ionic_error_to_errno(admin->comp.comp.status);
+}
+
+int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev)
+{
+	struct ionic_admin_ctx admin = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(admin.work),
+		.cmd.rdma_reset = {
+			.opcode = IONIC_CMD_RDMA_RESET_LIF,
+			.lif_index = cpu_to_le16(dev->lif_cfg.lif_index),
+		},
+	};
+
+	return ionic_rdma_devcmd(dev, &admin);
+}
+
+static int ionic_rdma_queue_devcmd(struct ionic_ibdev *dev,
+				   struct ionic_queue *q,
+				   u32 qid, u32 cid, u16 opcode)
+{
+	struct ionic_admin_ctx admin = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(admin.work),
+		.cmd.rdma_queue = {
+			.opcode = opcode,
+			.lif_index = cpu_to_le16(dev->lif_cfg.lif_index),
+			.qid_ver = cpu_to_le32(qid),
+			.cid = cpu_to_le32(cid),
+			.dbid = cpu_to_le16(dev->lif_cfg.dbid),
+			.depth_log2 = q->depth_log2,
+			.stride_log2 = q->stride_log2,
+			.dma_addr = cpu_to_le64(q->dma),
+		},
+	};
+
+	return ionic_rdma_devcmd(dev, &admin);
+}
+
+static void ionic_rdma_admincq_comp(struct ib_cq *ibcq, void *cq_context)
+{
+	struct ionic_aq *aq = cq_context;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&aq->lock, irqflags);
+	aq->armed = false;
+	if (atomic_read(&aq->admin_state) < IONIC_ADMIN_KILLED)
+		queue_work(ionic_evt_workq, &aq->work);
+	spin_unlock_irqrestore(&aq->lock, irqflags);
+}
+
+static void ionic_rdma_admincq_event(struct ib_event *event, void *cq_context)
+{
+	struct ionic_aq *aq = cq_context;
+
+	ibdev_err(&aq->dev->ibdev, "admincq event %d\n", event->event);
+}
+
+static struct ionic_vcq *ionic_create_rdma_admincq(struct ionic_ibdev *dev,
+						   int
comp_vector)
+{
+	struct ib_cq_init_attr attr = {
+		.cqe = IONIC_AQ_DEPTH,
+		.comp_vector = comp_vector,
+	};
+	struct ionic_tbl_buf buf = {};
+	struct ionic_vcq *vcq;
+	struct ionic_cq *cq;
+	int rc;
+
+	vcq = kzalloc(sizeof(*vcq), GFP_KERNEL);
+	if (!vcq)
+		return ERR_PTR(-ENOMEM);
+
+	vcq->ibcq.device = &dev->ibdev;
+	vcq->ibcq.comp_handler = ionic_rdma_admincq_comp;
+	vcq->ibcq.event_handler = ionic_rdma_admincq_event;
+	atomic_set(&vcq->ibcq.usecnt, 0);
+
+	vcq->udma_mask = 1;
+	cq = &vcq->cq[0];
+
+	rc = ionic_create_cq_common(vcq, &buf, &attr, NULL, NULL,
+				    NULL, NULL, 0);
+	if (rc)
+		goto err_init;
+
+	rc = ionic_rdma_queue_devcmd(dev, &cq->q, cq->cqid, cq->eqid,
+				     IONIC_CMD_RDMA_CREATE_CQ);
+	if (rc)
+		goto err_cmd;
+
+	return vcq;
+
+err_cmd:
+	ionic_destroy_cq_common(dev, cq);
+err_init:
+	kfree(vcq);
+
+	return ERR_PTR(rc);
+}
+
+static struct ionic_aq *__ionic_create_rdma_adminq(struct ionic_ibdev *dev,
+						   u32 aqid, u32 cqid)
+{
+	struct ionic_aq *aq;
+	int rc;
+
+	aq = kzalloc(sizeof(*aq), GFP_KERNEL);
+	if (!aq)
+		return ERR_PTR(-ENOMEM);
+
+	atomic_set(&aq->admin_state, IONIC_ADMIN_KILLED);
+	aq->dev = dev;
+	aq->aqid = aqid;
+	aq->cqid = cqid;
+	spin_lock_init(&aq->lock);
+
+	rc = ionic_queue_init(&aq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH,
+			      ADMIN_WQE_STRIDE);
+	if (rc)
+		goto err_q;
+
+	ionic_queue_dbell_init(&aq->q, aq->aqid);
+
+	aq->q_wr = kcalloc((u32)aq->q.mask + 1, sizeof(*aq->q_wr), GFP_KERNEL);
+	if (!aq->q_wr) {
+		rc = -ENOMEM;
+		goto err_wr;
+	}
+
+	INIT_LIST_HEAD(&aq->wr_prod);
+	INIT_LIST_HEAD(&aq->wr_post);
+
+	INIT_WORK(&aq->work, ionic_admin_work);
+	aq->armed = false;
+
+	return aq;
+
+err_wr:
+	ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev);
+err_q:
+	kfree(aq);
+
+	return ERR_PTR(rc);
+}
+
+static void __ionic_destroy_rdma_adminq(struct ionic_ibdev *dev,
+					struct ionic_aq *aq)
+{
+	ionic_queue_destroy(&aq->q, dev->lif_cfg.hwdev);
+	kfree(aq);
+}
+
+static struct ionic_aq
*ionic_create_rdma_adminq(struct ionic_ibdev *dev,
+						  u32 aqid, u32 cqid)
+{
+	struct ionic_aq *aq;
+	int rc;
+
+	aq = __ionic_create_rdma_adminq(dev, aqid, cqid);
+	if (IS_ERR(aq))
+		return aq;
+
+	rc = ionic_rdma_queue_devcmd(dev, &aq->q, aq->aqid, aq->cqid,
+				     IONIC_CMD_RDMA_CREATE_ADMINQ);
+	if (rc)
+		goto err_cmd;
+
+	return aq;
+
+err_cmd:
+	__ionic_destroy_rdma_adminq(dev, aq);
+
+	return ERR_PTR(rc);
+}
+
+static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
+{
+	unsigned long irqflags;
+	bool do_flush = false;
+	int i;
+
+	/* Mark AQs for drain and flush the QPs while irq is disabled */
+	local_irq_save(irqflags);
+
+	/* Mark the admin queue, flushing at most once */
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		struct ionic_aq *aq = dev->aq_vec[i];
+
+		spin_lock(&aq->lock);
+		if (atomic_read(&aq->admin_state) != IONIC_ADMIN_KILLED) {
+			atomic_set(&aq->admin_state, IONIC_ADMIN_KILLED);
+			/* Flush incomplete admin commands */
+			ionic_admin_poll_locked(aq);
+			do_flush = true;
+		}
+		spin_unlock(&aq->lock);
+	}
+
+	local_irq_restore(irqflags);
+
+	/* Post a fatal event if requested */
+	if (fatal_path) {
+		struct ib_event ev;
+
+		ev.device = &dev->ibdev;
+		ev.element.port_num = 1;
+		ev.event = IB_EVENT_DEVICE_FATAL;
+
+		ib_dispatch_event(&ev);
+	}
+
+	atomic_set(&dev->admin_state, IONIC_ADMIN_KILLED);
+}
+
+void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path)
+{
+	enum ionic_admin_state old_state;
+	unsigned long irqflags = 0;
+	int i, rc;
+
+	if (!dev->aq_vec)
+		return;
+
+	/*
+	 * Admin queues are transitioned from active to paused to killed state.
+	 * When in paused state, no new commands are issued to the device,
+	 * nor are any completed locally. After resetting the lif, it will be
+	 * safe to resume the rdma admin queues in the killed state. Commands
+	 * will not be issued to the device, but will complete locally with status
+	 * IONIC_ADMIN_KILLED.
Handling completion will ensure that creating or
+	 * modifying resources fails, but destroying resources succeeds.
+	 * If there was a failure resetting the lif using this strategy,
+	 * then the state of the device is unknown.
+	 */
+	old_state = atomic_cmpxchg(&dev->admin_state, IONIC_ADMIN_ACTIVE,
+				   IONIC_ADMIN_PAUSED);
+	if (old_state != IONIC_ADMIN_ACTIVE)
+		return;
+
+	/* Pause all the AQs */
+	local_irq_save(irqflags);
+	for (i = 0; i < dev->lif_cfg.aq_count; i++) {
+		struct ionic_aq *aq = dev->aq_vec[i];
+
+		spin_lock(&aq->lock);
+		/* pause rdma admin queues to reset lif */
+		if (atomic_read(&aq->admin_state) == IONIC_ADMIN_ACTIVE)
+			atomic_set(&aq->admin_state, IONIC_ADMIN_PAUSED);
+		spin_unlock(&aq->lock);
+	}
+	local_irq_restore(irqflags);
+
+	rc = ionic_rdma_reset_devcmd(dev);
+	if (unlikely(rc)) {
+		ibdev_err(&dev->ibdev, "failed to reset rdma %d\n", rc);
+		ionic_request_rdma_reset(dev->lif_cfg.lif);
+	}
+
+	ionic_kill_ibdev(dev, fatal_path);
+}
+
+static void ionic_reset_work(struct work_struct *ws)
+{
+	struct ionic_ibdev *dev =
+		container_of(ws, struct ionic_ibdev, reset_work);
+
+	ionic_kill_rdma_admin(dev, true);
+}
+
+static bool ionic_next_eqe(struct ionic_eq *eq, struct ionic_v1_eqe *eqe)
+{
+	struct ionic_v1_eqe *qeqe;
+	bool color;
+
+	qeqe = ionic_queue_at_prod(&eq->q);
+	color = ionic_v1_eqe_color(qeqe);
+
+	/* cons is color for eq */
+	if (eq->q.cons != color)
+		return false;
+
+	/* Prevent out-of-order reads of the EQE */
+	dma_rmb();
+
+	ibdev_dbg(&eq->dev->ibdev, "poll eq prod %u\n", eq->q.prod);
+	print_hex_dump_debug("eqe ", DUMP_PREFIX_OFFSET, 16, 1,
+			     qeqe, BIT(eq->q.stride_log2), true);
+	*eqe = *qeqe;
+
+	return true;
+}
+
+static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code)
+{
+	unsigned long irqflags;
+	struct ib_event ibev;
+	struct ionic_cq *cq;
+
+	xa_lock_irqsave(&dev->cq_tbl, irqflags);
+	cq = xa_load(&dev->cq_tbl, cqid);
+	if (cq)
+		kref_get(&cq->cq_kref);
+
	xa_unlock_irqrestore(&dev->cq_tbl, irqflags);
+
+	if (!cq) {
+		ibdev_dbg(&dev->ibdev,
+			  "missing cqid %#x code %u\n", cqid, code);
+		return;
+	}
+
+	switch (code) {
+	case IONIC_V1_EQE_CQ_NOTIFY:
+		if (cq->vcq->ibcq.comp_handler)
+			cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq,
+						   cq->vcq->ibcq.cq_context);
+		break;
+
+	case IONIC_V1_EQE_CQ_ERR:
+		if (cq->vcq->ibcq.event_handler) {
+			ibev.event = IB_EVENT_CQ_ERR;
+			ibev.device = &dev->ibdev;
+			ibev.element.cq = &cq->vcq->ibcq;
+
+			cq->vcq->ibcq.event_handler(&ibev,
+						    cq->vcq->ibcq.cq_context);
+		}
+		break;
+
+	default:
+		ibdev_dbg(&dev->ibdev,
+			  "unrecognized cqid %#x code %u\n", cqid, code);
+		break;
+	}
+
+	kref_put(&cq->cq_kref, ionic_cq_complete);
+}
+
+static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
+{
+	struct ionic_ibdev *dev = eq->dev;
+	struct ionic_v1_eqe eqe;
+	u16 npolled = 0;
+	u8 type, code;
+	u32 evt, qid;
+
+	while (npolled < budget) {
+		if (!ionic_next_eqe(eq, &eqe))
+			break;
+
+		ionic_queue_produce(&eq->q);
+
+		/* cons is color for eq */
+		eq->q.cons = ionic_color_wrap(eq->q.prod, eq->q.cons);
+
+		++npolled;
+
+		evt = ionic_v1_eqe_evt(&eqe);
+		type = ionic_v1_eqe_evt_type(evt);
+		code = ionic_v1_eqe_evt_code(evt);
+		qid = ionic_v1_eqe_evt_qid(evt);
+
+		switch (type) {
+		case IONIC_V1_EQE_TYPE_CQ:
+			ionic_cq_event(dev, qid, code);
+			break;
+
+		default:
+			ibdev_dbg(&dev->ibdev,
+				  "unknown event %#x type %u\n", evt, type);
+		}
+	}
+
+	return npolled;
+}
+
+static void ionic_poll_eq_work(struct work_struct *work)
+{
+	struct ionic_eq *eq = container_of(work, struct ionic_eq, work);
+	u32 npolled;
+
+	if (unlikely(!eq->enable) || WARN_ON(eq->armed))
+		return;
+
+	npolled = ionic_poll_eq(eq, IONIC_EQ_WORK_BUDGET);
+	if (npolled == IONIC_EQ_WORK_BUDGET) {
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   npolled, 0);
+		queue_work(ionic_evt_workq, &eq->work);
+	} else {
+		xchg(&eq->armed, true);
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   0,
+				   IONIC_INTR_CRED_UNMASK);
+	}
+}
+
+static irqreturn_t ionic_poll_eq_isr(int irq, void *eqptr)
+{
+	struct ionic_eq *eq = eqptr;
+	bool was_armed;
+	u32 npolled;
+
+	was_armed = xchg(&eq->armed, false);
+
+	if (unlikely(!eq->enable) || !was_armed)
+		return IRQ_HANDLED;
+
+	npolled = ionic_poll_eq(eq, IONIC_EQ_ISR_BUDGET);
+	if (npolled == IONIC_EQ_ISR_BUDGET) {
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   npolled, 0);
+		queue_work(ionic_evt_workq, &eq->work);
+	} else {
+		xchg(&eq->armed, true);
+		ionic_intr_credits(eq->dev->lif_cfg.intr_ctrl, eq->intr,
+				   0, IONIC_INTR_CRED_UNMASK);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static struct ionic_eq *ionic_create_eq(struct ionic_ibdev *dev, int eqid)
+{
+	struct ionic_intr_info intr_obj = { };
+	struct ionic_eq *eq;
+	int rc;
+
+	eq = kzalloc(sizeof(*eq), GFP_KERNEL);
+	if (!eq)
+		return ERR_PTR(-ENOMEM);
+
+	eq->dev = dev;
+
+	rc = ionic_queue_init(&eq->q, dev->lif_cfg.hwdev, IONIC_EQ_DEPTH,
+			      sizeof(struct ionic_v1_eqe));
+	if (rc)
+		goto err_q;
+
+	eq->eqid = eqid;
+
+	eq->armed = true;
+	eq->enable = false;
+	INIT_WORK(&eq->work, ionic_poll_eq_work);
+
+	rc = ionic_intr_alloc(dev->lif_cfg.lif, &intr_obj);
+	if (rc < 0)
+		goto err_intr;
+
+	eq->irq = intr_obj.vector;
+	eq->intr = intr_obj.index;
+
+	ionic_queue_dbell_init(&eq->q, eq->eqid);
+
+	/* cons is color for eq */
+	eq->q.cons = true;
+
+	snprintf(eq->name, sizeof(eq->name), "%s-%d-%d-eq",
+		 "ionr", dev->lif_cfg.lif_index, eq->eqid);
+
+	ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_SET);
+	ionic_intr_mask_assert(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_SET);
+	ionic_intr_coal_init(dev->lif_cfg.intr_ctrl, eq->intr, 0);
+	ionic_intr_clean(dev->lif_cfg.intr_ctrl, eq->intr);
+
+	eq->enable = true;
+
+	rc = request_irq(eq->irq, ionic_poll_eq_isr, 0, eq->name, eq);
+	if (rc)
+		goto err_irq;
+
+	rc = ionic_rdma_queue_devcmd(dev, &eq->q, eq->eqid, eq->intr,
+				     IONIC_CMD_RDMA_CREATE_EQ);
+	if (rc)
+		goto err_cmd;
+
+	ionic_intr_mask(dev->lif_cfg.intr_ctrl, eq->intr, IONIC_INTR_MASK_CLEAR);
+
+	return eq;
+
+err_cmd:
+	eq->enable = false;
+	free_irq(eq->irq, eq);
+	flush_work(&eq->work);
+err_irq:
+	ionic_intr_free(dev->lif_cfg.lif, eq->intr);
+err_intr:
+	ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev);
+err_q:
+	kfree(eq);
+
+	return ERR_PTR(rc);
+}
+
+static void ionic_destroy_eq(struct ionic_eq *eq)
+{
+	struct ionic_ibdev *dev = eq->dev;
+
+	eq->enable = false;
+	free_irq(eq->irq, eq);
+	flush_work(&eq->work);
+
+	ionic_intr_free(dev->lif_cfg.lif, eq->intr);
+	ionic_queue_destroy(&eq->q, dev->lif_cfg.hwdev);
+	kfree(eq);
+}
+
+int ionic_create_rdma_admin(struct ionic_ibdev *dev)
+{
+	int eq_i = 0, aq_i = 0, rc = 0;
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	dev->eq_vec = NULL;
+	dev->aq_vec = NULL;
+
+	INIT_WORK(&dev->reset_work, ionic_reset_work);
+	INIT_DELAYED_WORK(&dev->admin_dwork, ionic_admin_dwork);
+	atomic_set(&dev->admin_state, IONIC_ADMIN_KILLED);
+
+	if (dev->lif_cfg.aq_count > IONIC_AQ_COUNT) {
+		ibdev_dbg(&dev->ibdev, "limiting adminq count to %d\n",
+			  IONIC_AQ_COUNT);
+		dev->lif_cfg.aq_count = IONIC_AQ_COUNT;
+	}
+
+	if (dev->lif_cfg.eq_count > IONIC_EQ_COUNT) {
+		dev_dbg(&dev->ibdev.dev, "limiting eventq count to %d\n",
+			IONIC_EQ_COUNT);
+		dev->lif_cfg.eq_count = IONIC_EQ_COUNT;
+	}
+
+	/* need at least two eq and one aq */
+	if (dev->lif_cfg.eq_count < IONIC_EQ_COUNT_MIN ||
+	    dev->lif_cfg.aq_count < IONIC_AQ_COUNT_MIN) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	dev->eq_vec = kmalloc_array(dev->lif_cfg.eq_count, sizeof(*dev->eq_vec),
+				    GFP_KERNEL);
+	if (!dev->eq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	for (eq_i = 0; eq_i < dev->lif_cfg.eq_count; ++eq_i) {
+		eq = ionic_create_eq(dev, eq_i + dev->lif_cfg.eq_base);
+		if (IS_ERR(eq)) {
+			rc = PTR_ERR(eq);
+
+			if (eq_i < IONIC_EQ_COUNT_MIN) {
+				ibdev_err(&dev->ibdev,
+					  "fail create eq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer eq than device supports */
+			ibdev_dbg(&dev->ibdev, "eq count %d want %d rc %d\n",
+				  eq_i, dev->lif_cfg.eq_count, rc);
+
+			rc = 0;
+			break;
+		}
+
+		dev->eq_vec[eq_i] = eq;
+	}
+
+	dev->lif_cfg.eq_count = eq_i;
+
+	dev->aq_vec = kmalloc_array(dev->lif_cfg.aq_count, sizeof(*dev->aq_vec),
+				    GFP_KERNEL);
+	if (!dev->aq_vec) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* Create one CQ per AQ */
+	for (aq_i = 0; aq_i < dev->lif_cfg.aq_count; ++aq_i) {
+		vcq = ionic_create_rdma_admincq(dev, aq_i % eq_i);
+		if (IS_ERR(vcq)) {
+			rc = PTR_ERR(vcq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create acq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "acq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		aq = ionic_create_rdma_adminq(dev, aq_i + dev->lif_cfg.aq_base,
+					      vcq->cq[0].cqid);
+		if (IS_ERR(aq)) {
+			/* Clean up the dangling CQ */
+			ionic_destroy_cq_common(dev, &vcq->cq[0]);
+			kfree(vcq);
+
+			rc = PTR_ERR(aq);
+
+			if (!aq_i) {
+				ibdev_err(&dev->ibdev,
+					  "failed to create aq %d\n", rc);
+				goto out;
+			}
+
+			/* ok, just fewer adminq than device supports */
+			ibdev_dbg(&dev->ibdev, "aq count %d want %d rc %d\n",
+				  aq_i, dev->lif_cfg.aq_count, rc);
+			break;
+		}
+
+		vcq->ibcq.cq_context = aq;
+		aq->vcq = vcq;
+
+		atomic_set(&aq->admin_state, IONIC_ADMIN_ACTIVE);
+		dev->aq_vec[aq_i] = aq;
+	}
+
+	atomic_set(&dev->admin_state, IONIC_ADMIN_ACTIVE);
+out:
+	dev->lif_cfg.eq_count = eq_i;
+	dev->lif_cfg.aq_count = aq_i;
+
+	return rc;
+}
+
+void ionic_destroy_rdma_admin(struct ionic_ibdev *dev)
+{
+	struct ionic_vcq *vcq;
+	struct ionic_aq *aq;
+	struct ionic_eq *eq;
+
+	/*
+	 * Killing the admin before destroy makes sure all admin and
+	 * completions are flushed. admin_state = IONIC_ADMIN_KILLED
+	 * stops queueing up further works.
+	 */
+	cancel_delayed_work_sync(&dev->admin_dwork);
+	cancel_work_sync(&dev->reset_work);
+
+	if (dev->aq_vec) {
+		while (dev->lif_cfg.aq_count > 0) {
+			aq = dev->aq_vec[--dev->lif_cfg.aq_count];
+			vcq = aq->vcq;
+
+			cancel_work_sync(&aq->work);
+
+			__ionic_destroy_rdma_adminq(dev, aq);
+			if (vcq) {
+				ionic_destroy_cq_common(dev, &vcq->cq[0]);
+				kfree(vcq);
+			}
+		}
+
+		kfree(dev->aq_vec);
+	}
+
+	if (dev->eq_vec) {
+		while (dev->lif_cfg.eq_count > 0) {
+			eq = dev->eq_vec[--dev->lif_cfg.eq_count];
+			ionic_destroy_eq(eq);
+		}
+
+		kfree(dev->eq_vec);
+	}
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
new file mode 100644
index 000000000000..e1130573bd39
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include "ionic_ibdev.h"
+
+static int ionic_validate_qdesc(struct ionic_qdesc *q)
+{
+	if (!q->addr || !q->size || !q->mask ||
+	    !q->depth_log2 || !q->stride_log2)
+		return -EINVAL;
+
+	if (q->addr & (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	if (q->mask != BIT(q->depth_log2) - 1)
+		return -EINVAL;
+
+	if (q->size < BIT_ULL(q->depth_log2 + q->stride_log2))
+		return -EINVAL;
+
+	return 0;
+}
+
+static u32 ionic_get_eqid(struct ionic_ibdev *dev, u32 comp_vector, u8 udma_idx)
+{
+	/* EQ per vector per udma, and the first eqs reserved for async events.
+	 * The rest of the vectors can be requested for completions.
+	 */
+	u32 comp_vec_count = dev->lif_cfg.eq_count / dev->lif_cfg.udma_count - 1;
+
+	return (comp_vector % comp_vec_count + 1) * dev->lif_cfg.udma_count + udma_idx;
+}
+
+static int ionic_get_cqid(struct ionic_ibdev *dev, u32 *cqid, u8 udma_idx)
+{
+	unsigned int size, base, bound;
+	int rc;
+
+	size = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	base = size * udma_idx;
+	bound = base + size;
+
+	rc = ionic_resid_get_shared(&dev->inuse_cqid, base, bound);
+	if (rc >= 0) {
+		/* cq_base is zero or a multiple of two queue groups */
+		*cqid = dev->lif_cfg.cq_base +
+			ionic_bitid_to_qid(rc, dev->lif_cfg.udma_qgrp_shift,
+					   dev->half_cqid_udma_shift);
+
+		rc = 0;
+	}
+
+	return rc;
+}
+
+static void ionic_put_cqid(struct ionic_ibdev *dev, u32 cqid)
+{
+	u32 bitid = ionic_qid_to_bitid(cqid - dev->lif_cfg.cq_base,
+				       dev->lif_cfg.udma_qgrp_shift,
+				       dev->half_cqid_udma_shift);
+
+	ionic_resid_put(&dev->inuse_cqid, bitid);
+}
+
+int ionic_create_cq_common(struct ionic_vcq *vcq,
+			   struct ionic_tbl_buf *buf,
+			   const struct ib_cq_init_attr *attr,
+			   struct ionic_ctx *ctx,
+			   struct ib_udata *udata,
+			   struct ionic_qdesc *req_cq,
+			   __u32 *resp_cqid,
+			   int udma_idx)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(vcq->ibcq.device);
+	struct ionic_cq *cq = &vcq->cq[udma_idx];
+	void *entry;
+	int rc;
+
+	cq->vcq = vcq;
+
+	if (attr->cqe < 1 || attr->cqe + IONIC_CQ_GRACE > 0xffff) {
+		rc = -EINVAL;
+		goto err_args;
+	}
+
+	rc = ionic_get_cqid(dev, &cq->cqid, udma_idx);
+	if (rc)
+		goto err_args;
+
+	cq->eqid = ionic_get_eqid(dev, attr->comp_vector, udma_idx);
+
+	spin_lock_init(&cq->lock);
+	INIT_LIST_HEAD(&cq->poll_sq);
+	INIT_LIST_HEAD(&cq->flush_sq);
+	INIT_LIST_HEAD(&cq->flush_rq);
+
+	if (udata) {
+		rc = ionic_validate_qdesc(req_cq);
+		if (rc)
+			goto err_qdesc;
+
+		cq->umem = ib_umem_get(&dev->ibdev, req_cq->addr, req_cq->size,
+				       IB_ACCESS_LOCAL_WRITE);
+		if (IS_ERR(cq->umem)) {
+			rc = PTR_ERR(cq->umem);
+			goto err_qdesc;
+		}
+
+		cq->q.ptr = NULL;
+		cq->q.size = req_cq->size;
+		cq->q.mask = req_cq->mask;
+		cq->q.depth_log2 = req_cq->depth_log2;
+		cq->q.stride_log2 = req_cq->stride_log2;
+
+		*resp_cqid = cq->cqid;
+	} else {
+		rc = ionic_queue_init(&cq->q, dev->lif_cfg.hwdev,
+				      attr->cqe + IONIC_CQ_GRACE,
+				      sizeof(struct ionic_v1_cqe));
+		if (rc)
+			goto err_q_init;
+
+		ionic_queue_dbell_init(&cq->q, cq->cqid);
+		cq->color = true;
+		cq->credit = cq->q.mask;
+	}
+
+	rc = ionic_pgtbl_init(dev, buf, cq->umem, cq->q.dma, 1, PAGE_SIZE);
+	if (rc)
+		goto err_pgtbl_init;
+
+	init_completion(&cq->cq_rel_comp);
+	kref_init(&cq->cq_kref);
+
+	entry = xa_store_irq(&dev->cq_tbl, cq->cqid, cq, GFP_KERNEL);
+	if (entry) {
+		if (!xa_is_err(entry))
+			rc = -EINVAL;
+		else
+			rc = xa_err(entry);
+
+		goto err_xa;
+	}
+
+	return 0;
+
+err_xa:
+	ionic_pgtbl_unbuf(dev, buf);
+err_pgtbl_init:
+	if (!udata)
+		ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev);
+err_q_init:
+	if (cq->umem)
+		ib_umem_release(cq->umem);
+err_qdesc:
+	ionic_put_cqid(dev, cq->cqid);
+err_args:
+	cq->vcq = NULL;
+
+	return rc;
+}
+
+void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq)
+{
+	if (!cq->vcq)
+		return;
+
+	xa_erase_irq(&dev->cq_tbl, cq->cqid);
+
+	kref_put(&cq->cq_kref, ionic_cq_complete);
+	wait_for_completion(&cq->cq_rel_comp);
+
+	if (cq->umem)
+		ib_umem_release(cq->umem);
+	else
+		ionic_queue_destroy(&cq->q, dev->lif_cfg.hwdev);
+
+	ionic_put_cqid(dev, cq->cqid);
+
+	cq->vcq = NULL;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
new file mode 100644
index 000000000000..44ec69487519
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+ */
+
+#ifndef _IONIC_FW_H_
+#define _IONIC_FW_H_
+
+#include
+
+/* completion queue v1 cqe */
+struct ionic_v1_cqe {
+	union {
+		struct {
+			__be16 cmd_idx;
+			__u8 cmd_op;
+			__u8 rsvd[17];
+			__le16 old_sq_cindex;
+			__le16 old_rq_cq_cindex;
+		} admin;
+		struct {
+			__u64 wqe_id;
+			__be32 src_qpn_op;
+			__u8 src_mac[6];
+			__be16 vlan_tag;
+			__be32 imm_data_rkey;
+		} recv;
+		struct {
+			__u8 rsvd[4];
+			__be32 msg_msn;
+			__u8 rsvd2[8];
+			__u64 npg_wqe_id;
+		} send;
+	};
+	__be32 status_length;
+	__be32 qid_type_flags;
+};
+
+/* bits for cqe qid_type_flags */
+enum ionic_v1_cqe_qtf_bits {
+	IONIC_V1_CQE_COLOR = BIT(0),
+	IONIC_V1_CQE_ERROR = BIT(1),
+	IONIC_V1_CQE_TYPE_SHIFT = 5,
+	IONIC_V1_CQE_TYPE_MASK = 0x7,
+	IONIC_V1_CQE_QID_SHIFT = 8,
+
+	IONIC_V1_CQE_TYPE_ADMIN = 0,
+	IONIC_V1_CQE_TYPE_RECV = 1,
+	IONIC_V1_CQE_TYPE_SEND_MSN = 2,
+	IONIC_V1_CQE_TYPE_SEND_NPG = 3,
+};
+
+static inline bool ionic_v1_cqe_color(struct ionic_v1_cqe *cqe)
+{
+	return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_COLOR);
+}
+
+static inline bool ionic_v1_cqe_error(struct ionic_v1_cqe *cqe)
+{
+	return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR);
+}
+
+static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe)
+{
+	cqe->qid_type_flags |= cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT);
+}
+
+static inline u32 ionic_v1_cqe_qtf(struct ionic_v1_cqe *cqe)
+{
+	return be32_to_cpu(cqe->qid_type_flags);
+}
+
+static inline u8 ionic_v1_cqe_qtf_type(u32 qtf)
+{
+	return (qtf >> IONIC_V1_CQE_TYPE_SHIFT) & IONIC_V1_CQE_TYPE_MASK;
+}
+
+static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf)
+{
+	return qtf >> IONIC_V1_CQE_QID_SHIFT;
+}
+
+#define ADMIN_WQE_STRIDE 64
+#define ADMIN_WQE_HDR_LEN 4
+
+/* admin queue v1 wqe */
+struct ionic_v1_admin_wqe {
+	__u8 op;
+	__u8 rsvd;
+	__le16 len;
+
+	union {
+	} cmd;
+};
+
+/* admin queue v1 cqe status */
+enum ionic_v1_admin_status {
+	IONIC_V1_ASTS_OK,
+	IONIC_V1_ASTS_BAD_CMD,
+	IONIC_V1_ASTS_BAD_INDEX,
+	IONIC_V1_ASTS_BAD_STATE,
+	IONIC_V1_ASTS_BAD_TYPE,
+	IONIC_V1_ASTS_BAD_ATTR,
+	IONIC_V1_ASTS_MSG_TOO_BIG,
+};
+
+/* event queue v1 eqe */
+struct ionic_v1_eqe {
+	__be32 evt;
+};
+
+/* bits for cqe queue_type_flags */
+enum ionic_v1_eqe_evt_bits {
+	IONIC_V1_EQE_COLOR = BIT(0),
+	IONIC_V1_EQE_TYPE_SHIFT = 1,
+	IONIC_V1_EQE_TYPE_MASK = 0x7,
+	IONIC_V1_EQE_CODE_SHIFT = 4,
+	IONIC_V1_EQE_CODE_MASK = 0xf,
+	IONIC_V1_EQE_QID_SHIFT = 8,
+
+	/* cq events */
+	IONIC_V1_EQE_TYPE_CQ = 0,
+	/* cq normal events */
+	IONIC_V1_EQE_CQ_NOTIFY = 0,
+	/* cq error events */
+	IONIC_V1_EQE_CQ_ERR = 8,
+
+	/* qp and srq events */
+	IONIC_V1_EQE_TYPE_QP = 1,
+	/* qp normal events */
+	IONIC_V1_EQE_SRQ_LEVEL = 0,
+	IONIC_V1_EQE_SQ_DRAIN = 1,
+	IONIC_V1_EQE_QP_COMM_EST = 2,
+	IONIC_V1_EQE_QP_LAST_WQE = 3,
+	/* qp error events */
+	IONIC_V1_EQE_QP_ERR = 8,
+	IONIC_V1_EQE_QP_ERR_REQUEST = 9,
+	IONIC_V1_EQE_QP_ERR_ACCESS = 10,
+};
+
+static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe)
+{
+	return eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR);
+}
+
+static inline u32 ionic_v1_eqe_evt(struct ionic_v1_eqe *eqe)
+{
+	return be32_to_cpu(eqe->evt);
+}
+
+static inline u8 ionic_v1_eqe_evt_type(u32 evt)
+{
+	return (evt >> IONIC_V1_EQE_TYPE_SHIFT) & IONIC_V1_EQE_TYPE_MASK;
+}
+
+static inline u8 ionic_v1_eqe_evt_code(u32 evt)
+{
+	return (evt >> IONIC_V1_EQE_CODE_SHIFT) & IONIC_V1_EQE_CODE_MASK;
+}
+
+static inline u32 ionic_v1_eqe_evt_qid(u32 evt)
+{
+	return evt >> IONIC_V1_EQE_QID_SHIFT;
+}
+
+#endif /* _IONIC_FW_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index d79470dae13a..7710190ff65f 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -15,9 +15,41 @@
 MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
 MODULE_LICENSE("GPL");
 MODULE_IMPORT_NS("NET_IONIC");
 
+static void ionic_init_resids(struct ionic_ibdev *dev)
+{
+	ionic_resid_init(&dev->inuse_cqid,
+			 dev->lif_cfg.cq_count);
+	dev->half_cqid_udma_shift =
+		order_base_2(dev->lif_cfg.cq_count / dev->lif_cfg.udma_count);
+	ionic_resid_init(&dev->inuse_pdid, IONIC_MAX_PD);
+	ionic_resid_init(&dev->inuse_ahid, dev->lif_cfg.nahs_per_lif);
+	ionic_resid_init(&dev->inuse_mrid, dev->lif_cfg.nmrs_per_lif);
+	/* skip reserved lkey */
+	dev->next_mrkey = 1;
+	ionic_resid_init(&dev->inuse_qpid, dev->lif_cfg.qp_count);
+	/* skip reserved SMI and GSI qpids */
+	dev->half_qpid_udma_shift =
+		order_base_2(dev->lif_cfg.qp_count / dev->lif_cfg.udma_count);
+	ionic_resid_init(&dev->inuse_dbid, dev->lif_cfg.dbid_count);
+}
+
+static void ionic_destroy_resids(struct ionic_ibdev *dev)
+{
+	ionic_resid_destroy(&dev->inuse_cqid);
+	ionic_resid_destroy(&dev->inuse_pdid);
+	ionic_resid_destroy(&dev->inuse_ahid);
+	ionic_resid_destroy(&dev->inuse_mrid);
+	ionic_resid_destroy(&dev->inuse_qpid);
+	ionic_resid_destroy(&dev->inuse_dbid);
+}
+
 static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 {
+	ionic_kill_rdma_admin(dev, false);
 	ib_unregister_device(&dev->ibdev);
+	ionic_destroy_rdma_admin(dev);
+	ionic_destroy_resids(dev);
+	WARN_ON(!xa_empty(&dev->cq_tbl));
+	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 }
 
@@ -34,6 +66,18 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
 
+	xa_init_flags(&dev->cq_tbl, GFP_ATOMIC);
+
+	ionic_init_resids(dev);
+
+	rc = ionic_rdma_reset_devcmd(dev);
+	if (rc)
+		goto err_reset;
+
+	rc = ionic_create_rdma_admin(dev);
+	if (rc)
+		goto err_admin;
+
 	ibdev = &dev->ibdev;
 	ibdev->dev.parent = dev->lif_cfg.hwdev;
 
@@ -62,6 +106,11 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 err_register:
 err_admin:
+	ionic_kill_rdma_admin(dev, false);
+	ionic_destroy_rdma_admin(dev);
+err_reset:
+	ionic_destroy_resids(dev);
+	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 
 	return ERR_PTR(rc);
@@ -112,6 +161,10 @@ static int __init ionic_mod_init(void)
 {
 	int rc;
 
+	ionic_evt_workq = create_workqueue(DRIVER_NAME "-evt");
+	if (!ionic_evt_workq)
+		return -ENOMEM;
+
 	rc = auxiliary_driver_register(&ionic_aux_r_driver);
 	if (rc)
 		goto err_aux;
@@ -119,12 +172,15 @@ static int __init ionic_mod_init(void)
 	return 0;
 
 err_aux:
+	destroy_workqueue(ionic_evt_workq);
+
 	return rc;
 }
 
 static void __exit ionic_mod_exit(void)
 {
 	auxiliary_driver_unregister(&ionic_aux_r_driver);
+	destroy_workqueue(ionic_evt_workq);
 }
 
 module_init(ionic_mod_init);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index 224e5e74056d..490897628f41 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -4,15 +4,237 @@
 #ifndef _IONIC_IBDEV_H_
 #define _IONIC_IBDEV_H_
 
+#include
 #include
+
 #include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_queue.h"
+#include "ionic_res.h"
 
 #include "ionic_lif_cfg.h"
 
+/* Config knobs */
+#define IONIC_EQ_DEPTH 511
+#define IONIC_EQ_COUNT 32
+#define IONIC_AQ_DEPTH 63
+#define IONIC_AQ_COUNT 4
+#define IONIC_EQ_ISR_BUDGET 10
+#define IONIC_EQ_WORK_BUDGET 1000
+#define IONIC_MAX_PD 1024
+
+#define IONIC_CQ_GRACE 100
+
+struct ionic_aq;
+struct ionic_cq;
+struct ionic_eq;
+struct ionic_vcq;
+
+enum ionic_admin_state {
+	IONIC_ADMIN_ACTIVE, /* submitting admin commands to queue */
+	IONIC_ADMIN_PAUSED, /* not submitting, but may complete normally */
+	IONIC_ADMIN_KILLED, /* not submitting, locally completed */
+};
+
+enum ionic_admin_flags {
+	IONIC_ADMIN_F_BUSYWAIT = BIT(0),  /* Don't sleep */
+	IONIC_ADMIN_F_TEARDOWN = BIT(1),  /* In destroy path */
+	IONIC_ADMIN_F_INTERRUPT = BIT(2), /* Interruptible w/timeout */
+};
+
+struct ionic_qdesc {
+	__aligned_u64 addr;
+	__u32 size;
+	__u16 mask;
+	__u8 depth_log2;
+	__u8 stride_log2;
+};
+
+enum ionic_mmap_flag {
+	IONIC_MMAP_WC = BIT(0),
+};
+
+struct ionic_mmap_entry {
+	struct rdma_user_mmap_entry rdma_entry;
+	unsigned long size;
+	unsigned long pfn;
+	u8 mmap_flags;
+};
+
 struct ionic_ibdev {
 	struct ib_device ibdev;
 
 	struct ionic_lif_cfg lif_cfg;
+
+	struct xarray qp_tbl;
+	struct xarray cq_tbl;
+
+	struct ionic_resid_bits inuse_dbid;
+	struct ionic_resid_bits inuse_pdid;
+	struct ionic_resid_bits inuse_ahid;
+	struct ionic_resid_bits inuse_mrid;
+	struct ionic_resid_bits inuse_qpid;
+	struct ionic_resid_bits inuse_cqid;
+
+	u8 half_cqid_udma_shift;
+	u8 half_qpid_udma_shift;
+	u8 next_qpid_udma_idx;
+	u8 next_mrkey;
+
+	struct work_struct reset_work;
+	bool reset_posted;
+	u32 reset_cnt;
+
+	struct delayed_work admin_dwork;
+	struct ionic_aq **aq_vec;
+	atomic_t admin_state;
+
+	struct ionic_eq **eq_vec;
+};
+
+struct ionic_eq {
+	struct ionic_ibdev *dev;
+
+	u32 eqid;
+	u32 intr;
+
+	struct ionic_queue q;
+
+	bool armed;
+	bool enable;
+
+	struct work_struct work;
+
+	int irq;
+	char name[32];
+};
+
+struct ionic_admin_wr {
+	struct completion work;
+	struct list_head aq_ent;
+	struct ionic_v1_admin_wqe wqe;
+	struct ionic_v1_cqe cqe;
+	struct ionic_aq *aq;
+	int status;
+};
+
+struct ionic_admin_wr_q {
+	struct ionic_admin_wr *wr;
+	int wqe_strides;
+};
+
+struct ionic_aq {
+	struct ionic_ibdev *dev;
+	struct ionic_vcq *vcq;
+
+	struct work_struct work;
+
+	atomic_t admin_state;
+	unsigned long stamp;
+	bool armed;
+
+	u32 aqid;
+	u32 cqid;
+
+	spinlock_t lock; /* for posting */
+	struct ionic_queue q;
+	struct ionic_admin_wr_q *q_wr;
+	struct list_head wr_prod;
+	struct list_head wr_post;
+};
+
+struct ionic_ctx {
+	struct ib_ucontext ibctx;
+	u32 dbid;
+	struct rdma_user_mmap_entry *mmap_dbell;
+};
+
+struct ionic_tbl_buf {
+	u32 tbl_limit;
+	u32 tbl_pages;
+	size_t tbl_size;
+	__le64 *tbl_buf;
+	dma_addr_t tbl_dma;
+	u8 page_size_log2;
+};
+
+struct ionic_cq {
+	struct ionic_vcq *vcq;
+
+	u32 cqid;
+	u32 eqid;
+
+	spinlock_t lock; /* for polling */
+	struct list_head poll_sq;
+	bool flush;
+	struct list_head flush_sq;
+	struct list_head flush_rq;
+	struct list_head ibkill_flush_ent;
+
+	struct ionic_queue q;
+	bool color;
+	int credit;
+	u16 arm_any_prod;
+	u16 arm_sol_prod;
+
+	struct kref cq_kref;
+	struct completion cq_rel_comp;
+
+	/* infrequently accessed, keep at end */
+	struct ib_umem *umem;
+};
+
+struct ionic_vcq {
+	struct ib_cq ibcq;
+	struct ionic_cq cq[2];
+	u8 udma_mask;
+	u8 poll_idx;
+};
+
+static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev)
+{
+	return container_of(ibdev, struct ionic_ibdev, ibdev);
+}
+
+static inline void ionic_cq_complete(struct kref *kref)
+{
+	struct ionic_cq *cq = container_of(kref, struct ionic_cq, cq_kref);
+
+	complete(&cq->cq_rel_comp);
+}
+
+/* ionic_admin.c */
+extern struct workqueue_struct *ionic_evt_workq;
+void ionic_admin_post(struct ionic_ibdev *dev, struct ionic_admin_wr *wr);
+int ionic_admin_wait(struct ionic_ibdev *dev, struct ionic_admin_wr *wr,
+		     enum ionic_admin_flags);
+
+int ionic_rdma_reset_devcmd(struct ionic_ibdev *dev);
+
+int ionic_create_rdma_admin(struct ionic_ibdev *dev);
+void ionic_destroy_rdma_admin(struct ionic_ibdev *dev);
+void ionic_kill_rdma_admin(struct ionic_ibdev *dev, bool fatal_path);
+
+/* ionic_controlpath.c */
+int ionic_create_cq_common(struct ionic_vcq *vcq,
+			   struct ionic_tbl_buf *buf,
+			   const struct ib_cq_init_attr *attr,
+			   struct ionic_ctx *ctx,
+			   struct ib_udata *udata,
+			   struct ionic_qdesc *req_cq,
+			   __u32 *resp_cqid,
+			   int udma_idx);
+void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq);
+
+/* ionic_pgtbl.c */
+int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma);
+int ionic_pgtbl_init(struct ionic_ibdev *dev,
+		     struct ionic_tbl_buf *buf,
+		     struct ib_umem *umem,
+		     dma_addr_t dma,
+		     int limit,
+		     u64 page_size);
+void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf);
 #endif /* _IONIC_IBDEV_H_ */
diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
new file mode 100644
index 000000000000..11461f7642bc
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+#include
+
+#include "ionic_fw.h"
+#include "ionic_ibdev.h"
+
+int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma)
+{
+	if (unlikely(buf->tbl_pages == buf->tbl_limit))
+		return -ENOMEM;
+
+	if (buf->tbl_buf)
+		buf->tbl_buf[buf->tbl_pages] = cpu_to_le64(dma);
+	else
+		buf->tbl_dma = dma;
+
+	++buf->tbl_pages;
+
+	return 0;
+}
+
+static int ionic_tbl_buf_alloc(struct ionic_ibdev *dev,
+			       struct ionic_tbl_buf *buf)
+{
+	int rc;
+
+	buf->tbl_size = buf->tbl_limit * sizeof(*buf->tbl_buf);
+	buf->tbl_buf = kmalloc(buf->tbl_size, GFP_KERNEL);
+	if (!buf->tbl_buf)
+		return -ENOMEM;
+
+	buf->tbl_dma = dma_map_single(dev->lif_cfg.hwdev, buf->tbl_buf,
+				      buf->tbl_size, DMA_TO_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, buf->tbl_dma);
+	if (rc) {
+		kfree(buf->tbl_buf);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int ionic_pgtbl_umem(struct ionic_tbl_buf *buf, struct ib_umem *umem)
+{
+	struct ib_block_iter biter;
+	u64 page_dma;
+	int rc;
+
+	rdma_umem_for_each_dma_block(umem, &biter, BIT_ULL(buf->page_size_log2)) {
+		page_dma = rdma_block_iter_dma_address(&biter);
+		rc = ionic_pgtbl_page(buf, page_dma);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+void ionic_pgtbl_unbuf(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf)
+{
+	if (buf->tbl_buf)
+		dma_unmap_single(dev->lif_cfg.hwdev, buf->tbl_dma,
+				 buf->tbl_size, DMA_TO_DEVICE);
+
+	kfree(buf->tbl_buf);
+	memset(buf, 0, sizeof(*buf));
+}
+
+int ionic_pgtbl_init(struct ionic_ibdev *dev,
+		     struct ionic_tbl_buf *buf,
+		     struct ib_umem *umem,
+		     dma_addr_t dma,
+		     int limit,
+		     u64 page_size)
+{
+	int rc;
+
+	memset(buf, 0, sizeof(*buf));
+
+	if (umem) {
+		limit = ib_umem_num_dma_blocks(umem, page_size);
+		buf->page_size_log2 = order_base_2(page_size);
+	}
+
+	if (limit < 1)
+		return -EINVAL;
+
+	buf->tbl_limit = limit;
+
+	/* skip pgtbl if contiguous / direct translation */
+	if (limit > 1) {
+		rc = ionic_tbl_buf_alloc(dev, buf);
+		if (rc)
+			return rc;
+	}
+
+	if (umem)
+		rc = ionic_pgtbl_umem(buf, umem);
+	else
+		rc = ionic_pgtbl_page(buf, dma);
+
+	if (rc)
+		goto err_unbuf;
+
+	return 0;
+
+err_unbuf:
+	ionic_pgtbl_unbuf(dev, buf);
+	return rc;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_queue.c b/drivers/infiniband/hw/ionic/ionic_queue.c
new file mode 100644
index 000000000000..aa897ed2a412
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_queue.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#include
+
+#include "ionic_queue.h"
+
+int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev,
+		     int depth, size_t stride)
+{
+	if (depth < 0 || depth > 0xffff)
+		return -EINVAL;
+
+	if (stride == 0 || stride > 0x10000)
+		return -EINVAL;
+
+	if (depth == 0)
+		depth = 1;
+
+	q->depth_log2 = order_base_2(depth + 1);
+	q->stride_log2 = order_base_2(stride);
+
+	if (q->depth_log2 + q->stride_log2 < PAGE_SHIFT)
+		q->depth_log2 = PAGE_SHIFT - q->stride_log2;
+
+	if (q->depth_log2 > 16 || q->stride_log2 > 16)
+		return -EINVAL;
+
+	q->size = BIT_ULL(q->depth_log2 + q->stride_log2);
+	q->mask = BIT(q->depth_log2) - 1;
+
+	q->ptr = dma_alloc_coherent(dma_dev, q->size, &q->dma, GFP_KERNEL);
+	if (!q->ptr)
+		return -ENOMEM;
+
+	/* it will always be page aligned, but just to be sure...
+	 */
+	if (!PAGE_ALIGNED(q->ptr)) {
+		dma_free_coherent(dma_dev, q->size, q->ptr, q->dma);
+		return -ENOMEM;
+	}
+
+	q->prod = 0;
+	q->cons = 0;
+	q->dbell = 0;
+
+	return 0;
+}
+
+void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev)
+{
+	dma_free_coherent(dma_dev, q->size, q->ptr, q->dma);
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_queue.h b/drivers/infiniband/hw/ionic/ionic_queue.h
new file mode 100644
index 000000000000..d18020d4cad5
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/ionic_queue.h
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */
+
+#ifndef _IONIC_QUEUE_H_
+#define _IONIC_QUEUE_H_
+
+#include
+#include
+
+#define IONIC_MAX_DEPTH 0xffff
+#define IONIC_MAX_CQ_DEPTH 0xffff
+#define IONIC_CQ_RING_ARM IONIC_DBELL_RING_1
+#define IONIC_CQ_RING_SOL IONIC_DBELL_RING_2
+
+/**
+ * struct ionic_queue - Ring buffer used between device and driver
+ * @size: Size of the buffer, in bytes
+ * @dma: Dma address of the buffer
+ * @ptr: Buffer virtual address
+ * @prod: Driver position in the queue
+ * @cons: Device position in the queue
+ * @mask: Capacity of the queue, subtracting the hole
+ *        This value is equal to ((1 << depth_log2) - 1)
+ * @depth_log2: Log base two size depth of the queue
+ * @stride_log2: Log base two size of an element in the queue
+ * @dbell: Doorbell identifying bits
+ */
+struct ionic_queue {
+	size_t size;
+	dma_addr_t dma;
+	void *ptr;
+	u16 prod;
+	u16 cons;
+	u16 mask;
+	u8 depth_log2;
+	u8 stride_log2;
+	u64 dbell;
+};
+
+/**
+ * ionic_queue_init() - Initialize user space queue
+ * @q: Uninitialized queue structure
+ * @dma_dev: DMA device for mapping
+ * @depth: Depth of the queue
+ * @stride: Size of each element of the queue
+ *
+ * Return: status code
+ */
+int ionic_queue_init(struct ionic_queue *q, struct device *dma_dev,
+		     int depth, size_t stride);
+
+/**
+ * ionic_queue_destroy() - Destroy user space queue
+ * @q: Queue
+ *     structure
+ * @dma_dev: DMA device for mapping
+ *
+ * Return: status code
+ */
+void ionic_queue_destroy(struct ionic_queue *q, struct device *dma_dev);
+
+/**
+ * ionic_queue_empty() - Test if queue is empty
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: is empty
+ */
+static inline bool ionic_queue_empty(struct ionic_queue *q)
+{
+	return q->prod == q->cons;
+}
+
+/**
+ * ionic_queue_length() - Get the current length of the queue
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: length
+ */
+static inline u16 ionic_queue_length(struct ionic_queue *q)
+{
+	return (q->prod - q->cons) & q->mask;
+}
+
+/**
+ * ionic_queue_length_remaining() - Get the remaining length of the queue
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: length remaining
+ */
+static inline u16 ionic_queue_length_remaining(struct ionic_queue *q)
+{
+	return q->mask - ionic_queue_length(q);
+}
+
+/**
+ * ionic_queue_full() - Test if queue is full
+ * @q: Queue structure
+ *
+ * This is only valid for to-device queues.
+ *
+ * Return: is full
+ */
+static inline bool ionic_queue_full(struct ionic_queue *q)
+{
+	return q->mask == ionic_queue_length(q);
+}
+
+/**
+ * ionic_color_wrap() - Flip the color if prod is wrapped
+ * @prod: Queue index just after advancing
+ * @color: Queue color just prior to advancing the index
+ *
+ * Return: color after advancing the index
+ */
+static inline bool ionic_color_wrap(u16 prod, bool color)
+{
+	/* logical xor color with (prod == 0) */
+	return color != (prod == 0);
+}
+
+/**
+ * ionic_queue_at() - Get the element at the given index
+ * @q: Queue structure
+ * @idx: Index in the queue
+ *
+ * The index must be within the bounds of the queue. It is not checked here.
+ *
+ * Return: pointer to element at index
+ */
+static inline void *ionic_queue_at(struct ionic_queue *q, u16 idx)
+{
+	return q->ptr + ((unsigned long)idx << q->stride_log2);
+}
+
+/**
+ * ionic_queue_at_prod() - Get the element at the producer index
+ * @q: Queue structure
+ *
+ * Return: pointer to element at producer index
+ */
+static inline void *ionic_queue_at_prod(struct ionic_queue *q)
+{
+	return ionic_queue_at(q, q->prod);
+}
+
+/**
+ * ionic_queue_at_cons() - Get the element at the consumer index
+ * @q: Queue structure
+ *
+ * Return: pointer to element at consumer index
+ */
+static inline void *ionic_queue_at_cons(struct ionic_queue *q)
+{
+	return ionic_queue_at(q, q->cons);
+}
+
+/**
+ * ionic_queue_next() - Compute the next index
+ * @q: Queue structure
+ * @idx: Index
+ *
+ * Return: next index after idx
+ */
+static inline u16 ionic_queue_next(struct ionic_queue *q, u16 idx)
+{
+	return (idx + 1) & q->mask;
+}
+
+/**
+ * ionic_queue_produce() - Increase the producer index
+ * @q: Queue structure
+ *
+ * Caller must ensure that the queue is not full. It is not checked here.
+ */
+static inline void ionic_queue_produce(struct ionic_queue *q)
+{
+	q->prod = ionic_queue_next(q, q->prod);
+}
+
+/**
+ * ionic_queue_consume() - Increase the consumer index
+ * @q: Queue structure
+ *
+ * Caller must ensure that the queue is not empty. It is not checked here.
+ *
+ * This is only valid for to-device queues.
+ */
+static inline void ionic_queue_consume(struct ionic_queue *q)
+{
+	q->cons = ionic_queue_next(q, q->cons);
+}
+
+/**
+ * ionic_queue_consume_entries() - Increase the consumer index by entries
+ * @q: Queue structure
+ * @entries: Number of entries to increment
+ *
+ * Caller must ensure that the queue is not empty. It is not checked here.
+ *
+ * This is only valid for to-device queues.
+ */ +static inline void ionic_queue_consume_entries(struct ionic_queue *q, + u16 entries) +{ + q->cons =3D (q->cons + entries) & q->mask; +} + +/** + * ionic_queue_dbell_init() - Initialize doorbell bits for queue id + * @q: Queue structure + * @qid: Queue identifying number + */ +static inline void ionic_queue_dbell_init(struct ionic_queue *q, u32 qid) +{ + q->dbell =3D IONIC_DBELL_QID(qid); +} + +/** + * ionic_queue_dbell_val() - Get current doorbell update value + * @q: Queue structure + * + * Return: current doorbell update value + */ +static inline u64 ionic_queue_dbell_val(struct ionic_queue *q) +{ + return q->dbell | q->prod; +} + +#endif /* _IONIC_QUEUE_H_ */ diff --git a/drivers/infiniband/hw/ionic/ionic_res.h b/drivers/infiniband/h= w/ionic/ionic_res.h new file mode 100644 index 000000000000..46c8c584bd9a --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_res.h @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. 
*/ + +#ifndef _IONIC_RES_H_ +#define _IONIC_RES_H_ + +#include +#include + +/** + * struct ionic_resid_bits - Number allocator based on IDA + * + * @inuse: IDA handle + * @inuse_size: Highest ID limit for IDA + */ +struct ionic_resid_bits { + struct ida inuse; + unsigned int inuse_size; +}; + +/** + * ionic_resid_init() - Initialize a resid allocator + * @resid: Uninitialized resid allocator + * @size: Capacity of the allocator + * + * Return: Zero on success, or negative error number + */ +static inline void ionic_resid_init(struct ionic_resid_bits *resid, + unsigned int size) +{ + resid->inuse_size =3D size; + ida_init(&resid->inuse); +} + +/** + * ionic_resid_destroy() - Destroy a resid allocator + * @resid: Resid allocator + */ +static inline void ionic_resid_destroy(struct ionic_resid_bits *resid) +{ + ida_destroy(&resid->inuse); +} + +/** + * ionic_resid_get_shared() - Allocate an available shared resource id + * @resid: Resid allocator + * @min: Smallest valid resource id + * @size: One after largest valid resource id + * + * Return: Resource id, or negative error number + */ +static inline int ionic_resid_get_shared(struct ionic_resid_bits *resid, + unsigned int min, + unsigned int size) +{ + return ida_alloc_range(&resid->inuse, min, size - 1, GFP_KERNEL); +} + +/** + * ionic_resid_get() - Allocate an available resource id + * @resid: Resid allocator + * + * Return: Resource id, or negative error number + */ +static inline int ionic_resid_get(struct ionic_resid_bits *resid) +{ + return ionic_resid_get_shared(resid, 0, resid->inuse_size); +} + +/** + * ionic_resid_put() - Free a resource id + * @resid: Resid allocator + * @id: Resource id + */ +static inline void ionic_resid_put(struct ionic_resid_bits *resid, int id) +{ + ida_free(&resid->inuse, id); +} + +/** + * ionic_bitid_to_qid() - Transform a resource bit index into a queue id + * @bitid: Bit index + * @qgrp_shift: Log2 number of queues per queue group + * @half_qid_shift: Log2 of half the total 
number of queues + * + * Return: Queue id + * + * Udma-constrained queues (QPs and CQs) are associated with their udma by + * queue group. Even queue groups are associated with udma0, and odd queue + * groups with udma1. + * + * For allocating queue ids, we want to arrange the bits into two halves, + * with the even queue groups of udma0 in the lower half of the bitset, + * and the odd queue groups of udma1 in the upper half of the bitset. + * Then, one or two calls of find_next_zero_bit can examine all the bits + * for queues of an entire udma. + * + * For example, assuming eight queue groups with qgrp qids per group: + * + * bitid 0*qgrp..1*qgrp-1 : qid 0*qgrp..1*qgrp-1 + * bitid 1*qgrp..2*qgrp-1 : qid 2*qgrp..3*qgrp-1 + * bitid 2*qgrp..3*qgrp-1 : qid 4*qgrp..5*qgrp-1 + * bitid 3*qgrp..4*qgrp-1 : qid 6*qgrp..7*qgrp-1 + * bitid 4*qgrp..5*qgrp-1 : qid 1*qgrp..2*qgrp-1 + * bitid 5*qgrp..6*qgrp-1 : qid 3*qgrp..4*qgrp-1 + * bitid 6*qgrp..7*qgrp-1 : qid 5*qgrp..6*qgrp-1 + * bitid 7*qgrp..8*qgrp-1 : qid 7*qgrp..8*qgrp-1 + * + * There are three important ranges of bits in the qid. There is the udma + * bit "U" at qgrp_shift, which is the least significant bit of the group + * index, and determines which udma a queue is associated with. + * The bits of lesser significance we can call the idx bits "I", which are + * the index of the queue within the group. The bits of greater significa= nce + * we can call the grp bits "G", which are other bits of the group index t= hat + * do not determine the udma. Those bits are just rearranged in the bit i= ndex + * in the bitset. A bitid has the udma bit in the most significant place, + * then the grp bits, then the idx bits. + * + * bitid: 00000000000000 U GGG IIIIII + * qid: 00000000000000 GGG U IIIIII + * + * Transforming from bit index to qid, or from qid to bit index, can be + * accomplished by rearranging the bits by masking and shifting. 
+ */ +static inline u32 ionic_bitid_to_qid(u32 bitid, u8 qgrp_shift, + u8 half_qid_shift) +{ + u32 udma_bit =3D + (bitid & BIT(half_qid_shift)) >> (half_qid_shift - qgrp_shift); + u32 grp_bits =3D (bitid & GENMASK(half_qid_shift - 1, qgrp_shift)) << 1; + u32 idx_bits =3D bitid & (BIT(qgrp_shift) - 1); + + return grp_bits | udma_bit | idx_bits; +} + +/** + * ionic_qid_to_bitid() - Transform a queue id into a resource bit index + * @qid: queue index + * @qgrp_shift: Log2 number of queues per queue group + * @half_qid_shift: Log2 of half the total number of queues + * + * Return: Resource bit index + * + * This is the inverse of ionic_bitid_to_qid(). + */ +static inline u32 ionic_qid_to_bitid(u32 qid, u8 qgrp_shift, u8 half_qid_s= hift) +{ + u32 udma_bit =3D (qid & BIT(qgrp_shift)) << (half_qid_shift - qgrp_shift); + u32 grp_bits =3D (qid & GENMASK(half_qid_shift, qgrp_shift + 1)) >> 1; + u32 idx_bits =3D qid & (BIT(qgrp_shift) - 1); + + return udma_bit | grp_bits | idx_bits; +} +#endif /* _IONIC_RES_H_ */ --=20 2.43.0 From nobody Mon Oct 6 06:32:12 2025 Received: from NAM02-BN1-obe.outbound.protection.outlook.com (mail-bn1nam02on2044.outbound.protection.outlook.com [40.107.212.44]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 267FA23B605; Wed, 23 Jul 2025 17:32:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.212.44 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291975; cv=fail; b=B5VOido96/kid2Veh3WjW5i9OUvppuE/xB1iJb/fV9eiEV6wdtPPeaci2QwkVxLAKHNm5vsfXp9wQixZFBSkg4idANSSV3NekEJjjmktwmgMY4q2Fder+HzYhcTQQQ9LO4/SlMQ1lW4F/j0wO7QScElG+FFLhxeEGlZDR0IN9aY= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291975; c=relaxed/simple; bh=sLeQMZaiwYFvkU+VeESVobXFPbAsm1F0Rv3gKDLHOAQ=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: 
From: Abhijit Gangurde
Subject: [PATCH v4 10/14] RDMA/ionic: Register device ops for control path
Date: Wed, 23 Jul 2025 23:01:45 +0530
Message-ID: <20250723173149.2568776-11-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Implement device supported verb APIs for control path.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v3->v4
- Removed empty labels
- Used xa lock instead of rcu lock for qp and cq access and removed sync_rcu
v2->v3
- Registered main ib ops at once
- Removed uverbs_cmd_mask
- Used rdma_user_mmap_* APIs for mappings
- Removed rw locks around xarrays
- Fixed sparse checks

 drivers/infiniband/hw/ionic/ionic_admin.c       |  104 +
 .../infiniband/hw/ionic/ionic_controlpath.c     | 2490 +++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_fw.h          |  717 +++++
 drivers/infiniband/hw/ionic/ionic_ibdev.c       |   46 +-
 drivers/infiniband/hw/ionic/ionic_ibdev.h       |  249 +-
 drivers/infiniband/hw/ionic/ionic_pgtbl.c       |   19 +
 include/uapi/rdma/ionic-abi.h                   |  115 +
 7 files changed, 3731 insertions(+), 9 deletions(-)
 create mode 100644 include/uapi/rdma/ionic-abi.h

diff --git a/drivers/infiniband/hw/ionic/ionic_admin.c b/drivers/infiniband/hw/ionic/ionic_admin.c
index 845c03f6d9fb..1ba7a8ecc073 100644
--- a/drivers/infiniband/hw/ionic/ionic_admin.c
+++ b/drivers/infiniband/hw/ionic/ionic_admin.c
@@ -627,6 +627,44 @@ static struct ionic_aq *ionic_create_rdma_adminq(struct ionic_ibdev *dev,
 	return ERR_PTR(rc);
 }
 
+static void ionic_flush_qs(struct ionic_ibdev *dev)
+{
+	struct ionic_qp *qp, *qp_tmp;
+	struct ionic_cq *cq, *cq_tmp;
+	LIST_HEAD(flush_list);
+	unsigned long index;
+
+	WARN_ON(!irqs_disabled());
+
+	/* Flush qp send and recv */
+	xa_lock(&dev->qp_tbl);
+	xa_for_each(&dev->qp_tbl, index, qp) {
+		kref_get(&qp->qp_kref);
+		list_add_tail(&qp->ibkill_flush_ent, &flush_list);
+	}
+	xa_unlock(&dev->qp_tbl);
+
+	list_for_each_entry_safe(qp, qp_tmp, &flush_list, ibkill_flush_ent) {
+		ionic_flush_qp(dev, qp);
+		kref_put(&qp->qp_kref, ionic_qp_complete);
+		list_del(&qp->ibkill_flush_ent);
+	}
+
+	/* Notify completions */
+	xa_lock(&dev->cq_tbl);
+	xa_for_each(&dev->cq_tbl, index, cq) {
+		kref_get(&cq->cq_kref);
+		list_add_tail(&cq->ibkill_flush_ent, &flush_list);
+	}
+	xa_unlock(&dev->cq_tbl);
+
+	list_for_each_entry_safe(cq, cq_tmp, &flush_list, ibkill_flush_ent) {
+		ionic_notify_flush_cq(cq);
+		kref_put(&cq->cq_kref, ionic_cq_complete);
+		list_del(&cq->ibkill_flush_ent);
+	}
+}
+
 static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
 {
 	unsigned long irqflags;
@@ -650,6 +688,9 @@ static void ionic_kill_ibdev(struct ionic_ibdev *dev, bool fatal_path)
 		spin_unlock(&aq->lock);
 	}
 
+	if (do_flush)
+		ionic_flush_qs(dev);
+
 	local_irq_restore(irqflags);
 
 	/* Post a fatal event if requested */
@@ -789,6 +830,65 @@ static void ionic_cq_event(struct ionic_ibdev *dev, u32 cqid, u8 code)
 	kref_put(&cq->cq_kref, ionic_cq_complete);
 }
 
+static void ionic_qp_event(struct ionic_ibdev *dev, u32 qpid, u8 code)
+{
+	unsigned long irqflags;
+	struct ib_event ibev;
+	struct ionic_qp *qp;
+
+	xa_lock_irqsave(&dev->qp_tbl, irqflags);
+	qp = xa_load(&dev->qp_tbl, qpid);
+	if (qp)
+		kref_get(&qp->qp_kref);
+	xa_unlock_irqrestore(&dev->qp_tbl, irqflags);
+
+	if (!qp) {
+		ibdev_dbg(&dev->ibdev,
+			  "missing qpid %#x code %u\n", qpid, code);
+		return;
+	}
+
+	ibev.device = &dev->ibdev;
+	ibev.element.qp = &qp->ibqp;
+
+	switch (code) {
+	case IONIC_V1_EQE_SQ_DRAIN:
+		ibev.event = IB_EVENT_SQ_DRAINED;
+		break;
+
+	case IONIC_V1_EQE_QP_COMM_EST:
+		ibev.event = IB_EVENT_COMM_EST;
+		break;
+
+	case IONIC_V1_EQE_QP_LAST_WQE:
+		ibev.event = IB_EVENT_QP_LAST_WQE_REACHED;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR:
+		ibev.event = IB_EVENT_QP_FATAL;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_REQUEST:
+		ibev.event = IB_EVENT_QP_REQ_ERR;
+		break;
+
+	case IONIC_V1_EQE_QP_ERR_ACCESS:
+		ibev.event = IB_EVENT_QP_ACCESS_ERR;
+		break;
+
+	default:
+		ibdev_dbg(&dev->ibdev,
+			  "unrecognized qpid %#x code %u\n", qpid, code);
+		goto out;
+	}
+
+	if (qp->ibqp.event_handler)
+		qp->ibqp.event_handler(&ibev, qp->ibqp.qp_context);
+
+out:
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+}
+
 static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 {
 	struct ionic_ibdev *dev = eq->dev;
@@ -818,6 +918,10 @@ static u16 ionic_poll_eq(struct ionic_eq *eq, u16 budget)
 			ionic_cq_event(dev, qid, code);
 			break;
 
+		case IONIC_V1_EQE_TYPE_QP:
+			ionic_qp_event(dev, qid, code);
+			break;
+
 		default:
 			ibdev_dbg(&dev->ibdev,
 				  "unknown event %#x type %u\n", evt, type);
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
index e1130573bd39..52aeb5cf8279 100644
--- a/drivers/infiniband/hw/ionic/ionic_controlpath.c
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -1,8 +1,19 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
  */
 
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ionic_fw.h"
 #include "ionic_ibdev.h"
 
+#define ionic_set_ecn(tos)   (((tos) | 2u) & ~1u)
+#define ionic_clear_ecn(tos) ((tos) & ~3u)
+
 static int ionic_validate_qdesc(struct ionic_qdesc *q)
 {
 	if (!q->addr || !q->size || !q->mask ||
@@ -179,3 +190,2482 @@ void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq)
 
 	cq->vcq = NULL;
 }
+
+static int ionic_validate_qdesc_zero(struct ionic_qdesc *q)
+{
+	if (q->addr || q->size || q->mask || q->depth_log2 || q->stride_log2)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int ionic_get_pdid(struct ionic_ibdev *dev, u32 *pdid)
+{
+	int rc;
+
+	rc = ionic_resid_get(&dev->inuse_pdid);
+	if (rc < 0)
+		return rc;
+
+	*pdid = rc;
+	return 0;
+}
+
+static int ionic_get_ahid(struct ionic_ibdev *dev, u32 *ahid)
+{
+	int rc;
+
+	rc = ionic_resid_get(&dev->inuse_ahid);
+	if (rc < 0)
+		return rc;
+
+	*ahid = rc;
+	return 0;
+}
+
+static int ionic_get_mrid(struct ionic_ibdev *dev, u32 *mrid)
+{
+	int rc;
+
+	/* wrap to 1, skip reserved lkey */
+	rc = ionic_resid_get_shared(&dev->inuse_mrid, 1,
+				    dev->inuse_mrid.inuse_size);
+	if (rc < 0)
+		return rc;
+
+	*mrid = ionic_mrid(rc, dev->next_mrkey++);
+	return 0;
+}
+
+static int ionic_get_gsi_qpid(struct ionic_ibdev *dev, u32 *qpid)
+{
+	int rc = 0;
+
+	rc = ionic_resid_get_shared(&dev->inuse_qpid, IB_QPT_GSI, IB_QPT_GSI + 1);
+	if (rc < 0)
+		return rc;
+
+	*qpid = IB_QPT_GSI;
+	return 0;
+}
+
+static int ionic_get_qpid(struct ionic_ibdev *dev, u32 *qpid,
+			  u8 *udma_idx, u8 udma_mask)
+{
+	unsigned int size, base, bound;
+	int udma_i, udma_x, udma_ix;
+	int rc = -EINVAL;
+
+	udma_x = dev->next_qpid_udma_idx;
+
+	dev->next_qpid_udma_idx ^= dev->lif_cfg.udma_count - 1;
+
+	for (udma_i = 0; udma_i < dev->lif_cfg.udma_count; ++udma_i) {
+		udma_ix = udma_i ^ udma_x;
+
+		if (!(udma_mask & BIT(udma_ix)))
+			continue;
+
+		size = dev->lif_cfg.qp_count / dev->lif_cfg.udma_count;
+		base = size * udma_ix;
+		bound = base + size;
+
+		/* skip reserved SMI and GSI qpids in group zero */
+		if (!base)
+			base = 2;
+
+		rc = ionic_resid_get_shared(&dev->inuse_qpid, base, bound);
+		if (rc >= 0) {
+			*qpid = ionic_bitid_to_qid(rc,
+						   dev->lif_cfg.udma_qgrp_shift,
+						   dev->half_qpid_udma_shift);
+			*udma_idx = udma_ix;
+
+			rc = 0;
+			break;
+		}
+	}
+
+	return rc;
+}
+
+static int ionic_get_dbid(struct ionic_ibdev *dev, u32 *dbid, phys_addr_t *addr)
+{
+	int rc, dbpage_num;
+
+	/* wrap to 1, skip kernel reserved */
+	rc = ionic_resid_get_shared(&dev->inuse_dbid, 1,
+				    dev->inuse_dbid.inuse_size);
+	if (rc < 0)
+		return rc;
+
+	dbpage_num = (dev->lif_cfg.lif_hw_index * dev->lif_cfg.dbid_count) + rc;
+	*addr = dev->lif_cfg.db_phys + ((phys_addr_t)dbpage_num << PAGE_SHIFT);
+
+	*dbid = rc;
+
+	return 0;
+}
+
+static void ionic_put_pdid(struct ionic_ibdev *dev, u32 pdid)
+{
+	ionic_resid_put(&dev->inuse_pdid, pdid);
+}
+
+static void ionic_put_ahid(struct ionic_ibdev *dev, u32 ahid)
+{
+	ionic_resid_put(&dev->inuse_ahid, ahid);
+}
+
+static void ionic_put_mrid(struct ionic_ibdev *dev, u32 mrid)
+{
+	ionic_resid_put(&dev->inuse_mrid, ionic_mrid_index(mrid));
+}
+
+static void ionic_put_qpid(struct ionic_ibdev *dev, u32 qpid)
+{
+	u32 bitid = ionic_qid_to_bitid(qpid,
+				       dev->lif_cfg.udma_qgrp_shift,
+				       dev->half_qpid_udma_shift);
+
+	ionic_resid_put(&dev->inuse_qpid, bitid);
+}
+
+static void ionic_put_dbid(struct ionic_ibdev *dev, u32 dbid)
+{
+	ionic_resid_put(&dev->inuse_dbid, dbid);
+}
+
+static struct rdma_user_mmap_entry*
+ionic_mmap_entry_insert(struct ionic_ctx *ctx, unsigned long size,
+			unsigned long pfn, u8 mmap_flags, u64 *offset)
+{
+	struct ionic_mmap_entry *entry;
+	int rc;
+
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return NULL;
+
+	entry->size = size;
+	entry->pfn = pfn;
+	entry->mmap_flags = mmap_flags;
+
+	rc = rdma_user_mmap_entry_insert(&ctx->ibctx, &entry->rdma_entry,
+					 entry->size);
+	if (rc) {
+		kfree(entry);
+		return NULL;
+	}
+
+	if (offset)
+		*offset = rdma_user_mmap_get_offset(&entry->rdma_entry);
+
+	return &entry->rdma_entry;
+}
+
+int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct ionic_ctx_resp resp = {};
+	struct ionic_ctx_req req;
+	phys_addr_t db_phys = 0;
+	int rc;
+
+	rc = ib_copy_from_udata(&req, udata, sizeof(req));
+	if (rc)
+		return rc;
+
+	/* try to allocate dbid for user ctx */
+	rc = ionic_get_dbid(dev, &ctx->dbid, &db_phys);
+	if (rc < 0)
+		return rc;
+
+	ibdev_dbg(&dev->ibdev, "user space dbid %u\n", ctx->dbid);
+
+	ctx->mmap_dbell = ionic_mmap_entry_insert(ctx, PAGE_SIZE,
+						  PHYS_PFN(db_phys), 0, NULL);
+	if (!ctx->mmap_dbell) {
+		rc = -ENOMEM;
+		goto err_mmap_dbell;
+	}
+
+	resp.page_shift = PAGE_SHIFT;
+
+	resp.dbell_offset = db_phys & ~PAGE_MASK;
+
+	resp.version = dev->lif_cfg.rdma_version;
+	resp.qp_opcodes = dev->lif_cfg.qp_opcodes;
+	resp.admin_opcodes = dev->lif_cfg.admin_opcodes;
+
+	resp.sq_qtype = dev->lif_cfg.sq_qtype;
+	resp.rq_qtype = dev->lif_cfg.rq_qtype;
+	resp.cq_qtype = dev->lif_cfg.cq_qtype;
+	resp.admin_qtype = dev->lif_cfg.aq_qtype;
+	resp.max_stride = dev->lif_cfg.max_stride;
+	resp.max_spec = IONIC_SPEC_HIGH;
+
+	resp.udma_count = dev->lif_cfg.udma_count;
+	resp.expdb_mask = dev->lif_cfg.expdb_mask;
+
+	if (dev->lif_cfg.sq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_SQ;
+	if (dev->lif_cfg.rq_expdb)
+		resp.expdb_qtypes |= IONIC_EXPDB_RQ;
+
+	rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+	if (rc)
+		goto err_resp;
+
+	return 0;
+
+err_resp:
+	rdma_user_mmap_entry_remove(ctx->mmap_dbell);
+err_mmap_dbell:
+	ionic_put_dbid(dev, ctx->dbid);
+
+	return rc;
+}
+
+void ionic_dealloc_ucontext(struct ib_ucontext *ibctx)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+
+	rdma_user_mmap_entry_remove(ctx->mmap_dbell);
+	ionic_put_dbid(dev, ctx->dbid);
+}
+
+int ionic_mmap(struct ib_ucontext *ibctx, struct vm_area_struct *vma)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibctx->device);
+	struct ionic_ctx *ctx = to_ionic_ctx(ibctx);
+	struct rdma_user_mmap_entry *rdma_entry;
+	struct ionic_mmap_entry *ionic_entry;
+	int rc = 0;
+
+	rdma_entry = rdma_user_mmap_entry_get(&ctx->ibctx, vma);
+	if (!rdma_entry) {
+		ibdev_dbg(&dev->ibdev, "not found %#lx\n",
+			  vma->vm_pgoff << PAGE_SHIFT);
+		return -EINVAL;
+	}
+
+	ionic_entry = container_of(rdma_entry, struct ionic_mmap_entry,
+				   rdma_entry);
+
+	ibdev_dbg(&dev->ibdev, "writecombine? %d\n",
+		  ionic_entry->mmap_flags & IONIC_MMAP_WC);
+	if (ionic_entry->mmap_flags & IONIC_MMAP_WC)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	ibdev_dbg(&dev->ibdev, "remap st %#lx pf %#lx sz %#lx\n",
+		  vma->vm_start, ionic_entry->pfn, ionic_entry->size);
+	rc = rdma_user_mmap_io(&ctx->ibctx, vma, ionic_entry->pfn,
+			       ionic_entry->size, vma->vm_page_prot,
+			       rdma_entry);
+	if (rc)
+		ibdev_dbg(&dev->ibdev, "remap failed %d\n", rc);
+
+	rdma_user_mmap_entry_put(rdma_entry);
+	return rc;
+}
+
+void ionic_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
+{
+	struct ionic_mmap_entry *ionic_entry;
+
+	ionic_entry = container_of(rdma_entry, struct ionic_mmap_entry,
+				   rdma_entry);
+	kfree(ionic_entry);
+}
+
+int ionic_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+
+	return ionic_get_pdid(dev, &pd->pdid);
+}
+
+int ionic_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+
+	ionic_put_pdid(dev, pd->pdid);
+
+	return 0;
+}
+
+static int ionic_build_hdr(struct ionic_ibdev *dev,
+			   struct ib_ud_header *hdr,
+			   const struct rdma_ah_attr *attr,
+			   u16 sport, bool want_ecn)
+{
+	const struct ib_global_route *grh;
+	enum rdma_network_type net;
+	u16 vlan;
+	int rc;
+
+	if (attr->ah_flags != IB_AH_GRH)
+		return -EINVAL;
+	if (attr->type != RDMA_AH_ATTR_TYPE_ROCE)
+		return -EINVAL;
+
+	grh = rdma_ah_read_grh(attr);
+
+	rc = rdma_read_gid_l2_fields(grh->sgid_attr, &vlan, &hdr->eth.smac_h[0]);
+	if (rc)
+		return rc;
+
+	net = rdma_gid_attr_network_type(grh->sgid_attr);
+
+	rc = ib_ud_header_init(0,	/* no payload */
+			       0,	/* no lrh */
+			       1,	/* yes eth */
+			       vlan != 0xffff,
+			       0,	/* no grh */
+			       net == RDMA_NETWORK_IPV4 ? 4 : 6,
+			       1,	/* yes udp */
+			       0,	/* no imm */
+			       hdr);
+	if (rc)
+		return rc;
+
+	ether_addr_copy(hdr->eth.dmac_h, attr->roce.dmac);
+
+	if (net == RDMA_NETWORK_IPV4) {
+		hdr->eth.type = cpu_to_be16(ETH_P_IP);
+		hdr->ip4.frag_off = cpu_to_be16(0x4000);	/* don't fragment */
+		hdr->ip4.ttl = grh->hop_limit;
+		hdr->ip4.tot_len = cpu_to_be16(0xffff);
+		hdr->ip4.saddr =
+			*(const __be32 *)(grh->sgid_attr->gid.raw + 12);
+		hdr->ip4.daddr = *(const __be32 *)(grh->dgid.raw + 12);
+
+		if (want_ecn)
+			hdr->ip4.tos = ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->ip4.tos = ionic_clear_ecn(grh->traffic_class);
+	} else {
+		hdr->eth.type = cpu_to_be16(ETH_P_IPV6);
+		hdr->grh.flow_label = cpu_to_be32(grh->flow_label);
+		hdr->grh.hop_limit = grh->hop_limit;
+		hdr->grh.source_gid = grh->sgid_attr->gid;
+		hdr->grh.destination_gid = grh->dgid;
+
+		if (want_ecn)
+			hdr->grh.traffic_class =
+				ionic_set_ecn(grh->traffic_class);
+		else
+			hdr->grh.traffic_class =
+				ionic_clear_ecn(grh->traffic_class);
+	}
+
+	if (vlan != 0xffff) {
+		vlan |= rdma_ah_get_sl(attr) << VLAN_PRIO_SHIFT;
+		hdr->vlan.tag = cpu_to_be16(vlan);
+		hdr->vlan.type = hdr->eth.type;
+		hdr->eth.type = cpu_to_be16(ETH_P_8021Q);
+	}
+
+	hdr->udp.sport = cpu_to_be16(sport);
+	hdr->udp.dport = cpu_to_be16(ROCE_V2_UDP_DPORT);
+
+	return 0;
+}
+
+static void ionic_set_ah_attr(struct ionic_ibdev *dev,
+			      struct rdma_ah_attr *ah_attr,
+			      struct ib_ud_header *hdr,
+			      int sgid_index)
+{
+	u32 flow_label;
+	u16 vlan = 0;
+	u8 tos, ttl;
+
+	if (hdr->vlan_present)
+		vlan = be16_to_cpu(hdr->vlan.tag);
+
+	if (hdr->ipv4_present) {
+		flow_label = 0;
+		ttl = hdr->ip4.ttl;
+		tos = hdr->ip4.tos;
+		*(__be16 *)(hdr->grh.destination_gid.raw + 10) = cpu_to_be16(0xffff);
+		*(__be32 *)(hdr->grh.destination_gid.raw + 12) = hdr->ip4.daddr;
+	} else {
+		flow_label = be32_to_cpu(hdr->grh.flow_label);
+		ttl = hdr->grh.hop_limit;
+		tos = hdr->grh.traffic_class;
+	}
+
+	memset(ah_attr, 0, sizeof(*ah_attr));
+	ah_attr->type = RDMA_AH_ATTR_TYPE_ROCE;
+	if (hdr->eth_present)
+		memcpy(&ah_attr->roce.dmac, &hdr->eth.dmac_h, ETH_ALEN);
+	rdma_ah_set_sl(ah_attr, vlan >> VLAN_PRIO_SHIFT);
+	rdma_ah_set_port_num(ah_attr, 1);
+	rdma_ah_set_grh(ah_attr, NULL, flow_label, sgid_index, ttl, tos);
+	rdma_ah_set_dgid_raw(ah_attr, &hdr->grh.destination_gid);
+}
+
+static int ionic_create_ah_cmd(struct ionic_ibdev *dev,
+			       struct ionic_ah *ah,
+			       struct ionic_pd *pd,
+			       struct rdma_ah_attr *attr,
+			       u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_CREATE_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_CREATE_AH_IN_V1_LEN),
+			.cmd.create_ah = {
+				.pd_id = cpu_to_le32(pd->pdid),
+				.dbid_flags = cpu_to_le16(dev->lif_cfg.dbid),
+				.id_ver = cpu_to_le32(ah->ahid),
+			}
+		}
+	};
+	enum ionic_admin_flags admin_flags = 0;
+	dma_addr_t hdr_dma = 0;
+	void *hdr_buf;
+	gfp_t gfp = GFP_ATOMIC;
+	int rc, hdr_len = 0;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_CREATE_AH)
+		return -EBADRQC;
+
+	if (flags & RDMA_CREATE_AH_SLEEPABLE)
+		gfp = GFP_KERNEL;
+	else
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	rc = ionic_build_hdr(dev, &ah->hdr, attr, IONIC_ROCE_UDP_SPORT, false);
+	if (rc)
+		return rc;
+
+	if (ah->hdr.eth.type == cpu_to_be16(ETH_P_8021Q)) {
+		if (ah->hdr.vlan.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP;
+	} else {
+		if (ah->hdr.eth.type == cpu_to_be16(ETH_P_IP))
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP;
+		else
+			wr.wqe.cmd.create_ah.csum_profile =
+				IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP;
+	}
+
+	ah->sgid_index = rdma_ah_read_grh(attr)->sgid_index;
+
+	hdr_buf = kmalloc(PAGE_SIZE, gfp);
+	if (!hdr_buf)
+		return -ENOMEM;
+
+	hdr_len = ib_ud_header_pack(&ah->hdr, hdr_buf);
+	hdr_len -= IB_BTH_BYTES;
+	hdr_len -= IB_DETH_BYTES;
+	ibdev_dbg(&dev->ibdev, "roce packet header template\n");
+	print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1,
+			     hdr_buf, hdr_len, true);
+
+	hdr_dma = dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len,
+				 DMA_TO_DEVICE);
+
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma);
+	if (rc)
+		goto err_dma;
+
+	wr.wqe.cmd.create_ah.dma_addr = cpu_to_le64(hdr_dma);
+	wr.wqe.cmd.create_ah.length = cpu_to_le32(hdr_len);
+
+	ionic_admin_post(dev, &wr);
+	rc = ionic_admin_wait(dev, &wr, admin_flags);
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len,
+			 DMA_TO_DEVICE);
+err_dma:
+	kfree(hdr_buf);
+
+	return rc;
+}
+
+static int ionic_destroy_ah_cmd(struct ionic_ibdev *dev, u32 ahid, u32 flags)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_DESTROY_AH,
+			.len = cpu_to_le16(IONIC_ADMIN_DESTROY_AH_IN_V1_LEN),
+			.cmd.destroy_ah = {
+				.ah_id = cpu_to_le32(ahid),
+			},
+		}
+	};
+	enum ionic_admin_flags admin_flags = IONIC_ADMIN_F_TEARDOWN;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_DESTROY_AH)
+		return -EBADRQC;
+
+	if (!(flags & RDMA_CREATE_AH_SLEEPABLE))
+		admin_flags |= IONIC_ADMIN_F_BUSYWAIT;
+
+	ionic_admin_post(dev, &wr);
+	ionic_admin_wait(dev, &wr, admin_flags);
+
+	/* No host-memory resource is associated with ah, so it is ok
+	 * to "succeed" and complete this destroy ah on the host.
+	 */
+	return 0;
+}
+
+int ionic_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct rdma_ah_attr *attr = init_attr->ah_attr;
+	struct ionic_pd *pd = to_ionic_pd(ibah->pd);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+	struct ionic_ah_resp resp = {};
+	u32 flags = init_attr->flags;
+	int rc;
+
+	rc = ionic_get_ahid(dev, &ah->ahid);
+	if (rc)
+		return rc;
+
+	rc = ionic_create_ah_cmd(dev, ah, pd, attr, flags);
+	if (rc)
+		goto err_cmd;
+
+	if (udata) {
+		resp.ahid = ah->ahid;
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc)
+			goto err_resp;
+	}
+
+	return 0;
+
+err_resp:
+	ionic_destroy_ah_cmd(dev, ah->ahid, flags);
+err_cmd:
+	ionic_put_ahid(dev, ah->ahid);
+	return rc;
+}
+
+int ionic_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+
+	ionic_set_ah_attr(dev, ah_attr, &ah->hdr, ah->sgid_index);
+
+	return 0;
+}
+
+int ionic_destroy_ah(struct ib_ah *ibah, u32 flags)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibah->device);
+	struct ionic_ah *ah = to_ionic_ah(ibah);
+	int rc;
+
+	rc = ionic_destroy_ah_cmd(dev, ah->ahid, flags);
+	if (rc)
+		return rc;
+
+	ionic_put_ahid(dev, ah->ahid);
+
+	return 0;
+}
+
+static int ionic_create_mr_cmd(struct ionic_ibdev *dev,
+			       struct ionic_pd *pd,
+			       struct ionic_mr *mr,
+			       u64 addr,
+			       u64 length)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_CREATE_MR,
+			.len = cpu_to_le16(IONIC_ADMIN_CREATE_MR_IN_V1_LEN),
+			.cmd.create_mr = {
+				.va = cpu_to_le64(addr),
+				.length = cpu_to_le64(length),
+				.pd_id = cpu_to_le32(pd->pdid),
+				.page_size_log2 = mr->buf.page_size_log2,
+				.tbl_index = cpu_to_le32(~0),
+				.map_count = cpu_to_le32(mr->buf.tbl_pages),
+				.dma_addr = ionic_pgtbl_dma(&mr->buf, addr),
+				.dbid_flags = cpu_to_le16(mr->flags),
+				.id_ver = cpu_to_le32(mr->mrid),
+			}
+		}
+	};
+	int rc;
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_CREATE_MR)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+	rc = ionic_admin_wait(dev, &wr, 0);
+	if (!rc)
+		mr->created = true;
+
+	return rc;
+}
+
+static int ionic_destroy_mr_cmd(struct ionic_ibdev *dev, u32 mrid)
+{
+	struct ionic_admin_wr wr = {
+		.work = COMPLETION_INITIALIZER_ONSTACK(wr.work),
+		.wqe = {
+			.op = IONIC_V1_ADMIN_DESTROY_MR,
+			.len = cpu_to_le16(IONIC_ADMIN_DESTROY_MR_IN_V1_LEN),
+			.cmd.destroy_mr = {
+				.mr_id = cpu_to_le32(mrid),
+			},
+		}
+	};
+
+	if (dev->lif_cfg.admin_opcodes <= IONIC_V1_ADMIN_DESTROY_MR)
+		return -EBADRQC;
+
+	ionic_admin_post(dev, &wr);
+
+	return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN);
+}
+
+struct ib_mr *ionic_get_dma_mr(struct ib_pd *ibpd, int access)
+{
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	struct ionic_mr *mr;
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	mr->ibmr.lkey = IONIC_DMA_LKEY;
+	mr->ibmr.rkey = IONIC_DMA_RKEY;
+
+	if (pd)
+		pd->flags |= IONIC_QPF_PRIVILEGED;
+
+	return &mr->ibmr;
+}
+
+struct ib_mr *ionic_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length,
+				u64 addr, int access, struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibpd->device);
+	struct ionic_pd *pd = to_ionic_pd(ibpd);
+	struct ionic_mr *mr;
+	unsigned long pg_sz;
+	int rc;
+
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
+	rc = ionic_get_mrid(dev, &mr->mrid);
+	if (rc)
+		goto err_mrid;
+
+	mr->ibmr.lkey = mr->mrid;
+	mr->ibmr.rkey = mr->mrid;
+	mr->ibmr.iova =
addr; + mr->ibmr.length =3D length; + + mr->flags =3D IONIC_MRF_USER_MR | to_ionic_mr_flags(access); + + mr->umem =3D ib_umem_get(&dev->ibdev, start, length, access); + if (IS_ERR(mr->umem)) { + rc =3D PTR_ERR(mr->umem); + goto err_umem; + } + + pg_sz =3D ib_umem_find_best_pgsz(mr->umem, + dev->lif_cfg.page_size_supported, + addr); + if (!pg_sz) { + rc =3D -EINVAL; + goto err_pgtbl; + } + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) + goto err_pgtbl; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, addr, length); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &mr->buf); + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); +err_umem: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); + return ERR_PTR(rc); +} + +struct ib_mr *ionic_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 offset, + u64 length, u64 addr, int fd, int access, + struct uverbs_attr_bundle *attrs) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ib_umem_dmabuf *umem_dmabuf; + struct ionic_mr *mr; + u64 pg_sz; + int rc; + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + mr->ibmr.iova =3D addr; + mr->ibmr.length =3D length; + + mr->flags =3D IONIC_MRF_USER_MR | to_ionic_mr_flags(access); + + umem_dmabuf =3D ib_umem_dmabuf_get_pinned(&dev->ibdev, offset, length, + fd, access); + if (IS_ERR(umem_dmabuf)) { + rc =3D PTR_ERR(umem_dmabuf); + goto err_umem; + } + + mr->umem =3D &umem_dmabuf->umem; + + pg_sz =3D ib_umem_find_best_pgsz(mr->umem, + dev->lif_cfg.page_size_supported, + addr); + if (!pg_sz) { + rc =3D -EINVAL; + goto err_pgtbl; + } + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, 1, pg_sz); + if (rc) + goto err_pgtbl; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, addr, 
length); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &mr->buf); + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ib_umem_release(mr->umem); +err_umem: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); + return ERR_PTR(rc); +} + +int ionic_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + if (!mr->ibmr.lkey) + goto out; + + if (mr->created) { + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + } + + ionic_pgtbl_unbuf(dev, &mr->buf); + + if (mr->umem) + ib_umem_release(mr->umem); + + ionic_put_mrid(dev, mr->mrid); + +out: + kfree(mr); + + return 0; +} + +struct ib_mr *ionic_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type type, + u32 max_sg) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibpd->device); + struct ionic_pd *pd =3D to_ionic_pd(ibpd); + struct ionic_mr *mr; + int rc; + + if (type !=3D IB_MR_TYPE_MEM_REG) + return ERR_PTR(-EINVAL); + + mr =3D kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + goto err_mrid; + + mr->ibmr.lkey =3D mr->mrid; + mr->ibmr.rkey =3D mr->mrid; + + mr->flags =3D IONIC_MRF_PHYS_MR; + + rc =3D ionic_pgtbl_init(dev, &mr->buf, mr->umem, 0, max_sg, PAGE_SIZE); + if (rc) + goto err_pgtbl; + + mr->buf.tbl_pages =3D 0; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return &mr->ibmr; + +err_cmd: + ionic_pgtbl_unbuf(dev, &mr->buf); +err_pgtbl: + ionic_put_mrid(dev, mr->mrid); +err_mrid: + kfree(mr); + return ERR_PTR(rc); +} + +static int ionic_map_mr_page(struct ib_mr *ibmr, u64 dma) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + + ibdev_dbg(&dev->ibdev, "dma %p\n", (void *)dma); + return ionic_pgtbl_page(&mr->buf, dma); +} + +int ionic_map_mr_sg(struct ib_mr *ibmr, struct scatterlist 
*sg, int sg_nen= ts, + unsigned int *sg_offset) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmr->device); + struct ionic_mr *mr =3D to_ionic_mr(ibmr); + int rc; + + /* mr must be allocated using ib_alloc_mr() */ + if (unlikely(!mr->buf.tbl_limit)) + return -EINVAL; + + mr->buf.tbl_pages =3D 0; + + if (mr->buf.tbl_buf) + dma_sync_single_for_cpu(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + ibdev_dbg(&dev->ibdev, "sg %p nent %d\n", sg, sg_nents); + rc =3D ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, ionic_map_mr_page); + + mr->buf.page_size_log2 =3D order_base_2(ibmr->page_size); + + if (mr->buf.tbl_buf) + dma_sync_single_for_device(dev->lif_cfg.hwdev, mr->buf.tbl_dma, + mr->buf.tbl_size, DMA_TO_DEVICE); + + return rc; +} + +int ionic_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_pd *pd =3D to_ionic_pd(ibmw->pd); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_get_mrid(dev, &mr->mrid); + if (rc) + return rc; + + mr->ibmw.rkey =3D mr->mrid; + + if (mr->ibmw.type =3D=3D IB_MW_TYPE_1) + mr->flags =3D IONIC_MRF_MW_1; + else + mr->flags =3D IONIC_MRF_MW_2; + + rc =3D ionic_create_mr_cmd(dev, pd, mr, 0, 0); + if (rc) + goto err_cmd; + + return 0; + +err_cmd: + ionic_put_mrid(dev, mr->mrid); + return rc; +} + +int ionic_dealloc_mw(struct ib_mw *ibmw) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibmw->device); + struct ionic_mr *mr =3D to_ionic_mw(ibmw); + int rc; + + rc =3D ionic_destroy_mr_cmd(dev, mr->mrid); + if (rc) + return rc; + + ionic_put_mrid(dev, mr->mrid); + + return 0; +} + +static int ionic_create_cq_cmd(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_cq *cq, + struct ionic_tbl_buf *buf) +{ + const u16 dbid =3D ionic_ctx_dbid(dev, ctx); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_CQ, + .len =3D 
cpu_to_le16(IONIC_ADMIN_CREATE_CQ_IN_V1_LEN), + .cmd.create_cq =3D { + .eq_id =3D cpu_to_le32(cq->eqid), + .depth_log2 =3D cq->q.depth_log2, + .stride_log2 =3D cq->q.stride_log2, + .page_size_log2 =3D buf->page_size_log2, + .tbl_index =3D cpu_to_le32(~0), + .map_count =3D cpu_to_le32(buf->tbl_pages), + .dma_addr =3D ionic_pgtbl_dma(buf, 0), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(cq->cqid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_destroy_cq_cmd(struct ionic_ibdev *dev, u32 cqid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_CQ, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN), + .cmd.destroy_cq =3D { + .cq_id =3D cpu_to_le32(cqid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_CQ) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct uverbs_attr_bundle *attrs) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ib_udata *udata =3D &attrs->driver_udata; + struct ionic_ctx *ctx =3D + rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + struct ionic_tbl_buf buf =3D {}; + struct ionic_cq_resp resp; + struct ionic_cq_req req; + int udma_idx =3D 0, rc; + + if (udata) { + rc =3D ib_copy_from_udata(&req, udata, sizeof(req)); + if (rc) + return rc; + } + + vcq->udma_mask =3D BIT(dev->lif_cfg.udma_count) - 1; + + if (udata) + vcq->udma_mask &=3D req.udma_mask; + + if (!vcq->udma_mask) { + rc =3D -EINVAL; + goto err_init; + } + + for (; udma_idx < dev->lif_cfg.udma_count; ++udma_idx) { + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc =3D 
ionic_create_cq_common(vcq, &buf, attr, ctx, udata, + &req.cq[udma_idx], + &resp.cqid[udma_idx], + udma_idx); + if (rc) + goto err_init; + + rc =3D ionic_create_cq_cmd(dev, ctx, &vcq->cq[udma_idx], &buf); + if (rc) + goto err_cmd; + + ionic_pgtbl_unbuf(dev, &buf); + } + + vcq->ibcq.cqe =3D attr->cqe; + + if (udata) { + resp.udma_mask =3D vcq->udma_mask; + + rc =3D ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (rc) + goto err_resp; + } + + return 0; + +err_resp: + while (udma_idx) { + --udma_idx; + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); +err_cmd: + ionic_pgtbl_unbuf(dev, &buf); + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); +err_init: + ; + } + + return rc; +} + +int ionic_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int udma_idx, rc_tmp, rc =3D 0; + + for (udma_idx =3D dev->lif_cfg.udma_count; udma_idx; ) { + --udma_idx; + + if (!(vcq->udma_mask & BIT(udma_idx))) + continue; + + rc_tmp =3D ionic_destroy_cq_cmd(dev, vcq->cq[udma_idx].cqid); + if (rc_tmp) { + if (!rc) + rc =3D rc_tmp; + + continue; + } + + ionic_destroy_cq_common(dev, &vcq->cq[udma_idx]); + } + + return rc; +} + +static bool pd_remote_privileged(struct ib_pd *pd) +{ + return pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; +} + +static int ionic_create_qp_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_cq *send_cq, + struct ionic_cq *recv_cq, + struct ionic_qp *qp, + struct ionic_tbl_buf *sq_buf, + struct ionic_tbl_buf *rq_buf, + struct ib_qp_init_attr *attr) +{ + const u16 dbid =3D ionic_obj_dbid(dev, pd->ibpd.uobject); + const u32 flags =3D to_ionic_qp_flags(0, 0, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + pd->flags & IONIC_QPF_PRIVILEGED, + pd_remote_privileged(&pd->ibpd)); + struct ionic_admin_wr wr =3D { + .work =3D 
COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_CREATE_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_CREATE_QP_IN_V1_LEN), + .cmd.create_qp =3D { + .pd_id =3D cpu_to_le32(pd->pdid), + .priv_flags =3D cpu_to_be32(flags), + .type_state =3D to_ionic_qp_type(attr->qp_type), + .dbid_flags =3D cpu_to_le16(dbid), + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_CREATE_QP) + return -EBADRQC; + + if (qp->has_sq) { + wr.wqe.cmd.create_qp.sq_cq_id =3D cpu_to_le32(send_cq->cqid); + wr.wqe.cmd.create_qp.sq_depth_log2 =3D qp->sq.depth_log2; + wr.wqe.cmd.create_qp.sq_stride_log2 =3D qp->sq.stride_log2; + wr.wqe.cmd.create_qp.sq_page_size_log2 =3D sq_buf->page_size_log2; + wr.wqe.cmd.create_qp.sq_tbl_index_xrcd_id =3D cpu_to_le32(~0); + wr.wqe.cmd.create_qp.sq_map_count =3D + cpu_to_le32(sq_buf->tbl_pages); + wr.wqe.cmd.create_qp.sq_dma_addr =3D ionic_pgtbl_dma(sq_buf, 0); + } + + if (qp->has_rq) { + wr.wqe.cmd.create_qp.rq_cq_id =3D cpu_to_le32(recv_cq->cqid); + wr.wqe.cmd.create_qp.rq_depth_log2 =3D qp->rq.depth_log2; + wr.wqe.cmd.create_qp.rq_stride_log2 =3D qp->rq.stride_log2; + wr.wqe.cmd.create_qp.rq_page_size_log2 =3D rq_buf->page_size_log2; + wr.wqe.cmd.create_qp.rq_tbl_index_srq_id =3D cpu_to_le32(~0); + wr.wqe.cmd.create_qp.rq_map_count =3D + cpu_to_le32(rq_buf->tbl_pages); + wr.wqe.cmd.create_qp.rq_dma_addr =3D ionic_pgtbl_dma(rq_buf, 0); + } + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, 0); +} + +static int ionic_modify_qp_cmd(struct ionic_ibdev *dev, + struct ionic_pd *pd, + struct ionic_qp *qp, + struct ib_qp_attr *attr, + int mask) +{ + const u32 flags =3D to_ionic_qp_flags(attr->qp_access_flags, + attr->en_sqd_async_notify, + qp->sq_cmb & IONIC_CMB_ENABLE, + qp->rq_cmb & IONIC_CMB_ENABLE, + qp->sq_spec, qp->rq_spec, + pd->flags & IONIC_QPF_PRIVILEGED, + pd_remote_privileged(qp->ibqp.pd)); + const u8 state =3D to_ionic_qp_modify_state(attr->qp_state, + 
attr->cur_qp_state); + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_MODIFY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_MODIFY_QP_IN_V1_LEN), + .cmd.mod_qp =3D { + .attr_mask =3D cpu_to_be32(mask), + .access_flags =3D cpu_to_be16(flags), + .rq_psn =3D cpu_to_le32(attr->rq_psn), + .sq_psn =3D cpu_to_le32(attr->sq_psn), + .rate_limit_kbps =3D + cpu_to_le32(attr->rate_limit), + .pmtu =3D (attr->path_mtu + 7), + .retry =3D (attr->retry_cnt | + (attr->rnr_retry << 4)), + .rnr_timer =3D attr->min_rnr_timer, + .retry_timeout =3D attr->timeout, + .type_state =3D state, + .id_ver =3D cpu_to_le32(qp->qpid), + } + } + }; + const struct ib_global_route *grh =3D rdma_ah_read_grh(&attr->ah_attr); + void *hdr_buf =3D NULL; + dma_addr_t hdr_dma =3D 0; + int rc, hdr_len =3D 0; + u16 sport; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_MODIFY_QP) + return -EBADRQC; + + if ((mask & IB_QP_MAX_DEST_RD_ATOMIC) && attr->max_dest_rd_atomic) { + /* Note, round up/down was already done for allocating + * resources on the device. The allocation order is in cache + * line size. We can't use the order of the resource + * allocation to determine the order wqes here, because for + * queue length <=3D one cache line it is not distinct. + * + * Therefore, order wqes is computed again here. + * + * Account for hole and round up to the next order. 
+ */ + wr.wqe.cmd.mod_qp.rsq_depth =3D + order_base_2(attr->max_dest_rd_atomic + 1); + wr.wqe.cmd.mod_qp.rsq_index =3D cpu_to_le32(~0); + } + + if ((mask & IB_QP_MAX_QP_RD_ATOMIC) && attr->max_rd_atomic) { + /* Account for hole and round down to the next order */ + wr.wqe.cmd.mod_qp.rrq_depth =3D + order_base_2(attr->max_rd_atomic + 2) - 1; + wr.wqe.cmd.mod_qp.rrq_index =3D cpu_to_le32(~0); + } + + if (qp->ibqp.qp_type =3D=3D IB_QPT_RC || qp->ibqp.qp_type =3D=3D IB_QPT_U= C) + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D + cpu_to_le32(attr->dest_qp_num); + else + wr.wqe.cmd.mod_qp.qkey_dest_qpn =3D cpu_to_le32(attr->qkey); + + if (mask & IB_QP_AV) { + if (!qp->hdr) + return -ENOMEM; + + sport =3D rdma_get_udp_sport(grh->flow_label, + qp->qpid, + attr->dest_qp_num); + + rc =3D ionic_build_hdr(dev, qp->hdr, &attr->ah_attr, sport, true); + if (rc) + return rc; + + qp->sgid_index =3D grh->sgid_index; + + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) + return -ENOMEM; + + hdr_len =3D ib_ud_header_pack(qp->hdr, hdr_buf); + hdr_len -=3D IB_BTH_BYTES; + hdr_len -=3D IB_DETH_BYTES; + ibdev_dbg(&dev->ibdev, "roce packet header template\n"); + print_hex_dump_debug("hdr ", DUMP_PREFIX_OFFSET, 16, 1, + hdr_buf, hdr_len, true); + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, hdr_len, + DMA_TO_DEVICE); + + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_dma; + + if (qp->hdr->ipv4_present) { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? + IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP; + } else { + wr.wqe.cmd.mod_qp.tfp_csum_profile =3D + qp->hdr->vlan_present ? 
+ IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP : + IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP; + } + + wr.wqe.cmd.mod_qp.ah_id_len =3D + cpu_to_le32(qp->ahid | (hdr_len << 24)); + wr.wqe.cmd.mod_qp.dma_addr =3D cpu_to_le64(hdr_dma); + + wr.wqe.cmd.mod_qp.en_pcp =3D attr->ah_attr.sl; + wr.wqe.cmd.mod_qp.ip_dscp =3D grh->traffic_class >> 2; + } + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (mask & IB_QP_AV) + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, hdr_len, + DMA_TO_DEVICE); +err_dma: + if (mask & IB_QP_AV) + kfree(hdr_buf); + + return rc; +} + +static int ionic_query_qp_cmd(struct ionic_ibdev *dev, + struct ionic_qp *qp, + struct ib_qp_attr *attr, + int mask) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_QUERY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_QUERY_QP_IN_V1_LEN), + .cmd.query_qp =3D { + .id_ver =3D cpu_to_le32(qp->qpid), + }, + } + }; + struct ionic_v1_admin_query_qp_sq *query_sqbuf; + struct ionic_v1_admin_query_qp_rq *query_rqbuf; + dma_addr_t query_sqdma; + dma_addr_t query_rqdma; + dma_addr_t hdr_dma =3D 0; + void *hdr_buf =3D NULL; + int flags, rc; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_QUERY_QP) + return -EBADRQC; + + if (qp->has_sq) { + bool expdb =3D !!(qp->sq_cmb & IONIC_CMB_EXPDB); + + attr->cap.max_send_sge =3D + ionic_v1_send_wqe_max_sge(qp->sq.stride_log2, + qp->sq_spec, + expdb); + attr->cap.max_inline_data =3D + ionic_v1_send_wqe_max_data(qp->sq.stride_log2, expdb); + } + + if (qp->has_rq) { + attr->cap.max_recv_sge =3D + ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, + qp->rq_spec, + qp->rq_cmb & IONIC_CMB_EXPDB); + } + + query_sqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_sqbuf) + return -ENOMEM; + + query_rqbuf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!query_rqbuf) { + rc =3D -ENOMEM; + goto err_rqbuf; + } + + query_sqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_sqbuf, PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D 
dma_mapping_error(dev->lif_cfg.hwdev, query_sqdma); + if (rc) + goto err_sqdma; + + query_rqdma =3D dma_map_single(dev->lif_cfg.hwdev, query_rqbuf, PAGE_SIZE, + DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, query_rqdma); + if (rc) + goto err_rqdma; + + if (mask & IB_QP_AV) { + hdr_buf =3D kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!hdr_buf) { + rc =3D -ENOMEM; + goto err_hdrbuf; + } + + hdr_dma =3D dma_map_single(dev->lif_cfg.hwdev, hdr_buf, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma); + if (rc) + goto err_hdrdma; + } + + wr.wqe.cmd.query_qp.sq_dma_addr =3D cpu_to_le64(query_sqdma); + wr.wqe.cmd.query_qp.rq_dma_addr =3D cpu_to_le64(query_rqdma); + wr.wqe.cmd.query_qp.hdr_dma_addr =3D cpu_to_le64(hdr_dma); + wr.wqe.cmd.query_qp.ah_id =3D cpu_to_le32(qp->ahid); + + ionic_admin_post(dev, &wr); + + rc =3D ionic_admin_wait(dev, &wr, 0); + + if (rc) + goto err_hdrdma; + + flags =3D be16_to_cpu(query_sqbuf->access_perms_flags | + query_rqbuf->access_perms_flags); + + print_hex_dump_debug("sqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_sqbuf, sizeof(*query_sqbuf), true); + print_hex_dump_debug("rqbuf ", DUMP_PREFIX_OFFSET, 16, 1, + query_rqbuf, sizeof(*query_rqbuf), true); + ibdev_dbg(&dev->ibdev, "query qp %u state_pmtu %#x flags %#x", + qp->qpid, query_rqbuf->state_pmtu, flags); + + attr->qp_state =3D from_ionic_qp_state(query_rqbuf->state_pmtu >> 4); + attr->cur_qp_state =3D attr->qp_state; + attr->path_mtu =3D (query_rqbuf->state_pmtu & 0xf) - 7; + attr->path_mig_state =3D IB_MIG_MIGRATED; + attr->qkey =3D be32_to_cpu(query_sqbuf->qkey_dest_qpn); + attr->rq_psn =3D be32_to_cpu(query_sqbuf->rq_psn); + attr->sq_psn =3D be32_to_cpu(query_rqbuf->sq_psn); + attr->dest_qp_num =3D attr->qkey; + attr->qp_access_flags =3D from_ionic_qp_flags(flags); + attr->pkey_index =3D 0; + attr->alt_pkey_index =3D 0; + attr->en_sqd_async_notify =3D !!(flags & IONIC_QPF_SQD_NOTIFY); + attr->sq_draining =3D !!(flags & 
IONIC_QPF_SQ_DRAINING); + attr->max_rd_atomic =3D BIT(query_rqbuf->rrq_depth) - 1; + attr->max_dest_rd_atomic =3D BIT(query_rqbuf->rsq_depth) - 1; + attr->min_rnr_timer =3D query_sqbuf->rnr_timer; + attr->port_num =3D 0; + attr->timeout =3D query_sqbuf->retry_timeout; + attr->retry_cnt =3D query_rqbuf->retry_rnrtry & 0xf; + attr->rnr_retry =3D query_rqbuf->retry_rnrtry >> 4; + attr->alt_port_num =3D 0; + attr->alt_timeout =3D 0; + attr->rate_limit =3D be32_to_cpu(query_sqbuf->rate_limit_kbps); + + if (mask & IB_QP_AV) + ionic_set_ah_attr(dev, &attr->ah_attr, + qp->hdr, qp->sgid_index); + +err_hdrdma: + if (mask & IB_QP_AV) { + dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, + PAGE_SIZE, DMA_FROM_DEVICE); + kfree(hdr_buf); + } +err_hdrbuf: + dma_unmap_single(dev->lif_cfg.hwdev, query_rqdma, sizeof(*query_rqbuf), + DMA_FROM_DEVICE); +err_rqdma: + dma_unmap_single(dev->lif_cfg.hwdev, query_sqdma, sizeof(*query_sqbuf), + DMA_FROM_DEVICE); +err_sqdma: + kfree(query_rqbuf); +err_rqbuf: + kfree(query_sqbuf); + + return rc; +} + +static int ionic_destroy_qp_cmd(struct ionic_ibdev *dev, u32 qpid) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D IONIC_V1_ADMIN_DESTROY_QP, + .len =3D cpu_to_le16(IONIC_ADMIN_DESTROY_QP_IN_V1_LEN), + .cmd.destroy_qp =3D { + .qp_id =3D cpu_to_le32(qpid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D IONIC_V1_ADMIN_DESTROY_QP) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_TEARDOWN); +} + +static bool ionic_expdb_wqe_size_supported(struct ionic_ibdev *dev, + uint32_t wqe_size) +{ + switch (wqe_size) { + case 64: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_64; + case 128: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_128; + case 256: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_256; + case 512: return dev->lif_cfg.expdb_mask & IONIC_EXPDB_512; + } + + return false; +} + +static void ionic_qp_sq_init_cmb(struct ionic_ibdev 
*dev, + struct ionic_qp *qp, + struct ib_udata *udata, + int max_data) +{ + u8 expdb_stride_log2 =3D 0; + bool expdb; + int rc; + + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + goto not_in_cmb; + + if (qp->sq_cmb & ~IONIC_CMB_SUPPORTED) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D IONIC_CMB_SUPPORTED; + } + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.sq_expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto not_in_cmb; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + qp->sq_cmb_order =3D order_base_2(qp->sq.size / PAGE_SIZE); + + if (qp->sq_cmb_order >=3D IONIC_SQCMB_ORDER) + goto not_in_cmb; + + if (qp->sq_cmb & IONIC_CMB_EXPDB) + expdb_stride_log2 =3D qp->sq.stride_log2; + + rc =3D ionic_get_cmb(dev->lif_cfg.lif, &qp->sq_cmb_pgid, + &qp->sq_cmb_addr, qp->sq_cmb_order, + expdb_stride_log2, &expdb); + if (rc) + goto not_in_cmb; + + if ((qp->sq_cmb & IONIC_CMB_EXPDB) && !expdb) { + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + goto err_map; + + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + return; + +err_map: + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +not_in_cmb: + if (qp->sq_cmb & IONIC_CMB_REQUIRE) + ibdev_dbg(&dev->ibdev, "could not place sq in cmb as required\n"); + + qp->sq_cmb =3D 0; + qp->sq_cmb_order =3D IONIC_RES_INVALID; + qp->sq_cmb_pgid =3D 0; + qp->sq_cmb_addr =3D 0; +} + +static void ionic_qp_sq_destroy_cmb(struct ionic_ibdev *dev, + struct ionic_ctx *ctx, + struct ionic_qp *qp) +{ + if (!(qp->sq_cmb & IONIC_CMB_ENABLE)) + return; + + if (ctx) + rdma_user_mmap_entry_remove(qp->mmap_sq_cmb); + + ionic_put_cmb(dev->lif_cfg.lif, qp->sq_cmb_pgid, qp->sq_cmb_order); +} + +static int ionic_qp_sq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx, + struct ionic_qp *qp, struct ionic_qdesc *sq, + struct ionic_tbl_buf *buf, int max_wr, int max_sge, + int max_data, int sq_spec, struct ib_udata *udata) +{ + u32 wqe_size; + int rc =3D 0; + + qp->sq_msn_prod =3D 0; + qp->sq_msn_cons =3D 0; + + if (!qp->has_sq) { + 
if (buf) { + buf->tbl_buf =3D NULL; + buf->tbl_limit =3D 0; + buf->tbl_pages =3D 0; + } + if (udata) + rc =3D ionic_validate_qdesc_zero(sq); + + return rc; + } + + rc =3D -EINVAL; + + if (max_wr < 0 || max_wr > 0xffff) + return rc; + + if (max_sge < 1) + return rc; + + if (max_sge > min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, + qp->sq_cmb & + IONIC_CMB_EXPDB), + IONIC_SPEC_HIGH)) + return rc; + + if (max_data < 0) + return rc; + + if (max_data > ionic_v1_send_wqe_max_data(dev->lif_cfg.max_stride, + qp->sq_cmb & IONIC_CMB_EXPDB)) + return rc; + + if (udata) { + rc =3D ionic_validate_qdesc(sq); + if (rc) + return rc; + + qp->sq_spec =3D sq_spec; + + qp->sq.ptr =3D NULL; + qp->sq.size =3D sq->size; + qp->sq.mask =3D sq->mask; + qp->sq.depth_log2 =3D sq->depth_log2; + qp->sq.stride_log2 =3D sq->stride_log2; + + qp->sq_meta =3D NULL; + qp->sq_msn_idx =3D NULL; + + qp->sq_umem =3D ib_umem_get(&dev->ibdev, sq->addr, sq->size, 0); + if (IS_ERR(qp->sq_umem)) + return PTR_ERR(qp->sq_umem); + } else { + qp->sq_umem =3D NULL; + + qp->sq_spec =3D ionic_v1_use_spec_sge(max_sge, sq_spec); + if (sq_spec && !qp->sq_spec) + ibdev_dbg(&dev->ibdev, + "init sq: max_sge %u disables spec\n", + max_sge); + + if (qp->sq_cmb & IONIC_CMB_EXPDB) { + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + true); + + if (!ionic_expdb_wqe_size_supported(dev, wqe_size)) + qp->sq_cmb &=3D ~IONIC_CMB_EXPDB; + } + + if (!(qp->sq_cmb & IONIC_CMB_EXPDB)) + wqe_size =3D ionic_v1_send_wqe_min_size(max_sge, max_data, + qp->sq_spec, + false); + + rc =3D ionic_queue_init(&qp->sq, dev->lif_cfg.hwdev, + max_wr, wqe_size); + if (rc) + return rc; + + ionic_queue_dbell_init(&qp->sq, qp->qpid); + + qp->sq_meta =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_meta), + GFP_KERNEL); + if (!qp->sq_meta) { + rc =3D -ENOMEM; + goto err_sq_meta; + } + + qp->sq_msn_idx =3D kmalloc_array((u32)qp->sq.mask + 1, + sizeof(*qp->sq_msn_idx), + GFP_KERNEL); + if (!qp->sq_msn_idx) { + rc 
= -ENOMEM;
+			goto err_sq_msn;
+		}
+	}
+
+	ionic_qp_sq_init_cmb(dev, qp, udata, max_data);
+
+	if (qp->sq_cmb & IONIC_CMB_ENABLE)
+		rc = ionic_pgtbl_init(dev, buf, NULL,
+				      (u64)qp->sq_cmb_pgid << PAGE_SHIFT,
+				      1, PAGE_SIZE);
+	else
+		rc = ionic_pgtbl_init(dev, buf,
+				      qp->sq_umem, qp->sq.dma, 1, PAGE_SIZE);
+	if (rc)
+		goto err_sq_tbl;
+
+	return 0;
+
+err_sq_tbl:
+	ionic_qp_sq_destroy_cmb(dev, ctx, qp);
+	kfree(qp->sq_msn_idx);
+err_sq_msn:
+	kfree(qp->sq_meta);
+err_sq_meta:
+	if (qp->sq_umem)
+		ib_umem_release(qp->sq_umem);
+	else
+		ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev);
+	return rc;
+}
+
+static void ionic_qp_sq_destroy(struct ionic_ibdev *dev,
+				struct ionic_ctx *ctx,
+				struct ionic_qp *qp)
+{
+	if (!qp->has_sq)
+		return;
+
+	ionic_qp_sq_destroy_cmb(dev, ctx, qp);
+
+	kfree(qp->sq_msn_idx);
+	kfree(qp->sq_meta);
+
+	if (qp->sq_umem)
+		ib_umem_release(qp->sq_umem);
+	else
+		ionic_queue_destroy(&qp->sq, dev->lif_cfg.hwdev);
+}
+
+static void ionic_qp_rq_init_cmb(struct ionic_ibdev *dev,
+				 struct ionic_qp *qp,
+				 struct ib_udata *udata)
+{
+	u8 expdb_stride_log2 = 0;
+	bool expdb;
+	int rc;
+
+	if (!(qp->rq_cmb & IONIC_CMB_ENABLE))
+		goto not_in_cmb;
+
+	if (qp->rq_cmb & ~IONIC_CMB_SUPPORTED) {
+		if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+			goto not_in_cmb;
+
+		qp->rq_cmb &= IONIC_CMB_SUPPORTED;
+	}
+
+	if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !dev->lif_cfg.rq_expdb) {
+		if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+			goto not_in_cmb;
+
+		qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+	}
+
+	qp->rq_cmb_order = order_base_2(qp->rq.size / PAGE_SIZE);
+
+	if (qp->rq_cmb_order >= IONIC_RQCMB_ORDER)
+		goto not_in_cmb;
+
+	if (qp->rq_cmb & IONIC_CMB_EXPDB)
+		expdb_stride_log2 = qp->rq.stride_log2;
+
+	rc = ionic_get_cmb(dev->lif_cfg.lif, &qp->rq_cmb_pgid,
+			   &qp->rq_cmb_addr, qp->rq_cmb_order,
+			   expdb_stride_log2, &expdb);
+	if (rc)
+		goto not_in_cmb;
+
+	if ((qp->rq_cmb & IONIC_CMB_EXPDB) && !expdb) {
+		if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+			goto err_map;
+
+		qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+	}
+
+	return;
+
+err_map:
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+not_in_cmb:
+	if (qp->rq_cmb & IONIC_CMB_REQUIRE)
+		ibdev_dbg(&dev->ibdev, "could not place rq in cmb as required\n");
+
+	qp->rq_cmb = 0;
+	qp->rq_cmb_order = IONIC_RES_INVALID;
+	qp->rq_cmb_pgid = 0;
+	qp->rq_cmb_addr = 0;
+}
+
+static void ionic_qp_rq_destroy_cmb(struct ionic_ibdev *dev,
+				    struct ionic_ctx *ctx,
+				    struct ionic_qp *qp)
+{
+	if (!(qp->rq_cmb & IONIC_CMB_ENABLE))
+		return;
+
+	if (ctx)
+		rdma_user_mmap_entry_remove(qp->mmap_rq_cmb);
+
+	ionic_put_cmb(dev->lif_cfg.lif, qp->rq_cmb_pgid, qp->rq_cmb_order);
+}
+
+static int ionic_qp_rq_init(struct ionic_ibdev *dev, struct ionic_ctx *ctx,
+			    struct ionic_qp *qp, struct ionic_qdesc *rq,
+			    struct ionic_tbl_buf *buf, int max_wr, int max_sge,
+			    int rq_spec, struct ib_udata *udata)
+{
+	int rc = 0, i;
+	u32 wqe_size;
+
+	if (!qp->has_rq) {
+		if (buf) {
+			buf->tbl_buf = NULL;
+			buf->tbl_limit = 0;
+			buf->tbl_pages = 0;
+		}
+		if (udata)
+			rc = ionic_validate_qdesc_zero(rq);
+
+		return rc;
+	}
+
+	rc = -EINVAL;
+
+	if (max_wr < 0 || max_wr > 0xffff)
+		return rc;
+
+	if (max_sge < 1)
+		return rc;
+
+	if (max_sge > min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+			  IONIC_SPEC_HIGH))
+		return rc;
+
+	if (udata) {
+		rc = ionic_validate_qdesc(rq);
+		if (rc)
+			return rc;
+
+		qp->rq_spec = rq_spec;
+
+		qp->rq.ptr = NULL;
+		qp->rq.size = rq->size;
+		qp->rq.mask = rq->mask;
+		qp->rq.depth_log2 = rq->depth_log2;
+		qp->rq.stride_log2 = rq->stride_log2;
+
+		qp->rq_meta = NULL;
+
+		qp->rq_umem = ib_umem_get(&dev->ibdev, rq->addr, rq->size, 0);
+		if (IS_ERR(qp->rq_umem))
+			return PTR_ERR(qp->rq_umem);
+	} else {
+		qp->rq_umem = NULL;
+
+		qp->rq_spec = ionic_v1_use_spec_sge(max_sge, rq_spec);
+		if (rq_spec && !qp->rq_spec)
+			ibdev_dbg(&dev->ibdev,
+				  "init rq: max_sge %u disables spec\n",
+				  max_sge);
+
+		if (qp->rq_cmb & IONIC_CMB_EXPDB) {
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      true);
+
+			if (!ionic_expdb_wqe_size_supported(dev, wqe_size))
+				qp->rq_cmb &= ~IONIC_CMB_EXPDB;
+		}
+
+		if (!(qp->rq_cmb & IONIC_CMB_EXPDB))
+			wqe_size = ionic_v1_recv_wqe_min_size(max_sge,
+							      qp->rq_spec,
+							      false);
+
+		rc = ionic_queue_init(&qp->rq, dev->lif_cfg.hwdev,
+				      max_wr, wqe_size);
+		if (rc)
+			return rc;
+
+		ionic_queue_dbell_init(&qp->rq, qp->qpid);
+
+		qp->rq_meta = kmalloc_array((u32)qp->rq.mask + 1,
+					    sizeof(*qp->rq_meta),
+					    GFP_KERNEL);
+		if (!qp->rq_meta) {
+			rc = -ENOMEM;
+			goto err_rq_meta;
+		}
+
+		for (i = 0; i < qp->rq.mask; ++i)
+			qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+		qp->rq_meta[i].next = IONIC_META_LAST;
+		qp->rq_meta_head = &qp->rq_meta[0];
+	}
+
+	ionic_qp_rq_init_cmb(dev, qp, udata);
+
+	if (qp->rq_cmb & IONIC_CMB_ENABLE)
+		rc = ionic_pgtbl_init(dev, buf, NULL,
+				      (u64)qp->rq_cmb_pgid << PAGE_SHIFT,
+				      1, PAGE_SIZE);
+	else
+		rc = ionic_pgtbl_init(dev, buf,
+				      qp->rq_umem, qp->rq.dma, 1, PAGE_SIZE);
+	if (rc)
+		goto err_rq_tbl;
+
+	return 0;
+
+err_rq_tbl:
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+	kfree(qp->rq_meta);
+err_rq_meta:
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+	return rc;
+}
+
+static void ionic_qp_rq_destroy(struct ionic_ibdev *dev,
+				struct ionic_ctx *ctx,
+				struct ionic_qp *qp)
+{
+	if (!qp->has_rq)
+		return;
+
+	ionic_qp_rq_destroy_cmb(dev, ctx, qp);
+
+	kfree(qp->rq_meta);
+
+	if (qp->rq_umem)
+		ib_umem_release(qp->rq_umem);
+	else
+		ionic_queue_destroy(&qp->rq, dev->lif_cfg.hwdev);
+}
+
+int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_tbl_buf sq_buf = {}, rq_buf = {};
+	struct ionic_pd *pd = to_ionic_pd(ibqp->pd);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_qp_resp resp = {};
+	struct ionic_qp_req req = {};
+	struct ionic_cq *cq;
+	u8 udma_mask;
+	void *entry;
+	int rc;
+
+	if (udata) {
+		rc = ib_copy_from_udata(&req, udata, sizeof(req));
+		if (rc)
+			return rc;
+	} else {
+		req.sq_spec = IONIC_SPEC_HIGH;
+		req.rq_spec = IONIC_SPEC_HIGH;
+	}
+
+	if (attr->qp_type == IB_QPT_SMI || attr->qp_type > IB_QPT_UD)
+		return -EOPNOTSUPP;
+
+	qp->state = IB_QPS_RESET;
+
+	INIT_LIST_HEAD(&qp->cq_poll_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_sq);
+	INIT_LIST_HEAD(&qp->cq_flush_rq);
+
+	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
+
+	qp->has_sq = 1;
+	qp->has_rq = 1;
+
+	if (attr->qp_type == IB_QPT_GSI) {
+		rc = ionic_get_gsi_qpid(dev, &qp->qpid);
+	} else {
+		udma_mask = BIT(dev->lif_cfg.udma_count) - 1;
+
+		if (qp->has_sq)
+			udma_mask &= to_ionic_vcq(attr->send_cq)->udma_mask;
+
+		if (qp->has_rq)
+			udma_mask &= to_ionic_vcq(attr->recv_cq)->udma_mask;
+
+		if (udata && req.udma_mask)
+			udma_mask &= req.udma_mask;
+
+		if (!udma_mask)
+			return -EINVAL;
+
+		rc = ionic_get_qpid(dev, &qp->qpid, &qp->udma_idx, udma_mask);
+	}
+	if (rc)
+		return rc;
+
+	qp->sig_all = attr->sq_sig_type == IB_SIGNAL_ALL_WR;
+	qp->has_ah = attr->qp_type == IB_QPT_RC;
+
+	if (qp->has_ah) {
+		qp->hdr = kzalloc(sizeof(*qp->hdr), GFP_KERNEL);
+		if (!qp->hdr) {
+			rc = -ENOMEM;
+			goto err_ah_alloc;
+		}
+
+		rc = ionic_get_ahid(dev, &qp->ahid);
+		if (rc)
+			goto err_ahid;
+	}
+
+	if (udata) {
+		if (req.rq_cmb & IONIC_CMB_ENABLE)
+			qp->rq_cmb = req.rq_cmb;
+
+		if (req.sq_cmb & IONIC_CMB_ENABLE)
+			qp->sq_cmb = req.sq_cmb;
+	}
+
+	rc = ionic_qp_sq_init(dev, ctx, qp, &req.sq, &sq_buf,
+			      attr->cap.max_send_wr, attr->cap.max_send_sge,
+			      attr->cap.max_inline_data, req.sq_spec, udata);
+	if (rc)
+		goto err_sq;
+
+	rc = ionic_qp_rq_init(dev, ctx, qp, &req.rq, &rq_buf,
+			      attr->cap.max_recv_wr, attr->cap.max_recv_sge,
+			      req.rq_spec, udata);
+	if (rc)
+		goto err_rq;
+
+	rc = ionic_create_qp_cmd(dev, pd,
+				 to_ionic_vcq_cq(attr->send_cq, qp->udma_idx),
+				 to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx),
+				 qp, &sq_buf, &rq_buf, attr);
+	if (rc)
+		goto err_cmd;
+
+	if (udata) {
+		resp.qpid = qp->qpid;
+		resp.udma_idx = qp->udma_idx;
+
+		if (qp->sq_cmb & IONIC_CMB_ENABLE) {
+			bool wc;
+
+			if ((qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_dbg(&dev->ibdev,
+					  "Both sq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->sq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			wc = (qp->sq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+				!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (wc)
+				qp->sq_cmb |= IONIC_CMB_WC;
+			else
+				qp->sq_cmb |= IONIC_CMB_UC;
+
+			qp->mmap_sq_cmb =
+				ionic_mmap_entry_insert(ctx,
+							qp->sq.size,
+							PHYS_PFN(qp->sq_cmb_addr),
+							wc ? IONIC_MMAP_WC : 0,
+							&resp.sq_cmb_offset);
+			if (!qp->mmap_sq_cmb) {
+				rc = -ENOMEM;
+				goto err_mmap_sq;
+			}
+
+			resp.sq_cmb = qp->sq_cmb;
+		}
+
+		if (qp->rq_cmb & IONIC_CMB_ENABLE) {
+			bool wc;
+
+			if ((qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC)) ==
+			    (IONIC_CMB_WC | IONIC_CMB_UC)) {
+				ibdev_dbg(&dev->ibdev,
+					  "Both rq_cmb flags IONIC_CMB_WC and IONIC_CMB_UC are set, using default driver mapping\n");
+				qp->rq_cmb &= ~(IONIC_CMB_WC | IONIC_CMB_UC);
+			}
+
+			if (qp->rq_cmb & IONIC_CMB_EXPDB)
+				wc = (qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					== IONIC_CMB_WC;
+			else
+				wc = (qp->rq_cmb & (IONIC_CMB_WC | IONIC_CMB_UC))
+					!= IONIC_CMB_UC;
+
+			/* let userspace know the mapping */
+			if (wc)
+				qp->rq_cmb |= IONIC_CMB_WC;
+			else
+				qp->rq_cmb |= IONIC_CMB_UC;
+
+			qp->mmap_rq_cmb =
+				ionic_mmap_entry_insert(ctx,
+							qp->rq.size,
+							PHYS_PFN(qp->rq_cmb_addr),
+							wc ? IONIC_MMAP_WC : 0,
+							&resp.rq_cmb_offset);
+			if (!qp->mmap_rq_cmb) {
+				rc = -ENOMEM;
+				goto err_mmap_rq;
+			}
+
+			resp.rq_cmb = qp->rq_cmb;
+		}
+
+		rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+		if (rc)
+			goto err_resp;
+	}
+
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+
+	qp->ibqp.qp_num = qp->qpid;
+
+	init_completion(&qp->qp_rel_comp);
+	kref_init(&qp->qp_kref);
+
+	entry = xa_store_irq(&dev->qp_tbl, qp->qpid, qp, GFP_KERNEL);
+	if (entry) {
+		if (!xa_is_err(entry))
+			rc = -EINVAL;
+		else
+			rc = xa_err(entry);
+
+		goto err_resp;
+	}
+
+	if (qp->has_sq) {
+		cq = to_ionic_vcq_cq(attr->send_cq, qp->udma_idx);
+
+		attr->cap.max_send_wr = qp->sq.mask;
+		attr->cap.max_send_sge =
+			ionic_v1_send_wqe_max_sge(qp->sq.stride_log2,
+						  qp->sq_spec,
+						  qp->sq_cmb & IONIC_CMB_EXPDB);
+		attr->cap.max_inline_data =
+			ionic_v1_send_wqe_max_data(qp->sq.stride_log2,
+						   qp->sq_cmb &
+						   IONIC_CMB_EXPDB);
+		qp->sq_cqid = cq->cqid;
+	}
+
+	if (qp->has_rq) {
+		cq = to_ionic_vcq_cq(attr->recv_cq, qp->udma_idx);
+
+		attr->cap.max_recv_wr = qp->rq.mask;
+		attr->cap.max_recv_sge =
+			ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2,
+						  qp->rq_spec,
+						  qp->rq_cmb & IONIC_CMB_EXPDB);
+		qp->rq_cqid = cq->cqid;
+	}
+
+	return 0;
+
+err_resp:
+	if (udata && (qp->rq_cmb & IONIC_CMB_ENABLE))
+		rdma_user_mmap_entry_remove(qp->mmap_rq_cmb);
+err_mmap_rq:
+	if (udata && (qp->sq_cmb & IONIC_CMB_ENABLE))
+		rdma_user_mmap_entry_remove(qp->mmap_sq_cmb);
+err_mmap_sq:
+	ionic_destroy_qp_cmd(dev, qp->qpid);
+err_cmd:
+	ionic_pgtbl_unbuf(dev, &rq_buf);
+	ionic_qp_rq_destroy(dev, ctx, qp);
+err_rq:
+	ionic_pgtbl_unbuf(dev, &sq_buf);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+err_sq:
+	if (qp->has_ah)
+		ionic_put_ahid(dev, qp->ahid);
+err_ahid:
+	kfree(qp->hdr);
+err_ah_alloc:
+	ionic_put_qpid(dev, qp->qpid);
+	return rc;
+}
+
+void ionic_notify_flush_cq(struct ionic_cq *cq)
+{
+	if (cq->flush && cq->vcq->ibcq.comp_handler)
+		cq->vcq->ibcq.comp_handler(&cq->vcq->ibcq,
+					   cq->vcq->ibcq.cq_context);
+}
+
+static void ionic_notify_qp_cqs(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	if (qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.send_cq,
+						      qp->udma_idx));
+	if (qp->ibqp.recv_cq && qp->ibqp.recv_cq != qp->ibqp.send_cq)
+		ionic_notify_flush_cq(to_ionic_vcq_cq(qp->ibqp.recv_cq,
+						      qp->udma_idx));
+}
+
+void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP sq_lock to set up flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = true;
+		if (!ionic_queue_empty(&qp->sq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_sq, &cq->flush_sq);
+		}
+		spin_unlock(&qp->sq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+
+		/* Hold the CQ lock and QP rq_lock to set up flush */
+		spin_lock_irqsave(&cq->lock, irqflags);
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = true;
+		if (!ionic_queue_empty(&qp->rq)) {
+			cq->flush = true;
+			list_move_tail(&qp->cq_flush_rq, &cq->flush_rq);
+		}
+		spin_unlock(&qp->rq_lock);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+}
+
+static void ionic_clean_cq(struct ionic_cq *cq, u32 qpid)
+{
+	struct ionic_v1_cqe *qcqe;
+	int prod, qtf, qid, type;
+	bool color;
+
+	if (!cq->q.ptr)
+		return;
+
+	color = cq->color;
+	prod = cq->q.prod;
+	qcqe = ionic_queue_at(&cq->q, prod);
+
+	while (color == ionic_v1_cqe_color(qcqe)) {
+		qtf = ionic_v1_cqe_qtf(qcqe);
+		qid = ionic_v1_cqe_qtf_qid(qtf);
+		type = ionic_v1_cqe_qtf_type(qtf);
+
+		if (qid == qpid && type != IONIC_V1_CQE_TYPE_ADMIN)
+			ionic_v1_cqe_clean(qcqe);
+
+		prod = ionic_queue_next(&cq->q, prod);
+		qcqe = ionic_queue_at(&cq->q, prod);
+		color = ionic_color_wrap(prod, color);
+	}
+}
+
+static void ionic_reset_qp(struct ionic_ibdev *dev, struct ionic_qp *qp)
+{
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int i;
+
+	local_irq_save(irqflags);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock(&cq->lock);
+		ionic_clean_cq(cq, qp->qpid);
+		spin_unlock(&cq->lock);
+	}
+
+	if (qp->has_sq) {
+		spin_lock(&qp->sq_lock);
+		qp->sq_flush = false;
+		qp->sq_flush_rcvd = false;
+		qp->sq_msn_prod = 0;
+		qp->sq_msn_cons = 0;
+		qp->sq.prod = 0;
+		qp->sq.cons = 0;
+		spin_unlock(&qp->sq_lock);
+	}
+
+	if (qp->has_rq) {
+		spin_lock(&qp->rq_lock);
+		qp->rq_flush = false;
+		qp->rq.prod = 0;
+		qp->rq.cons = 0;
+		if (qp->rq_meta) {
+			for (i = 0; i < qp->rq.mask; ++i)
+				qp->rq_meta[i].next = &qp->rq_meta[i + 1];
+			qp->rq_meta[i].next = IONIC_META_LAST;
+		}
+		qp->rq_meta_head = &qp->rq_meta[0];
+		spin_unlock(&qp->rq_lock);
+	}
+
+	local_irq_restore(irqflags);
+}
+
+static bool ionic_qp_cur_state_is_ok(enum ib_qp_state q_state,
+				     enum ib_qp_state attr_state)
+{
+	if (q_state == attr_state)
+		return true;
+
+	if (attr_state == IB_QPS_ERR)
+		return true;
+
+	if (attr_state == IB_QPS_SQE)
+		return q_state == IB_QPS_RTS || q_state == IB_QPS_SQD;
+
+	return false;
+}
+
+static int ionic_check_modify_qp(struct ionic_qp *qp, struct ib_qp_attr *attr,
+				 int mask)
+{
+	enum ib_qp_state cur_state = (mask & IB_QP_CUR_STATE) ?
+		attr->cur_qp_state : qp->state;
+	enum ib_qp_state next_state = (mask & IB_QP_STATE) ?
+		attr->qp_state : cur_state;
+
+	if ((mask & IB_QP_CUR_STATE) &&
+	    !ionic_qp_cur_state_is_ok(qp->state, attr->cur_qp_state))
+		return -EINVAL;
+
+	if (!ib_modify_qp_is_ok(cur_state, next_state, qp->ibqp.qp_type, mask))
+		return -EINVAL;
+
+	/* unprivileged qp not allowed privileged qkey */
+	if ((mask & IB_QP_QKEY) && (attr->qkey & 0x80000000) &&
+	    qp->ibqp.uobject)
+		return -EPERM;
+
+	return 0;
+}
+
+int ionic_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask,
+		    struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_pd *pd = to_ionic_pd(ibqp->pd);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	rc = ionic_check_modify_qp(qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_CAP)
+		return -EINVAL;
+
+	rc = ionic_modify_qp_cmd(dev, pd, qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (mask & IB_QP_STATE) {
+		qp->state = attr->qp_state;
+
+		if (attr->qp_state == IB_QPS_ERR) {
+			ionic_flush_qp(dev, qp);
+			ionic_notify_qp_cqs(dev, qp);
+		} else if (attr->qp_state == IB_QPS_RESET) {
+			ionic_reset_qp(dev, qp);
+		}
+	}
+
+	return 0;
+}
+
+int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+		   int mask, struct ib_qp_init_attr *init_attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	int rc;
+
+	memset(attr, 0, sizeof(*attr));
+	memset(init_attr, 0, sizeof(*init_attr));
+
+	rc = ionic_query_qp_cmd(dev, qp, attr, mask);
+	if (rc)
+		return rc;
+
+	if (qp->has_sq)
+		attr->cap.max_send_wr = qp->sq.mask;
+
+	if (qp->has_rq)
+		attr->cap.max_recv_wr = qp->rq.mask;
+
+	init_attr->event_handler = ibqp->event_handler;
+	init_attr->qp_context = ibqp->qp_context;
+	init_attr->send_cq = ibqp->send_cq;
+	init_attr->recv_cq = ibqp->recv_cq;
+	init_attr->srq = ibqp->srq;
+	init_attr->xrcd = ibqp->xrcd;
+	init_attr->cap = attr->cap;
+	init_attr->sq_sig_type = qp->sig_all ?
+		IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
+	init_attr->qp_type = ibqp->qp_type;
+	init_attr->create_flags = 0;
+	init_attr->port_num = 0;
+	init_attr->rwq_ind_tbl = ibqp->rwq_ind_tbl;
+	init_attr->source_qpn = 0;
+
+	return rc;
+}
+
+int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+{
+	struct ionic_ctx *ctx =
+		rdma_udata_to_drv_context(udata, struct ionic_ctx, ibctx);
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibqp->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	unsigned long irqflags;
+	struct ionic_cq *cq;
+	int rc;
+
+	rc = ionic_destroy_qp_cmd(dev, qp->qpid);
+	if (rc)
+		return rc;
+
+	xa_erase_irq(&dev->qp_tbl, qp->qpid);
+
+	kref_put(&qp->qp_kref, ionic_qp_complete);
+	wait_for_completion(&qp->qp_rel_comp);
+
+	if (qp->ibqp.send_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.send_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_poll_sq);
+		list_del(&qp->cq_flush_sq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	if (qp->ibqp.recv_cq) {
+		cq = to_ionic_vcq_cq(qp->ibqp.recv_cq, qp->udma_idx);
+		spin_lock_irqsave(&cq->lock, irqflags);
+		ionic_clean_cq(cq, qp->qpid);
+		list_del(&qp->cq_flush_rq);
+		spin_unlock_irqrestore(&cq->lock, irqflags);
+	}
+
+	ionic_qp_rq_destroy(dev, ctx, qp);
+	ionic_qp_sq_destroy(dev, ctx, qp);
+	if (qp->has_ah) {
+		ionic_put_ahid(dev, qp->ahid);
+		kfree(qp->hdr);
+	}
+	ionic_put_qpid(dev, qp->qpid);
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw/ionic/ionic_fw.h
index 44ec69487519..8c1c0a07c527 100644
--- a/drivers/infiniband/hw/ionic/ionic_fw.h
+++ b/drivers/infiniband/hw/ionic/ionic_fw.h
@@ -5,6 +5,266 @@
 #define _IONIC_FW_H_
 
 #include
+#include
+
+/* common for ib spec */
+
+#define IONIC_EXP_DBELL_SZ 8
+
+enum ionic_mrid_bits {
+	IONIC_MRID_INDEX_SHIFT = 8,
+};
+
+static inline u32 ionic_mrid(u32 index, u8 key)
+{
+	return (index << IONIC_MRID_INDEX_SHIFT) | key;
+}
+
+static inline u32 ionic_mrid_index(u32 lrkey)
+{
+	return lrkey >> IONIC_MRID_INDEX_SHIFT;
+}
+
+/* common to all versions */
+
+/* wqe scatter gather element */
+struct ionic_sge {
+	__be64 va;
+	__be32 len;
+	__be32 lkey;
+};
+
+/* admin queue mr type */
+enum ionic_mr_flags {
+	/* bits that determine mr access */
+	IONIC_MRF_LOCAL_WRITE = BIT(0),
+	IONIC_MRF_REMOTE_WRITE = BIT(1),
+	IONIC_MRF_REMOTE_READ = BIT(2),
+	IONIC_MRF_REMOTE_ATOMIC = BIT(3),
+	IONIC_MRF_MW_BIND = BIT(4),
+	IONIC_MRF_ZERO_BASED = BIT(5),
+	IONIC_MRF_ON_DEMAND = BIT(6),
+	IONIC_MRF_PB = BIT(7),
+	IONIC_MRF_ACCESS_MASK = BIT(12) - 1,
+
+	/* bits that determine mr type */
+	IONIC_MRF_UKEY_EN = BIT(13),
+	IONIC_MRF_IS_MW = BIT(14),
+	IONIC_MRF_INV_EN = BIT(15),
+
+	/* base flags combinations for mr types */
+	IONIC_MRF_USER_MR = 0,
+	IONIC_MRF_PHYS_MR = (IONIC_MRF_UKEY_EN |
+			     IONIC_MRF_INV_EN),
+	IONIC_MRF_MW_1 = (IONIC_MRF_UKEY_EN |
+			  IONIC_MRF_IS_MW),
+	IONIC_MRF_MW_2 = (IONIC_MRF_UKEY_EN |
+			  IONIC_MRF_IS_MW |
+			  IONIC_MRF_INV_EN),
+};
+
+static inline int to_ionic_mr_flags(int access)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_LOCAL_WRITE)
+		flags |= IONIC_MRF_LOCAL_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_MRF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_MRF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_MRF_REMOTE_ATOMIC;
+
+	if (access & IB_ACCESS_MW_BIND)
+		flags |= IONIC_MRF_MW_BIND;
+
+	if (access & IB_ZERO_BASED)
+		flags |= IONIC_MRF_ZERO_BASED;
+
+	return flags;
+}
+
+enum ionic_qp_flags {
+	/* bits that determine qp access */
+	IONIC_QPF_REMOTE_WRITE = BIT(0),
+	IONIC_QPF_REMOTE_READ = BIT(1),
+	IONIC_QPF_REMOTE_ATOMIC = BIT(2),
+
+	/* bits that determine other qp behavior */
+	IONIC_QPF_SQ_PB = BIT(6),
+	IONIC_QPF_RQ_PB = BIT(7),
+	IONIC_QPF_SQ_SPEC = BIT(8),
+	IONIC_QPF_RQ_SPEC = BIT(9),
+	IONIC_QPF_REMOTE_PRIVILEGED = BIT(10),
+	IONIC_QPF_SQ_DRAINING = BIT(11),
+	IONIC_QPF_SQD_NOTIFY = BIT(12),
+	IONIC_QPF_SQ_CMB = BIT(13),
+	IONIC_QPF_RQ_CMB = BIT(14),
+	IONIC_QPF_PRIVILEGED = BIT(15),
+};
+
+static inline int from_ionic_qp_flags(int flags)
+{
+	int access_flags = 0;
+
+	if (flags & IONIC_QPF_REMOTE_WRITE)
+		access_flags |= IB_ACCESS_REMOTE_WRITE;
+
+	if (flags & IONIC_QPF_REMOTE_READ)
+		access_flags |= IB_ACCESS_REMOTE_READ;
+
+	if (flags & IONIC_QPF_REMOTE_ATOMIC)
+		access_flags |= IB_ACCESS_REMOTE_ATOMIC;
+
+	return access_flags;
+}
+
+static inline int to_ionic_qp_flags(int access, bool sqd_notify,
+				    bool sq_is_cmb, bool rq_is_cmb,
+				    bool sq_spec, bool rq_spec,
+				    bool privileged, bool remote_privileged)
+{
+	int flags = 0;
+
+	if (access & IB_ACCESS_REMOTE_WRITE)
+		flags |= IONIC_QPF_REMOTE_WRITE;
+
+	if (access & IB_ACCESS_REMOTE_READ)
+		flags |= IONIC_QPF_REMOTE_READ;
+
+	if (access & IB_ACCESS_REMOTE_ATOMIC)
+		flags |= IONIC_QPF_REMOTE_ATOMIC;
+
+	if (sqd_notify)
+		flags |= IONIC_QPF_SQD_NOTIFY;
+
+	if (sq_is_cmb)
+		flags |= IONIC_QPF_SQ_CMB;
+
+	if (rq_is_cmb)
+		flags |= IONIC_QPF_RQ_CMB;
+
+	if (sq_spec)
+		flags |= IONIC_QPF_SQ_SPEC;
+
+	if (rq_spec)
+		flags |= IONIC_QPF_RQ_SPEC;
+
+	if (privileged)
+		flags |= IONIC_QPF_PRIVILEGED;
+
+	if (remote_privileged)
+		flags |= IONIC_QPF_REMOTE_PRIVILEGED;
+
+	return flags;
+}
+
+/* admin queue qp type */
+enum ionic_qp_type {
+	IONIC_QPT_RC,
+	IONIC_QPT_UC,
+	IONIC_QPT_RD,
+	IONIC_QPT_UD,
+	IONIC_QPT_SRQ,
+	IONIC_QPT_XRC_INI,
+	IONIC_QPT_XRC_TGT,
+	IONIC_QPT_XRC_SRQ,
+};
+
+static inline int to_ionic_qp_type(enum ib_qp_type type)
+{
+	switch (type) {
+	case IB_QPT_GSI:
+	case IB_QPT_UD:
+		return IONIC_QPT_UD;
+	case IB_QPT_RC:
+		return IONIC_QPT_RC;
+	case IB_QPT_UC:
+		return IONIC_QPT_UC;
+	case IB_QPT_XRC_INI:
+		return IONIC_QPT_XRC_INI;
+	case IB_QPT_XRC_TGT:
+		return IONIC_QPT_XRC_TGT;
+	default:
+		return -EINVAL;
+	}
+}
+
+/* admin queue qp state */
+enum ionic_qp_state {
+	IONIC_QPS_RESET,
+	IONIC_QPS_INIT,
+	IONIC_QPS_RTR,
+	IONIC_QPS_RTS,
+	IONIC_QPS_SQD,
+	IONIC_QPS_SQE,
+	IONIC_QPS_ERR,
+};
+
+static inline int from_ionic_qp_state(enum ionic_qp_state state)
+{
+	switch (state) {
+	case IONIC_QPS_RESET:
+		return IB_QPS_RESET;
+	case IONIC_QPS_INIT:
+		return IB_QPS_INIT;
+	case IONIC_QPS_RTR:
+		return IB_QPS_RTR;
+	case IONIC_QPS_RTS:
+		return IB_QPS_RTS;
+	case IONIC_QPS_SQD:
+		return IB_QPS_SQD;
+	case IONIC_QPS_SQE:
+		return IB_QPS_SQE;
+	case IONIC_QPS_ERR:
+		return IB_QPS_ERR;
+	default:
+		return -EINVAL;
+	}
+}
+
+static inline int to_ionic_qp_state(enum ib_qp_state state)
+{
+	switch (state) {
+	case IB_QPS_RESET:
+		return IONIC_QPS_RESET;
+	case IB_QPS_INIT:
+		return IONIC_QPS_INIT;
+	case IB_QPS_RTR:
+		return IONIC_QPS_RTR;
+	case IB_QPS_RTS:
+		return IONIC_QPS_RTS;
+	case IB_QPS_SQD:
+		return IONIC_QPS_SQD;
+	case IB_QPS_SQE:
+		return IONIC_QPS_SQE;
+	case IB_QPS_ERR:
+		return IONIC_QPS_ERR;
+	default:
+		return 0;
+	}
+}
+
+static inline int to_ionic_qp_modify_state(enum ib_qp_state to_state,
+					   enum ib_qp_state from_state)
+{
+	return to_ionic_qp_state(to_state) |
+	       (to_ionic_qp_state(from_state) << 4);
+}
+
+/* fw abi v1 */
+
+/* data payload part of v1 wqe */
+union ionic_v1_pld {
+	struct ionic_sge sgl[2];
+	__be32 spec32[8];
+	__be16 spec16[16];
+	__u8 data[32];
+};
 
 /* completion queue v1 cqe */
 struct ionic_v1_cqe {
@@ -78,6 +338,390 @@ static inline u32 ionic_v1_cqe_qtf_qid(u32 qtf)
 	return qtf >> IONIC_V1_CQE_QID_SHIFT;
 }
 
+/* v1 base wqe header */
+struct ionic_v1_base_hdr {
+	__u64 wqe_id;
+	__u8 op;
+	__u8 num_sge_key;
+	__be16 flags;
+	__be32 imm_data_key;
+};
+
+/* v1 receive wqe body */
+struct ionic_v1_recv_bdy {
+	__u8 rsvd[16];
+	union ionic_v1_pld pld;
+};
+
+/* v1 send/rdma wqe body (common, has sgl) */
+struct ionic_v1_common_bdy {
+	union {
+		struct {
+			__be32 ah_id;
+			__be32 dest_qpn;
+			__be32 dest_qkey;
+		} send;
+		struct {
+			__be32 remote_va_high;
+			__be32 remote_va_low;
+			__be32 remote_rkey;
+		} rdma;
+	};
+	__be32 length;
+	union ionic_v1_pld pld;
+};
+
+/* v1 atomic wqe body */
+struct ionic_v1_atomic_bdy {
+	__be32 remote_va_high;
+	__be32 remote_va_low;
+	__be32 remote_rkey;
+	__be32 swap_add_high;
+	__be32 swap_add_low;
+	__be32 compare_high;
+	__be32 compare_low;
+	__u8 rsvd[4];
+	struct ionic_sge sge;
+};
+
+/* v1 reg mr wqe body */
+struct ionic_v1_reg_mr_bdy {
+	__be64 va;
+	__be64 length;
+	__be64 offset;
+	__be64 dma_addr;
+	__be32 map_count;
+	__be16 flags;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+	__u8 rsvd[8];
+};
+
+/* v1 bind mw wqe body */
+struct ionic_v1_bind_mw_bdy {
+	__be64 va;
+	__be64 length;
+	__be32 lkey;
+	__be16 flags;
+	__u8 rsvd[26];
+};
+
+/* v1 send/recv wqe */
+struct ionic_v1_wqe {
+	struct ionic_v1_base_hdr base;
+	union {
+		struct ionic_v1_recv_bdy recv;
+		struct ionic_v1_common_bdy common;
+		struct ionic_v1_atomic_bdy atomic;
+		struct ionic_v1_reg_mr_bdy reg_mr;
+		struct ionic_v1_bind_mw_bdy bind_mw;
+	};
+};
+
+/* queue pair v1 send opcodes */
+enum ionic_v1_op {
+	IONIC_V1_OP_SEND,
+	IONIC_V1_OP_SEND_INV,
+	IONIC_V1_OP_SEND_IMM,
+	IONIC_V1_OP_RDMA_READ,
+	IONIC_V1_OP_RDMA_WRITE,
+	IONIC_V1_OP_RDMA_WRITE_IMM,
+	IONIC_V1_OP_ATOMIC_CS,
+	IONIC_V1_OP_ATOMIC_FA,
+	IONIC_V1_OP_REG_MR,
+	IONIC_V1_OP_LOCAL_INV,
+	IONIC_V1_OP_BIND_MW,
+
+	/* flags */
+	IONIC_V1_FLAG_FENCE = BIT(0),
+	IONIC_V1_FLAG_SOL = BIT(1),
+	IONIC_V1_FLAG_INL = BIT(2),
+	IONIC_V1_FLAG_SIG = BIT(3),
+
+	/* flags last four bits for sgl spec format */
+	IONIC_V1_FLAG_SPEC32 = (1u << 12),
+	IONIC_V1_FLAG_SPEC16 = (2u << 12),
+	IONIC_V1_SPEC_FIRST_SGE = 2,
+};
+
+static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data,
+						int spec, bool expdb)
+{
+	size_t sz_wqe, sz_sgl, sz_data;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb) {
+		min_sge += 1;
+		min_data += IONIC_EXP_DBELL_SZ;
+	}
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, common.pld.sgl[min_sge]);
+	sz_data = offsetof(struct ionic_v1_wqe, common.pld.data[min_data]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	if (sz_data > sz_wqe)
+		sz_wqe = sz_data;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_send_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->common.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_send_wqe_max_data(u8 stride_log2, bool expdb)
+{
+	struct ionic_v1_wqe *wqe = (void *)0;
+	__u8 *data = (void *)(1ull << stride_log2);
+
+	if (expdb)
+		data -= IONIC_EXP_DBELL_SZ;
+
+	return data - wqe->common.pld.data;
+}
+
+static inline size_t ionic_v1_recv_wqe_min_size(int min_sge, int spec,
+						bool expdb)
+{
+	size_t sz_wqe, sz_sgl;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		min_sge += IONIC_V1_SPEC_FIRST_SGE;
+
+	if (expdb)
+		min_sge += 1;
+
+	sz_wqe = sizeof(struct ionic_v1_wqe);
+	sz_sgl = offsetof(struct ionic_v1_wqe, recv.pld.sgl[min_sge]);
+
+	if (sz_sgl > sz_wqe)
+		sz_wqe = sz_sgl;
+
+	return sz_wqe;
+}
+
+static inline int ionic_v1_recv_wqe_max_sge(u8 stride_log2, int spec,
+					    bool expdb)
+{
+	struct ionic_sge *sge = (void *)(1ull << stride_log2);
+	struct ionic_v1_wqe *wqe = (void *)0;
+	int num_sge = 0;
+
+	if (expdb)
+		sge -= 1;
+
+	if (spec > IONIC_V1_SPEC_FIRST_SGE)
+		num_sge = IONIC_V1_SPEC_FIRST_SGE;
+
+	num_sge = sge - &wqe->recv.pld.sgl[num_sge];
+
+	if (spec && num_sge > spec)
+		num_sge = spec;
+
+	return num_sge;
+}
+
+static inline int ionic_v1_use_spec_sge(int min_sge, int spec)
+{
+	if (!spec || min_sge > spec)
+		return 0;
+
+	if (min_sge <= IONIC_V1_SPEC_FIRST_SGE)
+		return IONIC_V1_SPEC_FIRST_SGE;
+
+	return spec;
+}
+
+struct ionic_admin_create_ah {
+	__le64 dma_addr;
+	__le32 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 csum_profile;
+	__u8 crypto;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_AH_IN_V1_LEN 24
+static_assert(sizeof(struct ionic_admin_create_ah) ==
+	      IONIC_ADMIN_CREATE_AH_IN_V1_LEN);
+
+struct ionic_admin_destroy_ah {
+	__le32 ah_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_AH_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_ah) ==
+	      IONIC_ADMIN_DESTROY_AH_IN_V1_LEN);
+
+struct ionic_admin_query_ah {
+	__le64 dma_addr;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_AH_IN_V1_LEN 8
+static_assert(sizeof(struct ionic_admin_query_ah) ==
+	      IONIC_ADMIN_QUERY_AH_IN_V1_LEN);
+
+struct ionic_admin_create_mr {
+	__le64 va;
+	__le64 length;
+	__le32 pd_id;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+	__u8 pt_type;
+	__u8 dir_size_log2;
+	__u8 page_size_log2;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_MR_IN_V1_LEN 45
+static_assert(sizeof(struct ionic_admin_create_mr) ==
+	      IONIC_ADMIN_CREATE_MR_IN_V1_LEN);
+
+struct ionic_admin_destroy_mr {
+	__le32 mr_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_MR_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_mr) ==
+	      IONIC_ADMIN_DESTROY_MR_IN_V1_LEN);
+
+struct ionic_admin_create_cq {
+	__le32 eq_id;
+	__u8 depth_log2;
+	__u8 stride_log2;
+	__u8 dir_size_log2_rsvd;
+	__u8 page_size_log2;
+	__le32 cq_flags;
+	__le32 id_ver;
+	__le32 tbl_index;
+	__le32 map_count;
+	__le64 dma_addr;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_CQ_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_create_cq) ==
+	      IONIC_ADMIN_CREATE_CQ_IN_V1_LEN);
+
+struct ionic_admin_destroy_cq {
+	__le32 cq_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_cq) ==
+	      IONIC_ADMIN_DESTROY_CQ_IN_V1_LEN);
+
+struct ionic_admin_create_qp {
+	__le32 pd_id;
+	__be32 priv_flags;
+	__le32 sq_cq_id;
+	__u8 sq_depth_log2;
+	__u8 sq_stride_log2;
+	__u8 sq_dir_size_log2_rsvd;
+	__u8 sq_page_size_log2;
+	__le32 sq_tbl_index_xrcd_id;
+	__le32 sq_map_count;
+	__le64 sq_dma_addr;
+	__le32 rq_cq_id;
+	__u8 rq_depth_log2;
+	__u8 rq_stride_log2;
+	__u8 rq_dir_size_log2_rsvd;
+	__u8 rq_page_size_log2;
+	__le32 rq_tbl_index_srq_id;
+	__le32 rq_map_count;
+	__le64 rq_dma_addr;
+	__le32 id_ver;
+	__le16 dbid_flags;
+	__u8 type_state;
+	__u8 rsvd;
+} __packed;
+
+#define IONIC_ADMIN_CREATE_QP_IN_V1_LEN 64
+static_assert(sizeof(struct ionic_admin_create_qp) ==
+	      IONIC_ADMIN_CREATE_QP_IN_V1_LEN);
+
+struct ionic_admin_destroy_qp {
+	__le32 qp_id;
+} __packed;
+
+#define IONIC_ADMIN_DESTROY_QP_IN_V1_LEN 4
+static_assert(sizeof(struct ionic_admin_destroy_qp) ==
+	      IONIC_ADMIN_DESTROY_QP_IN_V1_LEN);
+
+struct ionic_admin_mod_qp {
+	__be32 attr_mask;
+	__u8 dcqcn_profile;
+	__u8 tfp_csum_profile;
+	__be16 access_flags;
+	__le32 rq_psn;
+	__le32 sq_psn;
+	__le32 qkey_dest_qpn;
+	__le32 rate_limit_kbps;
+	__u8 pmtu;
+	__u8 retry;
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__u8 rsq_depth;
+	__u8 rrq_depth;
+	__le16 pkey_id;
+	__le32 ah_id_len;
+	__u8 en_pcp;
+	__u8 ip_dscp;
+	__u8 rsvd2;
+	__u8 type_state;
+	union {
+		struct {
+			__le16 rsvd1;
+		};
+		__le32 rrq_index;
+	};
+	__le32 rsq_index;
+	__le64 dma_addr;
+	__le32 id_ver;
+} __packed;
+
+#define IONIC_ADMIN_MODIFY_QP_IN_V1_LEN 60
+static_assert(sizeof(struct ionic_admin_mod_qp) ==
+	      IONIC_ADMIN_MODIFY_QP_IN_V1_LEN);
+
+struct ionic_admin_query_qp {
+	__le64 hdr_dma_addr;
+	__le64 sq_dma_addr;
+	__le64 rq_dma_addr;
+	__le32 ah_id;
+	__le32 id_ver;
+	__le16 dbid_flags;
+} __packed;
+
+#define IONIC_ADMIN_QUERY_QP_IN_V1_LEN 34
+static_assert(sizeof(struct ionic_admin_query_qp) ==
+	      IONIC_ADMIN_QUERY_QP_IN_V1_LEN);
+
 #define ADMIN_WQE_STRIDE 64
 #define ADMIN_WQE_HDR_LEN 4
 
@@ -88,9 +732,66 @@ struct ionic_v1_admin_wqe {
 	__le16 len;
 
 	union {
+		struct ionic_admin_create_ah create_ah;
+		struct ionic_admin_destroy_ah destroy_ah;
+		struct ionic_admin_query_ah query_ah;
+		struct ionic_admin_create_mr create_mr;
+		struct ionic_admin_destroy_mr destroy_mr;
+		struct ionic_admin_create_cq create_cq;
+		struct ionic_admin_destroy_cq destroy_cq;
+		struct ionic_admin_create_qp create_qp;
+		struct ionic_admin_destroy_qp destroy_qp;
+		struct ionic_admin_mod_qp mod_qp;
+		struct ionic_admin_query_qp query_qp;
 	} cmd;
 };
 
+/* side data for query qp */
+struct ionic_v1_admin_query_qp_sq {
+	__u8 rnr_timer;
+	__u8 retry_timeout;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+	__be16 pkey_id;
+	__be32 qkey_dest_qpn;
+	__be32 rate_limit_kbps;
+	__be32 rq_psn;
+};
+
+struct ionic_v1_admin_query_qp_rq {
+	__u8 state_pmtu;
+	__u8 retry_rnrtry;
+	__u8 rrq_depth;
+	__u8 rsq_depth;
+	__be32 sq_psn;
+	__be16 access_perms_flags;
+	__be16 rsvd;
+};
+
+/* admin queue v1 opcodes */
+enum ionic_v1_admin_op {
+	IONIC_V1_ADMIN_NOOP,
+	IONIC_V1_ADMIN_CREATE_CQ,
+	IONIC_V1_ADMIN_CREATE_QP,
+	IONIC_V1_ADMIN_CREATE_MR,
+	IONIC_V1_ADMIN_STATS_HDRS,
+	IONIC_V1_ADMIN_STATS_VALS,
+	IONIC_V1_ADMIN_DESTROY_MR,
+	IONIC_v1_ADMIN_RSVD_7, /* RESIZE_CQ */
+	IONIC_V1_ADMIN_DESTROY_CQ,
+	IONIC_V1_ADMIN_MODIFY_QP,
+	IONIC_V1_ADMIN_QUERY_QP,
+	IONIC_V1_ADMIN_DESTROY_QP,
+	IONIC_V1_ADMIN_DEBUG,
+	IONIC_V1_ADMIN_CREATE_AH,
+	IONIC_V1_ADMIN_QUERY_AH,
+	IONIC_V1_ADMIN_MODIFY_DCQCN,
+	IONIC_V1_ADMIN_DESTROY_AH,
+	IONIC_V1_ADMIN_QP_STATS_HDRS,
+	IONIC_V1_ADMIN_QP_STATS_VALS,
+	IONIC_V1_ADMIN_OPCODES_MAX,
+};
+
 /* admin queue v1 cqe status */
 enum ionic_v1_admin_status {
 	IONIC_V1_ASTS_OK,
@@ -136,6 +837,22 @@ enum ionic_v1_eqe_evt_bits {
 	IONIC_V1_EQE_QP_ERR_ACCESS = 10,
 };
 
+enum ionic_tfp_csum_profiles {
+	IONIC_TFP_CSUM_PROF_ETH_IPV4_UDP = 0,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP = 1,
+	IONIC_TFP_CSUM_PROF_ETH_IPV6_UDP = 2,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_UDP = 3,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP = 4,
+	IONIC_TFP_CSUM_PROF_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP = 5,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV4_UDP = 6,
+	IONIC_TFP_CSUM_PROF_QTAG_IPV4_UDP_VXLAN_ETH_QTAG_IPV6_UDP = 7,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_IPV4_UDP = 8,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_ESP_UDP = 9,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_ESP_UDP = 10,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV6_ESP_UDP = 11,
+	IONIC_TFP_CSUM_PROF_ETH_QTAG_IPV4_UDP_CSUM = 12,
+};
+
 static inline bool ionic_v1_eqe_color(struct ionic_v1_eqe *eqe)
 {
 	return eqe->evt & cpu_to_be32(IONIC_V1_EQE_COLOR);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index 7710190ff65f..6833abbfb1dc 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -15,6 +15,44 @@ MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
 MODULE_LICENSE("GPL");
 MODULE_IMPORT_NS("NET_IONIC");
 
+static const struct ib_device_ops ionic_dev_ops = {
+	.owner = THIS_MODULE,
+	.driver_id = RDMA_DRIVER_IONIC,
+	.uverbs_abi_ver = IONIC_ABI_VERSION,
+
+	.alloc_ucontext = ionic_alloc_ucontext,
+	.dealloc_ucontext = ionic_dealloc_ucontext,
+	.mmap = ionic_mmap,
+	.mmap_free = ionic_mmap_free,
+	.alloc_pd = ionic_alloc_pd,
+	.dealloc_pd = ionic_dealloc_pd,
+	.create_ah = ionic_create_ah,
+	.query_ah = ionic_query_ah,
+	.destroy_ah = ionic_destroy_ah,
+	.create_user_ah = ionic_create_ah,
+	.get_dma_mr = ionic_get_dma_mr,
+	.reg_user_mr = ionic_reg_user_mr,
+	.reg_user_mr_dmabuf = ionic_reg_user_mr_dmabuf,
+	.dereg_mr = ionic_dereg_mr,
+	.alloc_mr = ionic_alloc_mr,
+	.map_mr_sg = ionic_map_mr_sg,
+	.alloc_mw = ionic_alloc_mw,
+	.dealloc_mw = ionic_dealloc_mw,
+	.create_cq = ionic_create_cq,
+	.destroy_cq = ionic_destroy_cq,
+	.create_qp = ionic_create_qp,
+	.modify_qp = ionic_modify_qp,
+	.query_qp = ionic_query_qp,
+	.destroy_qp = ionic_destroy_qp,
+
+	INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx),
+	INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd),
+	INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah),
+	INIT_RDMA_OBJ_SIZE(ib_cq, ionic_vcq, ibcq),
+	INIT_RDMA_OBJ_SIZE(ib_qp, ionic_qp, ibqp),
+	INIT_RDMA_OBJ_SIZE(ib_mw, ionic_mr, ibmw),
+};
+
 static void ionic_init_resids(struct ionic_ibdev *dev)
 {
 	ionic_resid_init(&dev->inuse_cqid, dev->lif_cfg.cq_count);
@@ -48,6 +86,8 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 	ib_unregister_device(&dev->ibdev);
 	ionic_destroy_rdma_admin(dev);
 	ionic_destroy_resids(dev);
+	WARN_ON(!xa_empty(&dev->qp_tbl));
+	xa_destroy(&dev->qp_tbl);
 	WARN_ON(!xa_empty(&dev->cq_tbl));
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
@@ -66,6 +106,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ionic_fill_lif_cfg(ionic_adev->lif, &dev->lif_cfg);
 
+	xa_init_flags(&dev->qp_tbl, GFP_ATOMIC);
 	xa_init_flags(&dev->cq_tbl, GFP_ATOMIC);
 
 	ionic_init_resids(dev);
@@ -98,6 +139,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	if (rc)
 		goto err_admin;
 
+	ib_set_device_ops(&dev->ibdev, &ionic_dev_ops);
+
 	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
 	if (rc)
 		goto err_register;
@@ -110,6 +153,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	ionic_destroy_rdma_admin(dev);
 err_reset:
 	ionic_destroy_resids(dev);
+	xa_destroy(&dev->qp_tbl);
 	xa_destroy(&dev->cq_tbl);
 	ib_dealloc_device(&dev->ibdev);
 
@@ -161,7 +205,7 @@ static int __init ionic_mod_init(void)
 {
 	int rc;
 
-	ionic_evt_workq = create_workqueue(DRIVER_NAME "-evt");
+	ionic_evt_workq = create_workqueue(KBUILD_MODNAME "-evt");
 	if (!ionic_evt_workq)
 		return -ENOMEM;
 
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index 490897628f41..cb1ac8aca358 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -6,7 +6,10 @@
 
 #include
 #include
+#include
+#include
 
+#include
 #include
 #include
 
@@ -24,9 +27,26 @@
 #define IONIC_EQ_ISR_BUDGET 10
 #define IONIC_EQ_WORK_BUDGET 1000
 #define IONIC_MAX_PD 1024
+#define IONIC_SPEC_HIGH 8
+#define IONIC_SQCMB_ORDER 5
+#define IONIC_RQCMB_ORDER 0
+
+#define IONIC_META_LAST ((void *)1ul)
+#define IONIC_META_POSTED ((void *)2ul)
 
 #define IONIC_CQ_GRACE 100
 
+#define IONIC_ROCE_UDP_SPORT 28272
+#define IONIC_DMA_LKEY 0
+#define IONIC_DMA_RKEY IONIC_DMA_LKEY
+
+#define IONIC_CMB_SUPPORTED \
+	(IONIC_CMB_ENABLE | IONIC_CMB_REQUIRE | IONIC_CMB_EXPDB | \
+	 IONIC_CMB_WC | IONIC_CMB_UC)
+
+/* resource is not reserved on the device, indicated in tbl_order */
+#define IONIC_RES_INVALID -1
+
 struct ionic_aq;
 struct ionic_cq;
 struct ionic_eq;
@@ -44,14 +64,6 @@ enum ionic_admin_flags {
 	IONIC_ADMIN_F_INTERRUPT = BIT(2), /* Interruptible w/timeout */
 };
 
-struct ionic_qdesc {
-	__aligned_u64 addr;
-	__u32 size;
-	__u16 mask;
-	__u8 depth_log2;
-	__u8 stride_log2;
-};
-
 enum ionic_mmap_flag {
 	IONIC_MMAP_WC = BIT(0),
 };
@@ -160,6 +172,13 @@ struct ionic_tbl_buf {
 	u8 page_size_log2;
 };
 
+struct ionic_pd {
+	struct ib_pd ibpd;
+
+	u32 pdid;
+	u32 flags;
+};
+
 struct ionic_cq {
 	struct ionic_vcq *vcq;
 
@@ -193,11 +212,188 @@ struct ionic_vcq {
 	u8 poll_idx;
 };
 
+struct ionic_sq_meta {
+	u64 wrid;
+	u32 len;
+	u16 seq;
+	u8 ibop;
+	u8 ibsts;
+	u8 remote:1;
+	u8 signal:1;
+	u8 local_comp:1;
+};
+
+struct ionic_rq_meta {
+	struct ionic_rq_meta *next;
+	u64 wrid;
+};
+
+struct ionic_qp {
+	struct ib_qp ibqp;
+	enum ib_qp_state state;
+
+	u32 qpid;
+	u32 ahid;
+	u32 sq_cqid;
+	u32 rq_cqid;
+	u8 udma_idx;
+	u8 has_ah:1;
+	u8 has_sq:1;
+	u8 has_rq:1;
+	u8 sig_all:1;
+
+	struct list_head qp_list_counter;
+
+	struct list_head cq_poll_sq;
+	struct list_head cq_flush_sq;
+	struct list_head cq_flush_rq;
+	struct list_head ibkill_flush_ent;
+
+	spinlock_t sq_lock; /* for posting and polling */
+	struct ionic_queue sq;
+	struct ionic_sq_meta *sq_meta;
+	u16 *sq_msn_idx;
+	int sq_spec;
+	u16 sq_old_prod;
+	u16 sq_msn_prod;
+
u16 sq_msn_cons; + u8 sq_cmb; + bool sq_flush; + bool sq_flush_rcvd; + + spinlock_t rq_lock; /* for posting and polling */ + struct ionic_queue rq; + struct ionic_rq_meta *rq_meta; + struct ionic_rq_meta *rq_meta_head; + int rq_spec; + u16 rq_old_prod; + u8 rq_cmb; + bool rq_flush; + + struct kref qp_kref; + struct completion qp_rel_comp; + + /* infrequently accessed, keep at end */ + int sgid_index; + int sq_cmb_order; + u32 sq_cmb_pgid; + phys_addr_t sq_cmb_addr; + struct rdma_user_mmap_entry *mmap_sq_cmb; + + struct ib_umem *sq_umem; + + int rq_cmb_order; + u32 rq_cmb_pgid; + phys_addr_t rq_cmb_addr; + struct rdma_user_mmap_entry *mmap_rq_cmb; + + struct ib_umem *rq_umem; + + int dcqcn_profile; + + struct ib_ud_header *hdr; +}; + +struct ionic_ah { + struct ib_ah ibah; + u32 ahid; + int sgid_index; + struct ib_ud_header hdr; +}; + +struct ionic_mr { + union { + struct ib_mr ibmr; + struct ib_mw ibmw; + }; + + u32 mrid; + int flags; + + struct ib_umem *umem; + struct ionic_tbl_buf buf; + bool created; +}; + static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev) { return container_of(ibdev, struct ionic_ibdev, ibdev); } =20 +static inline struct ionic_ctx *to_ionic_ctx(struct ib_ucontext *ibctx) +{ + return container_of(ibctx, struct ionic_ctx, ibctx); +} + +static inline struct ionic_ctx *to_ionic_ctx_uobj(struct ib_uobject *uobj) +{ + if (!uobj) + return NULL; + + if (!uobj->context) + return NULL; + + return to_ionic_ctx(uobj->context); +} + +static inline struct ionic_pd *to_ionic_pd(struct ib_pd *ibpd) +{ + return container_of(ibpd, struct ionic_pd, ibpd); +} + +static inline struct ionic_mr *to_ionic_mr(struct ib_mr *ibmr) +{ + return container_of(ibmr, struct ionic_mr, ibmr); +} + +static inline struct ionic_mr *to_ionic_mw(struct ib_mw *ibmw) +{ + return container_of(ibmw, struct ionic_mr, ibmw); +} + +static inline struct ionic_vcq *to_ionic_vcq(struct ib_cq *ibcq) +{ + return container_of(ibcq, struct ionic_vcq, ibcq); +} + +static 
inline struct ionic_cq *to_ionic_vcq_cq(struct ib_cq *ibcq, + uint8_t udma_idx) +{ + return &to_ionic_vcq(ibcq)->cq[udma_idx]; +} + +static inline struct ionic_qp *to_ionic_qp(struct ib_qp *ibqp) +{ + return container_of(ibqp, struct ionic_qp, ibqp); +} + +static inline struct ionic_ah *to_ionic_ah(struct ib_ah *ibah) +{ + return container_of(ibah, struct ionic_ah, ibah); +} + +static inline u32 ionic_ctx_dbid(struct ionic_ibdev *dev, + struct ionic_ctx *ctx) +{ + if (!ctx) + return dev->lif_cfg.dbid; + + return ctx->dbid; +} + +static inline u32 ionic_obj_dbid(struct ionic_ibdev *dev, + struct ib_uobject *uobj) +{ + return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj)); +} + +static inline void ionic_qp_complete(struct kref *kref) +{ + struct ionic_qp *qp =3D container_of(kref, struct ionic_qp, qp_kref); + + complete(&qp->qp_rel_comp); +} + static inline void ionic_cq_complete(struct kref *kref) { struct ionic_cq *cq =3D container_of(kref, struct ionic_cq, cq_kref); @@ -227,8 +423,45 @@ int ionic_create_cq_common(struct ionic_vcq *vcq, __u32 *resp_cqid, int udma_idx); void ionic_destroy_cq_common(struct ionic_ibdev *dev, struct ionic_cq *cq); +void ionic_flush_qp(struct ionic_ibdev *dev, struct ionic_qp *qp); +void ionic_notify_flush_cq(struct ionic_cq *cq); + +int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata= ); +void ionic_dealloc_ucontext(struct ib_ucontext *ibctx); +int ionic_mmap(struct ib_ucontext *ibctx, struct vm_area_struct *vma); +void ionic_mmap_free(struct rdma_user_mmap_entry *rdma_entry); +int ionic_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); +int ionic_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); +int ionic_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_att= r, + struct ib_udata *udata); +int ionic_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); +int ionic_destroy_ah(struct ib_ah *ibah, u32 flags); +struct ib_mr *ionic_get_dma_mr(struct ib_pd *ibpd, int access); +struct ib_mr 
*ionic_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length, + u64 addr, int access, struct ib_udata *udata); +struct ib_mr *ionic_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 offset, + u64 length, u64 addr, int fd, int access, + struct uverbs_attr_bundle *attrs); +int ionic_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); +struct ib_mr *ionic_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type type, + u32 max_sg); +int ionic_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nen= ts, + unsigned int *sg_offset); +int ionic_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); +int ionic_dealloc_mw(struct ib_mw *ibmw); +int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct uverbs_attr_bundle *attrs); +int ionic_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata); +int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr, + struct ib_udata *udata); +int ionic_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask, + struct ib_udata *udata); +int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int mask, + struct ib_qp_init_attr *init_attr); +int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata); =20 /* ionic_pgtbl.c */ +__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma); int ionic_pgtbl_init(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf, diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband= /hw/ionic/ionic_pgtbl.c index 11461f7642bc..a8eb73be6f86 100644 --- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c +++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c @@ -7,6 +7,25 @@ #include "ionic_fw.h" #include "ionic_ibdev.h" =20 +__le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va) +{ + u64 pg_mask =3D BIT_ULL(buf->page_size_log2) - 1; + u64 dma; + + if (!buf->tbl_pages) + return cpu_to_le64(0); + + if (buf->tbl_pages > 1) + return cpu_to_le64(buf->tbl_dma); + + if (buf->tbl_buf) + dma =3D 
le64_to_cpu(buf->tbl_buf[0]); + else + dma =3D buf->tbl_dma; + + return cpu_to_le64(dma + (va & pg_mask)); +} + int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma) { if (unlikely(buf->tbl_pages =3D=3D buf->tbl_limit)) diff --git a/include/uapi/rdma/ionic-abi.h b/include/uapi/rdma/ionic-abi.h new file mode 100644 index 000000000000..7b589d3e9728 --- /dev/null +++ b/include/uapi/rdma/ionic-abi.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc */ + +#ifndef IONIC_ABI_H +#define IONIC_ABI_H + +#include + +#define IONIC_ABI_VERSION 1 + +#define IONIC_EXPDB_64 1 +#define IONIC_EXPDB_128 2 +#define IONIC_EXPDB_256 4 +#define IONIC_EXPDB_512 8 + +#define IONIC_EXPDB_SQ 1 +#define IONIC_EXPDB_RQ 2 + +#define IONIC_CMB_ENABLE 1 +#define IONIC_CMB_REQUIRE 2 +#define IONIC_CMB_EXPDB 4 +#define IONIC_CMB_WC 8 +#define IONIC_CMB_UC 16 + +struct ionic_ctx_req { + __u32 rsvd[2]; +}; + +struct ionic_ctx_resp { + __u32 rsvd; + __u32 page_shift; + + __aligned_u64 dbell_offset; + + __u16 version; + __u8 qp_opcodes; + __u8 admin_opcodes; + + __u8 sq_qtype; + __u8 rq_qtype; + __u8 cq_qtype; + __u8 admin_qtype; + + __u8 max_stride; + __u8 max_spec; + __u8 udma_count; + __u8 expdb_mask; + __u8 expdb_qtypes; + + __u8 rsvd2[3]; +}; + +struct ionic_qdesc { + __aligned_u64 addr; + __u32 size; + __u16 mask; + __u8 depth_log2; + __u8 stride_log2; +}; + +struct ionic_ah_resp { + __u32 ahid; + __u32 pad; +}; + +struct ionic_cq_req { + struct ionic_qdesc cq[2]; + __u8 udma_mask; + __u8 rsvd[7]; +}; + +struct ionic_cq_resp { + __u32 cqid[2]; + __u8 udma_mask; + __u8 rsvd[7]; +}; + +struct ionic_qp_req { + struct ionic_qdesc sq; + struct ionic_qdesc rq; + __u8 sq_spec; + __u8 rq_spec; + __u8 sq_cmb; + __u8 rq_cmb; + __u8 udma_mask; + __u8 rsvd[3]; +}; + +struct ionic_qp_resp { + __u32 qpid; + __u8 sq_cmb; + __u8 rq_cmb; + __u8 udma_idx; + __u8 rsvd[1]; + __aligned_u64 sq_cmb_offset; + __aligned_u64 
rq_cmb_offset; +}; + +struct ionic_srq_req { + struct ionic_qdesc rq; + __u8 rq_spec; + __u8 rq_cmb; + __u8 udma_mask; + __u8 rsvd[5]; +}; + +struct ionic_srq_resp { + __u32 qpid; + __u8 rq_cmb; + __u8 udma_idx; + __u8 rsvd[2]; + __aligned_u64 rq_cmb_offset; +}; + +#endif /* IONIC_ABI_H */ --=20 2.43.0 From nobody Mon Oct 6 06:32:12 2025 From: Abhijit Gangurde To: , , , , , , , , , CC: , , , , , , Abhijit Gangurde , Andrew Boyer Subject: [PATCH v4 11/14] RDMA/ionic: Register device ops for datapath Date: Wed, 23 Jul 2025 23:01:46 +0530 Message-ID: <20250723173149.2568776-12-abhijit.gangurde@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com> References: <20250723173149.2568776-1-abhijit.gangurde@amd.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement device supported verb APIs for datapath.
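As context for review: the CQ poll path added below (ionic_next_cqe) accepts a CQE only when the entry's color bit matches the CQ's current color, and the consumer flips its expected color each time the ring index wraps, so stale entries from the previous lap are never mistaken for new completions. A minimal userspace sketch of that convention follows; the demo_* names and the 8-entry ring are illustrative assumptions, not the driver's actual layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A CQE carries a color bit written by the producer; it is valid only
 * while it matches the consumer's current color for this lap. */
struct demo_cqe {
	uint32_t data;
	bool color;
};

struct demo_cq {
	struct demo_cqe ring[8];	/* power-of-two depth */
	uint16_t cons;			/* consumer index */
	bool color;			/* color expected on this lap */
};

/* Return true and expose the CQE if the entry at cons was produced on
 * the current lap (colors match); false means "ring empty for now". */
static bool demo_next_cqe(struct demo_cq *cq, struct demo_cqe **cqe)
{
	struct demo_cqe *q = &cq->ring[cq->cons];

	if (cq->color != q->color)
		return false;

	*cqe = q;
	return true;
}

/* Advance the consumer; on wraparound, invert the expected color. */
static void demo_consume(struct demo_cq *cq)
{
	cq->cons = (cq->cons + 1) & 7;
	if (cq->cons == 0)
		cq->color = !cq->color;
}
```

In the real driver a dma_rmb() (omitted here) orders the color check before any read of the CQE payload, which this single-threaded sketch does not need.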
Co-developed-by: Andrew Boyer Signed-off-by: Andrew Boyer Co-developed-by: Allen Hubbe Signed-off-by: Allen Hubbe Signed-off-by: Abhijit Gangurde --- v2->v3 - Registered main ib ops at once - Removed uverbs_cmd_mask drivers/infiniband/hw/ionic/ionic_datapath.c | 1392 ++++++++++++++++++ drivers/infiniband/hw/ionic/ionic_fw.h | 105 ++ drivers/infiniband/hw/ionic/ionic_ibdev.c | 5 + drivers/infiniband/hw/ionic/ionic_ibdev.h | 14 + drivers/infiniband/hw/ionic/ionic_pgtbl.c | 11 + 5 files changed, 1527 insertions(+) create mode 100644 drivers/infiniband/hw/ionic/ionic_datapath.c diff --git a/drivers/infiniband/hw/ionic/ionic_datapath.c b/drivers/infinib= and/hw/ionic/ionic_datapath.c new file mode 100644 index 000000000000..4f618532ccb2 --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_datapath.c @@ -0,0 +1,1392 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#include +#include +#include +#include + +#include "ionic_fw.h" +#include "ionic_ibdev.h" + +#define IONIC_OP(version, opname) \ + ((version) < 2 ? 
IONIC_V1_OP_##opname : IONIC_V2_OP_##opname) + +static bool ionic_next_cqe(struct ionic_ibdev *dev, struct ionic_cq *cq, + struct ionic_v1_cqe **cqe) +{ + struct ionic_v1_cqe *qcqe =3D ionic_queue_at_prod(&cq->q); + + if (unlikely(cq->color !=3D ionic_v1_cqe_color(qcqe))) + return false; + + /* Prevent out-of-order reads of the CQE */ + dma_rmb(); + + *cqe =3D qcqe; + + return true; +} + +static int ionic_flush_recv(struct ionic_qp *qp, struct ib_wc *wc) +{ + struct ionic_rq_meta *meta; + struct ionic_v1_wqe *wqe; + + if (!qp->rq_flush) + return 0; + + if (ionic_queue_empty(&qp->rq)) + return 0; + + wqe =3D ionic_queue_at_cons(&qp->rq); + + /* wqe_id must be a valid queue index */ + if (unlikely(wqe->base.wqe_id >> qp->rq.depth_log2)) { + ibdev_warn(qp->ibqp.device, + "flush qp %u recv index %llu invalid\n", + qp->qpid, (unsigned long long)wqe->base.wqe_id); + return -EIO; + } + + /* wqe_id must indicate a request that is outstanding */ + meta =3D &qp->rq_meta[wqe->base.wqe_id]; + if (unlikely(meta->next !=3D IONIC_META_POSTED)) { + ibdev_warn(qp->ibqp.device, + "flush qp %u recv index %llu not posted\n", + qp->qpid, (unsigned long long)wqe->base.wqe_id); + return -EIO; + } + + ionic_queue_consume(&qp->rq); + + memset(wc, 0, sizeof(*wc)); + + wc->status =3D IB_WC_WR_FLUSH_ERR; + wc->wr_id =3D meta->wrid; + wc->qp =3D &qp->ibqp; + + meta->next =3D qp->rq_meta_head; + qp->rq_meta_head =3D meta; + + return 1; +} + +static int ionic_flush_recv_many(struct ionic_qp *qp, + struct ib_wc *wc, int nwc) +{ + int rc =3D 0, npolled =3D 0; + + while (npolled < nwc) { + rc =3D ionic_flush_recv(qp, wc + npolled); + if (rc <=3D 0) + break; + + npolled +=3D rc; + } + + return npolled ?: rc; +} + +static int ionic_flush_send(struct ionic_qp *qp, struct ib_wc *wc) +{ + struct ionic_sq_meta *meta; + + if (!qp->sq_flush) + return 0; + + if (ionic_queue_empty(&qp->sq)) + return 0; + + meta =3D &qp->sq_meta[qp->sq.cons]; + + ionic_queue_consume(&qp->sq); + + memset(wc, 0, sizeof(*wc)); + 
+ wc->status =3D IB_WC_WR_FLUSH_ERR; + wc->wr_id =3D meta->wrid; + wc->qp =3D &qp->ibqp; + + return 1; +} + +static int ionic_flush_send_many(struct ionic_qp *qp, + struct ib_wc *wc, int nwc) +{ + int rc =3D 0, npolled =3D 0; + + while (npolled < nwc) { + rc =3D ionic_flush_send(qp, wc + npolled); + if (rc <=3D 0) + break; + + npolled +=3D rc; + } + + return npolled ?: rc; +} + +static int ionic_poll_recv(struct ionic_ibdev *dev, struct ionic_cq *cq, + struct ionic_qp *cqe_qp, struct ionic_v1_cqe *cqe, + struct ib_wc *wc) +{ + struct ionic_qp *qp =3D NULL; + struct ionic_rq_meta *meta; + u32 src_qpn, st_len; + u16 vlan_tag; + u8 op; + + if (cqe_qp->rq_flush) + return 0; + + qp =3D cqe_qp; + + st_len =3D be32_to_cpu(cqe->status_length); + + /* ignore wqe_id in case of flush error */ + if (ionic_v1_cqe_error(cqe) && st_len =3D=3D IONIC_STS_WQE_FLUSHED_ERR) { + cqe_qp->rq_flush =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_rq, &cq->flush_rq); + + /* posted recvs (if any) flushed by ionic_flush_recv */ + return 0; + } + + /* there had better be something in the recv queue to complete */ + if (ionic_queue_empty(&qp->rq)) { + ibdev_warn(&dev->ibdev, "qp %u is empty\n", qp->qpid); + return -EIO; + } + + /* wqe_id must be a valid queue index */ + if (unlikely(cqe->recv.wqe_id >> qp->rq.depth_log2)) { + ibdev_warn(&dev->ibdev, + "qp %u recv index %llu invalid\n", + qp->qpid, (unsigned long long)cqe->recv.wqe_id); + return -EIO; + } + + /* wqe_id must indicate a request that is outstanding */ + meta =3D &qp->rq_meta[cqe->recv.wqe_id]; + if (unlikely(meta->next !=3D IONIC_META_POSTED)) { + ibdev_warn(&dev->ibdev, + "qp %u recv index %llu not posted\n", + qp->qpid, (unsigned long long)cqe->recv.wqe_id); + return -EIO; + } + + meta->next =3D qp->rq_meta_head; + qp->rq_meta_head =3D meta; + + memset(wc, 0, sizeof(*wc)); + + wc->wr_id =3D meta->wrid; + + wc->qp =3D &cqe_qp->ibqp; + + if (ionic_v1_cqe_error(cqe)) { + wc->vendor_err =3D st_len; + wc->status =3D 
ionic_to_ib_status(st_len); + + cqe_qp->rq_flush =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_rq, &cq->flush_rq); + + ibdev_warn(&dev->ibdev, + "qp %d recv cqe with error\n", qp->qpid); + print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1, + cqe, BIT(cq->q.stride_log2), true); + goto out; + } + + wc->vendor_err =3D 0; + wc->status =3D IB_WC_SUCCESS; + + src_qpn =3D be32_to_cpu(cqe->recv.src_qpn_op); + op =3D src_qpn >> IONIC_V1_CQE_RECV_OP_SHIFT; + + src_qpn &=3D IONIC_V1_CQE_RECV_QPN_MASK; + op &=3D IONIC_V1_CQE_RECV_OP_MASK; + + wc->opcode =3D IB_WC_RECV; + switch (op) { + case IONIC_V1_CQE_RECV_OP_RDMA_IMM: + wc->opcode =3D IB_WC_RECV_RDMA_WITH_IMM; + wc->wc_flags |=3D IB_WC_WITH_IMM; + wc->ex.imm_data =3D cqe->recv.imm_data_rkey; /* be32 in wc */ + break; + case IONIC_V1_CQE_RECV_OP_SEND_IMM: + wc->wc_flags |=3D IB_WC_WITH_IMM; + wc->ex.imm_data =3D cqe->recv.imm_data_rkey; /* be32 in wc */ + break; + case IONIC_V1_CQE_RECV_OP_SEND_INV: + wc->wc_flags |=3D IB_WC_WITH_INVALIDATE; + wc->ex.invalidate_rkey =3D be32_to_cpu(cqe->recv.imm_data_rkey); + break; + } + + wc->byte_len =3D st_len; + wc->src_qp =3D src_qpn; + + if (qp->ibqp.qp_type =3D=3D IB_QPT_UD || + qp->ibqp.qp_type =3D=3D IB_QPT_GSI) { + wc->wc_flags |=3D IB_WC_GRH | IB_WC_WITH_SMAC; + ether_addr_copy(wc->smac, cqe->recv.src_mac); + + wc->wc_flags |=3D IB_WC_WITH_NETWORK_HDR_TYPE; + if (ionic_v1_cqe_recv_is_ipv4(cqe)) + wc->network_hdr_type =3D RDMA_NETWORK_IPV4; + else + wc->network_hdr_type =3D RDMA_NETWORK_IPV6; + + if (ionic_v1_cqe_recv_is_vlan(cqe)) + wc->wc_flags |=3D IB_WC_WITH_VLAN; + + /* vlan_tag in cqe will be valid from dpath even if no vlan */ + vlan_tag =3D be16_to_cpu(cqe->recv.vlan_tag); + wc->vlan_id =3D vlan_tag & 0xfff; /* 802.1q VID */ + wc->sl =3D vlan_tag >> VLAN_PRIO_SHIFT; /* 802.1q PCP */ + } + + wc->pkey_index =3D 0; + wc->port_num =3D 1; + +out: + ionic_queue_consume(&qp->rq); + + return 1; +} + +static bool ionic_peek_send(struct ionic_qp *qp) 
+{ + struct ionic_sq_meta *meta; + + if (qp->sq_flush) + return false; + + /* completed all send queue requests */ + if (ionic_queue_empty(&qp->sq)) + return false; + + meta =3D &qp->sq_meta[qp->sq.cons]; + + /* waiting for remote completion */ + if (meta->remote && meta->seq =3D=3D qp->sq_msn_cons) + return false; + + /* waiting for local completion */ + if (!meta->remote && !meta->local_comp) + return false; + + return true; +} + +static int ionic_poll_send(struct ionic_ibdev *dev, struct ionic_cq *cq, + struct ionic_qp *qp, struct ib_wc *wc) +{ + struct ionic_sq_meta *meta; + + if (qp->sq_flush) + return 0; + + do { + /* completed all send queue requests */ + if (ionic_queue_empty(&qp->sq)) + goto out_empty; + + meta =3D &qp->sq_meta[qp->sq.cons]; + + /* waiting for remote completion */ + if (meta->remote && meta->seq =3D=3D qp->sq_msn_cons) + goto out_empty; + + /* waiting for local completion */ + if (!meta->remote && !meta->local_comp) + goto out_empty; + + ionic_queue_consume(&qp->sq); + + /* produce wc only if signaled or error status */ + } while (!meta->signal && meta->ibsts =3D=3D IB_WC_SUCCESS); + + memset(wc, 0, sizeof(*wc)); + + wc->status =3D meta->ibsts; + wc->wr_id =3D meta->wrid; + wc->qp =3D &qp->ibqp; + + if (meta->ibsts =3D=3D IB_WC_SUCCESS) { + wc->byte_len =3D meta->len; + wc->opcode =3D meta->ibop; + } else { + wc->vendor_err =3D meta->len; + + qp->sq_flush =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_sq, &cq->flush_sq); + } + + return 1; + +out_empty: + if (qp->sq_flush_rcvd) { + qp->sq_flush =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_sq, &cq->flush_sq); + } + return 0; +} + +static int ionic_poll_send_many(struct ionic_ibdev *dev, struct ionic_cq *= cq, + struct ionic_qp *qp, struct ib_wc *wc, int nwc) +{ + int rc =3D 0, npolled =3D 0; + + while (npolled < nwc) { + rc =3D ionic_poll_send(dev, cq, qp, wc + npolled); + if (rc <=3D 0) + break; + + npolled +=3D rc; + } + + return npolled ?: rc; +} + 
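The completion-index check defined next, ionic_validate_cons(), relies on modular distance within a power-of-two ring: a reported completion index is acceptable only if its distance from the consumer, computed with the ring mask, is strictly less than the producer's distance, which stays correct across u16 wraparound. A standalone userspace copy of the same arithmetic (demo_validate_cons is a hypothetical name; the -1 return stands in for -EIO):

```c
#include <assert.h>
#include <stdint.h>

/* Same expression as ionic_validate_cons: error if the completion's
 * masked distance from cons reaches or exceeds the producer's masked
 * distance, i.e. comp lies outside the outstanding [cons, prod) window. */
static int demo_validate_cons(uint16_t prod, uint16_t cons,
			      uint16_t comp, uint16_t mask)
{
	if (((prod - cons) & mask) <= ((comp - cons) & mask))
		return -1;	/* completion outside the outstanding window */
	return 0;
}
```

With mask 0xff (a 256-entry ring), comp equal to prod is rejected, and the masked subtraction keeps the window test valid even when the raw u16 indices have wrapped past zero.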
+static int ionic_validate_cons(u16 prod, u16 cons, + u16 comp, u16 mask) +{ + if (((prod - cons) & mask) <=3D ((comp - cons) & mask)) + return -EIO; + + return 0; +} + +static int ionic_comp_msn(struct ionic_qp *qp, struct ionic_v1_cqe *cqe) +{ + struct ionic_sq_meta *meta; + u16 cqe_seq, cqe_idx; + int rc; + + if (qp->sq_flush) + return 0; + + cqe_seq =3D be32_to_cpu(cqe->send.msg_msn) & qp->sq.mask; + + rc =3D ionic_validate_cons(qp->sq_msn_prod, + qp->sq_msn_cons, + cqe_seq - 1, + qp->sq.mask); + if (rc) { + ibdev_warn(qp->ibqp.device, + "qp %u bad msn %#x seq %u for prod %u cons %u\n", + qp->qpid, be32_to_cpu(cqe->send.msg_msn), + cqe_seq, qp->sq_msn_prod, qp->sq_msn_cons); + return rc; + } + + qp->sq_msn_cons =3D cqe_seq; + + if (ionic_v1_cqe_error(cqe)) { + cqe_idx =3D qp->sq_msn_idx[(cqe_seq - 1) & qp->sq.mask]; + + meta =3D &qp->sq_meta[cqe_idx]; + meta->len =3D be32_to_cpu(cqe->status_length); + meta->ibsts =3D ionic_to_ib_status(meta->len); + + ibdev_warn(qp->ibqp.device, + "qp %d msn cqe with error\n", qp->qpid); + print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1, + cqe, sizeof(*cqe), true); + } + + return 0; +} + +static int ionic_comp_npg(struct ionic_qp *qp, struct ionic_v1_cqe *cqe) +{ + struct ionic_sq_meta *meta; + u16 cqe_idx; + u32 st_len; + + if (qp->sq_flush) + return 0; + + st_len =3D be32_to_cpu(cqe->status_length); + + if (ionic_v1_cqe_error(cqe) && st_len =3D=3D IONIC_STS_WQE_FLUSHED_ERR) { + /* + * Flush cqe does not consume a wqe on the device, and maybe + * no such work request is posted. + * + * The driver should begin flushing after the last indicated + * normal or error completion. Here, only set a hint that the + * flush request was indicated. In poll_send, if nothing more + * can be polled normally, then begin flushing. 
+ */ + qp->sq_flush_rcvd =3D true; + return 0; + } + + cqe_idx =3D cqe->send.npg_wqe_id & qp->sq.mask; + meta =3D &qp->sq_meta[cqe_idx]; + meta->local_comp =3D true; + + if (ionic_v1_cqe_error(cqe)) { + meta->len =3D st_len; + meta->ibsts =3D ionic_to_ib_status(st_len); + meta->remote =3D false; + ibdev_warn(qp->ibqp.device, + "qp %d npg cqe with error\n", qp->qpid); + print_hex_dump(KERN_WARNING, "cqe ", DUMP_PREFIX_OFFSET, 16, 1, + cqe, sizeof(*cqe), true); + } + + return 0; +} + +static void ionic_reserve_sync_cq(struct ionic_ibdev *dev, struct ionic_cq= *cq) +{ + if (!ionic_queue_empty(&cq->q)) { + cq->credit +=3D ionic_queue_length(&cq->q); + cq->q.cons =3D cq->q.prod; + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype, + ionic_queue_dbell_val(&cq->q)); + } +} + +static void ionic_reserve_cq(struct ionic_ibdev *dev, struct ionic_cq *cq, + int spend) +{ + cq->credit -=3D spend; + + if (cq->credit <=3D 0) + ionic_reserve_sync_cq(dev, cq); +} + +static int ionic_poll_vcq_cq(struct ionic_ibdev *dev, + struct ionic_cq *cq, + int nwc, struct ib_wc *wc) +{ + struct ionic_qp *qp, *qp_next; + struct ionic_v1_cqe *cqe; + int rc =3D 0, npolled =3D 0; + unsigned long irqflags; + u32 qtf, qid; + bool peek; + u8 type; + + if (nwc < 1) + return 0; + + spin_lock_irqsave(&cq->lock, irqflags); + + /* poll already indicated work completions for send queue */ + list_for_each_entry_safe(qp, qp_next, &cq->poll_sq, cq_poll_sq) { + if (npolled =3D=3D nwc) + goto out; + + spin_lock(&qp->sq_lock); + rc =3D ionic_poll_send_many(dev, cq, qp, wc + npolled, + nwc - npolled); + spin_unlock(&qp->sq_lock); + + if (rc > 0) + npolled +=3D rc; + + if (npolled < nwc) + list_del_init(&qp->cq_poll_sq); + } + + /* poll for more work completions */ + while (likely(ionic_next_cqe(dev, cq, &cqe))) { + if (npolled =3D=3D nwc) + goto out; + + qtf =3D ionic_v1_cqe_qtf(cqe); + qid =3D ionic_v1_cqe_qtf_qid(qtf); + type =3D ionic_v1_cqe_qtf_type(qtf); + + qp =3D xa_load(&dev->qp_tbl, qid); + if 
(unlikely(!qp)) { + ibdev_dbg(&dev->ibdev, "missing qp for qid %u\n", qid); + goto cq_next; + } + + switch (type) { + case IONIC_V1_CQE_TYPE_RECV: + spin_lock(&qp->rq_lock); + rc =3D ionic_poll_recv(dev, cq, qp, cqe, wc + npolled); + spin_unlock(&qp->rq_lock); + + if (rc < 0) + goto out; + + npolled +=3D rc; + + break; + + case IONIC_V1_CQE_TYPE_SEND_MSN: + spin_lock(&qp->sq_lock); + rc =3D ionic_comp_msn(qp, cqe); + if (!rc) { + rc =3D ionic_poll_send_many(dev, cq, qp, + wc + npolled, + nwc - npolled); + peek =3D ionic_peek_send(qp); + } + spin_unlock(&qp->sq_lock); + + if (rc < 0) + goto out; + + npolled +=3D rc; + + if (peek) + list_move_tail(&qp->cq_poll_sq, &cq->poll_sq); + break; + + case IONIC_V1_CQE_TYPE_SEND_NPG: + spin_lock(&qp->sq_lock); + rc =3D ionic_comp_npg(qp, cqe); + if (!rc) { + rc =3D ionic_poll_send_many(dev, cq, qp, + wc + npolled, + nwc - npolled); + peek =3D ionic_peek_send(qp); + } + spin_unlock(&qp->sq_lock); + + if (rc < 0) + goto out; + + npolled +=3D rc; + + if (peek) + list_move_tail(&qp->cq_poll_sq, &cq->poll_sq); + break; + + default: + ibdev_warn(&dev->ibdev, + "unexpected cqe type %u\n", type); + rc =3D -EIO; + goto out; + } + +cq_next: + ionic_queue_produce(&cq->q); + cq->color =3D ionic_color_wrap(cq->q.prod, cq->color); + } + + /* lastly, flush send and recv queues */ + if (likely(!cq->flush)) + goto out; + + cq->flush =3D false; + + list_for_each_entry_safe(qp, qp_next, &cq->flush_sq, cq_flush_sq) { + if (npolled =3D=3D nwc) + goto out; + + spin_lock(&qp->sq_lock); + rc =3D ionic_flush_send_many(qp, wc + npolled, nwc - npolled); + spin_unlock(&qp->sq_lock); + + if (rc > 0) + npolled +=3D rc; + + if (npolled < nwc) + list_del_init(&qp->cq_flush_sq); + else + cq->flush =3D true; + } + + list_for_each_entry_safe(qp, qp_next, &cq->flush_rq, cq_flush_rq) { + if (npolled =3D=3D nwc) + goto out; + + spin_lock(&qp->rq_lock); + rc =3D ionic_flush_recv_many(qp, wc + npolled, nwc - npolled); + spin_unlock(&qp->rq_lock); + + if (rc > 0) + 
npolled +=3D rc; + + if (npolled < nwc) + list_del_init(&qp->cq_flush_rq); + else + cq->flush =3D true; + } + +out: + /* in case credit was depleted (more work posted than cq depth) */ + if (cq->credit <=3D 0) + ionic_reserve_sync_cq(dev, cq); + + spin_unlock_irqrestore(&cq->lock, irqflags); + + return npolled ?: rc; +} + +int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int rc_tmp, rc =3D 0, npolled =3D 0; + int cq_i, cq_x, cq_ix; + + cq_x =3D vcq->poll_idx; + vcq->poll_idx ^=3D dev->lif_cfg.udma_count - 1; + + for (cq_i =3D 0; npolled < nwc && cq_i < dev->lif_cfg.udma_count; ++cq_i)= { + cq_ix =3D cq_i ^ cq_x; + + if (!(vcq->udma_mask & BIT(cq_ix))) + continue; + + rc_tmp =3D ionic_poll_vcq_cq(dev, &vcq->cq[cq_ix], + nwc - npolled, + wc + npolled); + + if (rc_tmp >=3D 0) + npolled +=3D rc_tmp; + else if (!rc) + rc =3D rc_tmp; + } + + return npolled ?: rc; +} + +static int ionic_req_notify_vcq_cq(struct ionic_ibdev *dev, struct ionic_c= q *cq, + enum ib_cq_notify_flags flags) +{ + u64 dbell_val =3D cq->q.dbell; + + if (flags & IB_CQ_SOLICITED) { + cq->arm_sol_prod =3D ionic_queue_next(&cq->q, cq->arm_sol_prod); + dbell_val |=3D cq->arm_sol_prod | IONIC_CQ_RING_SOL; + } else { + cq->arm_any_prod =3D ionic_queue_next(&cq->q, cq->arm_any_prod); + dbell_val |=3D cq->arm_any_prod | IONIC_CQ_RING_ARM; + } + + ionic_reserve_sync_cq(dev, cq); + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.cq_qtype, dbell_val); + + /* + * IB_CQ_REPORT_MISSED_EVENTS: + * + * The queue index in ring zero guarantees no missed events. + * + * Here, we check if the color bit in the next cqe is flipped. If it + * is flipped, then progress can be made by immediately polling the cq. + * Still, the cq will be armed, and an event will be generated. The cq + * may be empty when polled after the event, because the next poll + * after arming the cq can empty it. 
+ */ + return (flags & IB_CQ_REPORT_MISSED_EVENTS) && + cq->color =3D=3D ionic_v1_cqe_color(ionic_queue_at_prod(&cq->q)); +} + +int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibcq->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibcq); + int rc =3D 0, cq_i; + + for (cq_i =3D 0; cq_i < dev->lif_cfg.udma_count; ++cq_i) { + if (!(vcq->udma_mask & BIT(cq_i))) + continue; + + if (ionic_req_notify_vcq_cq(dev, &vcq->cq[cq_i], flags)) + rc =3D 1; + } + + return rc; +} + +static s64 ionic_prep_inline(void *data, u32 max_data, + const struct ib_sge *ib_sgl, int num_sge) +{ + static const s64 bit_31 =3D 1u << 31; + s64 len =3D 0, sg_len; + int sg_i; + + for (sg_i =3D 0; sg_i < num_sge; ++sg_i) { + sg_len =3D ib_sgl[sg_i].length; + + /* sge length zero means 2GB */ + if (unlikely(sg_len =3D=3D 0)) + sg_len =3D bit_31; + + /* greater than max inline data is invalid */ + if (unlikely(len + sg_len > max_data)) + return -EINVAL; + + memcpy(data + len, (void *)ib_sgl[sg_i].addr, sg_len); + + len +=3D sg_len; + } + + return len; +} + +static s64 ionic_prep_pld(struct ionic_v1_wqe *wqe, + union ionic_v1_pld *pld, + int spec, u32 max_sge, + const struct ib_sge *ib_sgl, + int num_sge) +{ + static const s64 bit_31 =3D 1l << 31; + struct ionic_sge *sgl; + __be32 *spec32 =3D NULL; + __be16 *spec16 =3D NULL; + s64 len =3D 0, sg_len; + int sg_i =3D 0; + + if (unlikely(num_sge < 0 || (u32)num_sge > max_sge)) + return -EINVAL; + + if (spec && num_sge > IONIC_V1_SPEC_FIRST_SGE) { + sg_i =3D IONIC_V1_SPEC_FIRST_SGE; + + if (num_sge > 8) { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SPEC16); + spec16 =3D pld->spec16; + } else { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SPEC32); + spec32 =3D pld->spec32; + } + } + + sgl =3D &pld->sgl[sg_i]; + + for (sg_i =3D 0; sg_i < num_sge; ++sg_i) { + sg_len =3D ib_sgl[sg_i].length; + + /* sge length zero means 2GB */ + if (unlikely(sg_len =3D=3D 0)) + sg_len =3D bit_31; 
+ + /* greater than 2GB data is invalid */ + if (unlikely(len + sg_len > bit_31)) + return -EINVAL; + + sgl[sg_i].va =3D cpu_to_be64(ib_sgl[sg_i].addr); + sgl[sg_i].len =3D cpu_to_be32(sg_len); + sgl[sg_i].lkey =3D cpu_to_be32(ib_sgl[sg_i].lkey); + + if (spec32) { + spec32[sg_i] =3D sgl[sg_i].len; + } else if (spec16) { + if (unlikely(sg_len > U16_MAX)) + return -EINVAL; + spec16[sg_i] =3D cpu_to_be16(sg_len); + } + + len +=3D sg_len; + } + + return len; +} + +static void ionic_prep_base(struct ionic_qp *qp, + const struct ib_send_wr *wr, + struct ionic_sq_meta *meta, + struct ionic_v1_wqe *wqe) +{ + meta->wrid =3D wr->wr_id; + meta->ibsts =3D IB_WC_SUCCESS; + meta->signal =3D false; + meta->local_comp =3D false; + + wqe->base.wqe_id =3D qp->sq.prod; + + if (wr->send_flags & IB_SEND_FENCE) + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_FENCE); + + if (wr->send_flags & IB_SEND_SOLICITED) + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SOL); + + if (qp->sig_all || wr->send_flags & IB_SEND_SIGNALED) { + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_SIG); + meta->signal =3D true; + } + + meta->seq =3D qp->sq_msn_prod; + meta->remote =3D + qp->ibqp.qp_type !=3D IB_QPT_UD && + qp->ibqp.qp_type !=3D IB_QPT_GSI && + !ionic_ibop_is_local(wr->opcode); + + if (meta->remote) { + qp->sq_msn_idx[meta->seq] =3D qp->sq.prod; + qp->sq_msn_prod =3D ionic_queue_next(&qp->sq, qp->sq_msn_prod); + } + + ionic_queue_produce(&qp->sq); +} + +static int ionic_prep_common(struct ionic_qp *qp, + const struct ib_send_wr *wr, + struct ionic_sq_meta *meta, + struct ionic_v1_wqe *wqe) +{ + s64 signed_len; + u32 mval; + + if (wr->send_flags & IB_SEND_INLINE) { + wqe->base.num_sge_key =3D 0; + wqe->base.flags |=3D cpu_to_be16(IONIC_V1_FLAG_INL); + mval =3D ionic_v1_send_wqe_max_data(qp->sq.stride_log2, false); + signed_len =3D ionic_prep_inline(wqe->common.pld.data, mval, + wr->sg_list, wr->num_sge); + } else { + wqe->base.num_sge_key =3D wr->num_sge; + mval =3D 
ionic_v1_send_wqe_max_sge(qp->sq.stride_log2, + qp->sq_spec, + false); + signed_len =3D ionic_prep_pld(wqe, &wqe->common.pld, + qp->sq_spec, mval, + wr->sg_list, wr->num_sge); + } + + if (unlikely(signed_len < 0)) + return signed_len; + + meta->len =3D signed_len; + wqe->common.length =3D cpu_to_be32(signed_len); + + ionic_prep_base(qp, wr, meta, wqe); + + return 0; +} + +static void ionic_prep_sq_wqe(struct ionic_qp *qp, void *wqe) +{ + memset(wqe, 0, 1u << qp->sq.stride_log2); +} + +static void ionic_prep_rq_wqe(struct ionic_qp *qp, void *wqe) +{ + memset(wqe, 0, 1u << qp->rq.stride_log2); +} + +static int ionic_prep_send(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_SEND; + + switch (wr->opcode) { + case IB_WR_SEND: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND); + break; + case IB_WR_SEND_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM); + wqe->base.imm_data_key =3D wr->ex.imm_data; + break; + case IB_WR_SEND_WITH_INV: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_INV); + wqe->base.imm_data_key =3D + cpu_to_be32(wr->ex.invalidate_rkey); + break; + default: + return -EINVAL; + } + + return ionic_prep_common(qp, wr, meta, wqe); +} + +static int ionic_prep_send_ud(struct ionic_qp *qp, + const struct ib_ud_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + struct ionic_ah *ah; + + if (unlikely(!wr->ah)) + return -EINVAL; + + ah =3D to_ionic_ah(wr->ah); + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + wqe->common.send.ah_id =3D cpu_to_be32(ah->ahid); + wqe->common.send.dest_qpn =3D 
cpu_to_be32(wr->remote_qpn); + wqe->common.send.dest_qkey =3D cpu_to_be32(wr->remote_qkey); + + meta->ibop =3D IB_WC_SEND; + + switch (wr->wr.opcode) { + case IB_WR_SEND: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND); + break; + case IB_WR_SEND_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, SEND_IMM); + wqe->base.imm_data_key =3D wr->wr.ex.imm_data; + break; + default: + return -EINVAL; + } + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + +static int ionic_prep_rdma(struct ionic_qp *qp, + const struct ib_rdma_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_RDMA_WRITE; + + switch (wr->wr.opcode) { + case IB_WR_RDMA_READ: + if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + meta->ibop =3D IB_WC_RDMA_READ; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_READ); + break; + case IB_WR_RDMA_WRITE: + if (wr->wr.send_flags & IB_SEND_SOLICITED) + return -EINVAL; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE); + break; + case IB_WR_RDMA_WRITE_WITH_IMM: + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, RDMA_WRITE_IMM); + wqe->base.imm_data_key =3D wr->wr.ex.imm_data; + break; + default: + return -EINVAL; + } + + wqe->common.rdma.remote_va_high =3D cpu_to_be32(wr->remote_addr >> 32); + wqe->common.rdma.remote_va_low =3D cpu_to_be32(wr->remote_addr); + wqe->common.rdma.remote_rkey =3D cpu_to_be32(wr->rkey); + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + +static int ionic_prep_atomic(struct ionic_qp *qp, + const struct ib_atomic_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + if (wr->wr.num_sge !=3D 1 || wr->wr.sg_list[0].length !=3D 8) + return -EINVAL; + 
+ if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + meta->ibop =3D IB_WC_RDMA_WRITE; + + switch (wr->wr.opcode) { + case IB_WR_ATOMIC_CMP_AND_SWP: + meta->ibop =3D IB_WC_COMP_SWAP; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_CS); + wqe->atomic.swap_add_high =3D cpu_to_be32(wr->swap >> 32); + wqe->atomic.swap_add_low =3D cpu_to_be32(wr->swap); + wqe->atomic.compare_high =3D cpu_to_be32(wr->compare_add >> 32); + wqe->atomic.compare_low =3D cpu_to_be32(wr->compare_add); + break; + case IB_WR_ATOMIC_FETCH_AND_ADD: + meta->ibop =3D IB_WC_FETCH_ADD; + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, ATOMIC_FA); + wqe->atomic.swap_add_high =3D cpu_to_be32(wr->compare_add >> 32); + wqe->atomic.swap_add_low =3D cpu_to_be32(wr->compare_add); + break; + default: + return -EINVAL; + } + + wqe->atomic.remote_va_high =3D cpu_to_be32(wr->remote_addr >> 32); + wqe->atomic.remote_va_low =3D cpu_to_be32(wr->remote_addr); + wqe->atomic.remote_rkey =3D cpu_to_be32(wr->rkey); + + wqe->base.num_sge_key =3D 1; + wqe->atomic.sge.va =3D cpu_to_be64(wr->wr.sg_list[0].addr); + wqe->atomic.sge.len =3D cpu_to_be32(8); + wqe->atomic.sge.lkey =3D cpu_to_be32(wr->wr.sg_list[0].lkey); + + return ionic_prep_common(qp, &wr->wr, meta, wqe); +} + +static int ionic_prep_inv(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + + if (wr->send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, LOCAL_INV); + wqe->base.imm_data_key =3D cpu_to_be32(wr->ex.invalidate_rkey); + + meta->len =3D 0; + meta->ibop =3D IB_WC_LOCAL_INV; + + 
ionic_prep_base(qp, wr, meta, wqe); + + return 0; +} + +static int ionic_prep_reg(struct ionic_qp *qp, + const struct ib_reg_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + struct ionic_mr *mr =3D to_ionic_mr(wr->mr); + struct ionic_sq_meta *meta; + struct ionic_v1_wqe *wqe; + __le64 dma_addr; + int flags; + + if (wr->wr.send_flags & (IB_SEND_SOLICITED | IB_SEND_INLINE)) + return -EINVAL; + + /* must call ib_map_mr_sg before posting reg wr */ + if (!mr->buf.tbl_pages) + return -EINVAL; + + meta =3D &qp->sq_meta[qp->sq.prod]; + wqe =3D ionic_queue_at_prod(&qp->sq); + + ionic_prep_sq_wqe(qp, wqe); + + flags =3D to_ionic_mr_flags(wr->access); + + wqe->base.op =3D IONIC_OP(dev->lif_cfg.rdma_version, REG_MR); + wqe->base.num_sge_key =3D wr->key; + wqe->base.imm_data_key =3D cpu_to_be32(mr->ibmr.lkey); + wqe->reg_mr.va =3D cpu_to_be64(mr->ibmr.iova); + wqe->reg_mr.length =3D cpu_to_be64(mr->ibmr.length); + wqe->reg_mr.offset =3D ionic_pgtbl_off(&mr->buf, mr->ibmr.iova); + dma_addr =3D ionic_pgtbl_dma(&mr->buf, mr->ibmr.iova); + wqe->reg_mr.dma_addr =3D cpu_to_be64(le64_to_cpu(dma_addr)); + + wqe->reg_mr.map_count =3D cpu_to_be32(mr->buf.tbl_pages); + wqe->reg_mr.flags =3D cpu_to_be16(flags); + wqe->reg_mr.dir_size_log2 =3D 0; + wqe->reg_mr.page_size_log2 =3D order_base_2(mr->ibmr.page_size); + + meta->len =3D 0; + meta->ibop =3D IB_WC_REG_MR; + + ionic_prep_base(qp, &wr->wr, meta, wqe); + + return 0; +} + +static int ionic_prep_one_rc(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + int rc =3D 0; + + switch (wr->opcode) { + case IB_WR_SEND: + case IB_WR_SEND_WITH_IMM: + case IB_WR_SEND_WITH_INV: + rc =3D ionic_prep_send(qp, wr); + break; + case IB_WR_RDMA_READ: + case IB_WR_RDMA_WRITE: + case IB_WR_RDMA_WRITE_WITH_IMM: + rc =3D ionic_prep_rdma(qp, rdma_wr(wr)); + break; + case IB_WR_ATOMIC_CMP_AND_SWP: + case IB_WR_ATOMIC_FETCH_AND_ADD: + rc =3D ionic_prep_atomic(qp, 
atomic_wr(wr)); + break; + case IB_WR_LOCAL_INV: + rc =3D ionic_prep_inv(qp, wr); + break; + case IB_WR_REG_MR: + rc =3D ionic_prep_reg(qp, reg_wr(wr)); + break; + default: + ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode); + rc =3D -EINVAL; + } + + return rc; +} + +static int ionic_prep_one_ud(struct ionic_qp *qp, + const struct ib_send_wr *wr) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(qp->ibqp.device); + int rc =3D 0; + + switch (wr->opcode) { + case IB_WR_SEND: + case IB_WR_SEND_WITH_IMM: + rc =3D ionic_prep_send_ud(qp, ud_wr(wr)); + break; + default: + ibdev_dbg(&dev->ibdev, "invalid opcode %d\n", wr->opcode); + rc =3D -EINVAL; + } + + return rc; +} + +static int ionic_prep_recv(struct ionic_qp *qp, + const struct ib_recv_wr *wr) +{ + struct ionic_rq_meta *meta; + struct ionic_v1_wqe *wqe; + s64 signed_len; + u32 mval; + + wqe =3D ionic_queue_at_prod(&qp->rq); + + /* if wqe is owned by device, caller can try posting again soon */ + if (wqe->base.flags & cpu_to_be16(IONIC_V1_FLAG_FENCE)) + return -EAGAIN; + + meta =3D qp->rq_meta_head; + if (unlikely(meta =3D=3D IONIC_META_LAST) || + unlikely(meta =3D=3D IONIC_META_POSTED)) + return -EIO; + + ionic_prep_rq_wqe(qp, wqe); + + mval =3D ionic_v1_recv_wqe_max_sge(qp->rq.stride_log2, qp->rq_spec, + false); + signed_len =3D ionic_prep_pld(wqe, &wqe->recv.pld, + qp->rq_spec, mval, + wr->sg_list, wr->num_sge); + if (signed_len < 0) + return signed_len; + + meta->wrid =3D wr->wr_id; + + wqe->base.wqe_id =3D meta - qp->rq_meta; + wqe->base.num_sge_key =3D wr->num_sge; + + /* total length for recv goes in base imm_data_key */ + wqe->base.imm_data_key =3D cpu_to_be32(signed_len); + + ionic_queue_produce(&qp->rq); + + qp->rq_meta_head =3D meta->next; + meta->next =3D IONIC_META_POSTED; + + return 0; +} + +static int ionic_post_send_common(struct ionic_ibdev *dev, + struct ionic_vcq *vcq, + struct ionic_cq *cq, + struct ionic_qp *qp, + const struct ib_send_wr *wr, + const struct ib_send_wr **bad) +{ + unsigned 
long irqflags; + bool notify =3D false; + int spend, rc =3D 0; + + if (!bad) + return -EINVAL; + + if (!qp->has_sq) { + *bad =3D wr; + return -EINVAL; + } + + if (qp->state < IB_QPS_RTS) { + *bad =3D wr; + return -EINVAL; + } + + spin_lock_irqsave(&qp->sq_lock, irqflags); + + while (wr) { + if (ionic_queue_full(&qp->sq)) { + ibdev_dbg(&dev->ibdev, "queue full"); + rc =3D -ENOMEM; + goto out; + } + + if (qp->ibqp.qp_type =3D=3D IB_QPT_UD || + qp->ibqp.qp_type =3D=3D IB_QPT_GSI) + rc =3D ionic_prep_one_ud(qp, wr); + else + rc =3D ionic_prep_one_rc(qp, wr); + if (rc) + goto out; + + wr =3D wr->next; + } + +out: + spin_unlock_irqrestore(&qp->sq_lock, irqflags); + + spin_lock_irqsave(&cq->lock, irqflags); + spin_lock(&qp->sq_lock); + + if (likely(qp->sq.prod !=3D qp->sq_old_prod)) { + /* ring cq doorbell just in time */ + spend =3D (qp->sq.prod - qp->sq_old_prod) & qp->sq.mask; + ionic_reserve_cq(dev, cq, spend); + + qp->sq_old_prod =3D qp->sq.prod; + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.sq_qtype, + ionic_queue_dbell_val(&qp->sq)); + } + + if (qp->sq_flush) { + notify =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_sq, &cq->flush_sq); + } + + spin_unlock(&qp->sq_lock); + spin_unlock_irqrestore(&cq->lock, irqflags); + + if (notify && vcq->ibcq.comp_handler) + vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context); + + *bad =3D wr; + return rc; +} + +static int ionic_post_recv_common(struct ionic_ibdev *dev, + struct ionic_vcq *vcq, + struct ionic_cq *cq, + struct ionic_qp *qp, + const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad) +{ + unsigned long irqflags; + bool notify =3D false; + int spend, rc =3D 0; + + if (!bad) + return -EINVAL; + + if (!qp->has_rq) { + *bad =3D wr; + return -EINVAL; + } + + if (qp->state < IB_QPS_INIT) { + *bad =3D wr; + return -EINVAL; + } + + spin_lock_irqsave(&qp->rq_lock, irqflags); + + while (wr) { + if (ionic_queue_full(&qp->rq)) { + ibdev_dbg(&dev->ibdev, "queue full"); + rc =3D -ENOMEM; + goto 
out; + } + + rc =3D ionic_prep_recv(qp, wr); + if (rc) + goto out; + + wr =3D wr->next; + } + +out: + if (!cq) { + spin_unlock_irqrestore(&qp->rq_lock, irqflags); + goto out_unlocked; + } + spin_unlock_irqrestore(&qp->rq_lock, irqflags); + + spin_lock_irqsave(&cq->lock, irqflags); + spin_lock(&qp->rq_lock); + + if (likely(qp->rq.prod !=3D qp->rq_old_prod)) { + /* ring cq doorbell just in time */ + spend =3D (qp->rq.prod - qp->rq_old_prod) & qp->rq.mask; + ionic_reserve_cq(dev, cq, spend); + + qp->rq_old_prod =3D qp->rq.prod; + + ionic_dbell_ring(dev->lif_cfg.dbpage, dev->lif_cfg.rq_qtype, + ionic_queue_dbell_val(&qp->rq)); + } + + if (qp->rq_flush) { + notify =3D true; + cq->flush =3D true; + list_move_tail(&qp->cq_flush_rq, &cq->flush_rq); + } + + spin_unlock(&qp->rq_lock); + spin_unlock_irqrestore(&cq->lock, irqflags); + + if (notify && vcq->ibcq.comp_handler) + vcq->ibcq.comp_handler(&vcq->ibcq, vcq->ibcq.cq_context); + +out_unlocked: + *bad =3D wr; + return rc; +} + +int ionic_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, + const struct ib_send_wr **bad) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibqp->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibqp->send_cq); + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + struct ionic_cq *cq =3D + to_ionic_vcq_cq(ibqp->send_cq, qp->udma_idx); + + return ionic_post_send_common(dev, vcq, cq, qp, wr, bad); +} + +int ionic_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibqp->device); + struct ionic_vcq *vcq =3D to_ionic_vcq(ibqp->recv_cq); + struct ionic_qp *qp =3D to_ionic_qp(ibqp); + struct ionic_cq *cq =3D + to_ionic_vcq_cq(ibqp->recv_cq, qp->udma_idx); + + return ionic_post_recv_common(dev, vcq, cq, qp, wr, bad); +} diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw= /ionic/ionic_fw.h index 8c1c0a07c527..d48ee000f334 100644 --- a/drivers/infiniband/hw/ionic/ionic_fw.h +++ 
b/drivers/infiniband/hw/ionic/ionic_fw.h @@ -163,6 +163,61 @@ static inline int to_ionic_qp_flags(int access, bool s= qd_notify, return flags; } =20 +/* cqe non-admin status indicated in status_length field when err bit is s= et */ +enum ionic_status { + IONIC_STS_OK, + IONIC_STS_LOCAL_LEN_ERR, + IONIC_STS_LOCAL_QP_OPER_ERR, + IONIC_STS_LOCAL_PROT_ERR, + IONIC_STS_WQE_FLUSHED_ERR, + IONIC_STS_MEM_MGMT_OPER_ERR, + IONIC_STS_BAD_RESP_ERR, + IONIC_STS_LOCAL_ACC_ERR, + IONIC_STS_REMOTE_INV_REQ_ERR, + IONIC_STS_REMOTE_ACC_ERR, + IONIC_STS_REMOTE_OPER_ERR, + IONIC_STS_RETRY_EXCEEDED, + IONIC_STS_RNR_RETRY_EXCEEDED, + IONIC_STS_XRC_VIO_ERR, + IONIC_STS_LOCAL_SGL_INV_ERR, +}; + +static inline int ionic_to_ib_status(int sts) +{ + switch (sts) { + case IONIC_STS_OK: + return IB_WC_SUCCESS; + case IONIC_STS_LOCAL_LEN_ERR: + return IB_WC_LOC_LEN_ERR; + case IONIC_STS_LOCAL_QP_OPER_ERR: + case IONIC_STS_LOCAL_SGL_INV_ERR: + return IB_WC_LOC_QP_OP_ERR; + case IONIC_STS_LOCAL_PROT_ERR: + return IB_WC_LOC_PROT_ERR; + case IONIC_STS_WQE_FLUSHED_ERR: + return IB_WC_WR_FLUSH_ERR; + case IONIC_STS_MEM_MGMT_OPER_ERR: + return IB_WC_MW_BIND_ERR; + case IONIC_STS_BAD_RESP_ERR: + return IB_WC_BAD_RESP_ERR; + case IONIC_STS_LOCAL_ACC_ERR: + return IB_WC_LOC_ACCESS_ERR; + case IONIC_STS_REMOTE_INV_REQ_ERR: + return IB_WC_REM_INV_REQ_ERR; + case IONIC_STS_REMOTE_ACC_ERR: + return IB_WC_REM_ACCESS_ERR; + case IONIC_STS_REMOTE_OPER_ERR: + return IB_WC_REM_OP_ERR; + case IONIC_STS_RETRY_EXCEEDED: + return IB_WC_RETRY_EXC_ERR; + case IONIC_STS_RNR_RETRY_EXCEEDED: + return IB_WC_RNR_RETRY_EXC_ERR; + case IONIC_STS_XRC_VIO_ERR: + default: + return IB_WC_GENERAL_ERR; + } +} + /* admin queue qp type */ enum ionic_qp_type { IONIC_QPT_RC, @@ -294,6 +349,24 @@ struct ionic_v1_cqe { __be32 qid_type_flags; }; =20 +/* bits for cqe recv */ +enum ionic_v1_cqe_src_qpn_bits { + IONIC_V1_CQE_RECV_QPN_MASK =3D 0xffffff, + IONIC_V1_CQE_RECV_OP_SHIFT =3D 24, + + /* MASK could be 0x3, but need 0x1f for makeshift 
values: + * OP_TYPE_RDMA_OPER_WITH_IMM, OP_TYPE_SEND_RCVD + */ + IONIC_V1_CQE_RECV_OP_MASK =3D 0x1f, + IONIC_V1_CQE_RECV_OP_SEND =3D 0, + IONIC_V1_CQE_RECV_OP_SEND_INV =3D 1, + IONIC_V1_CQE_RECV_OP_SEND_IMM =3D 2, + IONIC_V1_CQE_RECV_OP_RDMA_IMM =3D 3, + + IONIC_V1_CQE_RECV_IS_IPV4 =3D BIT(7 + IONIC_V1_CQE_RECV_OP_SHIFT), + IONIC_V1_CQE_RECV_IS_VLAN =3D BIT(6 + IONIC_V1_CQE_RECV_OP_SHIFT), +}; + /* bits for cqe qid_type_flags */ enum ionic_v1_cqe_qtf_bits { IONIC_V1_CQE_COLOR =3D BIT(0), @@ -318,6 +391,16 @@ static inline bool ionic_v1_cqe_error(struct ionic_v1_= cqe *cqe) return cqe->qid_type_flags & cpu_to_be32(IONIC_V1_CQE_ERROR); } =20 +static inline bool ionic_v1_cqe_recv_is_ipv4(struct ionic_v1_cqe *cqe) +{ + return cqe->recv.src_qpn_op & cpu_to_be32(IONIC_V1_CQE_RECV_IS_IPV4); +} + +static inline bool ionic_v1_cqe_recv_is_vlan(struct ionic_v1_cqe *cqe) +{ + return cqe->recv.src_qpn_op & cpu_to_be32(IONIC_V1_CQE_RECV_IS_VLAN); +} + static inline void ionic_v1_cqe_clean(struct ionic_v1_cqe *cqe) { cqe->qid_type_flags |=3D cpu_to_be32(~0u << IONIC_V1_CQE_QID_SHIFT); @@ -444,6 +527,28 @@ enum ionic_v1_op { IONIC_V1_SPEC_FIRST_SGE =3D 2, }; =20 +/* queue pair v2 send opcodes */ +enum ionic_v2_op { + IONIC_V2_OPSL_OUT =3D 0x20, + IONIC_V2_OPSL_IMM =3D 0x40, + IONIC_V2_OPSL_INV =3D 0x80, + + IONIC_V2_OP_SEND =3D 0x0 | IONIC_V2_OPSL_OUT, + IONIC_V2_OP_SEND_IMM =3D IONIC_V2_OP_SEND | IONIC_V2_OPSL_IMM, + IONIC_V2_OP_SEND_INV =3D IONIC_V2_OP_SEND | IONIC_V2_OPSL_INV, + + IONIC_V2_OP_RDMA_WRITE =3D 0x1 | IONIC_V2_OPSL_OUT, + IONIC_V2_OP_RDMA_WRITE_IMM =3D IONIC_V2_OP_RDMA_WRITE | IONIC_V2_OPSL_IMM, + + IONIC_V2_OP_RDMA_READ =3D 0x2, + + IONIC_V2_OP_ATOMIC_CS =3D 0x4, + IONIC_V2_OP_ATOMIC_FA =3D 0x5, + IONIC_V2_OP_REG_MR =3D 0x6, + IONIC_V2_OP_LOCAL_INV =3D 0x7, + IONIC_V2_OP_BIND_MW =3D 0x8, +}; + static inline size_t ionic_v1_send_wqe_min_size(int min_sge, int min_data, int spec, bool expdb) { diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c 
b/drivers/infiniband= /hw/ionic/ionic_ibdev.c index 6833abbfb1dc..ab080d945a13 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.c +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c @@ -45,6 +45,11 @@ static const struct ib_device_ops ionic_dev_ops =3D { .query_qp =3D ionic_query_qp, .destroy_qp =3D ionic_destroy_qp, =20 + .post_send =3D ionic_post_send, + .post_recv =3D ionic_post_recv, + .poll_cq =3D ionic_poll_cq, + .req_notify_cq =3D ionic_req_notify_cq, + INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx), INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd), INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah), diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband= /hw/ionic/ionic_ibdev.h index cb1ac8aca358..dc30fecd9646 100644 --- a/drivers/infiniband/hw/ionic/ionic_ibdev.h +++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h @@ -387,6 +387,11 @@ static inline u32 ionic_obj_dbid(struct ionic_ibdev *d= ev, return ionic_ctx_dbid(dev, to_ionic_ctx_uobj(uobj)); } =20 +static inline bool ionic_ibop_is_local(enum ib_wr_opcode op) +{ + return op =3D=3D IB_WR_LOCAL_INV || op =3D=3D IB_WR_REG_MR; +} + static inline void ionic_qp_complete(struct kref *kref) { struct ionic_qp *qp =3D container_of(kref, struct ionic_qp, qp_kref); @@ -460,8 +465,17 @@ int ionic_query_qp(struct ib_qp *ibqp, struct ib_qp_at= tr *attr, int mask, struct ib_qp_init_attr *init_attr); int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata); =20 +/* ionic_datapath.c */ +int ionic_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, + const struct ib_send_wr **bad); +int ionic_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad); +int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc); +int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags); + /* ionic_pgtbl.c */ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va); +__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va); int ionic_pgtbl_page(struct 
ionic_tbl_buf *buf, u64 dma); int ionic_pgtbl_init(struct ionic_ibdev *dev, struct ionic_tbl_buf *buf, diff --git a/drivers/infiniband/hw/ionic/ionic_pgtbl.c b/drivers/infiniband= /hw/ionic/ionic_pgtbl.c index a8eb73be6f86..e74db73c9246 100644 --- a/drivers/infiniband/hw/ionic/ionic_pgtbl.c +++ b/drivers/infiniband/hw/ionic/ionic_pgtbl.c @@ -26,6 +26,17 @@ __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va) return cpu_to_le64(dma + (va & pg_mask)); } =20 +__be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va) +{ + if (buf->tbl_pages > 1) { + u64 pg_mask =3D BIT_ULL(buf->page_size_log2) - 1; + + return cpu_to_be64(va & pg_mask); + } + + return 0; +} + int ionic_pgtbl_page(struct ionic_tbl_buf *buf, u64 dma) { if (unlikely(buf->tbl_pages =3D=3D buf->tbl_limit)) --=20 2.43.0 From nobody Mon Oct 6 06:32:12 2025 Received: from NAM04-DM6-obe.outbound.protection.outlook.com (mail-dm6nam04on2042.outbound.protection.outlook.com [40.107.102.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 362A123B636; Wed, 23 Jul 2025 17:32:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.102.42 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291982; cv=fail; b=X9khp+WsUSc89NtvI32u/hI7MMR4QH8ZjyBziZb/xv/UMlBrftGwowgeskbtZ11V+aIP3ux/xK25Ets6elGKUOhjUDkB2ZWkIKYI3Z+eJxJoXb/RVsowKo72RJjSDR5VgeG2DNwxVGC8YWo77t8P2yKuTgwvvgfcxp80o+FvVIQ= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1753291982; c=relaxed/simple; bh=ZBlcUtpR6D07RjAX9txMRoXGLB+7fqMd2uqbmoBX6FA=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=a394yxPikRsKszlZrWT78wlnA/lr37mWrzb3cX7aSIND8j6fX1qMiXNya46UPT0onXPDToxr8CeP4CGKDOtw/icnDOUdX2tDIqKZyaBlS8zndhdFmJ6C35ll5FlPzx4k0YWsbI3oCIkkM3vRekG8QdtYVBRqUqS0uZRPgFSkEMI= ARC-Authentication-Results: i=2; 
smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=WNleRev6; arc=fail smtp.client-ip=40.107.102.42 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b="WNleRev6" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=Mr/NY7enBmLVUUgzEZfAzIB9v3wVWNVs5G4n/PPT4GeMUO3QdwMpUkN4VwPi1OuIV0OBYKcGHV7/SNmbiCV6c6zEyghrlAVBS5MjlgVpG6J3nuUM9yozoh2t7bnuxKmXSqYrb7dbV5blAS5+JtJuW5kJTsY7DYjsM2Ri025mhQWdSPUdQ89K87udGgO5kfsbjNRIl6PQ2hRTxBre7XY2eTe5iEVDj5oPaLGWJQH1PPxAReXuK5YZwGS73IExyA8phrknxiKuZ5d1Mj8bdG9AE4F0kfL+r//D0Up53p0W6DvOdJbEtjgoBM2hZEc6puQf3Ks4UC8OztMUxLvsEw2mjw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=9HugzHnngS7QUUkVIZnV+vVgBVUNR77XJ+O5fwLTuq4=; b=x5f27SEWcniPTrw97VTzPyLlTqMdo4pXmr+OT0iHucxXsXFTs7Uc3N3gDHaFBQEbJmsjARAq+l7hmxF/20BAM+qs2CwXsMXDZgjzTpSYYpr5CVSgdfxW5lwmZxk0ACm8c+WhllcKgl7Lnr52JynMwiM36lAqH7ULEAwF/oSFwcDeMBl54YMbFr/nFSPuFZ1Cx74CRG5QNe/dOC16OB8qapLy1k6Uw15ZL8AHC1OhGtJlXOTFVa+HWE/QolR97nrTfp6iI/84/cr2bE0ifRPLsGLdnDlIeErU1nqg8fWMO2PsT3DWlp78uACV8hVAs6GQBer0LVi8+xF7isA8AX7iuw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=davemloft.net smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; 
From: Abhijit Gangurde
Subject: [PATCH v4 12/14] RDMA/ionic: Register device ops for miscellaneous functionality
Date: Wed, 23 Jul 2025 23:01:47 +0530
Message-ID: <20250723173149.2568776-13-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
Implement ibdev ops for device and port information.

Co-developed-by: Andrew Boyer
Signed-off-by: Andrew Boyer
Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v2->v3
- Registered main ib ops at once
- Removed uverbs_cmd_mask

 drivers/infiniband/hw/ionic/ionic_ibdev.c   | 212 ++++++++++++++++++++
 drivers/infiniband/hw/ionic/ionic_ibdev.h   |   5 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.c |  10 +
 drivers/infiniband/hw/ionic/ionic_lif_cfg.h |   2 +
 4 files changed, 229 insertions(+)

diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index ab080d945a13..84db48ba357f 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -3,7 +3,11 @@
 
 #include
 #include
+#include
+#include
 #include
+#include
+#include
 
 #include "ionic_ibdev.h"
 
@@ -15,6 +19,203 @@ MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
 MODULE_LICENSE("GPL");
 MODULE_IMPORT_NS("NET_IONIC");
 
+static int ionic_query_device(struct ib_device *ibdev,
+			      struct ib_device_attr *attr,
+			      struct ib_udata *udata)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	struct net_device *ndev;
+
+	ndev = ib_device_get_netdev(ibdev, 1);
+	addrconf_ifid_eui48((u8 *)&attr->sys_image_guid, ndev);
+	dev_put(ndev);
+	attr->max_mr_size = dev->lif_cfg.npts_per_lif * PAGE_SIZE / 2;
+	attr->page_size_cap = dev->lif_cfg.page_size_supported;
+
+	attr->vendor_id = to_pci_dev(dev->lif_cfg.hwdev)->vendor;
+	attr->vendor_part_id = to_pci_dev(dev->lif_cfg.hwdev)->device;
+
+	attr->hw_ver = ionic_lif_asic_rev(dev->lif_cfg.lif);
+	attr->fw_ver = 0;
+	attr->max_qp = dev->lif_cfg.qp_count;
+	attr->max_qp_wr = IONIC_MAX_DEPTH;
+	attr->device_cap_flags =
+		IB_DEVICE_MEM_WINDOW |
+		IB_DEVICE_MEM_MGT_EXTENSIONS |
+		IB_DEVICE_MEM_WINDOW_TYPE_2B |
+		0;
+	attr->max_send_sge =
+		min(ionic_v1_send_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_recv_sge =
+		min(ionic_v1_recv_wqe_max_sge(dev->lif_cfg.max_stride, 0, false),
+		    IONIC_SPEC_HIGH);
+	attr->max_sge_rd = attr->max_send_sge;
+	attr->max_cq = dev->lif_cfg.cq_count / dev->lif_cfg.udma_count;
+	attr->max_cqe = IONIC_MAX_CQ_DEPTH - IONIC_CQ_GRACE;
+	attr->max_mr = dev->lif_cfg.nmrs_per_lif;
+	attr->max_pd = IONIC_MAX_PD;
+	attr->max_qp_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_rd_atom = 0;
+	attr->max_res_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_qp_init_rd_atom = IONIC_MAX_RD_ATOM;
+	attr->max_ee_init_rd_atom = 0;
+	attr->atomic_cap = IB_ATOMIC_GLOB;
+	attr->masked_atomic_cap = IB_ATOMIC_GLOB;
+	attr->max_mw = dev->lif_cfg.nmrs_per_lif;
+	attr->max_mcast_grp = 0;
+	attr->max_mcast_qp_attach = 0;
+	attr->max_ah = dev->lif_cfg.nahs_per_lif;
+	attr->max_fast_reg_page_list_len = dev->lif_cfg.npts_per_lif / 2;
+	attr->max_pkeys = IONIC_PKEY_TBL_LEN;
+
+	return 0;
+}
+
+static int ionic_query_port(struct ib_device *ibdev, u32 port,
+			    struct ib_port_attr *attr)
+{
+	struct net_device *ndev;
+
+	if (port != 1)
+		return -EINVAL;
+
+	ndev = ib_device_get_netdev(ibdev, port);
+
+	if (netif_running(ndev) && netif_carrier_ok(ndev)) {
+		attr->state = IB_PORT_ACTIVE;
+		attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
+	} else if (netif_running(ndev)) {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
+	} else {
+		attr->state = IB_PORT_DOWN;
+		attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
+	}
+
+	attr->max_mtu = iboe_get_mtu(ndev->max_mtu);
+	attr->active_mtu = min(attr->max_mtu, iboe_get_mtu(ndev->mtu));
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->ip_gids = true;
+	attr->port_cap_flags = 0;
+	attr->max_msg_sz = 0x80000000;
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->max_vl_num = 1;
+	attr->subnet_prefix = 0xfe80000000000000ull;
+
+	dev_put(ndev);
+
+	return ib_get_eth_speed(ibdev, port,
+				&attr->active_speed,
+				&attr->active_width);
+}
+
+static enum rdma_link_layer ionic_get_link_layer(struct ib_device *ibdev,
+						 u32 port)
+{
+	return IB_LINK_LAYER_ETHERNET;
+}
+
+static int ionic_query_pkey(struct ib_device *ibdev, u32 port, u16 index,
+			    u16 *pkey)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	if (index != 0)
+		return -EINVAL;
+
+	*pkey = IB_DEFAULT_PKEY_FULL;
+
+	return 0;
+}
+
+static int ionic_modify_device(struct ib_device *ibdev, int mask,
+			       struct ib_device_modify *attr)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (mask & ~IB_DEVICE_MODIFY_NODE_DESC)
+		return -EOPNOTSUPP;
+
+	if (mask & IB_DEVICE_MODIFY_NODE_DESC)
+		memcpy(dev->ibdev.node_desc, attr->node_desc,
+		       IB_DEVICE_NODE_DESC_MAX);
+
+	return 0;
+}
+
+static int ionic_get_port_immutable(struct ib_device *ibdev, u32 port,
+				    struct ib_port_immutable *attr)
+{
+	if (port != 1)
+		return -EINVAL;
+
+	attr->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP;
+
+	attr->pkey_tbl_len = IONIC_PKEY_TBL_LEN;
+	attr->gid_tbl_len = IONIC_GID_TBL_LEN;
+	attr->max_mad_size = IB_MGMT_MAD_SIZE;
+
+	return 0;
+}
+
+static void ionic_get_dev_fw_str(struct ib_device *ibdev, char *str)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	ionic_lif_fw_version(dev->lif_cfg.lif, str, IB_FW_VERSION_NAME_MAX);
+}
+
+static const struct cpumask *ionic_get_vector_affinity(struct ib_device *ibdev,
+						       int comp_vector)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+
+	if (comp_vector < 0 || comp_vector >= dev->lif_cfg.eq_count)
+		return NULL;
+
+	return irq_get_affinity_mask(dev->eq_vec[comp_vector]->irq);
+}
+
+static ssize_t hw_rev_show(struct device *device, struct device_attribute *attr,
+			   char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "0x%x\n", ionic_lif_asic_rev(dev->lif_cfg.lif));
+}
+static DEVICE_ATTR_RO(hw_rev);
+
+static ssize_t hca_type_show(struct device *device,
+			     struct device_attribute *attr, char *buf)
+{
+	struct ionic_ibdev *dev =
+		rdma_device_to_drv_device(device, struct ionic_ibdev, ibdev);
+
+	return sysfs_emit(buf, "%s\n", dev->ibdev.node_desc);
+}
+static DEVICE_ATTR_RO(hca_type);
+
+static struct attribute *ionic_rdma_attributes[] = {
+	&dev_attr_hw_rev.attr,
+	&dev_attr_hca_type.attr,
+	NULL
+};
+
+static const struct attribute_group ionic_rdma_attr_group = {
+	.attrs = ionic_rdma_attributes,
+};
+
+static void ionic_disassociate_ucontext(struct ib_ucontext *ibcontext)
+{
+	/*
+	 * Dummy define disassociate_ucontext so that it does not
+	 * wait for user context before cleaning up hw resources.
+	 */
+}
+
 static const struct ib_device_ops ionic_dev_ops = {
 	.owner = THIS_MODULE,
 	.driver_id = RDMA_DRIVER_IONIC,
@@ -50,6 +251,17 @@ static const struct ib_device_ops ionic_dev_ops = {
 	.poll_cq = ionic_poll_cq,
 	.req_notify_cq = ionic_req_notify_cq,
 
+	.query_device = ionic_query_device,
+	.query_port = ionic_query_port,
+	.get_link_layer = ionic_get_link_layer,
+	.query_pkey = ionic_query_pkey,
+	.modify_device = ionic_modify_device,
+	.get_port_immutable = ionic_get_port_immutable,
+	.get_dev_fw_str = ionic_get_dev_fw_str,
+	.get_vector_affinity = ionic_get_vector_affinity,
+	.device_group = &ionic_rdma_attr_group,
+	.disassociate_ucontext = ionic_disassociate_ucontext,
+
 	INIT_RDMA_OBJ_SIZE(ib_ucontext, ionic_ctx, ibctx),
 	INIT_RDMA_OBJ_SIZE(ib_pd, ionic_pd, ibpd),
 	INIT_RDMA_OBJ_SIZE(ib_ah, ionic_ah, ibah),
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index dc30fecd9646..1a2c81490c5c 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -26,6 +26,11 @@
 #define IONIC_AQ_COUNT 4
 #define IONIC_EQ_ISR_BUDGET 10
 #define IONIC_EQ_WORK_BUDGET 1000
+#define IONIC_MAX_RD_ATOM 16
+#define IONIC_PKEY_TBL_LEN 1
+#define IONIC_GID_TBL_LEN 256
+
+#define IONIC_SPEC_HIGH 8
 #define IONIC_MAX_PD 1024
 #define IONIC_SPEC_HIGH 8
 #define IONIC_SQCMB_ORDER 5
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
index 8d0d209227e9..f3cd281c3a2f 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.c
@@ -99,3 +99,13 @@ struct net_device *ionic_lif_netdev(struct ionic_lif *lif)
 	dev_hold(netdev);
 	return netdev;
 }
+
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len)
+{
+	strscpy(str, lif->ionic->idev.dev_info.fw_version, len);
+}
+
+u8 ionic_lif_asic_rev(struct ionic_lif *lif)
+{
+	return lif->ionic->idev.dev_info.asic_rev;
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
index 5b04b8a9937e..20853429f623 100644
--- a/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
+++ b/drivers/infiniband/hw/ionic/ionic_lif_cfg.h
@@ -60,5 +60,7 @@ struct ionic_lif_cfg {
 
 void ionic_fill_lif_cfg(struct ionic_lif *lif, struct ionic_lif_cfg *cfg);
 struct net_device *ionic_lif_netdev(struct ionic_lif *lif);
+void ionic_lif_fw_version(struct ionic_lif *lif, char *str, size_t len);
+u8 ionic_lif_asic_rev(struct ionic_lif *lif);
 
 #endif /* _IONIC_LIF_CFG_H_ */
-- 
2.43.0

From nobody Mon Oct 6 06:32:12 2025
From: Abhijit Gangurde
Subject: [PATCH v4 13/14] RDMA/ionic: Implement device stats ops
Date: Wed, 23 Jul 2025 23:01:48 +0530
Message-ID: <20250723173149.2568776-14-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6737 Content-Type: text/plain; charset="utf-8" Implement device stats operations for hw stats and qp stats. Co-developed-by: Allen Hubbe Signed-off-by: Allen Hubbe Signed-off-by: Abhijit Gangurde --- v2->v3 - Fixed sparse checks drivers/infiniband/hw/ionic/ionic_fw.h | 43 ++ drivers/infiniband/hw/ionic/ionic_hw_stats.c | 484 +++++++++++++++++++ drivers/infiniband/hw/ionic/ionic_ibdev.c | 4 + drivers/infiniband/hw/ionic/ionic_ibdev.h | 23 + 4 files changed, 554 insertions(+) create mode 100644 drivers/infiniband/hw/ionic/ionic_hw_stats.c diff --git a/drivers/infiniband/hw/ionic/ionic_fw.h b/drivers/infiniband/hw= /ionic/ionic_fw.h index d48ee000f334..8575a374808d 100644 --- a/drivers/infiniband/hw/ionic/ionic_fw.h +++ b/drivers/infiniband/hw/ionic/ionic_fw.h @@ -659,6 +659,17 @@ static inline int ionic_v1_use_spec_sge(int min_sge, i= nt spec) return spec; } =20 +struct ionic_admin_stats_hdr { + __le64 dma_addr; + __le32 length; + __le32 id_ver; + __u8 type_state; +} __packed; + +#define IONIC_ADMIN_STATS_HDRS_IN_V1_LEN 17 +static_assert(sizeof(struct ionic_admin_stats_hdr) =3D=3D + IONIC_ADMIN_STATS_HDRS_IN_V1_LEN); + struct ionic_admin_create_ah { __le64 dma_addr; __le32 length; @@ -837,6 +848,7 @@ struct ionic_v1_admin_wqe { __le16 len; =20 union { + struct ionic_admin_stats_hdr stats; struct ionic_admin_create_ah create_ah; struct ionic_admin_destroy_ah destroy_ah; struct ionic_admin_query_ah query_ah; @@ -983,4 +995,35 @@ static inline u32 ionic_v1_eqe_evt_qid(u32 evt) return evt >> IONIC_V1_EQE_QID_SHIFT; } =20 +enum ionic_v1_stat_bits { + IONIC_V1_STAT_TYPE_SHIFT =3D 28, + IONIC_V1_STAT_TYPE_NONE =3D 0, + IONIC_V1_STAT_TYPE_8 =3D 1, + IONIC_V1_STAT_TYPE_LE16 =3D 2, + IONIC_V1_STAT_TYPE_LE32 =3D 3, + IONIC_V1_STAT_TYPE_LE64 =3D 4, + IONIC_V1_STAT_TYPE_BE16 =3D 5, + IONIC_V1_STAT_TYPE_BE32 =3D 6, + IONIC_V1_STAT_TYPE_BE64 =3D 7, + IONIC_V1_STAT_OFF_MASK =3D BIT(IONIC_V1_STAT_TYPE_SHIFT) - 1, +}; + 
+struct ionic_v1_stat { + union { + __be32 be_type_off; + u32 type_off; + }; + char name[28]; +}; + +static inline int ionic_v1_stat_type(struct ionic_v1_stat *hdr) +{ + return hdr->type_off >> IONIC_V1_STAT_TYPE_SHIFT; +} + +static inline unsigned int ionic_v1_stat_off(struct ionic_v1_stat *hdr) +{ + return hdr->type_off & IONIC_V1_STAT_OFF_MASK; +} + #endif /* _IONIC_FW_H_ */ diff --git a/drivers/infiniband/hw/ionic/ionic_hw_stats.c b/drivers/infinib= and/hw/ionic/ionic_hw_stats.c new file mode 100644 index 000000000000..244a80dde08f --- /dev/null +++ b/drivers/infiniband/hw/ionic/ionic_hw_stats.c @@ -0,0 +1,484 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2018-2025, Advanced Micro Devices, Inc. */ + +#include + +#include "ionic_fw.h" +#include "ionic_ibdev.h" + +static int ionic_v1_stat_normalize(struct ionic_v1_stat *hw_stats, + int hw_stats_count) +{ + int hw_stat_i; + + for (hw_stat_i =3D 0; hw_stat_i < hw_stats_count; ++hw_stat_i) { + struct ionic_v1_stat *stat =3D &hw_stats[hw_stat_i]; + + stat->type_off =3D be32_to_cpu(stat->be_type_off); + stat->name[sizeof(stat->name) - 1] =3D 0; + if (ionic_v1_stat_type(stat) =3D=3D IONIC_V1_STAT_TYPE_NONE) + break; + } + + return hw_stat_i; +} + +static void ionic_fill_stats_desc(struct rdma_stat_desc *hw_stats_hdrs, + struct ionic_v1_stat *hw_stats, + int hw_stats_count) +{ + int hw_stat_i; + + for (hw_stat_i =3D 0; hw_stat_i < hw_stats_count; ++hw_stat_i) { + struct ionic_v1_stat *stat =3D &hw_stats[hw_stat_i]; + + hw_stats_hdrs[hw_stat_i].name =3D stat->name; + } +} + +static u64 ionic_v1_stat_val(struct ionic_v1_stat *stat, + void *vals_buf, size_t vals_len) +{ + unsigned int off =3D ionic_v1_stat_off(stat); + int type =3D ionic_v1_stat_type(stat); + +#define __ionic_v1_stat_validate(__type) \ + ((off + sizeof(__type) <=3D vals_len) && \ + (IS_ALIGNED(off, sizeof(__type)))) + + switch (type) { + case IONIC_V1_STAT_TYPE_8: + if (__ionic_v1_stat_validate(u8)) + return *(u8 *)(vals_buf + off); + break; + 
case IONIC_V1_STAT_TYPE_LE16: + if (__ionic_v1_stat_validate(__le16)) + return le16_to_cpu(*(__le16 *)(vals_buf + off)); + break; + case IONIC_V1_STAT_TYPE_LE32: + if (__ionic_v1_stat_validate(__le32)) + return le32_to_cpu(*(__le32 *)(vals_buf + off)); + break; + case IONIC_V1_STAT_TYPE_LE64: + if (__ionic_v1_stat_validate(__le64)) + return le64_to_cpu(*(__le64 *)(vals_buf + off)); + break; + case IONIC_V1_STAT_TYPE_BE16: + if (__ionic_v1_stat_validate(__be16)) + return be16_to_cpu(*(__be16 *)(vals_buf + off)); + break; + case IONIC_V1_STAT_TYPE_BE32: + if (__ionic_v1_stat_validate(__be32)) + return be32_to_cpu(*(__be32 *)(vals_buf + off)); + break; + case IONIC_V1_STAT_TYPE_BE64: + if (__ionic_v1_stat_validate(__be64)) + return be64_to_cpu(*(__be64 *)(vals_buf + off)); + break; + } + + return ~0ull; +#undef __ionic_v1_stat_validate +} + +static int ionic_hw_stats_cmd(struct ionic_ibdev *dev, + dma_addr_t dma, size_t len, int qid, int op) +{ + struct ionic_admin_wr wr =3D { + .work =3D COMPLETION_INITIALIZER_ONSTACK(wr.work), + .wqe =3D { + .op =3D op, + .len =3D cpu_to_le16(IONIC_ADMIN_STATS_HDRS_IN_V1_LEN), + .cmd.stats =3D { + .dma_addr =3D cpu_to_le64(dma), + .length =3D cpu_to_le32(len), + .id_ver =3D cpu_to_le32(qid), + }, + } + }; + + if (dev->lif_cfg.admin_opcodes <=3D op) + return -EBADRQC; + + ionic_admin_post(dev, &wr); + + return ionic_admin_wait(dev, &wr, IONIC_ADMIN_F_INTERRUPT); +} + +static int ionic_init_hw_stats(struct ionic_ibdev *dev) +{ + dma_addr_t hw_stats_dma; + int rc, hw_stats_count; + + if (dev->hw_stats_hdrs) + return 0; + + dev->hw_stats_count =3D 0; + + /* buffer for current values from the device */ + dev->hw_stats_buf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!dev->hw_stats_buf) { + rc =3D -ENOMEM; + goto err_buf; + } + + /* buffer for names, sizes, offsets of values */ + dev->hw_stats =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!dev->hw_stats) { + rc =3D -ENOMEM; + goto err_hw_stats; + } + + /* request the names, sizes, offsets */ + 
hw_stats_dma =3D dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma); + if (rc) + goto err_dma; + + rc =3D ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE, 0, + IONIC_V1_ADMIN_STATS_HDRS); + if (rc) + goto err_cmd; + + dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DE= VICE); + + /* normalize and count the number of hw_stats */ + hw_stats_count =3D + ionic_v1_stat_normalize(dev->hw_stats, + PAGE_SIZE / sizeof(*dev->hw_stats)); + if (!hw_stats_count) { + rc =3D -ENODATA; + goto err_dma; + } + + dev->hw_stats_count =3D hw_stats_count; + + /* alloc and init array of names, for alloc_hw_stats */ + dev->hw_stats_hdrs =3D kcalloc(hw_stats_count, + sizeof(*dev->hw_stats_hdrs), + GFP_KERNEL); + if (!dev->hw_stats_hdrs) { + rc =3D -ENOMEM; + goto err_dma; + } + + ionic_fill_stats_desc(dev->hw_stats_hdrs, dev->hw_stats, + hw_stats_count); + + return 0; + +err_cmd: + dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DE= VICE); +err_dma: + kfree(dev->hw_stats); +err_hw_stats: + kfree(dev->hw_stats_buf); +err_buf: + dev->hw_stats_count =3D 0; + dev->hw_stats =3D NULL; + dev->hw_stats_buf =3D NULL; + dev->hw_stats_hdrs =3D NULL; + return rc; +} + +static struct rdma_hw_stats *ionic_alloc_hw_stats(struct ib_device *ibdev, + u32 port) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibdev); + + if (port !=3D 1) + return NULL; + + return rdma_alloc_hw_stats_struct(dev->hw_stats_hdrs, + dev->hw_stats_count, + RDMA_HW_STATS_DEFAULT_LIFESPAN); +} + +static int ionic_get_hw_stats(struct ib_device *ibdev, + struct rdma_hw_stats *hw_stats, + u32 port, int index) +{ + struct ionic_ibdev *dev =3D to_ionic_ibdev(ibdev); + dma_addr_t hw_stats_dma; + int rc, hw_stat_i; + + if (port !=3D 1) + return -EINVAL; + + hw_stats_dma =3D dma_map_single(dev->lif_cfg.hwdev, dev->hw_stats_buf, + PAGE_SIZE, DMA_FROM_DEVICE); + rc =3D dma_mapping_error(dev->lif_cfg.hwdev, 
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE,
+				0, IONIC_V1_ADMIN_STATS_VALS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+
+	for (hw_stat_i = 0; hw_stat_i < dev->hw_stats_count; ++hw_stat_i)
+		hw_stats->value[hw_stat_i] =
+			ionic_v1_stat_val(&dev->hw_stats[hw_stat_i],
+					  dev->hw_stats_buf, PAGE_SIZE);
+
+	return hw_stat_i;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma,
+			 PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	return rc;
+}
+
+static struct rdma_hw_stats *
+ionic_counter_alloc_stats(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+	int err;
+
+	cntr = kzalloc(sizeof(*cntr), GFP_KERNEL);
+	if (!cntr)
+		return NULL;
+
+	/* buffer for current values from the device */
+	cntr->vals = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!cntr->vals)
+		goto err_vals;
+
+	err = xa_alloc(&dev->counter_stats->xa_counters, &counter->id,
+		       cntr,
+		       XA_LIMIT(0, IONIC_MAX_QPID),
+		       GFP_KERNEL);
+	if (err)
+		goto err_xa;
+
+	INIT_LIST_HEAD(&cntr->qp_list);
+
+	return rdma_alloc_hw_stats_struct(dev->counter_stats->stats_hdrs,
+					  dev->counter_stats->queue_stats_count,
+					  RDMA_HW_STATS_DEFAULT_LIFESPAN);
+err_xa:
+	kfree(cntr->vals);
+err_vals:
+	kfree(cntr);
+
+	return NULL;
+}
+
+static int ionic_counter_dealloc(struct rdma_counter *counter)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_counter *cntr;
+
+	cntr = xa_erase(&dev->counter_stats->xa_counters, counter->id);
+	if (!cntr)
+		return -EINVAL;
+
+	kfree(cntr->vals);
+	kfree(cntr);
+
+	return 0;
+}
+
+static int ionic_counter_bind_qp(struct rdma_counter *counter,
+				 struct ib_qp *ibqp,
+				 u32 port)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(counter->device);
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+	struct ionic_counter *cntr;
+
+	cntr = xa_load(&dev->counter_stats->xa_counters, counter->id);
+	if (!cntr)
+		return -EINVAL;
+
+	list_add_tail(&qp->qp_list_counter, &cntr->qp_list);
+	ibqp->counter = counter;
+
+	return 0;
+}
+
+static int ionic_counter_unbind_qp(struct ib_qp *ibqp, u32 port)
+{
+	struct ionic_qp *qp = to_ionic_qp(ibqp);
+
+	if (ibqp->counter) {
+		list_del(&qp->qp_list_counter);
+		ibqp->counter = NULL;
+	}
+
+	return 0;
+}
+
+static int ionic_get_qp_stats(struct ib_device *ibdev,
+			      struct rdma_hw_stats *hw_stats,
+			      u32 counter_id)
+{
+	struct ionic_ibdev *dev = to_ionic_ibdev(ibdev);
+	struct ionic_counter_stats *cs;
+	struct ionic_counter *cntr;
+	dma_addr_t hw_stats_dma;
+	struct ionic_qp *qp;
+	int rc, stat_i = 0;
+
+	cs = dev->counter_stats;
+	cntr = xa_load(&cs->xa_counters, counter_id);
+	if (!cntr)
+		return -EINVAL;
+
+	hw_stats_dma = dma_map_single(dev->lif_cfg.hwdev, cntr->vals,
+				      PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hw_stats_dma);
+	if (rc)
+		return rc;
+
+	memset(hw_stats->value, 0, sizeof(u64) * hw_stats->num_counters);
+
+	list_for_each_entry(qp, &cntr->qp_list, qp_list_counter) {
+		rc = ionic_hw_stats_cmd(dev, hw_stats_dma, PAGE_SIZE,
+					qp->qpid,
+					IONIC_V1_ADMIN_QP_STATS_VALS);
+		if (rc)
+			goto err_cmd;
+
+		for (stat_i = 0; stat_i < cs->queue_stats_count; ++stat_i)
+			hw_stats->value[stat_i] +=
+				ionic_v1_stat_val(&cs->hdr[stat_i],
+						  cntr->vals,
+						  PAGE_SIZE);
+	}
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+	return stat_i;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hw_stats_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	return rc;
+}
+
+static int ionic_counter_update_stats(struct rdma_counter *counter)
+{
+	return ionic_get_qp_stats(counter->device, counter->stats, counter->id);
+}
+
+static int ionic_alloc_counters(struct ionic_ibdev *dev)
+{
+	struct ionic_counter_stats *cs = dev->counter_stats;
+	int rc, hw_stats_count;
+	dma_addr_t hdr_dma;
+
+	/* buffer for names, sizes, offsets of values */
+	cs->hdr = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!cs->hdr)
+		return -ENOMEM;
+
+	hdr_dma = dma_map_single(dev->lif_cfg.hwdev, cs->hdr,
+				 PAGE_SIZE, DMA_FROM_DEVICE);
+	rc = dma_mapping_error(dev->lif_cfg.hwdev, hdr_dma);
+	if (rc)
+		goto err_dma;
+
+	rc = ionic_hw_stats_cmd(dev, hdr_dma, PAGE_SIZE, 0,
+				IONIC_V1_ADMIN_QP_STATS_HDRS);
+	if (rc)
+		goto err_cmd;
+
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* normalize and count the number of hw_stats */
+	hw_stats_count = ionic_v1_stat_normalize(cs->hdr,
+						 PAGE_SIZE / sizeof(*cs->hdr));
+	if (!hw_stats_count) {
+		rc = -ENODATA;
+		goto err_dma;
+	}
+
+	cs->queue_stats_count = hw_stats_count;
+
+	/* alloc and init array of names */
+	cs->stats_hdrs = kcalloc(hw_stats_count, sizeof(*cs->stats_hdrs),
+				 GFP_KERNEL);
+	if (!cs->stats_hdrs) {
+		rc = -ENOMEM;
+		goto err_dma;
+	}
+
+	ionic_fill_stats_desc(cs->stats_hdrs, cs->hdr, hw_stats_count);
+
+	return 0;
+
+err_cmd:
+	dma_unmap_single(dev->lif_cfg.hwdev, hdr_dma, PAGE_SIZE, DMA_FROM_DEVICE);
+err_dma:
+	kfree(cs->hdr);
+
+	return rc;
+}
+
+static const struct ib_device_ops ionic_hw_stats_ops = {
+	.driver_id = RDMA_DRIVER_IONIC,
+	.alloc_hw_port_stats = ionic_alloc_hw_stats,
+	.get_hw_stats = ionic_get_hw_stats,
+};
+
+static const struct ib_device_ops ionic_counter_stats_ops = {
+	.counter_alloc_stats = ionic_counter_alloc_stats,
+	.counter_dealloc = ionic_counter_dealloc,
+	.counter_bind_qp = ionic_counter_bind_qp,
+	.counter_unbind_qp = ionic_counter_unbind_qp,
+	.counter_update_stats = ionic_counter_update_stats,
+};
+
+void ionic_stats_init(struct ionic_ibdev *dev)
+{
+	u16 stats_type = dev->lif_cfg.stats_type;
+	int rc;
+
+	if (stats_type & IONIC_LIF_RDMA_STAT_GLOBAL) {
+		rc = ionic_init_hw_stats(dev);
+		if (rc)
+			ibdev_dbg(&dev->ibdev, "Failed to init hw stats\n");
+		else
+			ib_set_device_ops(&dev->ibdev, &ionic_hw_stats_ops);
+	}
+
+	if (stats_type & IONIC_LIF_RDMA_STAT_QP) {
+		dev->counter_stats = kzalloc(sizeof(*dev->counter_stats),
+					     GFP_KERNEL);
+		if (!dev->counter_stats)
+			return;
+
+		rc = ionic_alloc_counters(dev);
+		if (rc) {
+			ibdev_dbg(&dev->ibdev, "Failed to init counter stats\n");
+			kfree(dev->counter_stats);
+			dev->counter_stats = NULL;
+			return;
+		}
+
+		xa_init_flags(&dev->counter_stats->xa_counters, XA_FLAGS_ALLOC);
+
+		ib_set_device_ops(&dev->ibdev, &ionic_counter_stats_ops);
+	}
+}
+
+void ionic_stats_cleanup(struct ionic_ibdev *dev)
+{
+	if (dev->counter_stats) {
+		xa_destroy(&dev->counter_stats->xa_counters);
+		kfree(dev->counter_stats->hdr);
+		kfree(dev->counter_stats->stats_hdrs);
+		kfree(dev->counter_stats);
+		dev->counter_stats = NULL;
+	}
+
+	kfree(dev->hw_stats);
+	kfree(dev->hw_stats_buf);
+	kfree(dev->hw_stats_hdrs);
+}
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.c b/drivers/infiniband/hw/ionic/ionic_ibdev.c
index 84db48ba357f..90ae29e7989c 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.c
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.c
@@ -301,6 +301,7 @@ static void ionic_destroy_ibdev(struct ionic_ibdev *dev)
 {
 	ionic_kill_rdma_admin(dev, false);
 	ib_unregister_device(&dev->ibdev);
+	ionic_stats_cleanup(dev);
 	ionic_destroy_rdma_admin(dev);
 	ionic_destroy_resids(dev);
 	WARN_ON(!xa_empty(&dev->qp_tbl));
@@ -358,6 +359,8 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 
 	ib_set_device_ops(&dev->ibdev, &ionic_dev_ops);
 
+	ionic_stats_init(dev);
+
 	rc = ib_register_device(ibdev, "ionic_%d", ibdev->dev.parent);
 	if (rc)
 		goto err_register;
@@ -365,6 +368,7 @@ static struct ionic_ibdev *ionic_create_ibdev(struct ionic_aux_dev *ionic_adev)
 	return dev;
 
 err_register:
+	ionic_stats_cleanup(dev);
 err_admin:
 	ionic_kill_rdma_admin(dev, false);
 	ionic_destroy_rdma_admin(dev);
diff --git a/drivers/infiniband/hw/ionic/ionic_ibdev.h b/drivers/infiniband/hw/ionic/ionic_ibdev.h
index 1a2c81490c5c..8e45ade19b19 100644
--- a/drivers/infiniband/hw/ionic/ionic_ibdev.h
+++ b/drivers/infiniband/hw/ionic/ionic_ibdev.h
@@ -30,6 +30,7 @@
 #define IONIC_PKEY_TBL_LEN	1
 #define IONIC_GID_TBL_LEN	256
 
+#define IONIC_MAX_QPID		0xffffff
 #define IONIC_SPEC_HIGH		8
 #define IONIC_MAX_PD		1024
 #define IONIC_SPEC_HIGH		8
@@ -109,6 +110,12 @@ struct ionic_ibdev {
 	atomic_t admin_state;
 
 	struct ionic_eq **eq_vec;
+
+	struct ionic_v1_stat *hw_stats;
+	void *hw_stats_buf;
+	struct rdma_stat_desc *hw_stats_hdrs;
+	struct ionic_counter_stats *counter_stats;
+	int hw_stats_count;
 };
 
 struct ionic_eq {
@@ -320,6 +327,18 @@ struct ionic_mr {
 	bool created;
 };
 
+struct ionic_counter_stats {
+	int queue_stats_count;
+	struct ionic_v1_stat *hdr;
+	struct rdma_stat_desc *stats_hdrs;
+	struct xarray xa_counters;
+};
+
+struct ionic_counter {
+	void *vals;
+	struct list_head qp_list;
+};
+
 static inline struct ionic_ibdev *to_ionic_ibdev(struct ib_device *ibdev)
 {
 	return container_of(ibdev, struct ionic_ibdev, ibdev);
@@ -478,6 +497,10 @@ int ionic_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 int ionic_poll_cq(struct ib_cq *ibcq, int nwc, struct ib_wc *wc);
 int ionic_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags);
 
+/* ionic_hw_stats.c */
+void ionic_stats_init(struct ionic_ibdev *dev);
+void ionic_stats_cleanup(struct ionic_ibdev *dev);
+
 /* ionic_pgtbl.c */
 __le64 ionic_pgtbl_dma(struct ionic_tbl_buf *buf, u64 va);
 __be64 ionic_pgtbl_off(struct ionic_tbl_buf *buf, u64 va);
-- 
2.43.0

From nobody Mon Oct 6 06:32:12 2025
From: Abhijit Gangurde
Subject: [PATCH v4 14/14] RDMA/ionic: Add Makefile/Kconfig to kernel build environment
Date: Wed, 23 Jul 2025 23:01:49 +0530
Message-ID: <20250723173149.2568776-15-abhijit.gangurde@amd.com>
In-Reply-To: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
References: <20250723173149.2568776-1-abhijit.gangurde@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Add ionic to the kernel build environment.

Co-developed-by: Allen Hubbe
Signed-off-by: Allen Hubbe
Signed-off-by: Abhijit Gangurde
---
v2->v3
- Removed select of ethernet driver
- Fixed make htmldocs error

 .../device_drivers/ethernet/index.rst         |  1 +
 .../ethernet/pensando/ionic_rdma.rst          | 43 +++++++++++++++++++
 MAINTAINERS                                   |  9 ++++
 drivers/infiniband/Kconfig                    |  1 +
 drivers/infiniband/hw/Makefile                |  1 +
 drivers/infiniband/hw/ionic/Kconfig           | 15 +++++++
 drivers/infiniband/hw/ionic/Makefile          |  9 ++++
 7 files changed, 79 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
 create mode 100644 drivers/infiniband/hw/ionic/Kconfig
 create mode 100644 drivers/infiniband/hw/ionic/Makefile

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 139b4c75a191..4b16ecd289da 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -50,6 +50,7 @@ Contents:
    neterion/s2io
    netronome/nfp
    pensando/ionic
+   pensando/ionic_rdma
    smsc/smc9
    stmicro/stmmac
    ti/cpsw
diff --git a/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
new file mode 100644
index 000000000000..80c4d9876d3e
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
@@ -0,0 +1,43 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+============================================================
+Linux Driver for the AMD Pensando(R) Ethernet adapter family
+============================================================
+
+AMD Pensando RDMA driver.
+Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+Contents
+========
+
+- Identifying the Adapter
+- Enabling the driver
+- Support
+
+Identifying the Adapter
+=======================
+
+See Documentation/networking/device_drivers/ethernet/pensando/ionic.rst
+for more information on identifying the adapter.
+
+Enabling the driver
+===================
+
+The driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> InfiniBand support
+      -> AMD Pensando DSC RDMA/RoCE Support
+
+Support
+=======
+
+For general Linux rdma support, please use the rdma mailing
+list, which is monitored by AMD Pensando personnel::
+
+  linux-rdma@vger.kernel.org
diff --git a/MAINTAINERS b/MAINTAINERS
index b4f3fa14ddca..f52409bde673 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1165,6 +1165,15 @@ F: Documentation/networking/device_drivers/ethernet/amd/pds_core.rst
 F:	drivers/net/ethernet/amd/pds_core/
 F:	include/linux/pds/
 
+AMD PENSANDO RDMA DRIVER
+M:	Abhijit Gangurde
+M:	Allen Hubbe
+L:	linux-rdma@vger.kernel.org
+S:	Maintained
+F:	Documentation/networking/device_drivers/ethernet/pensando/ionic_rdma.rst
+F:	drivers/infiniband/hw/ionic/
+F:	include/uapi/rdma/ionic-abi.h
+
 AMD PMC DRIVER
 M:	Shyam Sundar S K
 L:	platform-driver-x86@vger.kernel.org
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index 3a394cd772f6..f0323f1d6f01 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -85,6 +85,7 @@ source "drivers/infiniband/hw/efa/Kconfig"
 source "drivers/infiniband/hw/erdma/Kconfig"
 source "drivers/infiniband/hw/hfi1/Kconfig"
 source "drivers/infiniband/hw/hns/Kconfig"
+source "drivers/infiniband/hw/ionic/Kconfig"
 source "drivers/infiniband/hw/irdma/Kconfig"
 source "drivers/infiniband/hw/mana/Kconfig"
 source "drivers/infiniband/hw/mlx4/Kconfig"
diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
index df61b2299ec0..b706dc0d0263 100644
--- a/drivers/infiniband/hw/Makefile
+++ b/drivers/infiniband/hw/Makefile
@@ -14,3 +14,4 @@ obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns/
 obj-$(CONFIG_INFINIBAND_QEDR) += qedr/
 obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re/
 obj-$(CONFIG_INFINIBAND_ERDMA) += erdma/
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic/
diff --git a/drivers/infiniband/hw/ionic/Kconfig b/drivers/infiniband/hw/ionic/Kconfig
new file mode 100644
index 000000000000..de6f10e9b6e9
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018-2025, Advanced Micro Devices, Inc.
+
+config INFINIBAND_IONIC
+	tristate "AMD Pensando DSC RDMA/RoCE Support"
+	depends on NETDEVICES && ETHERNET && PCI && INET && IONIC
+	help
+	  This enables RDMA/RoCE support for the AMD Pensando family of
+	  Distributed Services Cards (DSCs).
+
+	  To learn more, visit our website at
+	  .
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called ionic_rdma.
diff --git a/drivers/infiniband/hw/ionic/Makefile b/drivers/infiniband/hw/ionic/Makefile
new file mode 100644
index 000000000000..957973742820
--- /dev/null
+++ b/drivers/infiniband/hw/ionic/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+
+ccflags-y := -I $(srctree)/drivers/net/ethernet/pensando/ionic
+
+obj-$(CONFIG_INFINIBAND_IONIC) += ionic_rdma.o
+
+ionic_rdma-y := \
+	ionic_ibdev.o ionic_lif_cfg.o ionic_queue.o ionic_pgtbl.o ionic_admin.o \
+	ionic_controlpath.o ionic_datapath.o ionic_hw_stats.o
-- 
2.43.0
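
[Editor's note] The ionic_v1_stat_val() switch in the hw-stats patch above decodes each counter from the raw DMA buffer according to a per-stat endianness/width tag, returning an all-ones sentinel when a value would overrun the buffer. A minimal user-space model of that decode step, in Python, can make the behavior easy to check; the tag constants and helper name here are illustrative stand-ins, not part of the driver:

```python
import struct

# Illustrative stat-type tags modeled on the IONIC_V1_STAT_TYPE_* switch;
# the real enum values live in the ionic driver headers.
LE16, LE32, LE64, BE16, BE32, BE64 = range(1, 7)

_FMT = {
    LE16: "<H", LE32: "<I", LE64: "<Q",
    BE16: ">H", BE32: ">I", BE64: ">Q",
}

SENTINEL = 0xFFFFFFFFFFFFFFFF  # the driver's ~0ull "invalid" marker

def stat_val(stat_type, off, vals_buf):
    """Decode one counter at byte offset `off`, like ionic_v1_stat_val().

    Returns the all-ones sentinel when the value would overrun the
    buffer, mirroring the __ionic_v1_stat_validate() bounds check.
    """
    fmt = _FMT.get(stat_type)
    if fmt is None or off + struct.calcsize(fmt) > len(vals_buf):
        return SENTINEL
    return struct.unpack_from(fmt, vals_buf, off)[0]

buf = bytes([0x01, 0x02, 0x03, 0x04])
print(stat_val(LE16, 0, buf))  # 513  (0x0201, little-endian)
print(stat_val(BE16, 0, buf))  # 258  (0x0102, big-endian)
print(stat_val(LE64, 0, buf))  # sentinel: 8-byte read from a 4-byte buffer
```

The same buffer yields different values per tag, which is why the driver carries the type alongside each offset in its stat headers.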