From nobody Fri Dec 19 21:02:18 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id C7C6327A916; Tue, 27 May 2025 15:57:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361480; cv=none; b=E7CI+kDV1mT+XsNDYX7aNz8ZQTaXQ1FAyEDP6oQPUlNHFQm0uy2HateS5fgrIgoXjPBLJRvid9UmurwSvbUN/JoOsjaFor4m0RCW/McJ8FrI/XA0rb0Ki21wWz8AQ54lP1rJNE3weGmQAuzcpZ/VE8PZ+hLP/bg39Txa5YfdO2w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361480; c=relaxed/simple; bh=tuhkjgggF3Izv+iUVmzHcxoIfuTJL6K3BqngzRFPa7A=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=UgiBBCz+rv4DUE4V66P5V8OH24ydGKqjz8OZiI1FMfxOwYU5WVe4DahR9sAbXUTjfyr7MMFbfAm8bhTanT7SokaHsq21OnTGdUKhlFBf6TQ+j5H7lGzMrJl8bVaRALocFpioEz+/qyTwjolXcPkzsatEJiSx9+ymHO7mju8tpLc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=OtDYnORi; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="OtDYnORi" Received: by linux.microsoft.com (Postfix, from userid 1134) id 63A3B206834A; Tue, 27 May 2025 08:57:58 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 63A3B206834A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1748361478; bh=ck7OlH3iLM2zajPne9r3rXRVKyZNS5D+OnC1eJ41hxs=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OtDYnORiFmFpoTEF+oxQe1xdCrIRWIQHvLJvUVgl6E0QIGQrpBHcPNaAYUbuoj9xu wCf8YZAy/8FxSVuNaqUctBfgo/xnWXS0dKt4l1DKP4QiRUOV+ktiiugtHqtA7HQ+iz B01Wu9F7O5rb5bZmkAeO7+H29qfuO5BNYrjwtxyM= From: Shradha Gupta To: Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Thomas Gleixner , Bjorn Helgaas , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Kevin Tian , Long Li , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Date: Tue, 27 May 2025 08:57:57 -0700 Message-Id: <1748361477-25244-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" For supporting dynamic MSI-X vector allocation by PCI controllers, enabling the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN is not enough, msix_prepare_msi_desc() to prepare the MSI descriptor is also needed. Export pci_msix_prepare_desc() to allow PCI controllers to support dynamic MSI-X vector allocation. 
Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang Reviewed-by: Thomas Gleixner Reviewed-by: Saurabh Sengar --- Changes in v3 * Improved the patch description by removing abbreviations --- drivers/pci/msi/irqdomain.c | 5 +++-- include/linux/msi.h | 2 ++ 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c index d7ba8795d60f..43129aa6d6c7 100644 --- a/drivers/pci/msi/irqdomain.c +++ b/drivers/pci/msi/irqdomain.c @@ -222,13 +222,14 @@ static void pci_irq_unmask_msix(struct irq_data *data) pci_msix_unmask(irq_data_get_msi_desc(data)); } =20 -static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_inf= o_t *arg, - struct msi_desc *desc) +void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *ar= g, + struct msi_desc *desc) { /* Don't fiddle with preallocated MSI descriptors */ if (!desc->pci.mask_base) msix_prepare_msi_desc(to_pci_dev(desc->dev), desc); } +EXPORT_SYMBOL_GPL(pci_msix_prepare_desc); =20 static const struct msi_domain_template pci_msix_template =3D { .chip =3D { diff --git a/include/linux/msi.h b/include/linux/msi.h index 86e42742fd0f..d5864d5e75c2 100644 --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -691,6 +691,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwn= ode_handle *fwnode, struct irq_domain *parent); u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *= pdev); struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev); +void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *ar= g, + struct msi_desc *desc); #else /* CONFIG_PCI_MSI */ static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev = *pdev) { --=20 2.34.1 From nobody Fri Dec 19 21:02:18 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 1F62D2741DC; Tue, 27 May 2025 15:58:10 +0000 (UTC) Authentication-Results: 
smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361492; cv=none; b=p7LJCRN4cQjCB1JHl8pFan5BVIVautOUfNQuyYvUR1dwAgj9808qZ8Rc0aqm7gFta2HvxGwAbsfDvL5K247kl0rQjgfcP1Q19CDuGtRjxZpZ6SCwyuecfxyW68fN2/jRGoL8hpS2eicMURAZNeNadh+pZN/gukFnrwJLrbw1n2U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361492; c=relaxed/simple; bh=6zLPd0/W1bzf98poY04UCdVCM4Eyy10eh4IQwBNNFa4=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=Lmco//o0q3CzeZ+FvnJx5MFKGXhy4yDzeVRp0Znusq4GRRVdlo50n52AVBHzY4FwBHDxmjgLNEaKYI1I0SkTaBvPO8l88O37X0OU7vrR5pMtBL74QbA29pR1/q0Iw2lXjfOe1ama6F8ieIc9O63iiY8xkVq71a6b+qaboPcGZEk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=gTF4MeJv; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="gTF4MeJv" Received: by linux.microsoft.com (Postfix, from userid 1134) id B03CA206B778; Tue, 27 May 2025 08:58:10 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com B03CA206B778 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1748361490; bh=9vyIcw/j3U4kYU223Umd1U4i7224+q4yAjNPR7hv21M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gTF4MeJvTwM0N9ergIk0aMrAHEOtQ8gQLJFYc3V+UNQnjvwOyzfYTPu8cu7L6XxsK RZ2zCkrdMGwETayH0nfr3n2T8ITiaO8s0v6AGOKfXDPwdBlmiSnpGY/832FwL/h00f y6p/ocCpbZAfSnfbCuc0vwW9ue9c1qGeeZXtD6PU= From: Shradha Gupta To: Bjorn Helgaas 
, Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation Date: Tue, 27 May 2025 08:58:09 -0700 Message-Id: <1748361489-25415-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Allow dynamic MSI-X vector allocation for pci_hyperv PCI controller by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using pci_msix_prepare_desc() to prepare the MSI-X descriptors. 
Feature support is added for both x86 and arm64. Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang Reviewed-by: Saurabh Sengar --- Changes in v4: * use the same prepare_desc() callback for arm and x86 --- Changes in v3: * Add arm64 support --- Changes in v2: * split the patch to keep changes in PCI and pci_hyperv controller separate * replace strings "pci vectors" by "MSI-X vectors" --- drivers/pci/controller/pci-hyperv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletions(-) diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/p= ci-hyperv.c index ac27bda5ba26..0c790f35ad0e 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -2063,6 +2063,7 @@ static struct irq_chip hv_msi_irq_chip =3D { static struct msi_domain_ops hv_msi_ops =3D { .msi_prepare =3D hv_msi_prepare, .msi_free =3D hv_msi_free, + .prepare_desc =3D pci_msix_prepare_desc, }; =20 /** @@ -2084,7 +2085,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_d= evice *hbus) hbus->msi_info.ops =3D &hv_msi_ops; hbus->msi_info.flags =3D (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI | - MSI_FLAG_PCI_MSIX); + MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN); hbus->msi_info.handler =3D FLOW_HANDLER; hbus->msi_info.handler_name =3D FLOW_NAME; hbus->msi_info.data =3D hbus; --=20 2.34.1 From nobody Fri Dec 19 21:02:18 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id C533327CCF0; Tue, 27 May 2025 15:58:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361510; cv=none; b=c2rDcoXrAUargn9uBixbH+d9WIBbPSROEljr1Oz6J3PMkN63P1U2SFSOmVszoH7cTbCrUP8mtASWP3MeYKsj0MuQT/lfPIzxckAco55IpSbPZDMsSDhEq1g1XJws+dd3103VjxQetIJH7RF0db/JOaKbIfBbXg5hQDRcgAqtT6w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org;
s=arc-20240116; t=1748361510; c=relaxed/simple; bh=RGSVF44FBBzUEZBfOnOpMUcFXv44SdNQJk9aIiREjiI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=D+HZUKmjyj/9bKTXSmUp6/UBkohxVO380oUp6Q1C37k4WAzNPMP0U0Xx6aI1ocBsxjELyjPElP2rdaBnKXkqZm2IMXmSx2w1ljiTFV+ftv7IcProu7Ilq30LFNzsIzMSpDHhNN5vGYKhd8y1qNTqbWoANsh0BfqttEfISvpzBik= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=Go1JEj6f; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="Go1JEj6f" Received: by linux.microsoft.com (Postfix, from userid 1134) id 8B8D7206834A; Tue, 27 May 2025 08:58:28 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 8B8D7206834A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1748361508; bh=+ek6z3vu4GJHoGb/47gC/EKJziuFy+gbpylZmyG5YEM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Go1JEj6fWWZ5y8IKi9mLMxrfnSOl6HFUh9ZjgNV7mDKcn3wbff4xYCzx8MGGrWc+d Ha9kQr+Lw94SGm1wuno0c01V8p2RqKCUHnnAL7zSZxd2AgiWPg/4zb1ivGiQ91YNpn PI52lWvjT49303KyOLJF9rMCjE89h8WtAm9W6JMQ= From: Shradha Gupta To: Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Bjorn Helgaas , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v4 3/5] net: mana: explain irq_setup() algorithm Date: Tue, 27 May 2025 08:58:25 -0700 Message-Id: <1748361505-25513-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs") added the irq_setup() function that distributes IRQs on CPUs according to a tricky heuristic. The corresponding commit message explains the heuristic. Duplicate it in the source code to make it available to readers without digging through git history. Also, add a more detailed explanation of how the heuristic is implemented.
Signed-off-by: Yury Norov [NVIDIA] Signed-off-by: Shradha Gupta --- .../net/ethernet/microsoft/mana/gdma_main.c | 41 +++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/= ethernet/microsoft/mana/gdma_main.c index 4ffaf7588885..f9e8d4d1ba3a 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r) r->size =3D 0; } =20 +/* + * Spread on CPUs with the following heuristics: + * + * 1. No more than one IRQ per CPU, if possible; + * 2. NUMA locality is the second priority; + * 3. Sibling dislocality is the last priority. + * + * Let's consider this topology: + * + * Node 0 1 + * Core 0 1 2 3 + * CPU 0 1 2 3 4 5 6 7 + * + * The most performant IRQ distribution based on the above topology + * and heuristics may look like this: + * + * IRQ Nodes Cores CPUs + * 0 1 0 0-1 + * 1 1 1 2-3 + * 2 1 0 0-1 + * 3 1 1 2-3 + * 4 2 2 4-5 + * 5 2 3 6-7 + * 6 2 2 4-5 + * 7 2 3 6-7 + * + * The heuristic is implemented as follows. + * + * The outer for_each() loop resets the 'weight' to the actual number + * of CPUs in the hop. Then the inner for_each() loop decrements it by the + * number of sibling groups (cores) while assigning the first set of IRQs + * to each group. IRQs 0 and 1 above are distributed this way. + * + * Now, because NUMA locality is more important, we should walk the + * same set of siblings and assign the 2nd set of IRQs (2 and 3); this is + * implemented by the middle while() loop. We keep doing this until the + * number of IRQs assigned on this hop equals the number of CPUs in the + * hop (weight =3D=3D 0). Then we switch to the next hop and + * do the same thing.
+ */ + static int irq_setup(unsigned int *irqs, unsigned int len, int node) { const struct cpumask *next, *prev =3D cpu_none_mask; --=20 2.34.1 From nobody Fri Dec 19 21:02:18 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 454C727C86B; Tue, 27 May 2025 15:58:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361527; cv=none; b=AI+sH44UWc+S5/DvJFTqA9WF6ZylH65YwWbzlhMuNKsAWJZYxDnXzeDiHUV8hHE9PkfYeLv9Vx9qFX9EvpKTHYpY2ARBXf1hdvzoZcOorlpkEaCvNb0SotuEm32r/AI/wUfUGTFfkjlZ/y/Z34O/MYsQXYAzAF5w32FOzZjr1XM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361527; c=relaxed/simple; bh=b8sMpoLVme1kGwGMb56xFMPBVe+zcUzYdDTbIVM7ygw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=mg7TvXrJQt/pFStCEwWhS8Ue1TUp+gurs3iHVUpuaZiK55oHGxE0v+SBMmZebvCoH9tb5dYLwewT94RHsVOy6zjk+h7h8w09NzYyZ+DZwmgg2OJm2uSvecpFjcw6LO91xC5dGElyxWZ+gOATFcixtQJc7d/lkNS+/G3XUwo0t5Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=YHYsLA9t; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="YHYsLA9t" Received: by linux.microsoft.com (Postfix, from userid 1134) id E3877206B778; Tue, 27 May 2025 08:58:45 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com E3877206B778 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1748361525; bh=BwIXn4Mm68cZw0MA8Sj97K1PtJksjPbNJ6YWr0nO2/g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YHYsLA9tpGgv/v7gMk32D+Lc9PPP1fioktKRI3/XxMDH/2ohO79892DVxKp2nr2He RUZrSlSSOjIZRYPHE36sQj6lqR3quwK36o2pjmigjvZpv/JyItIP8aDQvy6i07Mt15 XVu4TQ+uswTUPmWeiDIssHf+xuNGA7eh9eY2uxEY= From: Shradha Gupta To: Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Bjorn Helgaas , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v4 4/5] net: mana: Allow irq_setup() to skip cpus for affinity Date: Tue, 27 May 2025 08:58:44 -0700 Message-Id: <1748361524-25653-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" In order to prepare the MANA driver to allocate the MSI-X IRQs dynamically, we need to enhance irq_setup() to allow skipping affinitizing IRQs to the first CPU sibling group. This would be for cases when the number of IRQs is less than or equal to the number of online CPUs. 
In such cases for dynamically added IRQs the first CPU sibling group would already be affinitized with HWC IRQ. Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang Reviewed-by: Yury Norov [NVIDIA] --- Changes in v4 * fix commit description * avoided using next_cpumask: label in the irq_setup() --- drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/= ethernet/microsoft/mana/gdma_main.c index f9e8d4d1ba3a..763a548c4a2b 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1329,7 +1329,8 @@ void mana_gd_free_res_map(struct gdma_resource *r) * do the same thing. */ =20 -static int irq_setup(unsigned int *irqs, unsigned int len, int node) +static int irq_setup(unsigned int *irqs, unsigned int len, int node, + bool skip_first_cpu) { const struct cpumask *next, *prev =3D cpu_none_mask; cpumask_var_t cpus __free(free_cpumask_var); @@ -1344,11 +1345,18 @@ static int irq_setup(unsigned int *irqs, unsigned i= nt len, int node) while (weight > 0) { cpumask_andnot(cpus, next, prev); for_each_cpu(cpu, cpus) { + cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu)); + --weight; + + if (unlikely(skip_first_cpu)) { + skip_first_cpu =3D false; + continue; + } + if (len-- =3D=3D 0) goto done; + irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu)); - cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu)); - --weight; } } prev =3D next; @@ -1444,7 +1452,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) } } =20 - err =3D irq_setup(irqs, (nvec - start_irq_index), gc->numa_node); + err =3D irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false); if (err) goto free_irq; =20 --=20 2.34.1 From nobody Fri Dec 19 21:02:18 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 
F233827E1D7; Tue, 27 May 2025 15:59:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361546; cv=none; b=YXRAGZVe7ZOk7Wxr1OlpkZ1M9zGvv4NuklNsZxkVwb8EH5PV9EJQda+mnoP1RQhPprsH2h/e+CfkfSR639Zi90eKwX0qEhnIbwSwQE+BzTSqHyXfY5f8w8rkrw1HP+vZpwEZ06s6RFMdwOkSIfxgynM9iw87FuubCPgOspRPnsQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748361546; c=relaxed/simple; bh=oTz4rIzn9Zxu1RFMCThB5g0YCyRL71+nSW/wJXV5//Q=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=ccU9bJyUUhxpALw7QwpTq0nnC7k3cUiFiGD5KWnbL3PKyExzaw0nzyF/YX4b/ZjnSpEQmdKLVFwoxw01fxKTMYY24Q10m9wffiseEsD15HgLkJZm2EMSf1PJwHqpI37yoFsliVgRZriezhfoZZhpV1MonoLi+M+VgnWN2k2eLcs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=F5ly5XPz; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="F5ly5XPz" Received: by linux.microsoft.com (Postfix, from userid 1134) id 9E9DC206834A; Tue, 27 May 2025 08:59:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 9E9DC206834A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1748361544; bh=Bn+44aHgz2Vad5p99xg0ygnsgh9Kryk/otqfIUTx10M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=F5ly5XPzedNi91mBPrVOyzutiYQdOAAoVMfC6NsciMnfrvuoKqqPfA9tU8/46cqEy Fz3Jodf1UhF+10tqiGYItRHeNLlDagd59eVT1hdqDhA0LUq5mVfcCP6/Pa6xZbYNr6 
+R8GI7gjFoXcklKZa5v8PDYzspDscgFetOtYl/yc= From: Shradha Gupta To: Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Bjorn Helgaas , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Date: Tue, 27 May 2025 08:59:03 -0700 Message-Id: <1748361543-25845-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1748361453-25096-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Currently, the MANA driver allocates MSI-X vectors statically based on MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends up allocating more vectors than it needs. This is because, by this time we do not have a HW channel and do not know how many IRQs should be allocated. To avoid this, we allocate 1 MSI-X vector during the creation of HWC and after getting the value supported by hardware, dynamically add the remaining MSI-X vectors. 
Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang --- Changes in v4: * added BUG_ON at appropriate places * moved xa_destroy to mana_gd_remove() * rearranged the cleanup logic in mana_gd_setup_dyn_irqs() * simplified processing around start_irq_index in mana_gd_setup_irqs() * return 0 instead of return err as appropriate --- Changes in v3: * implemented irq_contexts as xarrays rather than list * split the patch to create a preparation patch around irq_setup() * add log when IRQ allocation/setup for remaining IRQs fails --- Changes in v2: * Use string 'MSI-X vectors' instead of 'pci vectors' * make skip-cpu a bool instead of int * rearrange the comment around the skip_cpu variable appropriately * update the capability bit for driver indicating dynamic IRQ allocation * enforced max line length to 80 * enforced RCT convention * initialized gic to NULL, for when there is a possibility of gic not being populated correctly --- .../net/ethernet/microsoft/mana/gdma_main.c | 306 +++++++++++++----- include/net/mana/gdma.h | 8 +- 2 files changed, 235 insertions(+), 79 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/= ethernet/microsoft/mana/gdma_main.c index 763a548c4a2b..98ebecbec9a7 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -6,6 +6,8 @@ #include #include #include +#include +#include =20 #include =20 @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *p= dev) return err ?
err : -EPROTO; } =20 - if (gc->num_msix_usable > resp.max_msix) - gc->num_msix_usable =3D resp.max_msix; + if (!pci_msix_can_alloc_dyn(pdev)) { + if (gc->num_msix_usable > resp.max_msix) + gc->num_msix_usable =3D resp.max_msix; + } else { + /* If dynamic allocation is enabled we have already allocated + * hwc msi + */ + gc->num_msix_usable =3D min(resp.max_msix, num_online_cpus() + 1); + } =20 if (gc->num_msix_usable <=3D 1) return -ENOSPC; @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queu= e, } =20 queue->eq.msix_index =3D msi_index; - gic =3D &gc->irq_contexts[msi_index]; + gic =3D xa_load(&gc->irq_contexts, msi_index); + if (WARN_ON(!gic)) + return -EINVAL; =20 spin_lock_irqsave(&gic->lock, flags); list_add_rcu(&queue->entry, &gic->eq_list); @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *q= ueue) if (WARN_ON(msix_index >=3D gc->num_msix_usable)) return; =20 - gic =3D &gc->irq_contexts[msix_index]; + gic =3D xa_load(&gc->irq_contexts, msix_index); + if (WARN_ON(!gic)) + return; + spin_lock_irqsave(&gic->lock, flags); list_for_each_entry_rcu(eq, &gic->eq_list, entry) { if (queue =3D=3D eq) { @@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned = int len, int node, return 0; } =20 -static int mana_gd_setup_irqs(struct pci_dev *pdev) +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec) { struct gdma_context *gc =3D pci_get_drvdata(pdev); - unsigned int max_queues_per_port; struct gdma_irq_context *gic; - unsigned int max_irqs, cpu; - int start_irq_index =3D 1; - int nvec, *irqs, irq; - int err, i =3D 0, j; + bool skip_first_cpu =3D false; + int *irqs, irq, err, i; =20 cpus_read_lock(); - max_queues_per_port =3D num_online_cpus(); - if (max_queues_per_port > MANA_MAX_NUM_QUEUES) - max_queues_per_port =3D MANA_MAX_NUM_QUEUES; =20 - /* Need 1 interrupt for the Hardware communication Channel (HWC) */ - max_irqs =3D max_queues_per_port + 1; - - nvec =3D pci_alloc_irq_vectors(pdev, 2, 
max_irqs, PCI_IRQ_MSIX); - if (nvec < 0) { - cpus_read_unlock(); - return nvec; - } - if (nvec <=3D num_online_cpus()) - start_irq_index =3D 0; - - irqs =3D kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL); + irqs =3D kmalloc_array(nvec, sizeof(int), GFP_KERNEL); if (!irqs) { err =3D -ENOMEM; goto free_irq_vector; } =20 - gc->irq_contexts =3D kcalloc(nvec, sizeof(struct gdma_irq_context), - GFP_KERNEL); - if (!gc->irq_contexts) { + /* + * While processing the next pci irq vector, we start with index 1, + * as IRQ vector at index 0 is already processed for HWC. + * However, the population of irqs array starts with index 0, to be + * further used in irq_setup() + */ + for (i =3D 1; i <=3D nvec; i++) { + gic =3D kzalloc(sizeof(*gic), GFP_KERNEL); + if (!gic) { + err =3D -ENOMEM; + goto free_irq; + } + gic->handler =3D mana_gd_process_eq_events; + INIT_LIST_HEAD(&gic->eq_list); + spin_lock_init(&gic->lock); + + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s", + i - 1, pci_name(pdev)); + + /* one pci vector is already allocated for HWC */ + irqs[i - 1] =3D pci_irq_vector(pdev, i); + if (irqs[i - 1] < 0) { + err =3D irqs[i - 1]; + goto free_current_gic; + } + + err =3D request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic); + if (err) + goto free_current_gic; + + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL); + } + + /* + * When calling irq_setup() for dynamically added IRQs, if number of + * CPUs is more than or equal to allocated MSI-X, we need to skip the + * first CPU sibling group since they are already affinitized to HWC IRQ + */ + if (gc->num_msix_usable <=3D num_online_cpus()) + skip_first_cpu =3D true; + + err =3D irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu); + if (err) + goto free_irq; + + cpus_read_unlock(); + kfree(irqs); + return 0; + +free_current_gic: + kfree(gic); +free_irq: + for (i -=3D 1; i > 0; i--) { + irq =3D pci_irq_vector(pdev, i); + gic =3D xa_load(&gc->irq_contexts, i); + if (WARN_ON(!gic)) + continue; + + 
irq_update_affinity_hint(irq, NULL); + free_irq(irq, gic); + xa_erase(&gc->irq_contexts, i); + kfree(gic); + } + kfree(irqs); +free_irq_vector: + cpus_read_unlock(); + return err; +} + +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec) +{ + struct gdma_context *gc =3D pci_get_drvdata(pdev); + struct gdma_irq_context *gic; + int *irqs, *start_irqs, irq; + unsigned int cpu; + int err, i; + + cpus_read_lock(); + + irqs =3D kmalloc_array(nvec, sizeof(int), GFP_KERNEL); + if (!irqs) { err =3D -ENOMEM; - goto free_irq_array; + goto free_irq_vector; } =20 for (i =3D 0; i < nvec; i++) { - gic =3D &gc->irq_contexts[i]; + gic =3D kzalloc(sizeof(*gic), GFP_KERNEL); + if (!gic) { + err =3D -ENOMEM; + goto free_irq; + } + gic->handler =3D mana_gd_process_eq_events; INIT_LIST_HEAD(&gic->eq_list); spin_lock_init(&gic->lock); @@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s", i - 1, pci_name(pdev)); =20 - irq =3D pci_irq_vector(pdev, i); - if (irq < 0) { - err =3D irq; - goto free_irq; + irqs[i] =3D pci_irq_vector(pdev, i); + if (irqs[i] < 0) { + err =3D irqs[i]; + goto free_current_gic; } =20 - if (!i) { - err =3D request_irq(irq, mana_gd_intr, 0, gic->name, gic); - if (err) - goto free_irq; - - /* If number of IRQ is one extra than number of online CPUs, - * then we need to assign IRQ0 (hwc irq) and IRQ1 to - * same CPU. - * Else we will use different CPUs for IRQ0 and IRQ1. - * Also we are using cpumask_local_spread instead of - * cpumask_first for the node, because the node can be - * mem only. 
-			 */
-			if (start_irq_index) {
-				cpu = cpumask_local_spread(i, gc->numa_node);
-				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
-			} else {
-				irqs[start_irq_index] = irq;
-			}
-		} else {
-			irqs[i - start_irq_index] = irq;
-			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
-					  gic->name, gic);
-			if (err)
-				goto free_irq;
-		}
+		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
+		if (err)
+			goto free_current_gic;
+
+		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
 	}
 
-	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
+	/* If number of IRQ is one extra than number of online CPUs,
+	 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
+	 * same CPU.
+	 * Else we will use different CPUs for IRQ0 and IRQ1.
+	 * Also we are using cpumask_local_spread instead of
+	 * cpumask_first for the node, because the node can be
+	 * mem only.
+	 */
+	start_irqs = irqs;
+	if (nvec > num_online_cpus()) {
+		cpu = cpumask_local_spread(0, gc->numa_node);
+		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
+		irqs++;
+		nvec -= 1;
+	}
+
+	err = irq_setup(irqs, nvec, gc->numa_node, false);
 	if (err)
 		goto free_irq;
 
-	gc->max_num_msix = nvec;
-	gc->num_msix_usable = nvec;
 	cpus_read_unlock();
-	kfree(irqs);
+	kfree(start_irqs);
 	return 0;
 
+free_current_gic:
+	kfree(gic);
 free_irq:
-	for (j = i - 1; j >= 0; j--) {
-		irq = pci_irq_vector(pdev, j);
-		gic = &gc->irq_contexts[j];
+	for (i -= 1; i >= 0; i--) {
+		irq = pci_irq_vector(pdev, i);
+		gic = xa_load(&gc->irq_contexts, i);
+		if (WARN_ON(!gic))
+			continue;
 
 		irq_update_affinity_hint(irq, NULL);
 		free_irq(irq, gic);
+		xa_erase(&gc->irq_contexts, i);
+		kfree(gic);
 	}
 
-	kfree(gc->irq_contexts);
-	gc->irq_contexts = NULL;
-free_irq_array:
-	kfree(irqs);
+	kfree(start_irqs);
 free_irq_vector:
 	cpus_read_unlock();
-	pci_free_irq_vectors(pdev);
 	return err;
 }
 
+static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	unsigned int max_irqs, min_irqs;
+	int nvec, err;
+
+	if (pci_msix_can_alloc_dyn(pdev)) {
+		max_irqs = 1;
+		min_irqs = 1;
+	} else {
+		/* Need 1 interrupt for HWC */
+		max_irqs = min(num_online_cpus(), MANA_MAX_NUM_QUEUES) + 1;
+		min_irqs = 2;
+	}
+
+	nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
+	if (nvec < 0)
+		return nvec;
+
+	err = mana_gd_setup_irqs(pdev, nvec);
+	if (err) {
+		pci_free_irq_vectors(pdev);
+		return err;
+	}
+
+	gc->num_msix_usable = nvec;
+	gc->max_num_msix = nvec;
+
+	return 0;
+}
+
+static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	struct msi_map irq_map;
+	int max_irqs, i, err;
+
+	if (!pci_msix_can_alloc_dyn(pdev))
+		/* remain irqs are already allocated with HWC IRQ */
+		return 0;
+
+	/* allocate only remaining IRQs*/
+	max_irqs = gc->num_msix_usable - 1;
+
+	for (i = 1; i <= max_irqs; i++) {
+		irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
+		if (!irq_map.virq) {
+			err = irq_map.index;
+			/* caller will handle cleaning up all allocated
+			 * irqs, after HWC is destroyed
+			 */
+			return err;
+		}
+	}
+
+	err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
+	if (err)
+		return err;
+
+	gc->max_num_msix = gc->max_num_msix + max_irqs;
+
+	return 0;
+}
+
 static void mana_gd_remove_irqs(struct pci_dev *pdev)
 {
 	struct gdma_context *gc = pci_get_drvdata(pdev);
@@ -1495,19 +1634,21 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
 		if (irq < 0)
 			continue;
 
-		gic = &gc->irq_contexts[i];
+		gic = xa_load(&gc->irq_contexts, i);
+		if (WARN_ON(!gic))
+			continue;
 
 		/* Need to clear the hint before free_irq */
 		irq_update_affinity_hint(irq, NULL);
 		free_irq(irq, gic);
+		xa_erase(&gc->irq_contexts, i);
+		kfree(gic);
 	}
 
 	pci_free_irq_vectors(pdev);
 
 	gc->max_num_msix = 0;
 	gc->num_msix_usable = 0;
-	kfree(gc->irq_contexts);
-	gc->irq_contexts = NULL;
 }
 
 static int mana_gd_setup(struct pci_dev *pdev)
@@ -1518,9 +1659,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
 	mana_gd_init_registers(pdev);
 	mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
 
-	err = mana_gd_setup_irqs(pdev);
+	err = mana_gd_setup_hwc_irqs(pdev);
 	if (err) {
-		dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
+		dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
+			err);
 		return err;
 	}
 
@@ -1536,6 +1678,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
 	if (err)
 		goto destroy_hwc;
 
+	err = mana_gd_setup_remaining_irqs(pdev);
+	if (err) {
+		dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
+		goto destroy_hwc;
+	}
+
 	err = mana_gd_detect_devices(pdev);
 	if (err)
 		goto destroy_hwc;
@@ -1612,6 +1760,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	gc->is_pf = mana_is_pf(pdev->device);
 	gc->bar0_va = bar0_va;
 	gc->dev = &pdev->dev;
+	xa_init(&gc->irq_contexts);
 
 	if (gc->is_pf)
 		gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
@@ -1640,6 +1789,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 */
 	debugfs_remove_recursive(gc->mana_pci_debugfs);
 	gc->mana_pci_debugfs = NULL;
+	xa_destroy(&gc->irq_contexts);
 	pci_iounmap(pdev, bar0_va);
 free_gc:
 	pci_set_drvdata(pdev, NULL);
@@ -1664,6 +1814,8 @@ static void mana_gd_remove(struct pci_dev *pdev)
 
 	gc->mana_pci_debugfs = NULL;
 
+	xa_destroy(&gc->irq_contexts);
+
 	pci_iounmap(pdev, gc->bar0_va);
 
 	vfree(gc);
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 228603bf03f2..f20d1d1ea5e8 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -373,7 +373,7 @@ struct gdma_context {
 	unsigned int max_num_queues;
 	unsigned int max_num_msix;
 	unsigned int num_msix_usable;
-	struct gdma_irq_context *irq_contexts;
+	struct xarray irq_contexts;
 
 	/* L2 MTU */
 	u16 adapter_mtu;
@@ -558,12 +558,16 @@ enum {
 /* Driver can handle holes (zeros) in the device list */
 #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
 
+/* Driver supports dynamic MSI-X vector allocation */
+#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
+
 #define GDMA_DRV_CAP_FLAGS1 \
 	(GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
 	 GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
 	 GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
 	 GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
-	 GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
+	 GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
+	 GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
 
 #define GDMA_DRV_CAP_FLAGS2 0
 
-- 
2.34.1