From: Shradha Gupta
To: Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Thomas Gleixner,
    Bjorn Helgaas, Michael Kelley
Cc: Shradha Gupta, linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov, Kevin Tian, Long Li,
    Rob Herring, Manivannan Sadhasivam, Krzysztof Wilczyński, Lorenzo Pieralisi,
    Dexuan Cui, Wei Liu, Haiyang Zhang, "K. Y. Srinivasan", Andrew Lunn,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
    Erni Sri Satya Vennela, Peter Zijlstra, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
Subject: [PATCH v6 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
Date: Wed, 11 Jun 2025 07:10:01 -0700
Message-Id: <1749651001-9436-1-git-send-email-shradhagupta@linux.microsoft.com>
In-Reply-To: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>
References: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>

For a PCI controller to support dynamic MSI-X vector allocation, enabling
the MSI_FLAG_PCI_MSIX_ALLOC_DYN flag is not enough; the MSI descriptor also
has to be prepared with msix_prepare_msi_desc().

Export pci_msix_prepare_desc() so that PCI controller drivers can support
dynamic MSI-X vector allocation.

Signed-off-by: Shradha Gupta
Reviewed-by: Haiyang Zhang
Reviewed-by: Thomas Gleixner
Reviewed-by: Saurabh Sengar
Acked-by: Bjorn Helgaas
---
 drivers/pci/msi/irqdomain.c | 5 +++--
 include/linux/msi.h         | 2 ++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
index c05152733993..765312c92d9b 100644
--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -222,13 +222,14 @@ static void pci_irq_unmask_msix(struct irq_data *data)
 	pci_msix_unmask(irq_data_get_msi_desc(data));
 }
 
-static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
-				  struct msi_desc *desc)
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+			   struct msi_desc *desc)
 {
 	/* Don't fiddle with preallocated MSI descriptors */
 	if (!desc->pci.mask_base)
 		msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
 }
+EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);
 
 static const struct msi_domain_template pci_msix_template = {
 	.chip = {
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 6863540f4b71..7f254bde5426 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -706,6 +706,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 					     struct irq_domain *parent);
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+			   struct msi_desc *desc);
 #else /* CONFIG_PCI_MSI */
 static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
 {
-- 
2.34.1
From: Shradha Gupta
To: Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam, Krzysztof Wilczyński,
    Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang, "K. Y. Srinivasan",
    Michael Kelley
Cc: Shradha Gupta, linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov, Jason Gunthorpe,
    Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
    Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
    Erni Sri Satya Vennela, Peter Zijlstra, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
Subject: [PATCH v6 2/5] PCI: hv: Allow dynamic MSI-X vector allocation
Date: Wed, 11 Jun 2025 07:10:15 -0700
Message-Id: <1749651015-9668-1-git-send-email-shradhagupta@linux.microsoft.com>
In-Reply-To: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>
References: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>

Allow dynamic MSI-X vector allocation for the pci-hyperv PCI controller by
adding support for the MSI_FLAG_PCI_MSIX_ALLOC_DYN flag and using
pci_msix_prepare_desc() to prepare the MSI-X descriptors.

The feature is supported on both x86 and ARM64.

Signed-off-by: Shradha Gupta
Reviewed-by: Haiyang Zhang
Reviewed-by: Saurabh Sengar
Acked-by: Bjorn Helgaas
---
Changes in v4:
 * use the same prepare_desc() callback for arm and x86
---
Changes in v3:
 * Add arm64 support
---
Changes in v2:
 * split the patch to keep changes in PCI and pci_hyperv controller separate
 * replace the string "pci vectors" with "MSI-X vectors"
---
 drivers/pci/controller/pci-hyperv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ef5d655a0052..86ca041bf74a 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -2119,6 +2119,7 @@ static struct irq_chip hv_msi_irq_chip = {
 static struct msi_domain_ops hv_msi_ops = {
 	.msi_prepare	= hv_msi_prepare,
 	.msi_free	= hv_msi_free,
+	.prepare_desc	= pci_msix_prepare_desc,
 };
 
 /**
@@ -2140,7 +2141,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
 	hbus->msi_info.ops = &hv_msi_ops;
 	hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS |
 		MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI |
-		MSI_FLAG_PCI_MSIX);
+		MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN);
 	hbus->msi_info.handler = FLOW_HANDLER;
 	hbus->msi_info.handler_name = FLOW_NAME;
 	hbus->msi_info.data = hbus;
-- 
2.34.1
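
With MSI_FLAG_PCI_MSIX_ALLOC_DYN set on the Hyper-V MSI domain, an endpoint
driver behind this controller can start with a single MSI-X vector and grow
later. A rough sketch of the core calls involved, assuming a device that
initially needs only one vector; my_handler, my_dev and the "my_dev_q" name
are placeholders, and error handling is trimmed:

	struct msi_map map;
	int nvec, err;

	/* Bring the device up with a single MSI-X vector. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	/* Later, once the required queue count is known, add one more. */
	if (pci_msix_can_alloc_dyn(pdev)) {
		map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, NULL);
		if (map.virq > 0)
			/* map.index is the MSI-X table entry, map.virq the Linux IRQ */
			err = request_irq(map.virq, my_handler, 0, "my_dev_q", my_dev);
	}

	/* A dynamically allocated vector is released individually on teardown. */
	pci_msix_free_irq(pdev, map);
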
From: Shradha Gupta
To: Dexuan Cui, Wei Liu, Haiyang Zhang, "K. Y. Srinivasan", Andrew Lunn,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
    Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley
Cc: Yury Norov, linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-kernel@vger.kernel.org, Nipun Gupta, Jason Gunthorpe,
    Jonathan Cameron, Anna-Maria Behnsen, Shradha Gupta, Kevin Tian, Long Li,
    Thomas Gleixner, Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
    Krzysztof Wilczyński, Lorenzo Pieralisi, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
Subject: [PATCH v6 3/5] net: mana: explain irq_setup() algorithm
Date: Wed, 11 Jun 2025 07:10:29 -0700
Message-Id: <1749651029-9790-1-git-send-email-shradhagupta@linux.microsoft.com>
In-Reply-To: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>
References: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com>

From: Yury Norov

Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
added the irq_setup() function, which distributes IRQs across CPUs
according to a tricky heuristic. The corresponding commit message explains
the heuristic.

Duplicate it in the source code so that it is available to readers without
digging through the git history, and add a more detailed explanation of how
the heuristic is implemented.

Signed-off-by: Yury Norov
Signed-off-by: Shradha Gupta
---
Changes in v5:
 * Corrected the author of the patch
---
 .../net/ethernet/microsoft/mana/gdma_main.c | 41 +++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 3504507477c6..6c4e143972a1 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
 	r->size = 0;
 }
 
+/*
+ * Spread IRQs on CPUs with the following heuristics:
+ *
+ * 1. No more than one IRQ per CPU, if possible;
+ * 2. NUMA locality is the second priority;
+ * 3. Sibling dislocality is the last priority.
+ *
+ * Let's consider this topology:
+ *
+ * Node          0               1
+ * Core      0       1       2       3
+ * CPU      0   1   2   3   4   5   6   7
+ *
+ * The most performant IRQ distribution based on the above topology
+ * and heuristics may look like this:
+ *
+ * IRQ     Nodes   Cores   CPUs
+ * 0       1       0       0-1
+ * 1       1       1       2-3
+ * 2       1       0       0-1
+ * 3       1       1       2-3
+ * 4       2       2       4-5
+ * 5       2       3       6-7
+ * 6       2       2       4-5
+ * 7       2       3       6-7
+ *
+ * The heuristic is implemented as follows.
+ *
+ * The outer for_each() loop resets the 'weight' to the actual number
+ * of CPUs in the hop. Then the inner for_each() loop decrements it by
+ * the number of sibling groups (cores) while assigning the first set of
+ * IRQs to each group. IRQs 0 and 1 above are distributed this way.
+ *
+ * Now, because NUMA locality is more important, we should walk the
+ * same set of siblings and assign the 2nd set of IRQs (2 and 3); this is
+ * implemented by the middle while() loop. We do this until the number of
+ * IRQs assigned on this hop becomes equal to the number of CPUs in the
+ * hop (weight == 0). Then we switch to the next hop and do the same
+ * thing.
+ */
+
 static int irq_setup(unsigned int *irqs, unsigned int len, int node)
 {
 	const struct cpumask *next, *prev = cpu_none_mask;
-- 
2.34.1
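
To make the comment above easier to map onto the code, here is a compressed
skeleton of the loop nesting it describes, reconstructed from the irq_setup()
hunks visible in patches 4 and 5 of this series. It is a reading aid under
that assumption, not a drop-in copy of the function; locking, allocation and
error handling are omitted.

	for_each_numa_hop_mask(next, node) {		/* outer: hops, nearest first */
		weight = cpumask_weight_andnot(next, prev);	/* CPUs new in this hop */
		while (weight > 0) {			/* middle: revisit sibling groups */
			cpumask_andnot(cpus, next, prev);
			for_each_cpu(cpu, cpus) {	/* inner: one IRQ per core */
				if (len-- == 0)
					goto done;
				irq_set_affinity_and_hint(*irqs++,
							  topology_sibling_cpumask(cpu));
				/* drop the whole sibling group from this pass */
				cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
				--weight;
			}
		}
		prev = next;
	}
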
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Bjorn Helgaas , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v6 4/5] net: mana: Allow irq_setup() to skip cpus for affinity Date: Wed, 11 Jun 2025 07:10:42 -0700 Message-Id: <1749651042-9997-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" In order to prepare the MANA driver to allocate the MSI-X IRQs dynamically, we need to enhance irq_setup() to allow skipping affinitizing IRQs to the first CPU sibling group. This would be for cases when the number of IRQs is less than or equal to the number of online CPUs. In such cases for dynamically added IRQs the first CPU sibling group would already be affinitized with HWC IRQ. Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang Reviewed-by: Yury Norov [NVIDIA] --- Changes in v4 * fix commit description * avoided using next_cpumask: label in the irq_setup() --- drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/= ethernet/microsoft/mana/gdma_main.c index 6c4e143972a1..6e468c0f2c40 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1329,7 +1329,8 @@ void mana_gd_free_res_map(struct gdma_resource *r) * do the same thing. 
*/ =20 -static int irq_setup(unsigned int *irqs, unsigned int len, int node) +static int irq_setup(unsigned int *irqs, unsigned int len, int node, + bool skip_first_cpu) { const struct cpumask *next, *prev =3D cpu_none_mask; cpumask_var_t cpus __free(free_cpumask_var); @@ -1344,11 +1345,18 @@ static int irq_setup(unsigned int *irqs, unsigned i= nt len, int node) while (weight > 0) { cpumask_andnot(cpus, next, prev); for_each_cpu(cpu, cpus) { + cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu)); + --weight; + + if (unlikely(skip_first_cpu)) { + skip_first_cpu =3D false; + continue; + } + if (len-- =3D=3D 0) goto done; + irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu)); - cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu)); - --weight; } } prev =3D next; @@ -1444,7 +1452,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) } } =20 - err =3D irq_setup(irqs, (nvec - start_irq_index), gc->numa_node); + err =3D irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false); if (err) goto free_irq; =20 --=20 2.34.1 From nobody Sat Oct 11 04:07:55 2025 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 2A6262E6108; Wed, 11 Jun 2025 14:11:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=13.77.154.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1749651077; cv=none; b=u0ncPxwTxcY96FAJxKZj8xkn+3cQX8+kkeF99mdYcXn3ALOBoOYuM8/SbLpNtsofq8Dvej4T494LVMDZU2mFvac9ibLtTU2kl+VFREpKZG1EdMLqrPLBI+ztA/i9aTXJE1wISk8Wy4dpvav/0ye5nRLZqGE7zmJygVcSyg52EWY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1749651077; c=relaxed/simple; bh=R0qI1l+Dp+BQxaI9jUomP00rTdQeSJOMBr9UWWJTbf8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=Z6+b8YEbMZMy+ofNIF+N5vOZw2kBr64lY9yj8G+SoN4ADhsqN6ZdInPGkPyCSr8EtkLEUXBne0dXlqtNsiu7rhUUTzfg6uZCKVvNRcIurbbCQ9zTUclfBJa9koTXd3F0i4NWIA6eojyTH6ug7h1dzC0YZvhcOHzpEWWIF21VGEo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com; spf=pass smtp.mailfrom=linux.microsoft.com; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b=HKLF/gTR; arc=none smtp.client-ip=13.77.154.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.microsoft.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.microsoft.com header.i=@linux.microsoft.com header.b="HKLF/gTR" Received: by linux.microsoft.com (Postfix, from userid 1134) id D3AB4203EE0A; Wed, 11 Jun 2025 07:11:14 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com D3AB4203EE0A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1749651074; bh=7Vj/WeKE7E1u91ZSnRU675QpCJSIZVJHWMl588pdaUs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HKLF/gTRvi3pyM9c1bdYHW9fLq1zCt4EIES4UrtrEKQiWEMf4OHM5VxxRWDnW6dJr Tcyn0DMtVq+6ihsMOKyLQvG1xwiee46sxIW3lFj/VOHAorFM/sETZxcdi0I79tjejM fEkafs7OMlUnXDMm92xc1KRW0eLH8tXWX3ZIBMJE= From: Shradha Gupta To: Dexuan Cui , Wei Liu , Haiyang Zhang , "K. Y. Srinivasan" , Andrew Lunn , "David S. 
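
Before diving into the large diff that follows, this is a condensed view of
the probe-time sequence patch 5 implements, using the function names
introduced there (error handling omitted):

	mana_gd_setup_hwc_irqs(pdev);		/* allocate a single vector when dynamic
						 * MSI-X is available, else fall back to
						 * the old static allocation scheme */
	/* ... create the HWC and query max_msix from the device ... */
	mana_gd_setup_remaining_irqs(pdev);	/* pci_msix_alloc_irq_at() for each extra
						 * vector, then spread them with irq_setup() */
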
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Konstantin Taranov , Simon Horman , Leon Romanovsky , Maxim Levitsky , Erni Sri Satya Vennela , Peter Zijlstra , Michael Kelley Cc: Shradha Gupta , linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Nipun Gupta , Yury Norov , Jason Gunthorpe , Jonathan Cameron , Anna-Maria Behnsen , Kevin Tian , Long Li , Thomas Gleixner , Bjorn Helgaas , Rob Herring , Manivannan Sadhasivam , =?UTF-8?q?Krzysztof=20Wilczy=EF=BF=BD=7EDski?= , Lorenzo Pieralisi , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm , Shradha Gupta Subject: [PATCH v6 5/5] net: mana: Allocate MSI-X vectors dynamically Date: Wed, 11 Jun 2025 07:11:13 -0700 Message-Id: <1749651073-10399-1-git-send-email-shradhagupta@linux.microsoft.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com> References: <1749650984-9193-1-git-send-email-shradhagupta@linux.microsoft.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Currently, the MANA driver allocates MSI-X vectors statically based on MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends up allocating more vectors than it needs. This is because, by this time we do not have a HW channel and do not know how many IRQs should be allocated. To avoid this, we allocate 1 MSI-X vector during the creation of HWC and after getting the value supported by hardware, dynamically add the remaining MSI-X vectors. Signed-off-by: Shradha Gupta Reviewed-by: Haiyang Zhang --- Changes in v5: * Correctly initialized start_irqs, so that it is cleaned properly * rearranged the cpu_lock to minimize the critical section --- Changes in v4: * added BUG_ON at appropriate places * moved xa_destroy to mana_gd_remove() * rearragned the cleanup logic in mana_gd_setup_dyn_irqs() * simplified processing around start_irq_index in mana_gd_setup_irqs() * return 0 instead of return err as appropriate --- Changes in v3: * implemented irq_contexts as xarrays rather than list * split the patch to create a perparation patch around irq_setup() * add log when IRQ allocation/setup for remaining IRQs fails --- Changes in v2: * Use string 'MSI-X vectors' instead of 'pci vectors' * make skip-cpu a bool instead of int * rearrange the comment arout skip_cpu variable appropriately * update the capability bit for driver indicating dynamic IRQ * allocation * enforced max line length to 80 * enforced RCT convention * initialized gic to NULL, for when there is a possibility of gic not being populated correctly --- .../net/ethernet/microsoft/mana/gdma_main.c | 311 +++++++++++++----- include/net/mana/gdma.h | 8 +- 2 files changed, 235 insertions(+), 84 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/= ethernet/microsoft/mana/gdma_main.c index 6e468c0f2c40..d0040c12b8a2 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -6,6 +6,8 @@ #include #include #include +#include +#include =20 #include =20 @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *p= dev) return err ? 
err : -EPROTO; } =20 - if (gc->num_msix_usable > resp.max_msix) - gc->num_msix_usable =3D resp.max_msix; + if (!pci_msix_can_alloc_dyn(pdev)) { + if (gc->num_msix_usable > resp.max_msix) + gc->num_msix_usable =3D resp.max_msix; + } else { + /* If dynamic allocation is enabled we have already allocated + * hwc msi + */ + gc->num_msix_usable =3D min(resp.max_msix, num_online_cpus() + 1); + } =20 if (gc->num_msix_usable <=3D 1) return -ENOSPC; @@ -483,7 +492,9 @@ static int mana_gd_register_irq(struct gdma_queue *queu= e, } =20 queue->eq.msix_index =3D msi_index; - gic =3D &gc->irq_contexts[msi_index]; + gic =3D xa_load(&gc->irq_contexts, msi_index); + if (WARN_ON(!gic)) + return -EINVAL; =20 spin_lock_irqsave(&gic->lock, flags); list_add_rcu(&queue->entry, &gic->eq_list); @@ -508,7 +519,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *q= ueue) if (WARN_ON(msix_index >=3D gc->num_msix_usable)) return; =20 - gic =3D &gc->irq_contexts[msix_index]; + gic =3D xa_load(&gc->irq_contexts, msix_index); + if (WARN_ON(!gic)) + return; + spin_lock_irqsave(&gic->lock, flags); list_for_each_entry_rcu(eq, &gic->eq_list, entry) { if (queue =3D=3D eq) { @@ -1366,47 +1380,108 @@ static int irq_setup(unsigned int *irqs, unsigned = int len, int node, return 0; } =20 -static int mana_gd_setup_irqs(struct pci_dev *pdev) +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec) { struct gdma_context *gc =3D pci_get_drvdata(pdev); - unsigned int max_queues_per_port; struct gdma_irq_context *gic; - unsigned int max_irqs, cpu; - int start_irq_index =3D 1; - int nvec, *irqs, irq; - int err, i =3D 0, j; + bool skip_first_cpu =3D false; + int *irqs, irq, err, i; =20 - cpus_read_lock(); - max_queues_per_port =3D num_online_cpus(); - if (max_queues_per_port > MANA_MAX_NUM_QUEUES) - max_queues_per_port =3D MANA_MAX_NUM_QUEUES; + irqs =3D kmalloc_array(nvec, sizeof(int), GFP_KERNEL); + if (!irqs) + return -ENOMEM; + + /* + * While processing the next pci irq vector, we start with index 1, + * as IRQ vector at index 0 is already processed for HWC. 
+ * However, the population of irqs array starts with index 0, to be + * further used in irq_setup() + */ + for (i =3D 1; i <=3D nvec; i++) { + gic =3D kzalloc(sizeof(*gic), GFP_KERNEL); + if (!gic) { + err =3D -ENOMEM; + goto free_irq; + } + gic->handler =3D mana_gd_process_eq_events; + INIT_LIST_HEAD(&gic->eq_list); + spin_lock_init(&gic->lock); =20 - /* Need 1 interrupt for the Hardware communication Channel (HWC) */ - max_irqs =3D max_queues_per_port + 1; + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s", + i - 1, pci_name(pdev)); =20 - nvec =3D pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX); - if (nvec < 0) { - cpus_read_unlock(); - return nvec; + /* one pci vector is already allocated for HWC */ + irqs[i - 1] =3D pci_irq_vector(pdev, i); + if (irqs[i - 1] < 0) { + err =3D irqs[i - 1]; + goto free_current_gic; + } + + err =3D request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic); + if (err) + goto free_current_gic; + + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL); } - if (nvec <=3D num_online_cpus()) - start_irq_index =3D 0; =20 - irqs =3D kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL); - if (!irqs) { - err =3D -ENOMEM; - goto free_irq_vector; + /* + * When calling irq_setup() for dynamically added IRQs, if number of + * CPUs is more than or equal to allocated MSI-X, we need to skip the + * first CPU sibling group since they are already affinitized to HWC IRQ + */ + cpus_read_lock(); + if (gc->num_msix_usable <=3D num_online_cpus()) + skip_first_cpu =3D true; + + err =3D irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu); + if (err) { + cpus_read_unlock(); + goto free_irq; } =20 - gc->irq_contexts =3D kcalloc(nvec, sizeof(struct gdma_irq_context), - GFP_KERNEL); - if (!gc->irq_contexts) { - err =3D -ENOMEM; - goto free_irq_array; + cpus_read_unlock(); + kfree(irqs); + return 0; + +free_current_gic: + kfree(gic); +free_irq: + for (i -=3D 1; i > 0; i--) { + irq =3D pci_irq_vector(pdev, i); + gic =3D xa_load(&gc->irq_contexts, i); + if (WARN_ON(!gic)) + continue; + + irq_update_affinity_hint(irq, NULL); + free_irq(irq, gic); + xa_erase(&gc->irq_contexts, i); + kfree(gic); } + kfree(irqs); + return err; +} + +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec) +{ + struct gdma_context *gc =3D pci_get_drvdata(pdev); + struct gdma_irq_context *gic; + int *irqs, *start_irqs, irq; + unsigned int cpu; + int err, i; + + irqs =3D kmalloc_array(nvec, sizeof(int), GFP_KERNEL); + if (!irqs) + return -ENOMEM; + + start_irqs =3D irqs; =20 for (i =3D 0; i < nvec; i++) { - gic =3D &gc->irq_contexts[i]; + gic =3D kzalloc(sizeof(*gic), GFP_KERNEL); + if (!gic) { + err =3D -ENOMEM; + goto free_irq; + } + gic->handler =3D mana_gd_process_eq_events; INIT_LIST_HEAD(&gic->eq_list); spin_lock_init(&gic->lock); @@ -1418,69 +1493,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s", i - 1, pci_name(pdev)); =20 - irq =3D pci_irq_vector(pdev, i); - if (irq < 0) { - err =3D irq; - goto free_irq; + irqs[i] =3D pci_irq_vector(pdev, i); + if (irqs[i] < 0) { + err =3D irqs[i]; + goto free_current_gic; } =20 - if (!i) { - err =3D request_irq(irq, mana_gd_intr, 0, gic->name, gic); - if (err) - goto free_irq; - - /* If number of IRQ is one extra than number of online CPUs, - * then we need to assign IRQ0 (hwc irq) and IRQ1 to - * same CPU. - * Else we will use different CPUs for IRQ0 and IRQ1. 
- * Also we are using cpumask_local_spread instead of - * cpumask_first for the node, because the node can be - * mem only. - */ - if (start_irq_index) { - cpu =3D cpumask_local_spread(i, gc->numa_node); - irq_set_affinity_and_hint(irq, cpumask_of(cpu)); - } else { - irqs[start_irq_index] =3D irq; - } - } else { - irqs[i - start_irq_index] =3D irq; - err =3D request_irq(irqs[i - start_irq_index], mana_gd_intr, 0, - gic->name, gic); - if (err) - goto free_irq; - } + err =3D request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic); + if (err) + goto free_current_gic; + + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL); } =20 - err =3D irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false); - if (err) + /* If number of IRQ is one extra than number of online CPUs, + * then we need to assign IRQ0 (hwc irq) and IRQ1 to + * same CPU. + * Else we will use different CPUs for IRQ0 and IRQ1. + * Also we are using cpumask_local_spread instead of + * cpumask_first for the node, because the node can be + * mem only. + */ + cpus_read_lock(); + if (nvec > num_online_cpus()) { + cpu =3D cpumask_local_spread(0, gc->numa_node); + irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu)); + irqs++; + nvec -=3D 1; + } + + err =3D irq_setup(irqs, nvec, gc->numa_node, false); + if (err) { + cpus_read_unlock(); goto free_irq; + } =20 - gc->max_num_msix =3D nvec; - gc->num_msix_usable =3D nvec; cpus_read_unlock(); - kfree(irqs); + kfree(start_irqs); return 0; =20 +free_current_gic: + kfree(gic); free_irq: - for (j =3D i - 1; j >=3D 0; j--) { - irq =3D pci_irq_vector(pdev, j); - gic =3D &gc->irq_contexts[j]; + for (i -=3D 1; i >=3D 0; i--) { + irq =3D pci_irq_vector(pdev, i); + gic =3D xa_load(&gc->irq_contexts, i); + if (WARN_ON(!gic)) + continue; =20 irq_update_affinity_hint(irq, NULL); free_irq(irq, gic); + xa_erase(&gc->irq_contexts, i); + kfree(gic); } =20 - kfree(gc->irq_contexts); - gc->irq_contexts =3D NULL; -free_irq_array: - kfree(irqs); -free_irq_vector: - cpus_read_unlock(); - pci_free_irq_vectors(pdev); + kfree(start_irqs); return err; } =20 +static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev) +{ + struct gdma_context *gc =3D pci_get_drvdata(pdev); + unsigned int max_irqs, min_irqs; + int nvec, err; + + if (pci_msix_can_alloc_dyn(pdev)) { + max_irqs =3D 1; + min_irqs =3D 1; + } else { + /* Need 1 interrupt for HWC */ + max_irqs =3D min(num_online_cpus(), MANA_MAX_NUM_QUEUES) + 1; + min_irqs =3D 2; + } + + nvec =3D pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX); + if (nvec < 0) + return nvec; + + err =3D mana_gd_setup_irqs(pdev, nvec); + if (err) { + pci_free_irq_vectors(pdev); + return err; + } + + gc->num_msix_usable =3D nvec; + gc->max_num_msix =3D nvec; + + return 0; +} + +static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev) +{ + struct gdma_context *gc =3D pci_get_drvdata(pdev); + struct msi_map irq_map; + int max_irqs, i, err; + + if (!pci_msix_can_alloc_dyn(pdev)) + /* remain irqs are already allocated with HWC IRQ */ + return 0; + + /* allocate only remaining IRQs*/ + max_irqs =3D gc->num_msix_usable - 1; + + for (i =3D 1; i <=3D max_irqs; i++) { + irq_map =3D pci_msix_alloc_irq_at(pdev, i, NULL); + if (!irq_map.virq) { + err =3D irq_map.index; + /* caller will handle cleaning up all allocated + * irqs, after HWC is destroyed + */ + return err; + } + } + + err =3D mana_gd_setup_dyn_irqs(pdev, max_irqs); + if (err) + return err; + + gc->max_num_msix =3D gc->max_num_msix + max_irqs; + + return 0; +} + static void mana_gd_remove_irqs(struct pci_dev *pdev) { struct 
gdma_context *gc =3D pci_get_drvdata(pdev); @@ -1495,19 +1629,21 @@ static void mana_gd_remove_irqs(struct pci_dev *pde= v) if (irq < 0) continue; =20 - gic =3D &gc->irq_contexts[i]; + gic =3D xa_load(&gc->irq_contexts, i); + if (WARN_ON(!gic)) + continue; =20 /* Need to clear the hint before free_irq */ irq_update_affinity_hint(irq, NULL); free_irq(irq, gic); + xa_erase(&gc->irq_contexts, i); + kfree(gic); } =20 pci_free_irq_vectors(pdev); =20 gc->max_num_msix =3D 0; gc->num_msix_usable =3D 0; - kfree(gc->irq_contexts); - gc->irq_contexts =3D NULL; } =20 static int mana_gd_setup(struct pci_dev *pdev) @@ -1522,9 +1658,10 @@ static int mana_gd_setup(struct pci_dev *pdev) if (!gc->service_wq) return -ENOMEM; =20 - err =3D mana_gd_setup_irqs(pdev); + err =3D mana_gd_setup_hwc_irqs(pdev); if (err) { - dev_err(gc->dev, "Failed to setup IRQs: %d\n", err); + dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n", + err); goto free_workqueue; } =20 @@ -1540,6 +1677,12 @@ static int mana_gd_setup(struct pci_dev *pdev) if (err) goto destroy_hwc; =20 + err =3D mana_gd_setup_remaining_irqs(pdev); + if (err) { + dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err); + goto destroy_hwc; + } + err =3D mana_gd_detect_devices(pdev); if (err) goto destroy_hwc; @@ -1620,6 +1763,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const = struct pci_device_id *ent) gc->is_pf =3D mana_is_pf(pdev->device); gc->bar0_va =3D bar0_va; gc->dev =3D &pdev->dev; + xa_init(&gc->irq_contexts); =20 if (gc->is_pf) gc->mana_pci_debugfs =3D debugfs_create_dir("0", mana_debugfs_root); @@ -1654,6 +1798,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const = struct pci_device_id *ent) */ debugfs_remove_recursive(gc->mana_pci_debugfs); gc->mana_pci_debugfs =3D NULL; + xa_destroy(&gc->irq_contexts); pci_iounmap(pdev, bar0_va); free_gc: pci_set_drvdata(pdev, NULL); @@ -1679,6 +1824,8 @@ static void mana_gd_remove(struct pci_dev *pdev) =20 gc->mana_pci_debugfs =3D NULL; =20 + xa_destroy(&gc->irq_contexts); + pci_iounmap(pdev, gc->bar0_va); =20 vfree(gc); diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h index 3ce56a816425..87162ba96d91 100644 --- a/include/net/mana/gdma.h +++ b/include/net/mana/gdma.h @@ -388,7 +388,7 @@ struct gdma_context { unsigned int max_num_queues; unsigned int max_num_msix; unsigned int num_msix_usable; - struct gdma_irq_context *irq_contexts; + struct xarray irq_contexts; =20 /* L2 MTU */ u16 adapter_mtu; @@ -578,12 +578,16 @@ enum { /* Driver can handle holes (zeros) in the device list */ #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11) =20 +/* Driver supports dynamic MSI-X vector allocation */ +#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13) + #define GDMA_DRV_CAP_FLAGS1 \ (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \ GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \ GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \ GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \ - GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP) + GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \ + GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT) =20 #define GDMA_DRV_CAP_FLAGS2 0 =20 --=20 2.34.1
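
Since the series replaces the fixed gc->irq_contexts array with an xarray so
that per-IRQ contexts can be added as vectors are allocated dynamically, the
context lifecycle reduces to the following xarray calls. This is only a
summary of what the patch above does, shown out of context; gc, gic and i
refer to the driver's own variables.

	xa_init(&gc->irq_contexts);			/* mana_gd_probe() */

	gic = kzalloc(sizeof(*gic), GFP_KERNEL);	/* per allocated vector */
	xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);

	gic = xa_load(&gc->irq_contexts, i);		/* lookup by MSI-X index */

	xa_erase(&gc->irq_contexts, i);			/* per-vector teardown */
	kfree(gic);

	xa_destroy(&gc->irq_contexts);			/* mana_gd_remove() */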