From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, Bhargava Marreddy,
	Vikas Gupta, Rajashekar Hudumula
Subject: [v6, net-next 03/10] bng_en: Add initial support for CP and NQ rings
Date: Fri, 5 Sep 2025 22:46:45 +0000
Message-ID: <20250905224652.48692-4-bhargava.marreddy@broadcom.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20250905224652.48692-1-bhargava.marreddy@broadcom.com>
References: <20250905224652.48692-1-bhargava.marreddy@broadcom.com>

Allocate CP and NQ related data structures and add support to
associate NQ and CQ rings. Also, add the association of NQ, NAPI,
and interrupts.

Signed-off-by: Bhargava Marreddy
Reviewed-by: Vikas Gupta
Reviewed-by: Rajashekar Hudumula
---
 drivers/net/ethernet/broadcom/bnge/bnge.h        |   1 +
 .../net/ethernet/broadcom/bnge/bnge_netdev.c     | 439 ++++++++++++++++++
 .../net/ethernet/broadcom/bnge/bnge_netdev.h     |  12 +
 .../net/ethernet/broadcom/bnge/bnge_resc.c       |   2 +-
 4 files changed, 453 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge.h b/drivers/net/ethernet/broadcom/bnge/bnge.h
index 03e55b931f7..c536c0cc66e 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge.h
@@ -215,5 +215,6 @@ static inline bool bnge_is_agg_reqd(struct bnge_dev *bd)
 }
 
 bool bnge_aux_registered(struct bnge_dev *bd);
+u16 bnge_aux_get_msix(struct bnge_dev *bd);
 
 #endif /* _BNGE_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index b537c66381d..207cbaba08a 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -27,6 +27,237 @@
 #define BNGE_RING_TO_TC(bd, tx)		\
 	((tx) / (bd)->tx_nr_rings_per_tc)
 
+#define BNGE_TC_TO_RING_BASE(bd, tc)	\
+	((tc) * (bd)->tx_nr_rings_per_tc)
+
+static void bnge_free_nq_desc_arr(struct bnge_nq_ring_info *nqr)
+{
+	struct bnge_ring_struct *ring = &nqr->ring_struct;
+
+	kfree(nqr->desc_ring);
+	nqr->desc_ring = NULL;
+	ring->ring_mem.pg_arr = NULL;
+	kfree(nqr->desc_mapping);
+	nqr->desc_mapping = NULL;
+	ring->ring_mem.dma_arr = NULL;
+}
+
+static void bnge_free_cp_desc_arr(struct bnge_cp_ring_info *cpr)
+{
+	struct bnge_ring_struct *ring = &cpr->ring_struct;
+
+	kfree(cpr->desc_ring);
+	cpr->desc_ring = NULL;
+	ring->ring_mem.pg_arr = NULL;
+	kfree(cpr->desc_mapping);
+	cpr->desc_mapping = NULL;
+	ring->ring_mem.dma_arr = NULL;
+}
+
+static int bnge_alloc_nq_desc_arr(struct bnge_nq_ring_info *nqr, int n)
+{
+	nqr->desc_ring = kcalloc(n, sizeof(*nqr->desc_ring), GFP_KERNEL);
+	if (!nqr->desc_ring)
+		return -ENOMEM;
+
+	nqr->desc_mapping = kcalloc(n, sizeof(*nqr->desc_mapping), GFP_KERNEL);
+	if (!nqr->desc_mapping)
+		goto err_free_desc_ring;
+	return 0;
+
+err_free_desc_ring:
+	kfree(nqr->desc_ring);
+	nqr->desc_ring = NULL;
+	return -ENOMEM;
+}
+
+static int bnge_alloc_cp_desc_arr(struct bnge_cp_ring_info *cpr, int n)
+{
+	cpr->desc_ring = kcalloc(n, sizeof(*cpr->desc_ring), GFP_KERNEL);
+	if (!cpr->desc_ring)
+		return -ENOMEM;
+
+	cpr->desc_mapping = kcalloc(n, sizeof(*cpr->desc_mapping), GFP_KERNEL);
+	if (!cpr->desc_mapping)
+		goto err_free_desc_ring;
+	return 0;
+
+err_free_desc_ring:
+	kfree(cpr->desc_ring);
+	cpr->desc_ring = NULL;
+	return -ENOMEM;
+}
+
+static void bnge_free_nq_arrays(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		bnge_free_nq_desc_arr(&bnapi->nq_ring);
+	}
+}
+
+static int bnge_alloc_nq_arrays(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i, rc;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		rc = bnge_alloc_nq_desc_arr(&bnapi->nq_ring, bn->cp_nr_pages);
+		if (rc)
+			goto err_free_nq_arrays;
+	}
+	return 0;
+
+err_free_nq_arrays:
+	bnge_free_nq_arrays(bn);
+	return rc;
+}
+
+static void bnge_free_nq_tree(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	if (!bn->bnapi)
+		return;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+		struct bnge_nq_ring_info *nqr;
+		struct bnge_ring_struct *ring;
+		int j;
+
+		nqr = &bnapi->nq_ring;
+		ring = &nqr->ring_struct;
+
+		bnge_free_ring(bd, &ring->ring_mem);
+
+		if (!nqr->cp_ring_arr)
+			continue;
+
+		for (j = 0; j < nqr->cp_ring_count; j++) {
+			struct bnge_cp_ring_info *cpr = &nqr->cp_ring_arr[j];
+
+			ring = &cpr->ring_struct;
+			bnge_free_ring(bd, &ring->ring_mem);
+			bnge_free_cp_desc_arr(cpr);
+		}
+		kfree(nqr->cp_ring_arr);
+		nqr->cp_ring_arr = NULL;
+		nqr->cp_ring_count = 0;
+	}
+}
+
+static int alloc_one_cp_ring(struct bnge_net *bn,
+			     struct bnge_cp_ring_info *cpr)
+{
+	struct bnge_ring_mem_info *rmem;
+	struct bnge_ring_struct *ring;
+	struct bnge_dev *bd = bn->bd;
+	int rc;
+
+	rc = bnge_alloc_cp_desc_arr(cpr, bn->cp_nr_pages);
+	if (rc)
+		return -ENOMEM;
+	ring = &cpr->ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->nr_pages = bn->cp_nr_pages;
+	rmem->page_size = HW_CMPD_RING_SIZE;
+	rmem->pg_arr = (void **)cpr->desc_ring;
+	rmem->dma_arr = cpr->desc_mapping;
+	rmem->flags = BNGE_RMEM_RING_PTE_FLAG;
+	rc = bnge_alloc_ring(bd, rmem);
+	if (rc)
+		goto err_free_cp_desc_arr;
+	return rc;
+
+err_free_cp_desc_arr:
+	bnge_free_cp_desc_arr(cpr);
+	return rc;
+}
+
+static int bnge_alloc_nq_tree(struct bnge_net *bn)
+{
+	int i, j, ulp_msix, rc = -ENOMEM;
+	struct bnge_dev *bd = bn->bd;
+	int tcs = 1;
+
+	ulp_msix = bnge_aux_get_msix(bd);
+	for (i = 0, j = 0; i < bd->nq_nr_rings; i++) {
+		bool sh = !!(bd->flags & BNGE_EN_SHARED_CHNL);
+		struct bnge_napi *bnapi = bn->bnapi[i];
+		struct bnge_nq_ring_info *nqr;
+		struct bnge_cp_ring_info *cpr;
+		struct bnge_ring_struct *ring;
+		int cp_count = 0, k;
+		int rx = 0, tx = 0;
+
+		if (!bnapi)
+			continue;
+
+		nqr = &bnapi->nq_ring;
+		nqr->bnapi = bnapi;
+		ring = &nqr->ring_struct;
+
+		rc = bnge_alloc_ring(bd, &ring->ring_mem);
+		if (rc)
+			goto err_free_nq_tree;
+
+		ring->map_idx = ulp_msix + i;
+
+		if (i < bd->rx_nr_rings) {
+			cp_count++;
+			rx = 1;
+		}
+
+		if ((sh && i < bd->tx_nr_rings) ||
+		    (!sh && i >= bd->rx_nr_rings)) {
+			cp_count += tcs;
+			tx = 1;
+		}
+
+		nqr->cp_ring_arr = kcalloc(cp_count, sizeof(*cpr),
+					   GFP_KERNEL);
+		if (!nqr->cp_ring_arr)
+			goto err_free_nq_tree;
+
+		nqr->cp_ring_count = cp_count;
+
+		for (k = 0; k < cp_count; k++) {
+			cpr = &nqr->cp_ring_arr[k];
+			rc = alloc_one_cp_ring(bn, cpr);
+			if (rc)
+				goto err_free_nq_tree;
+
+			cpr->bnapi = bnapi;
+			cpr->cp_idx = k;
+			if (!k && rx) {
+				bn->rx_ring[i].rx_cpr = cpr;
+				cpr->cp_ring_type = BNGE_NQ_HDL_TYPE_RX;
+			} else {
+				int n, tc = k - rx;
+
+				n = BNGE_TC_TO_RING_BASE(bd, tc) + j;
+				bn->tx_ring[n].tx_cpr = cpr;
+				cpr->cp_ring_type = BNGE_NQ_HDL_TYPE_TX;
+			}
+		}
+		if (tx)
+			j++;
+	}
+	return 0;
+
+err_free_nq_tree:
+	bnge_free_nq_tree(bn);
+	return rc;
+}
+
 static bool bnge_separate_head_pool(struct bnge_rx_ring_info *rxr)
 {
 	return rxr->need_head_pool || PAGE_SIZE > BNGE_RX_PAGE_SIZE;
@@ -221,10 +452,19 @@ static int bnge_alloc_tx_rings(struct bnge_net *bn)
 	return rc;
 }
 
+static void bnge_free_ring_grps(struct bnge_net *bn)
+{
+	kfree(bn->grp_info);
+	bn->grp_info = NULL;
+}
+
 static void bnge_free_core(struct bnge_net *bn)
 {
 	bnge_free_tx_rings(bn);
 	bnge_free_rx_rings(bn);
+	bnge_free_nq_tree(bn);
+	bnge_free_nq_arrays(bn);
+	bnge_free_ring_grps(bn);
 	kfree(bn->tx_ring_map);
 	bn->tx_ring_map = NULL;
 	kfree(bn->tx_ring);
@@ -311,6 +551,10 @@ static int bnge_alloc_core(struct bnge_net *bn)
 		txr->bnapi = bnapi2;
 	}
 
+	rc = bnge_alloc_nq_arrays(bn);
+	if (rc)
+		goto err_free_core;
+
 	bnge_init_ring_struct(bn);
 
 	rc = bnge_alloc_rx_rings(bn);
@@ -318,6 +562,10 @@ static int bnge_alloc_core(struct bnge_net *bn)
 		goto err_free_core;
 
 	rc = bnge_alloc_tx_rings(bn);
+	if (rc)
+		goto err_free_core;
+
+	rc = bnge_alloc_nq_tree(bn);
 	if (rc)
 		goto err_free_core;
 	return 0;
@@ -327,6 +575,181 @@ static int bnge_alloc_core(struct bnge_net *bn)
 	return rc;
 }
 
+static int bnge_cp_num_to_irq_num(struct bnge_net *bn, int n)
+{
+	struct bnge_napi *bnapi = bn->bnapi[n];
+	struct bnge_nq_ring_info *nqr;
+
+	nqr = &bnapi->nq_ring;
+
+	return nqr->ring_struct.map_idx;
+}
+
+static irqreturn_t bnge_msix(int irq, void *dev_instance)
+{
+	/* NAPI scheduling to be added in a future patch */
+	return IRQ_HANDLED;
+}
+
+static void bnge_setup_msix(struct bnge_net *bn)
+{
+	struct net_device *dev = bn->netdev;
+	struct bnge_dev *bd = bn->bd;
+	int len, i;
+
+	len = sizeof(bd->irq_tbl[0].name);
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+		char *attr;
+
+		if (bd->flags & BNGE_EN_SHARED_CHNL)
+			attr = "TxRx";
+		else if (i < bd->rx_nr_rings)
+			attr = "rx";
+		else
+			attr = "tx";
+
+		snprintf(bd->irq_tbl[map_idx].name, len, "%s-%s-%d", dev->name,
+			 attr, i);
+		bd->irq_tbl[map_idx].handler = bnge_msix;
+	}
+}
+
+static int bnge_setup_interrupts(struct bnge_net *bn)
+{
+	struct net_device *dev = bn->netdev;
+	struct bnge_dev *bd = bn->bd;
+	int rc;
+
+	if (!bd->irq_tbl) {
+		if (bnge_alloc_irqs(bd))
+			return -ENODEV;
+	}
+
+	bnge_setup_msix(bn);
+	rc = netif_set_real_num_queues(dev, bd->tx_nr_rings, bd->rx_nr_rings);
+	if (rc)
+		goto err_free_irqs;
+	return rc;
+
+err_free_irqs:
+	bnge_free_irqs(bd);
+	return rc;
+}
+
+static int bnge_request_irq(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i, rc;
+
+	rc = bnge_setup_interrupts(bn);
+	if (rc) {
+		netdev_err(bn->netdev, "bnge_setup_interrupts err: %d\n", rc);
+		return rc;
+	}
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+		struct bnge_irq *irq = &bd->irq_tbl[map_idx];
+
+		rc = request_irq(irq->vector, irq->handler, 0, irq->name,
+				 bn->bnapi[i]);
+		if (rc)
+			break;
+
+		netif_napi_set_irq_locked(&bn->bnapi[i]->napi, irq->vector);
+		irq->requested = 1;
+
+		if (zalloc_cpumask_var(&irq->cpu_mask, GFP_KERNEL)) {
+			int numa_node = dev_to_node(&bd->pdev->dev);
+
+			irq->have_cpumask = 1;
+			cpumask_set_cpu(cpumask_local_spread(i, numa_node),
+					irq->cpu_mask);
+			rc = irq_set_affinity_hint(irq->vector, irq->cpu_mask);
+			if (rc) {
+				netdev_warn(bn->netdev,
+					    "Set affinity failed, IRQ = %d\n",
+					    irq->vector);
+				break;
+			}
+		}
+	}
+
+	return rc;
+}
+
+static int bnge_napi_poll(struct napi_struct *napi, int budget)
+{
+	int work_done = 0;
+
+	/* defer NAPI implementation to next patch series */
+	napi_complete_done(napi, work_done);
+
+	return work_done;
+}
+
+static void bnge_init_napi(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	struct bnge_napi *bnapi;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		bnapi = bn->bnapi[i];
+		netif_napi_add_config_locked(bn->netdev, &bnapi->napi,
+					     bnge_napi_poll, bnapi->index);
+	}
+}
+
+static void bnge_del_napi(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	if (!bn->bnapi)
+		return;
+
+	for (i = 0; i < bd->rx_nr_rings; i++)
+		netif_queue_set_napi(bn->netdev, i, NETDEV_QUEUE_TYPE_RX, NULL);
+	for (i = 0; i < bd->tx_nr_rings; i++)
+		netif_queue_set_napi(bn->netdev, i, NETDEV_QUEUE_TYPE_TX, NULL);
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		__netif_napi_del_locked(&bnapi->napi);
+	}
+
+	/* Wait for RCU grace period after removing NAPI instances */
+	synchronize_net();
+}
+
+static void bnge_free_irq(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	struct bnge_irq *irq;
+	int i;
+
+	if (!bd->irq_tbl || !bn->bnapi)
+		return;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+
+		irq = &bd->irq_tbl[map_idx];
+		if (irq->requested) {
+			if (irq->have_cpumask) {
+				irq_set_affinity_hint(irq->vector, NULL);
+				free_cpumask_var(irq->cpu_mask);
+				irq->have_cpumask = 0;
+			}
+			free_irq(irq->vector, bn->bnapi[i]);
+		}
+
+		irq->requested = 0;
+	}
+}
+
 static int bnge_open_core(struct bnge_net *bn)
 {
 	struct bnge_dev *bd = bn->bd;
@@ -346,8 +769,20 @@ static int bnge_open_core(struct bnge_net *bn)
 		return rc;
 	}
 
+	bnge_init_napi(bn);
+	rc = bnge_request_irq(bn);
+	if (rc) {
+		netdev_err(bn->netdev, "bnge_request_irq err: %d\n", rc);
+		goto err_del_napi;
+	}
+
 	set_bit(BNGE_STATE_OPEN, &bd->state);
 	return 0;
+
+err_del_napi:
+	bnge_del_napi(bn);
+	bnge_free_core(bn);
+	return rc;
 }
 
 static netdev_tx_t bnge_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -374,6 +809,9 @@ static void bnge_close_core(struct bnge_net *bn)
 	struct bnge_dev *bd = bn->bd;
 
 	clear_bit(BNGE_STATE_OPEN, &bd->state);
+	bnge_free_irq(bn);
+	bnge_del_napi(bn);
+
 	bnge_free_core(bn);
 }
 
@@ -596,6 +1034,7 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
 	bnge_init_l2_fltr_tbl(bn);
 	bnge_init_mac_addr(bd);
 
+	netdev->request_ops_lock = true;
 	rc = register_netdev(netdev);
 	if (rc) {
 		dev_err(bd->dev, "Register netdev failed rc: %d\n", rc);
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 92bae665f59..8041951da18 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -133,6 +133,9 @@ enum {
 
 #define BNGE_NET_EN_TPA	(BNGE_NET_EN_GRO | BNGE_NET_EN_LRO)
 
+#define BNGE_NQ_HDL_TYPE_RX	0x00
+#define BNGE_NQ_HDL_TYPE_TX	0x01
+
 struct bnge_net {
 	struct bnge_dev		*bd;
 	struct net_device	*netdev;
@@ -172,6 +175,10 @@ struct bnge_net {
 
 	u16			*tx_ring_map;
 	enum dma_data_direction	rx_dir;
+
+	/* grp_info indexed by napi/nq index */
+	struct bnge_ring_grp_info	*grp_info;
+	int			total_irqs;
 };
 
 #define BNGE_DEFAULT_RX_RING_SIZE	511
@@ -223,6 +230,8 @@ struct bnge_cp_ring_info {
 	dma_addr_t		*desc_mapping;
 	struct tx_cmp		**desc_ring;
 	struct bnge_ring_struct	ring_struct;
+	u8			cp_ring_type;
+	u8			cp_idx;
 };
 
 struct bnge_nq_ring_info {
@@ -230,6 +239,9 @@ struct bnge_nq_ring_info {
 	dma_addr_t		*desc_mapping;
 	struct nqe_cn		**desc_ring;
 	struct bnge_ring_struct	ring_struct;
+
+	int			cp_ring_count;
+	struct bnge_cp_ring_info	*cp_ring_arr;
 };
 
 struct bnge_rx_ring_info {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_resc.c b/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
index c79a3607a1b..5597af1b3b7 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
@@ -46,7 +46,7 @@ static int bnge_aux_get_dflt_msix(struct bnge_dev *bd)
 	return min_t(int, roce_msix, num_online_cpus() + 1);
 }
 
-static u16 bnge_aux_get_msix(struct bnge_dev *bd)
+u16 bnge_aux_get_msix(struct bnge_dev *bd)
 {
 	if (bnge_is_roce_en(bd))
 		return bd->aux_num_msix;
-- 
2.47.3