From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	Bhargava Marreddy, Rajashekar Hudumula
Subject: [v8, net-next 03/10] bng_en: Add initial support for CP and NQ rings
Date: Fri, 19 Sep 2025 23:17:34 +0530
Message-ID: <20250919174742.24969-4-bhargava.marreddy@broadcom.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20250919174742.24969-1-bhargava.marreddy@broadcom.com>
References: <20250919174742.24969-1-bhargava.marreddy@broadcom.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Allocate CP and NQ related data structures and add support to
associate NQ and CQ rings. Also, add the association of NQ, NAPI,
and interrupts.
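
The trickiest part of the association is how many completion (CP) rings
hang off each NQ. The standalone sketch below (plain user-space C, not
driver code; the ring counts, traffic-class count, and shared-channel
setting are made-up examples) mirrors the cp_count/RX/TX selection logic
of bnge_alloc_nq_tree() in this patch:

/*
 * Illustrative model of the NQ -> CP ring fan-out; all values are
 * assumptions chosen only to show the rule, not taken from hardware.
 */
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	int rx_nr_rings = 4, tx_nr_rings = 4, nq_nr_rings = 4;
	int tcs = 1;		/* one traffic class, as assumed in the patch */
	bool sh = true;		/* shared (TxRx) channels */

	for (int i = 0; i < nq_nr_rings; i++) {
		int cp_count = 0, rx = 0, tx = 0;

		if (i < rx_nr_rings) {		/* first CP ring serves RX */
			cp_count++;
			rx = 1;
		}
		if ((sh && i < tx_nr_rings) ||
		    (!sh && i >= rx_nr_rings)) {	/* one CP ring per TC for TX */
			cp_count += tcs;
			tx = 1;
		}
		printf("NQ %d: %d CP ring(s)%s%s\n", i, cp_count,
		       rx ? " [RX]" : "", tx ? " [TX]" : "");
	}
	return 0;
}

With shared channels and equal RX/TX ring counts, every NQ carries one RX
and one TX CP ring; without sharing, the first rx_nr_rings NQs carry only
an RX CP ring and the remaining NQs carry only TX CP rings, one per
traffic class.
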
Signed-off-by: Bhargava Marreddy
Reviewed-by: Vikas Gupta
Reviewed-by: Rajashekar Hudumula
---
 drivers/net/ethernet/broadcom/bnge/bnge.h     |   1 +
 .../net/ethernet/broadcom/bnge/bnge_netdev.c  | 413 ++++++++++++++++++
 .../net/ethernet/broadcom/bnge/bnge_netdev.h  |  10 +
 .../net/ethernet/broadcom/bnge/bnge_resc.c    |   2 +-
 4 files changed, 425 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge.h b/drivers/net/ethernet/broadcom/bnge/bnge.h
index 03e55b931f7..c536c0cc66e 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge.h
@@ -215,5 +215,6 @@ static inline bool bnge_is_agg_reqd(struct bnge_dev *bd)
 }
 
 bool bnge_aux_registered(struct bnge_dev *bd);
+u16 bnge_aux_get_msix(struct bnge_dev *bd);
 
 #endif /* _BNGE_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index c25a793b8ae..67da7001427 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -27,6 +27,233 @@
 #define BNGE_RING_TO_TC(bd, tx)	\
 	((tx) / (bd)->tx_nr_rings_per_tc)
 
+#define BNGE_TC_TO_RING_BASE(bd, tc)	\
+	((tc) * (bd)->tx_nr_rings_per_tc)
+
+static void bnge_free_nq_desc_arr(struct bnge_nq_ring_info *nqr)
+{
+	struct bnge_ring_struct *ring = &nqr->ring_struct;
+
+	kfree(nqr->desc_ring);
+	nqr->desc_ring = NULL;
+	ring->ring_mem.pg_arr = NULL;
+	kfree(nqr->desc_mapping);
+	nqr->desc_mapping = NULL;
+	ring->ring_mem.dma_arr = NULL;
+}
+
+static void bnge_free_cp_desc_arr(struct bnge_cp_ring_info *cpr)
+{
+	struct bnge_ring_struct *ring = &cpr->ring_struct;
+
+	kfree(cpr->desc_ring);
+	cpr->desc_ring = NULL;
+	ring->ring_mem.pg_arr = NULL;
+	kfree(cpr->desc_mapping);
+	cpr->desc_mapping = NULL;
+	ring->ring_mem.dma_arr = NULL;
+}
+
+static int bnge_alloc_nq_desc_arr(struct bnge_nq_ring_info *nqr, int n)
+{
+	nqr->desc_ring = kcalloc(n, sizeof(*nqr->desc_ring), GFP_KERNEL);
+	if (!nqr->desc_ring)
+		return -ENOMEM;
+
+	nqr->desc_mapping = kcalloc(n, sizeof(*nqr->desc_mapping), GFP_KERNEL);
+	if (!nqr->desc_mapping)
+		goto err_free_desc_ring;
+	return 0;
+
+err_free_desc_ring:
+	kfree(nqr->desc_ring);
+	nqr->desc_ring = NULL;
+	return -ENOMEM;
+}
+
+static int bnge_alloc_cp_desc_arr(struct bnge_cp_ring_info *cpr, int n)
+{
+	cpr->desc_ring = kcalloc(n, sizeof(*cpr->desc_ring), GFP_KERNEL);
+	if (!cpr->desc_ring)
+		return -ENOMEM;
+
+	cpr->desc_mapping = kcalloc(n, sizeof(*cpr->desc_mapping), GFP_KERNEL);
+	if (!cpr->desc_mapping)
+		goto err_free_desc_ring;
+	return 0;
+
+err_free_desc_ring:
+	kfree(cpr->desc_ring);
+	cpr->desc_ring = NULL;
+	return -ENOMEM;
+}
+
+static void bnge_free_nq_arrays(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		bnge_free_nq_desc_arr(&bnapi->nq_ring);
+	}
+}
+
+static int bnge_alloc_nq_arrays(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i, rc;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		rc = bnge_alloc_nq_desc_arr(&bnapi->nq_ring, bn->cp_nr_pages);
+		if (rc)
+			goto err_free_nq_arrays;
+	}
+	return 0;
+
+err_free_nq_arrays:
+	bnge_free_nq_arrays(bn);
+	return rc;
+}
+
+static void bnge_free_nq_tree(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+		struct bnge_nq_ring_info *nqr;
+		struct bnge_ring_struct *ring;
+		int j;
+
+		nqr = &bnapi->nq_ring;
+		ring = &nqr->ring_struct;
+
+		bnge_free_ring(bd, &ring->ring_mem);
+
+		if (!nqr->cp_ring_arr)
+			continue;
+
+		for (j = 0; j < nqr->cp_ring_count; j++) {
+			struct bnge_cp_ring_info *cpr = &nqr->cp_ring_arr[j];
+
+			ring = &cpr->ring_struct;
+			bnge_free_ring(bd, &ring->ring_mem);
+			bnge_free_cp_desc_arr(cpr);
+		}
+		kfree(nqr->cp_ring_arr);
+		nqr->cp_ring_arr = NULL;
+		nqr->cp_ring_count = 0;
+	}
+}
+
+static int alloc_one_cp_ring(struct bnge_net *bn,
+			     struct bnge_cp_ring_info *cpr)
+{
+	struct bnge_ring_mem_info *rmem;
+	struct bnge_ring_struct *ring;
+	struct bnge_dev *bd = bn->bd;
+	int rc;
+
+	rc = bnge_alloc_cp_desc_arr(cpr, bn->cp_nr_pages);
+	if (rc)
+		return -ENOMEM;
+	ring = &cpr->ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->nr_pages = bn->cp_nr_pages;
+	rmem->page_size = HW_CMPD_RING_SIZE;
+	rmem->pg_arr = (void **)cpr->desc_ring;
+	rmem->dma_arr = cpr->desc_mapping;
+	rmem->flags = BNGE_RMEM_RING_PTE_FLAG;
+	rc = bnge_alloc_ring(bd, rmem);
+	if (rc)
+		goto err_free_cp_desc_arr;
+	return rc;
+
+err_free_cp_desc_arr:
+	bnge_free_cp_desc_arr(cpr);
+	return rc;
+}
+
+static int bnge_alloc_nq_tree(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i, j, ulp_msix, rc;
+	int tcs = 1;
+
+	ulp_msix = bnge_aux_get_msix(bd);
+	for (i = 0, j = 0; i < bd->nq_nr_rings; i++) {
+		bool sh = !!(bd->flags & BNGE_EN_SHARED_CHNL);
+		struct bnge_napi *bnapi = bn->bnapi[i];
+		struct bnge_nq_ring_info *nqr;
+		struct bnge_cp_ring_info *cpr;
+		struct bnge_ring_struct *ring;
+		int cp_count = 0, k;
+		int rx = 0, tx = 0;
+
+		nqr = &bnapi->nq_ring;
+		nqr->bnapi = bnapi;
+		ring = &nqr->ring_struct;
+
+		rc = bnge_alloc_ring(bd, &ring->ring_mem);
+		if (rc)
+			goto err_free_nq_tree;
+
+		ring->map_idx = ulp_msix + i;
+
+		if (i < bd->rx_nr_rings) {
+			cp_count++;
+			rx = 1;
+		}
+
+		if ((sh && i < bd->tx_nr_rings) ||
+		    (!sh && i >= bd->rx_nr_rings)) {
+			cp_count += tcs;
+			tx = 1;
+		}
+
+		nqr->cp_ring_arr = kcalloc(cp_count, sizeof(*cpr),
+					   GFP_KERNEL);
+		if (!nqr->cp_ring_arr) {
+			rc = -ENOMEM;
+			goto err_free_nq_tree;
+		}
+
+		nqr->cp_ring_count = cp_count;
+
+		for (k = 0; k < cp_count; k++) {
+			cpr = &nqr->cp_ring_arr[k];
+			rc = alloc_one_cp_ring(bn, cpr);
+			if (rc)
+				goto err_free_nq_tree;
+
+			cpr->bnapi = bnapi;
+			cpr->cp_idx = k;
+			if (!k && rx) {
+				bn->rx_ring[i].rx_cpr = cpr;
+				cpr->cp_ring_type = BNGE_NQ_HDL_TYPE_RX;
+			} else {
+				int n, tc = k - rx;
+
+				n = BNGE_TC_TO_RING_BASE(bd, tc) + j;
+				bn->tx_ring[n].tx_cpr = cpr;
+				cpr->cp_ring_type = BNGE_NQ_HDL_TYPE_TX;
+			}
+		}
+		if (tx)
+			j++;
+	}
+	return 0;
+
+err_free_nq_tree:
+	bnge_free_nq_tree(bn);
+	return rc;
+}
+
 static bool bnge_separate_head_pool(struct bnge_rx_ring_info *rxr)
 {
 	return rxr->need_head_pool || PAGE_SIZE > BNGE_RX_PAGE_SIZE;
@@ -216,6 +443,8 @@ static void bnge_free_core(struct bnge_net *bn)
 {
 	bnge_free_tx_rings(bn);
 	bnge_free_rx_rings(bn);
+	bnge_free_nq_tree(bn);
+	bnge_free_nq_arrays(bn);
 	kfree(bn->tx_ring_map);
 	bn->tx_ring_map = NULL;
 	kfree(bn->tx_ring);
@@ -302,6 +531,10 @@ static int bnge_alloc_core(struct bnge_net *bn)
 		txr->bnapi = bnapi2;
 	}
 
+	rc = bnge_alloc_nq_arrays(bn);
+	if (rc)
+		goto err_free_core;
+
 	bnge_init_ring_struct(bn);
 
 	rc = bnge_alloc_rx_rings(bn);
@@ -309,6 +542,10 @@ static int bnge_alloc_core(struct bnge_net *bn)
 		goto err_free_core;
 
 	rc = bnge_alloc_tx_rings(bn);
+	if (rc)
+		goto err_free_core;
+
+	rc = bnge_alloc_nq_tree(bn);
 	if (rc)
 		goto err_free_core;
 	return 0;
@@ -318,6 +555,166 @@ static int bnge_alloc_core(struct bnge_net *bn)
 	return rc;
 }
 
+static int bnge_cp_num_to_irq_num(struct bnge_net *bn, int n)
+{
+	struct bnge_napi *bnapi = bn->bnapi[n];
+	struct bnge_nq_ring_info *nqr;
+
+	nqr = &bnapi->nq_ring;
+
+	return nqr->ring_struct.map_idx;
+}
+
+static irqreturn_t bnge_msix(int irq, void *dev_instance)
+{
+	/* NAPI scheduling to be added in a future patch */
+	return IRQ_HANDLED;
+}
+
+static void bnge_setup_msix(struct bnge_net *bn)
+{
+	struct net_device *dev = bn->netdev;
+	struct bnge_dev *bd = bn->bd;
+	int len, i;
+
+	len = sizeof(bd->irq_tbl[0].name);
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+		char *attr;
+
+		if (bd->flags & BNGE_EN_SHARED_CHNL)
+			attr = "TxRx";
+		else if (i < bd->rx_nr_rings)
+			attr = "rx";
+		else
+			attr = "tx";
+
+		snprintf(bd->irq_tbl[map_idx].name, len, "%s-%s-%d", dev->name,
+			 attr, i);
+		bd->irq_tbl[map_idx].handler = bnge_msix;
+	}
+}
+
+static int bnge_setup_interrupts(struct bnge_net *bn)
+{
+	struct net_device *dev = bn->netdev;
+	struct bnge_dev *bd = bn->bd;
+
+	bnge_setup_msix(bn);
+
+	return netif_set_real_num_queues(dev, bd->tx_nr_rings, bd->rx_nr_rings);
+}
+
+static void bnge_free_irq(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	struct bnge_irq *irq;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+
+		irq = &bd->irq_tbl[map_idx];
+		if (irq->requested) {
+			if (irq->have_cpumask) {
+				irq_set_affinity_hint(irq->vector, NULL);
+				free_cpumask_var(irq->cpu_mask);
+				irq->have_cpumask = 0;
+			}
+			free_irq(irq->vector, bn->bnapi[i]);
+		}
+
+		irq->requested = 0;
+	}
+}
+
+static int bnge_request_irq(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i, rc;
+
+	rc = bnge_setup_interrupts(bn);
+	if (rc) {
+		netdev_err(bn->netdev, "bnge_setup_interrupts err: %d\n", rc);
+		return rc;
+	}
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		int map_idx = bnge_cp_num_to_irq_num(bn, i);
+		struct bnge_irq *irq = &bd->irq_tbl[map_idx];
+
+		rc = request_irq(irq->vector, irq->handler, 0, irq->name,
+				 bn->bnapi[i]);
+		if (rc)
+			goto err_free_irq;
+
+		netif_napi_set_irq_locked(&bn->bnapi[i]->napi, irq->vector);
+		irq->requested = 1;
+
+		if (zalloc_cpumask_var(&irq->cpu_mask, GFP_KERNEL)) {
+			int numa_node = dev_to_node(&bd->pdev->dev);
+
+			irq->have_cpumask = 1;
+			cpumask_set_cpu(cpumask_local_spread(i, numa_node),
+					irq->cpu_mask);
+			rc = irq_set_affinity_hint(irq->vector, irq->cpu_mask);
+			if (rc) {
+				netdev_warn(bn->netdev,
+					    "Set affinity failed, IRQ = %d\n",
+					    irq->vector);
+				goto err_free_irq;
+			}
+		}
+	}
+	return 0;
+
+err_free_irq:
+	bnge_free_irq(bn);
+	return rc;
+}
+
+static int bnge_napi_poll(struct napi_struct *napi, int budget)
+{
+	int work_done = 0;
+
+	/* defer NAPI implementation to next patch series */
+	napi_complete_done(napi, work_done);
+
+	return work_done;
+}
+
+static void bnge_init_napi(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	struct bnge_napi *bnapi;
+	int i;
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		bnapi = bn->bnapi[i];
+		netif_napi_add_config_locked(bn->netdev, &bnapi->napi,
+					     bnge_napi_poll, bnapi->index);
+	}
+}
+
+static void bnge_del_napi(struct bnge_net *bn)
+{
+	struct bnge_dev *bd = bn->bd;
+	int i;
+
+	for (i = 0; i < bd->rx_nr_rings; i++)
+		netif_queue_set_napi(bn->netdev, i, NETDEV_QUEUE_TYPE_RX, NULL);
+	for (i = 0; i < bd->tx_nr_rings; i++)
+		netif_queue_set_napi(bn->netdev, i, NETDEV_QUEUE_TYPE_TX, NULL);
+
+	for (i = 0; i < bd->nq_nr_rings; i++) {
+		struct bnge_napi *bnapi = bn->bnapi[i];
+
+		__netif_napi_del_locked(&bnapi->napi);
+	}
+
+	/* Wait for RCU grace period after removing NAPI instances */
+	synchronize_net();
+}
+
 static int bnge_open_core(struct bnge_net *bn)
 {
 	struct bnge_dev *bd = bn->bd;
@@ -337,8 +734,20 @@ static int bnge_open_core(struct bnge_net *bn)
 		return rc;
 	}
 
+	bnge_init_napi(bn);
+	rc = bnge_request_irq(bn);
+	if (rc) {
+		netdev_err(bn->netdev, "bnge_request_irq err: %d\n", rc);
+		goto err_del_napi;
+	}
+
 	set_bit(BNGE_STATE_OPEN, &bd->state);
 	return 0;
+
+err_del_napi:
+	bnge_del_napi(bn);
+	bnge_free_core(bn);
+	return rc;
 }
 
 static netdev_tx_t bnge_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -365,6 +774,9 @@ static void bnge_close_core(struct bnge_net *bn)
 	struct bnge_dev *bd = bn->bd;
 
 	clear_bit(BNGE_STATE_OPEN, &bd->state);
+	bnge_free_irq(bn);
+	bnge_del_napi(bn);
+
 	bnge_free_core(bn);
 }
 
@@ -587,6 +999,7 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
 	bnge_init_l2_fltr_tbl(bn);
 	bnge_init_mac_addr(bd);
 
+	netdev->request_ops_lock = true;
 	rc = register_netdev(netdev);
 	if (rc) {
 		dev_err(bd->dev, "Register netdev failed rc: %d\n", rc);
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 92bae665f59..bccddae09fa 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -133,6 +133,9 @@ enum {
 
 #define BNGE_NET_EN_TPA	(BNGE_NET_EN_GRO | BNGE_NET_EN_LRO)
 
+#define BNGE_NQ_HDL_TYPE_RX	0x00
+#define BNGE_NQ_HDL_TYPE_TX	0x01
+
 struct bnge_net {
 	struct bnge_dev *bd;
 	struct net_device *netdev;
@@ -172,6 +175,8 @@ struct bnge_net {
 
 	u16 *tx_ring_map;
 	enum dma_data_direction rx_dir;
+
+	int total_irqs;
 };
 
 #define BNGE_DEFAULT_RX_RING_SIZE	511
@@ -223,6 +228,8 @@ struct bnge_cp_ring_info {
 	dma_addr_t *desc_mapping;
 	struct tx_cmp **desc_ring;
 	struct bnge_ring_struct ring_struct;
+	u8 cp_ring_type;
+	u8 cp_idx;
 };
 
 struct bnge_nq_ring_info {
@@ -230,6 +237,9 @@ struct bnge_nq_ring_info {
 	dma_addr_t *desc_mapping;
 	struct nqe_cn **desc_ring;
 	struct bnge_ring_struct ring_struct;
+
+	int cp_ring_count;
+	struct bnge_cp_ring_info *cp_ring_arr;
 };
 
 struct bnge_rx_ring_info {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_resc.c b/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
index c79a3607a1b..5597af1b3b7 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_resc.c
@@ -46,7 +46,7 @@ static int bnge_aux_get_dflt_msix(struct bnge_dev *bd)
 	return min_t(int, roce_msix, num_online_cpus() + 1);
 }
 
-static u16 bnge_aux_get_msix(struct bnge_dev *bd)
+u16 bnge_aux_get_msix(struct bnge_dev *bd)
 {
 	if (bnge_is_roce_en(bd))
 		return bd->aux_num_msix;
-- 
2.47.3