From nobody Sun Feb 8 05:35:12 2026
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	Bhargava Marreddy, Rajashekar Hudumula
Subject: [v4, net-next 1/7] bng_en: Extend bnge_set_ring_params() for rx-copybreak
Date: Mon, 5 Jan 2026 12:51:37 +0530
Message-ID: <20260105072143.19447-2-bhargava.marreddy@broadcom.com>
In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>
References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>

Add rx-copybreak support in bnge_set_ring_params(): replace the fixed
BNGE_RX_COPY_THRESH threshold with a configurable rx_copybreak field and
size the linear copy buffer from the largest of the default copybreak,
the configured copybreak, and the pending HDS threshold. (An
illustrative sizing sketch follows the diffstat below.)

Signed-off-by: Bhargava Marreddy
Reviewed-by: Vikas Gupta
Reviewed-by: Rajashekar Hudumula
---
 .../net/ethernet/broadcom/bnge/bnge_netdev.c | 19 +++++++++++++++++--
 .../net/ethernet/broadcom/bnge/bnge_netdev.h |  5 +++--
 2 files changed, 20 insertions(+), 4 deletions(-)
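For illustration only (this sketch is not part of the patch), the
following user-space C program mirrors the sizing logic added here:
bnge_init_ring_params() picks a default header-data-split threshold so
four buffers fit in a 4 KiB page, and bnge_set_ring_params() sizes the
linear copy buffer from the largest of the default copybreak, the
configured copybreak, and that threshold. NET_SKB_PAD, NET_IP_ALIGN and
the skb_shared_info overhead are stand-in values; the real ones depend
on the architecture and kernel configuration.

#include <stdio.h>

#define NET_IP_ALIGN		2U
#define NET_SKB_PAD		64U
#define SHINFO_SIZE		320U	/* stand-in for SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */
#define SMP_CACHE_BYTES		64U
#define SKB_DATA_ALIGN(x)	(((x) + SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))

#define BNGE_DEFAULT_RX_COPYBREAK	256U

static unsigned int max3(unsigned int a, unsigned int b, unsigned int c)
{
	unsigned int m = a > b ? a : b;

	return m > c ? m : c;
}

/* Mirrors the bnge_init_ring_params() default: fit four chunks in a 4K page. */
static unsigned int default_hds_thresh(void)
{
	unsigned int rx_size = 1024U - NET_SKB_PAD - SHINFO_SIZE;

	return rx_size > BNGE_DEFAULT_RX_COPYBREAK ? rx_size : BNGE_DEFAULT_RX_COPYBREAK;
}

/* Mirrors the bnge_set_ring_params() change: linear copy-buffer sizing. */
static unsigned int copy_buf_size(unsigned int rx_copybreak, unsigned int hds_thresh)
{
	unsigned int rx_size = max3(BNGE_DEFAULT_RX_COPYBREAK, rx_copybreak, hds_thresh);

	return SKB_DATA_ALIGN(rx_size + NET_IP_ALIGN);
}

int main(void)
{
	unsigned int hds = default_hds_thresh();

	printf("default hds_thresh %u, copy buffer %u\n",
	       hds, copy_buf_size(512U, hds));
	return 0;
}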
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 832eeb960bd2..8bd019ea55a2 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -2295,7 +2296,6 @@ void bnge_set_ring_params(struct bnge_dev *bd)
 	rx_space = rx_size + ALIGN(NET_SKB_PAD, 8) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	bn->rx_copy_thresh = BNGE_RX_COPY_THRESH;
 	ring_size = bn->rx_ring_size;
 	bn->rx_agg_ring_size = 0;
 	bn->rx_agg_nr_pages = 0;
@@ -2334,7 +2334,10 @@ void bnge_set_ring_params(struct bnge_dev *bd)
 		bn->rx_agg_ring_size = agg_ring_size;
 		bn->rx_agg_ring_mask = (bn->rx_agg_nr_pages * RX_DESC_CNT) - 1;
 
-		rx_size = SKB_DATA_ALIGN(BNGE_RX_COPY_THRESH + NET_IP_ALIGN);
+		rx_size = max3(BNGE_DEFAULT_RX_COPYBREAK,
+			       bn->rx_copybreak,
+			       bn->netdev->cfg_pending->hds_thresh);
+		rx_size = SKB_DATA_ALIGN(rx_size + NET_IP_ALIGN);
 		rx_space = rx_size + NET_SKB_PAD +
 			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	}
@@ -2367,6 +2370,17 @@ void bnge_set_ring_params(struct bnge_dev *bd)
 	bn->cp_ring_mask = bn->cp_bit - 1;
 }
 
+static void bnge_init_ring_params(struct bnge_net *bn)
+{
+	u32 rx_size;
+
+	bn->rx_copybreak = BNGE_DEFAULT_RX_COPYBREAK;
+	/* Try to fit 4 chunks into a 4k page */
+	rx_size = SZ_1K -
+		NET_SKB_PAD - SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	bn->netdev->cfg->hds_thresh = max(BNGE_DEFAULT_RX_COPYBREAK, rx_size);
+}
+
 int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
 {
 	struct net_device *netdev;
@@ -2456,6 +2470,7 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
 	bn->rx_dir = DMA_FROM_DEVICE;
 
 	bnge_set_tpa_flags(bd);
+	bnge_init_ring_params(bn);
 	bnge_set_ring_params(bd);
 
 	bnge_init_l2_fltr_tbl(bn);
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index fb3b961536ba..557cca472db6 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -135,7 +135,8 @@ struct bnge_ring_grp_info {
 	u16	nq_fw_ring_id;
 };
 
-#define BNGE_RX_COPY_THRESH	256
+#define BNGE_DEFAULT_RX_COPYBREAK	256
+#define BNGE_MAX_RX_COPYBREAK		1024
 
 #define BNGE_HW_FEATURE_VLAN_ALL_RX	\
 	(NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX)
@@ -186,7 +187,7 @@ struct bnge_net {
 	u32	rx_buf_size;
 	u32	rx_buf_use_size;	/* usable size */
 	u32	rx_agg_ring_size;
-	u32	rx_copy_thresh;
+	u32	rx_copybreak;
 	u32	rx_ring_mask;
 	u32	rx_agg_ring_mask;
 	u16	rx_nr_pages;
-- 
2.47.3

From nobody Sun Feb 8 05:35:12 2026
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	Bhargava Marreddy, Rajashekar Hudumula
Subject: [v4, net-next 2/7] bng_en: Add RX support
Date: Mon, 5 Jan 2026 12:51:38 +0530
Message-ID: <20260105072143.19447-3-bhargava.marreddy@broadcom.com>
In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>
References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>

Add support for receiving packets in NAPI context: build the skb and
deliver it to the stack. Use the metadata available in the RX
completions to fill in the relevant skb fields (length, RSS hash, VLAN
tag and checksum status). A simplified sketch of the rx-copybreak
decision in this path is shown below.
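For illustration only (this sketch is not part of the patch), the
program below mirrors the copybreak decision the new bnge_rx_pkt() makes
for each completed packet: packets no longer than rx_copybreak are
copied into a freshly allocated skb so the posted buffer stays on the
ring, while longer packets hand the original buffer to the skb and a
replacement buffer is allocated for the ring (if that allocation fails,
the real driver reuses the buffer and drops the packet). The sketch_*
types and helpers are stand-ins for the driver's skb and page-pool
machinery.

#include <stdlib.h>
#include <string.h>

struct sketch_buf {			/* stands in for a posted RX buffer */
	unsigned char *data;
	unsigned int size;
};

struct sketch_skb {			/* stands in for struct sk_buff */
	unsigned char *data;
	unsigned int len;
	int owns_buffer;
};

/* len <= copybreak: copy into a small skb and leave the RX buffer posted */
static struct sketch_skb *rx_copybreak_path(const struct sketch_buf *buf,
					    unsigned int len)
{
	struct sketch_skb *skb = malloc(sizeof(*skb));

	if (!skb)
		return NULL;
	skb->data = malloc(len);
	if (!skb->data) {
		free(skb);
		return NULL;
	}
	memcpy(skb->data, buf->data, len);
	skb->len = len;
	skb->owns_buffer = 0;		/* original buffer is reused on the ring */
	return skb;
}

/* len > copybreak: wrap the original buffer and post a replacement */
static struct sketch_skb *rx_frag_path(struct sketch_buf *buf, unsigned int len)
{
	unsigned char *fresh = malloc(buf->size);	/* replacement for the ring */
	struct sketch_skb *skb;

	if (!fresh)
		return NULL;		/* driver would reuse the buffer and drop */
	skb = malloc(sizeof(*skb));
	if (!skb) {
		free(fresh);
		return NULL;
	}
	skb->data = buf->data;		/* zero-copy: the skb wraps the RX buffer */
	skb->len = len;
	skb->owns_buffer = 1;
	buf->data = fresh;
	return skb;
}

struct sketch_skb *rx_pkt(struct sketch_buf *buf, unsigned int len,
			  unsigned int rx_copybreak)
{
	return len <= rx_copybreak ? rx_copybreak_path(buf, len)
				   : rx_frag_path(buf, len);
}

int main(void)
{
	unsigned char storage[2048] = { 0 };
	struct sketch_buf buf = { .data = storage, .size = sizeof(storage) };
	struct sketch_skb *skb = rx_pkt(&buf, 128, 256);	/* takes the copy path */

	return skb ? 0 : 1;
}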
Signed-off-by: Bhargava Marreddy Reviewed-by: Vikas Gupta Reviewed-by: Rajashekar Hudumula --- drivers/net/ethernet/broadcom/bnge/Makefile | 3 +- .../net/ethernet/broadcom/bnge/bnge_hw_def.h | 198 ++++++ .../net/ethernet/broadcom/bnge/bnge_netdev.c | 113 +++- .../net/ethernet/broadcom/bnge/bnge_netdev.h | 60 +- .../net/ethernet/broadcom/bnge/bnge_txrx.c | 573 ++++++++++++++++++ .../net/ethernet/broadcom/bnge/bnge_txrx.h | 90 +++ 6 files changed, 1016 insertions(+), 21 deletions(-) create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_txrx.c create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_txrx.h diff --git a/drivers/net/ethernet/broadcom/bnge/Makefile b/drivers/net/ethe= rnet/broadcom/bnge/Makefile index ea6596854e5c..fa604ee20264 100644 --- a/drivers/net/ethernet/broadcom/bnge/Makefile +++ b/drivers/net/ethernet/broadcom/bnge/Makefile @@ -10,4 +10,5 @@ bng_en-y :=3D bnge_core.o \ bnge_resc.o \ bnge_netdev.o \ bnge_ethtool.o \ - bnge_auxr.o + bnge_auxr.o \ + bnge_txrx.o diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h b/drivers/net= /ethernet/broadcom/bnge/bnge_hw_def.h new file mode 100644 index 000000000000..4da4259095fa --- /dev/null +++ b/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2025 Broadcom */ + +#ifndef _BNGE_HW_DEF_H_ +#define _BNGE_HW_DEF_H_ + +struct tx_bd_ext { + __le32 tx_bd_hsize_lflags; + #define TX_BD_FLAGS_TCP_UDP_CHKSUM (1 << 0) + #define TX_BD_FLAGS_IP_CKSUM (1 << 1) + #define TX_BD_FLAGS_NO_CRC (1 << 2) + #define TX_BD_FLAGS_STAMP (1 << 3) + #define TX_BD_FLAGS_T_IP_CHKSUM (1 << 4) + #define TX_BD_FLAGS_LSO (1 << 5) + #define TX_BD_FLAGS_IPID_FMT (1 << 6) + #define TX_BD_FLAGS_T_IPID (1 << 7) + #define TX_BD_HSIZE (0xff << 16) + #define TX_BD_HSIZE_SHIFT 16 + + __le32 tx_bd_mss; + __le32 tx_bd_cfa_action; + #define TX_BD_CFA_ACTION (0xffff << 16) + #define TX_BD_CFA_ACTION_SHIFT 16 + + __le32 tx_bd_cfa_meta; + #define TX_BD_CFA_META_MASK 0xfffffff + #define TX_BD_CFA_META_VID_MASK 0xfff + #define TX_BD_CFA_META_PRI_MASK (0xf << 12) + #define TX_BD_CFA_META_PRI_SHIFT 12 + #define TX_BD_CFA_META_TPID_MASK (3 << 16) + #define TX_BD_CFA_META_TPID_SHIFT 16 + #define TX_BD_CFA_META_KEY (0xf << 28) + #define TX_BD_CFA_META_KEY_SHIFT 28 + #define TX_BD_CFA_META_KEY_VLAN (1 << 28) +}; + +#define TX_CMP_SQ_CONS_IDX(txcmp) \ + (le32_to_cpu((txcmp)->sq_cons_idx) & TX_CMP_SQ_CONS_IDX_MASK) + +struct rx_cmp { + __le32 rx_cmp_len_flags_type; + #define RX_CMP_CMP_TYPE (0x3f << 0) + #define RX_CMP_FLAGS_ERROR (1 << 6) + #define RX_CMP_FLAGS_PLACEMENT (7 << 7) + #define RX_CMP_FLAGS_RSS_VALID (1 << 10) + #define RX_CMP_FLAGS_PKT_METADATA_PRESENT (1 << 11) + #define RX_CMP_FLAGS_ITYPES_SHIFT 12 + #define RX_CMP_FLAGS_ITYPES_MASK 0xf000 + #define RX_CMP_FLAGS_ITYPE_UNKNOWN (0 << 12) + #define RX_CMP_FLAGS_ITYPE_IP (1 << 12) + #define RX_CMP_FLAGS_ITYPE_TCP (2 << 12) + #define RX_CMP_FLAGS_ITYPE_UDP (3 << 12) + #define RX_CMP_FLAGS_ITYPE_FCOE (4 << 12) + #define RX_CMP_FLAGS_ITYPE_ROCE (5 << 12) + #define RX_CMP_FLAGS_ITYPE_PTP_WO_TS (8 << 12) + #define RX_CMP_FLAGS_ITYPE_PTP_W_TS (9 << 12) + #define RX_CMP_LEN (0xffff << 16) + #define RX_CMP_LEN_SHIFT 16 + + u32 rx_cmp_opaque; + __le32 rx_cmp_misc_v1; + #define RX_CMP_V1 (1 << 0) + #define RX_CMP_AGG_BUFS (0x1f << 1) + #define RX_CMP_AGG_BUFS_SHIFT 1 + #define RX_CMP_RSS_HASH_TYPE (0x7f << 9) + #define RX_CMP_RSS_HASH_TYPE_SHIFT 9 + #define 
RX_CMP_V3_RSS_EXT_OP_LEGACY (0xf << 12) + #define RX_CMP_V3_RSS_EXT_OP_LEGACY_SHIFT 12 + #define RX_CMP_V3_RSS_EXT_OP_NEW (0xf << 8) + #define RX_CMP_V3_RSS_EXT_OP_NEW_SHIFT 8 + #define RX_CMP_PAYLOAD_OFFSET (0xff << 16) + #define RX_CMP_PAYLOAD_OFFSET_SHIFT 16 + #define RX_CMP_SUB_NS_TS (0xf << 16) + #define RX_CMP_SUB_NS_TS_SHIFT 16 + #define RX_CMP_METADATA1 (0xf << 28) + #define RX_CMP_METADATA1_SHIFT 28 + #define RX_CMP_METADATA1_TPID_SEL (0x7 << 28) + #define RX_CMP_METADATA1_TPID_8021Q (0x1 << 28) + #define RX_CMP_METADATA1_TPID_8021AD (0x0 << 28) + #define RX_CMP_METADATA1_VALID (0x8 << 28) + + __le32 rx_cmp_rss_hash; +}; + +struct rx_cmp_ext { + __le32 rx_cmp_flags2; + #define RX_CMP_FLAGS2_IP_CS_CALC 0x1 + #define RX_CMP_FLAGS2_L4_CS_CALC (0x1 << 1) + #define RX_CMP_FLAGS2_T_IP_CS_CALC (0x1 << 2) + #define RX_CMP_FLAGS2_T_L4_CS_CALC (0x1 << 3) + #define RX_CMP_FLAGS2_META_FORMAT_VLAN (0x1 << 4) + __le32 rx_cmp_meta_data; + #define RX_CMP_FLAGS2_METADATA_TCI_MASK 0xffff + #define RX_CMP_FLAGS2_METADATA_VID_MASK 0xfff + #define RX_CMP_FLAGS2_METADATA_TPID_MASK 0xffff0000 + #define RX_CMP_FLAGS2_METADATA_TPID_SFT 16 + __le32 rx_cmp_cfa_code_errors_v2; + #define RX_CMP_V (1 << 0) + #define RX_CMPL_ERRORS_MASK (0x7fff << 1) + #define RX_CMPL_ERRORS_SFT 1 + #define RX_CMPL_ERRORS_BUFFER_ERROR_MASK (0x7 << 1) + #define RX_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER (0x0 << 1) + #define RX_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT (0x1 << 1) + #define RX_CMPL_ERRORS_BUFFER_ERROR_NOT_ON_CHIP (0x2 << 1) + #define RX_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT (0x3 << 1) + #define RX_CMPL_ERRORS_IP_CS_ERROR (0x1 << 4) + #define RX_CMPL_ERRORS_L4_CS_ERROR (0x1 << 5) + #define RX_CMPL_ERRORS_T_IP_CS_ERROR (0x1 << 6) + #define RX_CMPL_ERRORS_T_L4_CS_ERROR (0x1 << 7) + #define RX_CMPL_ERRORS_CRC_ERROR (0x1 << 8) + #define RX_CMPL_ERRORS_T_PKT_ERROR_MASK (0x7 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_NO_ERROR (0x0 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION (0x1 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN (0x2 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_TUNNEL_TOTAL_ERROR (0x3 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR (0x4 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR (0x5 << 9) + #define RX_CMPL_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL (0x6 << 9) + #define RX_CMPL_ERRORS_PKT_ERROR_MASK (0xf << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_NO_ERROR (0x0 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L3_BAD_VERSION (0x1 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN (0x2 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L3_BAD_TTL (0x3 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_IP_TOTAL_ERROR (0x4 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR (0x5 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN (0x6 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL (0x7 << 12) + #define RX_CMPL_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN (0x8 << 12) + + #define RX_CMPL_CFA_CODE_MASK (0xffff << 16) + #define RX_CMPL_CFA_CODE_SFT 16 + #define RX_CMPL_METADATA0_TCI_MASK (0xffff << 16) + #define RX_CMPL_METADATA0_VID_MASK (0x0fff << 16) + #define RX_CMPL_METADATA0_SFT 16 + + __le32 rx_cmp_timestamp; +}; + +#define RX_CMP_L2_ERRORS \ + cpu_to_le32(RX_CMPL_ERRORS_BUFFER_ERROR_MASK | RX_CMPL_ERRORS_CRC_ERROR) + +#define RX_CMP_L4_CS_BITS \ + (cpu_to_le32(RX_CMP_FLAGS2_L4_CS_CALC | RX_CMP_FLAGS2_T_L4_CS_CALC)) + +#define RX_CMP_L4_CS_ERR_BITS \ + (cpu_to_le32(RX_CMPL_ERRORS_L4_CS_ERROR | RX_CMPL_ERRORS_T_L4_CS_ERROR)) + +#define RX_CMP_L4_CS_OK(rxcmp1) \ + 
(((rxcmp1)->rx_cmp_flags2 & RX_CMP_L4_CS_BITS) && \ + !((rxcmp1)->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS)) + +#define RX_CMP_METADATA0_TCI(rxcmp1) \ + ((le32_to_cpu((rxcmp1)->rx_cmp_cfa_code_errors_v2) & \ + RX_CMPL_METADATA0_TCI_MASK) >> RX_CMPL_METADATA0_SFT) + +#define RX_CMP_ENCAP(rxcmp1) \ + ((le32_to_cpu((rxcmp1)->rx_cmp_flags2) & \ + RX_CMP_FLAGS2_T_L4_CS_CALC) >> 3) + +#define RX_CMP_V3_HASH_TYPE_LEGACY(rxcmp) \ + ((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & \ + RX_CMP_V3_RSS_EXT_OP_LEGACY) >> RX_CMP_V3_RSS_EXT_OP_LEGACY_SHIFT) + +#define RX_CMP_V3_HASH_TYPE_NEW(rxcmp) \ + ((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_V3_RSS_EXT_OP_NEW) >>\ + RX_CMP_V3_RSS_EXT_OP_NEW_SHIFT) + +#define RX_CMP_V3_HASH_TYPE(bd, rxcmp) \ + (((bd)->rss_cap & BNGE_RSS_CAP_RSS_TCAM) ? \ + RX_CMP_V3_HASH_TYPE_NEW(rxcmp) : \ + RX_CMP_V3_HASH_TYPE_LEGACY(rxcmp)) + +#define EXT_OP_INNER_4 0x0 +#define EXT_OP_OUTER_4 0x2 +#define EXT_OP_INNFL_3 0x8 +#define EXT_OP_OUTFL_3 0xa + +#define RX_CMP_VLAN_VALID(rxcmp) \ + ((rxcmp)->rx_cmp_misc_v1 & cpu_to_le32(RX_CMP_METADATA1_VALID)) + +#define RX_CMP_VLAN_TPID_SEL(rxcmp) \ + (le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_METADATA1_TPID_SEL) + +#define RSS_PROFILE_ID_MASK 0x1f + +#define RX_CMP_HASH_TYPE(rxcmp) \ + (((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_RSS_HASH_TYPE) >>\ + RX_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK) + +#define RX_CMP_HASH_VALID(rxcmp) \ + ((rxcmp)->rx_cmp_len_flags_type & cpu_to_le32(RX_CMP_FLAGS_RSS_VALID)) + +#define HWRM_RING_ALLOC_TX 0x1 +#define HWRM_RING_ALLOC_RX 0x2 +#define HWRM_RING_ALLOC_AGG 0x4 +#define HWRM_RING_ALLOC_CMPL 0x8 +#define HWRM_RING_ALLOC_NQ 0x10 +#endif /* _BNGE_HW_DEF_H_ */ diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.c index 8bd019ea55a2..7533c382714e 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c @@ -21,6 +21,7 @@ #include "bnge_hwrm_lib.h" #include "bnge_ethtool.h" #include "bnge_rmem.h" +#include "bnge_txrx.h" =20 #define BNGE_RING_TO_TC_OFF(bd, tx) \ ((tx) % (bd)->tx_nr_rings_per_tc) @@ -857,6 +858,13 @@ u16 bnge_cp_ring_for_tx(struct bnge_tx_ring_info *txr) return txr->tx_cpr->ring_struct.fw_ring_id; } =20 +static void bnge_db_nq_arm(struct bnge_net *bn, + struct bnge_db_info *db, u32 idx) +{ + bnge_writeq(bn->bd, db->db_key64 | DBR_TYPE_NQ_ARM | + DB_RING_IDX(db, idx), db->doorbell); +} + static void bnge_db_nq(struct bnge_net *bn, struct bnge_db_info *db, u32 i= dx) { bnge_writeq(bn->bd, db->db_key64 | DBR_TYPE_NQ_MASK | @@ -879,12 +887,6 @@ static int bnge_cp_num_to_irq_num(struct bnge_net *bn,= int n) return nqr->ring_struct.map_idx; } =20 -static irqreturn_t bnge_msix(int irq, void *dev_instance) -{ - /* NAPI scheduling to be added in a future patch */ - return IRQ_HANDLED; -} - static void bnge_init_nq_tree(struct bnge_net *bn) { struct bnge_dev *bd =3D bn->bd; @@ -942,9 +944,8 @@ static u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dm= a_addr_t *mapping, return page_address(page) + offset; } =20 -static int bnge_alloc_rx_data(struct bnge_net *bn, - struct bnge_rx_ring_info *rxr, - u16 prod, gfp_t gfp) +int bnge_alloc_rx_data(struct bnge_net *bn, struct bnge_rx_ring_info *rxr, + u16 prod, gfp_t gfp) { struct bnge_sw_rx_bd *rx_buf =3D &rxr->rx_buf_ring[RING_RX(bn, prod)]; struct rx_bd *rxbd; @@ -1756,6 +1757,78 @@ static int bnge_cfg_def_vnic(struct bnge_net *bn) return rc; } =20 +static void bnge_disable_int(struct bnge_net *bn) +{ + struct bnge_dev 
*bd =3D bn->bd; + int i; + + if (!bn->bnapi) + return; + + for (i =3D 0; i < bd->nq_nr_rings; i++) { + struct bnge_napi *bnapi =3D bn->bnapi[i]; + struct bnge_nq_ring_info *nqr =3D &bnapi->nq_ring; + struct bnge_ring_struct *ring =3D &nqr->ring_struct; + + if (ring->fw_ring_id !=3D INVALID_HW_RING_ID) + bnge_db_nq(bn, &nqr->nq_db, nqr->nq_raw_cons); + } +} + +static void bnge_disable_int_sync(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i; + + bnge_disable_int(bn); + for (i =3D 0; i < bd->nq_nr_rings; i++) { + int map_idx =3D bnge_cp_num_to_irq_num(bn, i); + + synchronize_irq(bd->irq_tbl[map_idx].vector); + } +} + +static void bnge_enable_int(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i; + + for (i =3D 0; i < bd->nq_nr_rings; i++) { + struct bnge_napi *bnapi =3D bn->bnapi[i]; + struct bnge_nq_ring_info *nqr =3D &bnapi->nq_ring; + + bnge_db_nq_arm(bn, &nqr->nq_db, nqr->nq_raw_cons); + } +} + +static void bnge_disable_napi(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i; + + if (test_and_set_bit(BNGE_STATE_NAPI_DISABLED, &bn->state)) + return; + + for (i =3D 0; i < bd->nq_nr_rings; i++) { + struct bnge_napi *bnapi =3D bn->bnapi[i]; + + napi_disable_locked(&bnapi->napi); + } +} + +static void bnge_enable_napi(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i; + + clear_bit(BNGE_STATE_NAPI_DISABLED, &bn->state); + for (i =3D 0; i < bd->nq_nr_rings; i++) { + struct bnge_napi *bnapi =3D bn->bnapi[i]; + + napi_enable_locked(&bnapi->napi); + } +} + static void bnge_hwrm_vnic_free(struct bnge_net *bn) { int i; @@ -1887,6 +1960,12 @@ static void bnge_hwrm_ring_free(struct bnge_net *bn,= bool close_path) bnge_hwrm_rx_agg_ring_free(bn, &bn->rx_ring[i], close_path); } =20 + /* The completion rings are about to be freed. After that the + * IRQ doorbell will not work anymore. So we need to disable + * IRQ here. 
+ */ + bnge_disable_int_sync(bn); + for (i =3D 0; i < bd->nq_nr_rings; i++) { struct bnge_napi *bnapi =3D bn->bnapi[i]; struct bnge_nq_ring_info *nqr; @@ -2086,16 +2165,6 @@ static int bnge_init_chip(struct bnge_net *bn) return rc; } =20 -static int bnge_napi_poll(struct napi_struct *napi, int budget) -{ - int work_done =3D 0; - - /* defer NAPI implementation to next patch series */ - napi_complete_done(napi, work_done); - - return work_done; -} - static void bnge_init_napi(struct bnge_net *bn) { struct bnge_dev *bd =3D bn->bd; @@ -2193,7 +2262,12 @@ static int bnge_open_core(struct bnge_net *bn) netdev_err(bn->netdev, "bnge_init_nic err: %d\n", rc); goto err_free_irq; } + + bnge_enable_napi(bn); + set_bit(BNGE_STATE_OPEN, &bd->state); + + bnge_enable_int(bn); return 0; =20 err_free_irq: @@ -2236,6 +2310,7 @@ static void bnge_close_core(struct bnge_net *bn) =20 clear_bit(BNGE_STATE_OPEN, &bd->state); bnge_shutdown_nic(bn); + bnge_disable_napi(bn); bnge_free_all_rings_bufs(bn); bnge_free_irq(bn); bnge_del_napi(bn); diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.h index 557cca472db6..04989908b133 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h @@ -8,6 +8,7 @@ #include #include #include "bnge_db.h" +#include "bnge_hw_def.h" =20 struct tx_bd { __le32 tx_bd_len_flags_type; @@ -173,10 +174,16 @@ enum { #define RING_RX_AGG(bn, idx) ((idx) & (bn)->rx_agg_ring_mask) #define NEXT_RX_AGG(idx) ((idx) + 1) =20 +#define BNGE_NQ_HDL_IDX_MASK 0x00ffffff +#define BNGE_NQ_HDL_TYPE_MASK 0xff000000 #define BNGE_NQ_HDL_TYPE_SHIFT 24 #define BNGE_NQ_HDL_TYPE_RX 0x00 #define BNGE_NQ_HDL_TYPE_TX 0x01 =20 +#define BNGE_NQ_HDL_IDX(hdl) ((hdl) & BNGE_NQ_HDL_IDX_MASK) +#define BNGE_NQ_HDL_TYPE(hdl) (((hdl) & BNGE_NQ_HDL_TYPE_MASK) >> \ + BNGE_NQ_HDL_TYPE_SHIFT) + struct bnge_net { struct bnge_dev *bd; struct net_device *netdev; @@ -232,6 +239,9 @@ struct bnge_net { u8 rss_hash_key_updated:1; int rsscos_nr_ctxs; u32 stats_coal_ticks; + + unsigned long state; +#define BNGE_STATE_NAPI_DISABLED 0 }; =20 #define BNGE_DEFAULT_RX_RING_SIZE 511 @@ -278,9 +288,25 @@ void bnge_set_ring_params(struct bnge_dev *bd); txr =3D (iter < BNGE_MAX_TXR_PER_NAPI - 1) ? 
\ (bnapi)->tx_ring[++iter] : NULL) =20 +#define DB_EPOCH(db, idx) (((idx) & (db)->db_epoch_mask) << \ + ((db)->db_epoch_shift)) + +#define DB_TOGGLE(tgl) ((tgl) << DBR_TOGGLE_SFT) + +#define DB_RING_IDX(db, idx) (((idx) & (db)->db_ring_mask) | \ + DB_EPOCH(db, idx)) + #define BNGE_SET_NQ_HDL(cpr) \ (((cpr)->cp_ring_type << BNGE_NQ_HDL_TYPE_SHIFT) | (cpr)->cp_idx) =20 +#define BNGE_DB_NQ(bd, db, idx) \ + bnge_writeq(bd, (db)->db_key64 | DBR_TYPE_NQ | DB_RING_IDX(db, idx),\ + (db)->doorbell) + +#define BNGE_DB_NQ_ARM(bd, db, idx) \ + bnge_writeq(bd, (db)->db_key64 | DBR_TYPE_NQ_ARM | \ + DB_RING_IDX(db, idx), (db)->doorbell) + struct bnge_stats_mem { u64 *sw_stats; u64 *hw_masks; @@ -289,6 +315,25 @@ struct bnge_stats_mem { int len; }; =20 +struct nqe_cn { + __le16 type; + #define NQ_CN_TYPE_MASK 0x3fUL + #define NQ_CN_TYPE_SFT 0 + #define NQ_CN_TYPE_CQ_NOTIFICATION 0x30UL + #define NQ_CN_TYPE_LAST NQ_CN_TYPE_CQ_NOTIFICATION + #define NQ_CN_TOGGLE_MASK 0xc0UL + #define NQ_CN_TOGGLE_SFT 6 + __le16 reserved16; + __le32 cq_handle_low; + __le32 v; + #define NQ_CN_V 0x1UL + __le32 cq_handle_high; +}; + +#define NQE_CN_TYPE(type) ((type) & NQ_CN_TYPE_MASK) +#define NQE_CN_TOGGLE(type) (((type) & NQ_CN_TOGGLE_MASK) >> \ + NQ_CN_TOGGLE_SFT) + struct bnge_cp_ring_info { struct bnge_napi *bnapi; dma_addr_t *desc_mapping; @@ -298,6 +343,10 @@ struct bnge_cp_ring_info { u8 cp_idx; u32 cp_raw_cons; struct bnge_db_info cp_db; + u8 had_work_done:1; + u8 has_more_work:1; + u8 had_nqe_notify:1; + u8 toggle; }; =20 struct bnge_nq_ring_info { @@ -310,8 +359,9 @@ struct bnge_nq_ring_info { =20 struct bnge_stats_mem stats; u32 hw_stats_ctx_id; + u8 has_more_work:1; =20 - int cp_ring_count; + u16 cp_ring_count; struct bnge_cp_ring_info *cp_ring_arr; }; =20 @@ -374,6 +424,12 @@ struct bnge_napi { struct bnge_nq_ring_info nq_ring; struct bnge_rx_ring_info *rx_ring; struct bnge_tx_ring_info *tx_ring[BNGE_MAX_TXR_PER_NAPI]; + u8 events; +#define BNGE_RX_EVENT 1 +#define BNGE_AGG_EVENT 2 +#define BNGE_TX_EVENT 4 +#define BNGE_REDIRECT_EVENT 8 +#define BNGE_TX_CMP_EVENT 0x10 }; =20 #define INVALID_STATS_CTX_ID -1 @@ -452,4 +508,6 @@ struct bnge_l2_filter { u16 bnge_cp_ring_for_rx(struct bnge_rx_ring_info *rxr); u16 bnge_cp_ring_for_tx(struct bnge_tx_ring_info *txr); void bnge_fill_hw_rss_tbl(struct bnge_net *bn, struct bnge_vnic_info *vnic= ); +int bnge_alloc_rx_data(struct bnge_net *bn, struct bnge_rx_ring_info *rxr, + u16 prod, gfp_t gfp); #endif /* _BNGE_NETDEV_H_ */ diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.c new file mode 100644 index 000000000000..db49a92542c0 --- /dev/null +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c @@ -0,0 +1,573 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2025 Broadcom. 
+ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "bnge.h" +#include "bnge_hwrm.h" +#include "bnge_hwrm_lib.h" +#include "bnge_netdev.h" +#include "bnge_rmem.h" +#include "bnge_txrx.h" + +irqreturn_t bnge_msix(int irq, void *dev_instance) +{ + struct bnge_napi *bnapi =3D dev_instance; + struct bnge_nq_ring_info *nqr; + struct bnge_net *bn; + u32 cons; + + bn =3D bnapi->bn; + nqr =3D &bnapi->nq_ring; + cons =3D RING_CMP(bn, nqr->nq_raw_cons); + + prefetch(&nqr->desc_ring[CP_RING(cons)][CP_IDX(cons)]); + napi_schedule(&bnapi->napi); + return IRQ_HANDLED; +} + +static void bnge_sched_reset_rxr(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr) +{ + /* TODO: Initiate reset task */ + rxr->rx_next_cons =3D 0xffff; +} + +void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *dat= a) +{ + struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf; + struct bnge_net *bn =3D rxr->bnapi->bn; + struct rx_bd *cons_bd, *prod_bd; + u16 prod =3D rxr->rx_prod; + + prod_rx_buf =3D &rxr->rx_buf_ring[RING_RX(bn, prod)]; + cons_rx_buf =3D &rxr->rx_buf_ring[cons]; + + prod_rx_buf->data =3D data; + prod_rx_buf->data_ptr =3D cons_rx_buf->data_ptr; + + prod_rx_buf->mapping =3D cons_rx_buf->mapping; + + prod_bd =3D &rxr->rx_desc_ring[RX_RING(bn, prod)][RX_IDX(prod)]; + cons_bd =3D &rxr->rx_desc_ring[RX_RING(bn, cons)][RX_IDX(cons)]; + + prod_bd->rx_bd_haddr =3D cons_bd->rx_bd_haddr; +} + +static void bnge_deliver_skb(struct bnge_net *bn, struct bnge_napi *bnapi, + struct sk_buff *skb) +{ + skb_mark_for_recycle(skb); + skb_record_rx_queue(skb, bnapi->index); + napi_gro_receive(&bnapi->napi, skb); +} + +static struct sk_buff *bnge_copy_skb(struct bnge_napi *bnapi, u8 *data, + unsigned int len, dma_addr_t mapping) +{ + struct bnge_net *bn =3D bnapi->bn; + struct bnge_dev *bd =3D bn->bd; + struct sk_buff *skb; + + skb =3D napi_alloc_skb(&bnapi->napi, len); + if (!skb) + return NULL; + + dma_sync_single_for_cpu(bd->dev, mapping, bn->rx_copybreak, + bn->rx_dir); + + memcpy(skb->data - NET_IP_ALIGN, data - NET_IP_ALIGN, + len + NET_IP_ALIGN); + + dma_sync_single_for_device(bd->dev, mapping, bn->rx_copybreak, + bn->rx_dir); + + skb_put(skb, len); + + return skb; +} + +static enum pkt_hash_types bnge_rss_ext_op(struct bnge_net *bn, + struct rx_cmp *rxcmp) +{ + u8 ext_op =3D RX_CMP_V3_HASH_TYPE(bn->bd, rxcmp); + + switch (ext_op) { + case EXT_OP_INNER_4: + case EXT_OP_OUTER_4: + case EXT_OP_INNFL_3: + case EXT_OP_OUTFL_3: + return PKT_HASH_TYPE_L4; + default: + return PKT_HASH_TYPE_L3; + } +} + +static struct sk_buff *bnge_rx_vlan(struct sk_buff *skb, u8 cmp_type, + struct rx_cmp *rxcmp, + struct rx_cmp_ext *rxcmp1) +{ + __be16 vlan_proto; + u16 vtag; + + if (cmp_type =3D=3D CMP_TYPE_RX_L2_CMP) { + __le32 flags2 =3D rxcmp1->rx_cmp_flags2; + u32 meta_data; + + if (!(flags2 & cpu_to_le32(RX_CMP_FLAGS2_META_FORMAT_VLAN))) + return skb; + + meta_data =3D le32_to_cpu(rxcmp1->rx_cmp_meta_data); + vtag =3D meta_data & RX_CMP_FLAGS2_METADATA_TCI_MASK; + vlan_proto =3D + htons(meta_data >> RX_CMP_FLAGS2_METADATA_TPID_SFT); + if (eth_type_vlan(vlan_proto)) + __vlan_hwaccel_put_tag(skb, vlan_proto, vtag); + else + goto vlan_err; + } else if (cmp_type =3D=3D CMP_TYPE_RX_L2_V3_CMP) { + if (RX_CMP_VLAN_VALID(rxcmp)) { + u32 tpid_sel =3D RX_CMP_VLAN_TPID_SEL(rxcmp); + + if (tpid_sel =3D=3D RX_CMP_METADATA1_TPID_8021Q) + vlan_proto =3D htons(ETH_P_8021Q); + else if (tpid_sel =3D=3D 
RX_CMP_METADATA1_TPID_8021AD) + vlan_proto =3D htons(ETH_P_8021AD); + else + goto vlan_err; + vtag =3D RX_CMP_METADATA0_TCI(rxcmp1); + __vlan_hwaccel_put_tag(skb, vlan_proto, vtag); + } + } + return skb; + +vlan_err: + skb_mark_for_recycle(skb); + dev_kfree_skb(skb); + return NULL; +} + +static struct sk_buff *bnge_rx_skb(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr, u16 cons, + void *data, u8 *data_ptr, + dma_addr_t dma_addr, + unsigned int offset_and_len) +{ + struct bnge_dev *bd =3D bn->bd; + u16 prod =3D rxr->rx_prod; + struct sk_buff *skb; + int err; + + err =3D bnge_alloc_rx_data(bn, rxr, prod, GFP_ATOMIC); + if (unlikely(err)) { + bnge_reuse_rx_data(rxr, cons, data); + return NULL; + } + + skb =3D napi_build_skb(data, bn->rx_buf_size); + dma_sync_single_for_cpu(bd->dev, dma_addr, bn->rx_buf_use_size, + bn->rx_dir); + if (!skb) { + page_pool_free_va(rxr->head_pool, data, true); + return NULL; + } + + skb_mark_for_recycle(skb); + skb_reserve(skb, bn->rx_offset); + skb_put(skb, offset_and_len & 0xffff); + return skb; +} + +/* returns the following: + * 1 - 1 packet successfully received + * -EBUSY - completion ring does not have all the agg buffers yet + * -ENOMEM - packet aborted due to out of memory + * -EIO - packet aborted due to hw error indicated in BD + */ +static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr, + u32 *raw_cons, u8 *event) +{ + struct bnge_napi *bnapi =3D cpr->bnapi; + struct net_device *dev =3D bn->netdev; + struct bnge_rx_ring_info *rxr; + u32 tmp_raw_cons, flags, misc; + struct bnge_sw_rx_bd *rx_buf; + struct rx_cmp_ext *rxcmp1; + u16 cons, prod, cp_cons; + u8 *data_ptr, cmp_type; + struct rx_cmp *rxcmp; + dma_addr_t dma_addr; + struct sk_buff *skb; + unsigned int len; + void *data; + int rc =3D 0; + + rxr =3D bnapi->rx_ring; + + tmp_raw_cons =3D *raw_cons; + cp_cons =3D RING_CMP(bn, tmp_raw_cons); + rxcmp =3D (struct rx_cmp *) + &cpr->desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; + + cmp_type =3D RX_CMP_TYPE(rxcmp); + + tmp_raw_cons =3D NEXT_RAW_CMP(tmp_raw_cons); + cp_cons =3D RING_CMP(bn, tmp_raw_cons); + rxcmp1 =3D (struct rx_cmp_ext *) + &cpr->desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; + + if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons)) + return -EBUSY; + + /* The valid test of the entry must be done first before + * reading any further. 
+ */ + dma_rmb(); + prod =3D rxr->rx_prod; + + cons =3D rxcmp->rx_cmp_opaque; + if (unlikely(cons !=3D rxr->rx_next_cons)) { + /* 0xffff is forced error, don't print it */ + if (rxr->rx_next_cons !=3D 0xffff) + netdev_warn(bn->netdev, "RX cons %x !=3D expected cons %x\n", + cons, rxr->rx_next_cons); + bnge_sched_reset_rxr(bn, rxr); + goto next_rx_no_prod_no_len; + } + rx_buf =3D &rxr->rx_buf_ring[cons]; + data =3D rx_buf->data; + data_ptr =3D rx_buf->data_ptr; + prefetch(data_ptr); + + misc =3D le32_to_cpu(rxcmp->rx_cmp_misc_v1); + *event |=3D BNGE_RX_EVENT; + + rx_buf->data =3D NULL; + if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) { + bnge_reuse_rx_data(rxr, cons, data); + rc =3D -EIO; + goto next_rx_no_len; + } + + flags =3D le32_to_cpu(rxcmp->rx_cmp_len_flags_type); + len =3D flags >> RX_CMP_LEN_SHIFT; + dma_addr =3D rx_buf->mapping; + + if (len <=3D bn->rx_copybreak) { + skb =3D bnge_copy_skb(bnapi, data_ptr, len, dma_addr); + bnge_reuse_rx_data(rxr, cons, data); + if (!skb) + goto oom_next_rx; + } else { + u32 payload; + + if (rx_buf->data_ptr =3D=3D data_ptr) + payload =3D misc & RX_CMP_PAYLOAD_OFFSET; + else + payload =3D 0; + skb =3D bnge_rx_skb(bn, rxr, cons, data, data_ptr, dma_addr, + payload | len); + if (!skb) + goto oom_next_rx; + } + + if (RX_CMP_HASH_VALID(rxcmp)) { + enum pkt_hash_types type; + + if (cmp_type =3D=3D CMP_TYPE_RX_L2_V3_CMP) { + type =3D bnge_rss_ext_op(bn, rxcmp); + } else { + u32 itypes =3D RX_CMP_ITYPES(rxcmp); + + if (itypes =3D=3D RX_CMP_FLAGS_ITYPE_TCP || + itypes =3D=3D RX_CMP_FLAGS_ITYPE_UDP) + type =3D PKT_HASH_TYPE_L4; + else + type =3D PKT_HASH_TYPE_L3; + } + skb_set_hash(skb, le32_to_cpu(rxcmp->rx_cmp_rss_hash), type); + } + + skb->protocol =3D eth_type_trans(skb, dev); + + if (skb->dev->features & BNGE_HW_FEATURE_VLAN_ALL_RX) { + skb =3D bnge_rx_vlan(skb, cmp_type, rxcmp, rxcmp1); + if (!skb) + goto next_rx; + } + + skb_checksum_none_assert(skb); + if (RX_CMP_L4_CS_OK(rxcmp1)) { + if (dev->features & NETIF_F_RXCSUM) { + skb->ip_summed =3D CHECKSUM_UNNECESSARY; + skb->csum_level =3D RX_CMP_ENCAP(rxcmp1); + } + } + + bnge_deliver_skb(bn, bnapi, skb); + rc =3D 1; + +next_rx: + /* Update Stats */ +next_rx_no_len: + rxr->rx_prod =3D NEXT_RX(prod); + rxr->rx_next_cons =3D RING_RX(bn, NEXT_RX(cons)); + +next_rx_no_prod_no_len: + *raw_cons =3D tmp_raw_cons; + return rc; + +oom_next_rx: + rc =3D -ENOMEM; + goto next_rx; +} + +/* In netpoll mode, if we are using a combined completion ring, we need to + * discard the rx packets and recycle the buffers. + */ +static int bnge_force_rx_discard(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + u32 *raw_cons, u8 *event) +{ + u32 tmp_raw_cons =3D *raw_cons; + struct rx_cmp_ext *rxcmp1; + struct rx_cmp *rxcmp; + u16 cp_cons; + u8 cmp_type; + int rc; + + cp_cons =3D RING_CMP(bn, tmp_raw_cons); + rxcmp =3D (struct rx_cmp *) + &cpr->desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; + + tmp_raw_cons =3D NEXT_RAW_CMP(tmp_raw_cons); + cp_cons =3D RING_CMP(bn, tmp_raw_cons); + rxcmp1 =3D (struct rx_cmp_ext *) + &cpr->desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; + + if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons)) + return -EBUSY; + + /* The valid test of the entry must be done first before + * reading any further. 
+ */ + dma_rmb(); + cmp_type =3D RX_CMP_TYPE(rxcmp); + if (cmp_type =3D=3D CMP_TYPE_RX_L2_CMP || + cmp_type =3D=3D CMP_TYPE_RX_L2_V3_CMP) { + rxcmp1->rx_cmp_cfa_code_errors_v2 |=3D + cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR); + } + rc =3D bnge_rx_pkt(bn, cpr, raw_cons, event); + return rc; +} + +static void __bnge_poll_work_done(struct bnge_net *bn, struct bnge_napi *b= napi, + int budget) +{ + struct bnge_rx_ring_info *rxr =3D bnapi->rx_ring; + + if ((bnapi->events & BNGE_RX_EVENT)) { + bnge_db_write(bn->bd, &rxr->rx_db, rxr->rx_prod); + bnapi->events &=3D ~BNGE_RX_EVENT; + } +} + +static int __bnge_poll_work(struct bnge_net *bn, struct bnge_cp_ring_info = *cpr, + int budget) +{ + struct bnge_napi *bnapi =3D cpr->bnapi; + u32 raw_cons =3D cpr->cp_raw_cons; + struct tx_cmp *txcmp; + int rx_pkts =3D 0; + u8 event =3D 0; + u32 cons; + + cpr->has_more_work =3D 0; + cpr->had_work_done =3D 1; + while (1) { + u8 cmp_type; + int rc; + + cons =3D RING_CMP(bn, raw_cons); + txcmp =3D &cpr->desc_ring[CP_RING(cons)][CP_IDX(cons)]; + + if (!TX_CMP_VALID(bn, txcmp, raw_cons)) + break; + + /* The valid test of the entry must be done first before + * reading any further. + */ + dma_rmb(); + cmp_type =3D TX_CMP_TYPE(txcmp); + if (cmp_type =3D=3D CMP_TYPE_TX_L2_CMP || + cmp_type =3D=3D CMP_TYPE_TX_L2_COAL_CMP) { + /* + * Tx Compl Processng + */ + } else if (cmp_type >=3D CMP_TYPE_RX_L2_CMP && + cmp_type <=3D CMP_TYPE_RX_L2_TPA_START_V3_CMP) { + if (likely(budget)) + rc =3D bnge_rx_pkt(bn, cpr, &raw_cons, &event); + else + rc =3D bnge_force_rx_discard(bn, cpr, &raw_cons, + &event); + if (likely(rc >=3D 0)) + rx_pkts +=3D rc; + /* Increment rx_pkts when rc is -ENOMEM to count towards + * the NAPI budget. Otherwise, we may potentially loop + * here forever if we consistently cannot allocate + * buffers. 
+ */ + else if (rc =3D=3D -ENOMEM && budget) + rx_pkts++; + else if (rc =3D=3D -EBUSY) /* partial completion */ + break; + } + + raw_cons =3D NEXT_RAW_CMP(raw_cons); + + if (rx_pkts && rx_pkts =3D=3D budget) { + cpr->has_more_work =3D 1; + break; + } + } + + cpr->cp_raw_cons =3D raw_cons; + bnapi->events |=3D event; + return rx_pkts; +} + +static void __bnge_poll_cqs_done(struct bnge_net *bn, struct bnge_napi *bn= api, + u64 dbr_type, int budget) +{ + struct bnge_nq_ring_info *nqr =3D &bnapi->nq_ring; + int i; + + for (i =3D 0; i < nqr->cp_ring_count; i++) { + struct bnge_cp_ring_info *cpr =3D &nqr->cp_ring_arr[i]; + struct bnge_db_info *db; + + if (cpr->had_work_done) { + u32 tgl =3D 0; + + if (dbr_type =3D=3D DBR_TYPE_CQ_ARMALL) { + cpr->had_nqe_notify =3D 0; + tgl =3D cpr->toggle; + } + db =3D &cpr->cp_db; + bnge_writeq(bn->bd, + db->db_key64 | dbr_type | DB_TOGGLE(tgl) | + DB_RING_IDX(db, cpr->cp_raw_cons), + db->doorbell); + cpr->had_work_done =3D 0; + } + } + __bnge_poll_work_done(bn, bnapi, budget); +} + +static int __bnge_poll_cqs(struct bnge_net *bn, struct bnge_napi *bnapi, + int budget) +{ + struct bnge_nq_ring_info *nqr =3D &bnapi->nq_ring; + int i, work_done =3D 0; + + for (i =3D 0; i < nqr->cp_ring_count; i++) { + struct bnge_cp_ring_info *cpr =3D &nqr->cp_ring_arr[i]; + + if (cpr->had_nqe_notify) { + work_done +=3D __bnge_poll_work(bn, cpr, + budget - work_done); + nqr->has_more_work |=3D cpr->has_more_work; + } + } + return work_done; +} + +int bnge_napi_poll(struct napi_struct *napi, int budget) +{ + struct bnge_napi *bnapi =3D container_of(napi, struct bnge_napi, napi); + struct bnge_nq_ring_info *nqr =3D &bnapi->nq_ring; + u32 raw_cons =3D nqr->nq_raw_cons; + struct bnge_net *bn =3D bnapi->bn; + struct bnge_dev *bd =3D bn->bd; + struct nqe_cn *nqcmp; + int work_done =3D 0; + u32 cons; + + if (nqr->has_more_work) { + nqr->has_more_work =3D 0; + work_done =3D __bnge_poll_cqs(bn, bnapi, budget); + } + + while (1) { + u16 type; + + cons =3D RING_CMP(bn, raw_cons); + nqcmp =3D &nqr->desc_ring[CP_RING(cons)][CP_IDX(cons)]; + + if (!NQ_CMP_VALID(bn, nqcmp, raw_cons)) { + if (nqr->has_more_work) + break; + + __bnge_poll_cqs_done(bn, bnapi, DBR_TYPE_CQ_ARMALL, + budget); + nqr->nq_raw_cons =3D raw_cons; + if (napi_complete_done(napi, work_done)) + BNGE_DB_NQ_ARM(bd, &nqr->nq_db, + nqr->nq_raw_cons); + goto poll_done; + } + + /* The valid test of the entry must be done first before + * reading any further. 
+ */ + dma_rmb(); + + type =3D le16_to_cpu(nqcmp->type); + if (NQE_CN_TYPE(type) =3D=3D NQ_CN_TYPE_CQ_NOTIFICATION) { + u32 idx =3D le32_to_cpu(nqcmp->cq_handle_low); + u32 cq_type =3D BNGE_NQ_HDL_TYPE(idx); + struct bnge_cp_ring_info *cpr; + + /* No more budget for RX work */ + if (budget && work_done >=3D budget && + cq_type =3D=3D BNGE_NQ_HDL_TYPE_RX) + break; + + idx =3D BNGE_NQ_HDL_IDX(idx); + cpr =3D &nqr->cp_ring_arr[idx]; + cpr->had_nqe_notify =3D 1; + cpr->toggle =3D NQE_CN_TOGGLE(type); + work_done +=3D __bnge_poll_work(bn, cpr, + budget - work_done); + nqr->has_more_work |=3D cpr->has_more_work; + } + raw_cons =3D NEXT_RAW_CMP(raw_cons); + } + + __bnge_poll_cqs_done(bn, bnapi, DBR_TYPE_CQ, budget); + if (raw_cons !=3D nqr->nq_raw_cons) { + nqr->nq_raw_cons =3D raw_cons; + BNGE_DB_NQ(bd, &nqr->nq_db, raw_cons); + } +poll_done: + return work_done; +} diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.h new file mode 100644 index 000000000000..b13081b0eb79 --- /dev/null +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2025 Broadcom */ + +#ifndef _BNGE_TXRX_H_ +#define _BNGE_TXRX_H_ + +#include +#include "bnge_netdev.h" + +#define BNGE_MIN_PKT_SIZE 52 + +#define TX_OPAQUE_IDX_MASK 0x0000ffff +#define TX_OPAQUE_BDS_MASK 0x00ff0000 +#define TX_OPAQUE_BDS_SHIFT 16 +#define TX_OPAQUE_RING_MASK 0xff000000 +#define TX_OPAQUE_RING_SHIFT 24 + +#define SET_TX_OPAQUE(bn, txr, idx, bds) \ + (((txr)->tx_napi_idx << TX_OPAQUE_RING_SHIFT) | \ + ((bds) << TX_OPAQUE_BDS_SHIFT) | ((idx) & (bn)->tx_ring_mask)) + +#define TX_OPAQUE_IDX(opq) ((opq) & TX_OPAQUE_IDX_MASK) +#define TX_OPAQUE_RING(opq) (((opq) & TX_OPAQUE_RING_MASK) >> \ + TX_OPAQUE_RING_SHIFT) +#define TX_OPAQUE_BDS(opq) (((opq) & TX_OPAQUE_BDS_MASK) >> \ + TX_OPAQUE_BDS_SHIFT) +#define TX_OPAQUE_PROD(bn, opq) ((TX_OPAQUE_IDX(opq) + TX_OPAQUE_BDS(opq))= &\ + (bn)->tx_ring_mask) + +/* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1. We need one ext= ra + * BD because the first TX BD is always a long BD. 
+ */ +#define BNGE_MIN_TX_DESC_CNT (MAX_SKB_FRAGS + 2) + +#define RX_RING(bn, x) (((x) & (bn)->rx_ring_mask) >> (BNGE_PAGE_SHIFT - 4= )) +#define RX_AGG_RING(bn, x) (((x) & (bn)->rx_agg_ring_mask) >> \ + (BNGE_PAGE_SHIFT - 4)) +#define RX_IDX(x) ((x) & (RX_DESC_CNT - 1)) + +#define TX_RING(bn, x) (((x) & (bn)->tx_ring_mask) >> (BNGE_PAGE_SHIFT - 4= )) +#define TX_IDX(x) ((x) & (TX_DESC_CNT - 1)) + +#define CP_RING(x) (((x) & ~(CP_DESC_CNT - 1)) >> (BNGE_PAGE_SHIFT - 4)) +#define CP_IDX(x) ((x) & (CP_DESC_CNT - 1)) + +#define TX_CMP_VALID(bn, txcmp, raw_cons) \ + (!!((txcmp)->tx_cmp_errors_v & cpu_to_le32(TX_CMP_V)) =3D=3D \ + !((raw_cons) & (bn)->cp_bit)) + +#define RX_CMP_VALID(rxcmp1, raw_cons) \ + (!!((rxcmp1)->rx_cmp_cfa_code_errors_v2 & cpu_to_le32(RX_CMP_V)) =3D=3D\ + !((raw_cons) & (bn)->cp_bit)) + +#define RX_AGG_CMP_VALID(bn, agg, raw_cons) \ + (!!((agg)->rx_agg_cmp_v & cpu_to_le32(RX_AGG_CMP_V)) =3D=3D \ + !((raw_cons) & (bn)->cp_bit)) + +#define NQ_CMP_VALID(bn, nqcmp, raw_cons) \ + (!!((nqcmp)->v & cpu_to_le32(NQ_CN_V)) =3D=3D !((raw_cons) & (bn)->cp_bit= )) + +#define TX_CMP_TYPE(txcmp) \ + (le32_to_cpu((txcmp)->tx_cmp_flags_type) & CMP_TYPE) + +#define RX_CMP_TYPE(rxcmp) \ + (le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_CMP_TYPE) + +#define RING_RX(bn, idx) ((idx) & (bn)->rx_ring_mask) +#define NEXT_RX(idx) ((idx) + 1) + +#define RING_RX_AGG(bn, idx) ((idx) & (bn)->rx_agg_ring_mask) +#define NEXT_RX_AGG(idx) ((idx) + 1) + +#define RING_TX(bn, idx) ((idx) & (bn)->tx_ring_mask) +#define NEXT_TX(idx) ((idx) + 1) + +#define ADV_RAW_CMP(idx, n) ((idx) + (n)) +#define NEXT_RAW_CMP(idx) ADV_RAW_CMP(idx, 1) +#define RING_CMP(bn, idx) ((idx) & (bn)->cp_ring_mask) + +#define RX_CMP_ITYPES(rxcmp) \ + (le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_FLAGS_ITYPES_MASK) + +#define RX_CMP_CFA_CODE(rxcmpl1) \ + ((le32_to_cpu((rxcmpl1)->rx_cmp_cfa_code_errors_v2) & \ + RX_CMPL_CFA_CODE_MASK) >> RX_CMPL_CFA_CODE_SFT) + +irqreturn_t bnge_msix(int irq, void *dev_instance); +void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *dat= a); +int bnge_napi_poll(struct napi_struct *napi, int budget); +#endif /* _BNGE_TXRX_H_ */ --=20 2.47.3 From nobody Sun Feb 8 05:35:12 2026 Received: from mail-qv1-f98.google.com (mail-qv1-f98.google.com [209.85.219.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 914DE248F6F for ; Mon, 5 Jan 2026 07:22:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.98 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767597765; cv=none; b=D0a8/YNCTsscYWpagU5k51WnBAxR0KOg+ttnk+SBmUs3nMXsfPMQAVwPoIlT1dFjj1nyLkA8U0MSS3TQiccwX1QUizJprdn9j78JiiTLwk2R6zP6eKZ2QAtVN9oxPJHUwnhWrDnzj1fiJDWKnDOJ4XhindOCl+/jlc+aNk49w80= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767597765; c=relaxed/simple; bh=/wME24CEwrWAEU21ThjVbixPTVFQSGmJJfmADqMDWp4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Q/D49q/IQA2fCNGifNYh7PdwEszeP/Xes7wmjrp3dYtaEW4zarQ8z+k8iI7gnibaTGeX/i/gCLb8pcXXZv5SpP3LbaOzozbkpxwGnH7oMADtW9RqzAJrcUgCC8yiLbtA/fvYTjLkZDPGPCcSROtAfG5YrIrQjR5zLPt5epEsD2o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=broadcom.com; spf=fail smtp.mailfrom=broadcom.com; dkim=pass (1024-bit key) header.d=broadcom.com header.i=@broadcom.com header.b=LF4fU+1d; arc=none 
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	Bhargava Marreddy, Rajashekar Hudumula
Subject: [v4, net-next 3/7] bng_en: Handle an HWRM completion request
Date: Mon, 5 Jan 2026 12:51:39 +0530
Message-ID: <20260105072143.19447-4-bhargava.marreddy@broadcom.com>
In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>
References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>

Since the HWRM completion for a sent request lands on the NQ, add
functions to handle HWRM completion events in the poll path: look up the
pending request by its sequence ID and mark its wait token as complete.
A simplified sketch of the sequence-ID lookup is shown below.
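For illustration only (this sketch is not part of the patch), the
program below shows the essence of bnge_hwrm_update_token(): an
HWRM_DONE completion carries the sequence ID of the request it
acknowledges, and the handler walks the list of pending wait tokens to
mark the matching one complete. The real driver walks an RCU-protected
hlist (bd->hwrm_pending_list) and uses WRITE_ONCE(); this stand-alone
version uses a plain singly linked list.

#include <stdint.h>
#include <stdio.h>

enum wait_state { WAITING, COMPLETE };

struct wait_token {
	uint16_t seq_id;		/* sequence ID sent with the request */
	enum wait_state state;
	struct wait_token *next;
};

static int complete_token(struct wait_token *pending, uint16_t seq_id)
{
	struct wait_token *t;

	for (t = pending; t; t = t->next) {
		if (t->seq_id == seq_id) {
			t->state = COMPLETE;	/* the waiter polls this state */
			return 0;
		}
	}
	fprintf(stderr, "Invalid hwrm seq id %u\n", (unsigned int)seq_id);
	return -1;
}

int main(void)
{
	struct wait_token b = { .seq_id = 7, .state = WAITING, .next = NULL };
	struct wait_token a = { .seq_id = 3, .state = WAITING, .next = &b };

	complete_token(&a, 7);
	printf("seq 7 state: %s\n", b.state == COMPLETE ? "complete" : "waiting");
	return 0;
}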
Signed-off-by: Bhargava Marreddy
Reviewed-by: Vikas Gupta
Reviewed-by: Rajashekar Hudumula
---
 .../net/ethernet/broadcom/bnge/bnge_netdev.c |  3 +-
 .../net/ethernet/broadcom/bnge/bnge_netdev.h |  1 +
 .../net/ethernet/broadcom/bnge/bnge_txrx.c   | 44 ++++++++++++++++++-
 3 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 7533c382714e..ad29c489cc88 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -2299,8 +2299,7 @@ static int bnge_open(struct net_device *dev)
 
 static int bnge_shutdown_nic(struct bnge_net *bn)
 {
-	/* TODO: close_path = 0 until we make NAPI functional */
-	bnge_hwrm_resource_free(bn, 0);
+	bnge_hwrm_resource_free(bn, 1);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 04989908b133..b5c3284ee0b6 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -77,6 +77,7 @@ struct tx_cmp {
 	#define CMPL_BASE_TYPE_HWRM_FWD_REQ		0x22UL
 	#define CMPL_BASE_TYPE_HWRM_FWD_RESP		0x24UL
 	#define CMPL_BASE_TYPE_HWRM_ASYNC_EVENT		0x2eUL
+	#define CMPL_BA_TY_HWRM_ASY_EVT	CMPL_BASE_TYPE_HWRM_ASYNC_EVENT
 	#define TX_CMP_FLAGS_ERROR			(1 << 6)
 	#define TX_CMP_FLAGS_PUSH			(1 << 7)
 	u32 tx_cmp_opaque;
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
index db49a92542c0..fb29465f3c72 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
@@ -390,6 +390,43 @@ static void __bnge_poll_work_done(struct bnge_net *bn, struct bnge_napi *bnapi,
 	}
 }
 
+static void
+bnge_hwrm_update_token(struct bnge_dev *bd, u16 seq_id,
+		       enum bnge_hwrm_wait_state state)
+{
+	struct bnge_hwrm_wait_token *token;
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(token, &bd->hwrm_pending_list, node) {
+		if (token->seq_id == seq_id) {
+			WRITE_ONCE(token->state, state);
+			rcu_read_unlock();
+			return;
+		}
+	}
+	rcu_read_unlock();
+	dev_err(bd->dev, "Invalid hwrm seq id %d\n", seq_id);
+}
+
+static int bnge_hwrm_handler(struct bnge_dev *bd, struct tx_cmp *txcmp)
+{
+	struct hwrm_cmpl *h_cmpl = (struct hwrm_cmpl *)txcmp;
+	u16 cmpl_type = TX_CMP_TYPE(txcmp), seq_id;
+
+	switch (cmpl_type) {
+	case CMPL_BASE_TYPE_HWRM_DONE:
+		seq_id = le16_to_cpu(h_cmpl->sequence_id);
+		bnge_hwrm_update_token(bd, seq_id, BNGE_HWRM_COMPLETE);
+		break;
+
+	case CMPL_BASE_TYPE_HWRM_ASYNC_EVENT:
+	default:
+		break;
+	}
+
+	return 0;
+}
+
 static int __bnge_poll_work(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 			    int budget)
 {
@@ -440,8 +477,11 @@ static int __bnge_poll_work(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 				rx_pkts++;
 			else if (rc == -EBUSY)	/* partial completion */
 				break;
+		} else if (unlikely(cmp_type == CMPL_BASE_TYPE_HWRM_DONE ||
+				    cmp_type == CMPL_BASE_TYPE_HWRM_FWD_REQ ||
+				    cmp_type == CMPL_BA_TY_HWRM_ASY_EVT)) {
+			bnge_hwrm_handler(bn->bd, txcmp);
 		}
-
 		raw_cons = NEXT_RAW_CMP(raw_cons);
 
 		if (rx_pkts && rx_pkts == budget) {
@@ -559,6 +599,8 @@ int bnge_napi_poll(struct napi_struct *napi, int budget)
 			work_done += __bnge_poll_work(bn, cpr, budget - work_done);
 			nqr->has_more_work |= cpr->has_more_work;
+		} else {
+			bnge_hwrm_handler(bn->bd, (struct tx_cmp *)nqcmp);
 		}
 		raw_cons = NEXT_RAW_CMP(raw_cons);
 	}
-- 
2.47.3
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
    vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
    Bhargava Marreddy, Rajashekar Hudumula
Subject: [v4, net-next 4/7] bng_en: Add TX support
Date: Mon, 5 Jan 2026 12:51:40 +0530
Message-ID: <20260105072143.19447-5-bhargava.marreddy@broadcom.com>
In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>
References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>

Add functions to support xmit along with TSO/GSO. Also, add functions
to handle TX completion events in the NAPI context.
This commit introduces the fundamental transmit data path Signed-off-by: Bhargava Marreddy Reviewed-by: Vikas Gupta Reviewed-by: Rajashekar Hudumula --- .../net/ethernet/broadcom/bnge/bnge_netdev.c | 100 ++++- .../net/ethernet/broadcom/bnge/bnge_netdev.h | 3 + .../net/ethernet/broadcom/bnge/bnge_txrx.c | 389 +++++++++++++++++- .../net/ethernet/broadcom/bnge/bnge_txrx.h | 34 ++ 4 files changed, 516 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.c index ad29c489cc88..54b487204f17 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c @@ -393,9 +393,60 @@ static void bnge_free_rx_ring_pair_bufs(struct bnge_ne= t *bn) bnge_free_one_rx_ring_pair_bufs(bn, &bn->rx_ring[i]); } =20 +static void bnge_free_tx_skbs(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + u16 max_idx; + int i; + + max_idx =3D bn->tx_nr_pages * TX_DESC_CNT; + for (i =3D 0; i < bd->tx_nr_rings; i++) { + struct bnge_tx_ring_info *txr =3D &bn->tx_ring[i]; + int j; + + if (!txr->tx_buf_ring) + continue; + + for (j =3D 0; j < max_idx;) { + struct bnge_sw_tx_bd *tx_buf =3D &txr->tx_buf_ring[j]; + struct sk_buff *skb; + int k, last; + + skb =3D tx_buf->skb; + if (!skb) { + j++; + continue; + } + + tx_buf->skb =3D NULL; + + dma_unmap_single(bd->dev, + dma_unmap_addr(tx_buf, mapping), + skb_headlen(skb), + DMA_TO_DEVICE); + + last =3D tx_buf->nr_frags; + j +=3D 2; + for (k =3D 0; k < last; k++, j++) { + int ring_idx =3D j & bn->tx_ring_mask; + skb_frag_t *frag =3D &skb_shinfo(skb)->frags[k]; + + tx_buf =3D &txr->tx_buf_ring[ring_idx]; + dma_unmap_page(bd->dev, + dma_unmap_addr(tx_buf, mapping), + skb_frag_size(frag), + DMA_TO_DEVICE); + } + dev_kfree_skb(skb); + } + netdev_tx_reset_queue(netdev_get_tx_queue(bd->netdev, i)); + } +} + static void bnge_free_all_rings_bufs(struct bnge_net *bn) { bnge_free_rx_ring_pair_bufs(bn); + bnge_free_tx_skbs(bn); } =20 static void bnge_free_rx_rings(struct bnge_net *bn) @@ -1825,6 +1876,8 @@ static void bnge_enable_napi(struct bnge_net *bn) for (i =3D 0; i < bd->nq_nr_rings; i++) { struct bnge_napi *bnapi =3D bn->bnapi[i]; =20 + bnapi->tx_fault =3D 0; + napi_enable_locked(&bnapi->napi); } } @@ -2231,6 +2284,42 @@ static int bnge_init_nic(struct bnge_net *bn) return rc; } =20 +static void bnge_tx_disable(struct bnge_net *bn) +{ + struct bnge_tx_ring_info *txr; + int i; + + if (bn->tx_ring) { + for (i =3D 0; i < bn->bd->tx_nr_rings; i++) { + txr =3D &bn->tx_ring[i]; + WRITE_ONCE(txr->dev_state, BNGE_DEV_STATE_CLOSING); + } + } + /* Make sure napi polls see @dev_state change */ + synchronize_net(); + + if (!bn->netdev) + return; + /* Drop carrier first to prevent TX timeout */ + netif_carrier_off(bn->netdev); + /* Stop all TX queues */ + netif_tx_disable(bn->netdev); +} + +static void bnge_tx_enable(struct bnge_net *bn) +{ + struct bnge_tx_ring_info *txr; + int i; + + for (i =3D 0; i < bn->bd->tx_nr_rings; i++) { + txr =3D &bn->tx_ring[i]; + WRITE_ONCE(txr->dev_state, 0); + } + /* Make sure napi polls see @dev_state change */ + synchronize_net(); + netif_tx_wake_all_queues(bn->netdev); +} + static int bnge_open_core(struct bnge_net *bn) { struct bnge_dev *bd =3D bn->bd; @@ -2268,6 +2357,8 @@ static int bnge_open_core(struct bnge_net *bn) set_bit(BNGE_STATE_OPEN, &bd->state); =20 bnge_enable_int(bn); + + bnge_tx_enable(bn); return 0; =20 err_free_irq: @@ -2278,13 +2369,6 @@ static int bnge_open_core(struct bnge_net *bn) return rc; } =20 
-static netdev_tx_t bnge_start_xmit(struct sk_buff *skb, struct net_device = *dev) -{ - dev_kfree_skb_any(skb); - - return NETDEV_TX_OK; -} - static int bnge_open(struct net_device *dev) { struct bnge_net *bn =3D netdev_priv(dev); @@ -2307,6 +2391,8 @@ static void bnge_close_core(struct bnge_net *bn) { struct bnge_dev *bd =3D bn->bd; =20 + bnge_tx_disable(bn); + clear_bit(BNGE_STATE_OPEN, &bd->state); bnge_shutdown_nic(bn); bnge_disable_napi(bn); diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.h index b5c3284ee0b6..fba758cc8b04 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h @@ -243,6 +243,8 @@ struct bnge_net { =20 unsigned long state; #define BNGE_STATE_NAPI_DISABLED 0 + + u32 msg_enable; }; =20 #define BNGE_DEFAULT_RX_RING_SIZE 511 @@ -431,6 +433,7 @@ struct bnge_napi { #define BNGE_TX_EVENT 4 #define BNGE_REDIRECT_EVENT 8 #define BNGE_TX_CMP_EVENT 0x10 + u8 tx_fault:1; }; =20 #define INVALID_STATS_CTX_ID -1 diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.c index fb29465f3c72..c7b89b1635a2 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c @@ -50,6 +50,23 @@ static void bnge_sched_reset_rxr(struct bnge_net *bn, rxr->rx_next_cons =3D 0xffff; } =20 +static void bnge_sched_reset_txr(struct bnge_net *bn, + struct bnge_tx_ring_info *txr, + u16 curr) +{ + struct bnge_napi *bnapi =3D txr->bnapi; + + if (bnapi->tx_fault) + return; + + netdev_err(bn->netdev, "Invalid Tx completion (ring:%d tx_hw_cons:%u cons= :%u prod:%u curr:%u)", + txr->txq_index, txr->tx_hw_cons, + txr->tx_cons, txr->tx_prod, curr); + WARN_ON_ONCE(1); + bnapi->tx_fault =3D 1; + /* TODO: Initiate reset task */ +} + void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *dat= a) { struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf; @@ -379,11 +396,86 @@ static int bnge_force_rx_discard(struct bnge_net *bn, return rc; } =20 +static void __bnge_tx_int(struct bnge_net *bn, struct bnge_tx_ring_info *t= xr, + int budget) +{ + u16 hw_cons =3D txr->tx_hw_cons; + struct bnge_dev *bd =3D bn->bd; + unsigned int tx_bytes =3D 0; + unsigned int tx_pkts =3D 0; + struct netdev_queue *txq; + u16 cons =3D txr->tx_cons; + skb_frag_t *frag; + + txq =3D netdev_get_tx_queue(bn->netdev, txr->txq_index); + + while (RING_TX(bn, cons) !=3D hw_cons) { + struct bnge_sw_tx_bd *tx_buf; + struct sk_buff *skb; + int j, last; + + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, cons)]; + skb =3D tx_buf->skb; + if (unlikely(!skb)) { + bnge_sched_reset_txr(bn, txr, cons); + return; + } + + cons =3D NEXT_TX(cons); + tx_pkts++; + tx_bytes +=3D skb->len; + tx_buf->skb =3D NULL; + + dma_unmap_single(bd->dev, dma_unmap_addr(tx_buf, mapping), + skb_headlen(skb), DMA_TO_DEVICE); + last =3D tx_buf->nr_frags; + + for (j =3D 0; j < last; j++) { + frag =3D &skb_shinfo(skb)->frags[j]; + cons =3D NEXT_TX(cons); + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, cons)]; + netmem_dma_unmap_page_attrs(bd->dev, + dma_unmap_addr(tx_buf, + mapping), + skb_frag_size(frag), + DMA_TO_DEVICE, 0); + } + + cons =3D NEXT_TX(cons); + + napi_consume_skb(skb, budget); + } + + WRITE_ONCE(txr->tx_cons, cons); + + __netif_txq_completed_wake(txq, tx_pkts, tx_bytes, + bnge_tx_avail(bn, txr), bn->tx_wake_thresh, + (READ_ONCE(txr->dev_state) =3D=3D + BNGE_DEV_STATE_CLOSING)); +} + +static void bnge_tx_int(struct bnge_net *bn, struct bnge_napi 
*bnapi, + int budget) +{ + struct bnge_tx_ring_info *txr; + int i; + + bnge_for_each_napi_tx(i, bnapi, txr) { + if (txr->tx_hw_cons !=3D RING_TX(bn, txr->tx_cons)) + __bnge_tx_int(bn, txr, budget); + } + + bnapi->events &=3D ~BNGE_TX_CMP_EVENT; +} + static void __bnge_poll_work_done(struct bnge_net *bn, struct bnge_napi *b= napi, int budget) { struct bnge_rx_ring_info *rxr =3D bnapi->rx_ring; =20 + if ((bnapi->events & BNGE_TX_CMP_EVENT) && !bnapi->tx_fault) + bnge_tx_int(bn, bnapi, budget); + if ((bnapi->events & BNGE_RX_EVENT)) { bnge_db_write(bn->bd, &rxr->rx_db, rxr->rx_prod); bnapi->events &=3D ~BNGE_RX_EVENT; @@ -456,9 +548,26 @@ static int __bnge_poll_work(struct bnge_net *bn, struc= t bnge_cp_ring_info *cpr, cmp_type =3D TX_CMP_TYPE(txcmp); if (cmp_type =3D=3D CMP_TYPE_TX_L2_CMP || cmp_type =3D=3D CMP_TYPE_TX_L2_COAL_CMP) { - /* - * Tx Compl Processng - */ + u32 opaque =3D txcmp->tx_cmp_opaque; + struct bnge_tx_ring_info *txr; + u16 tx_freed; + + txr =3D bnapi->tx_ring[TX_OPAQUE_RING(opaque)]; + event |=3D BNGE_TX_CMP_EVENT; + if (cmp_type =3D=3D CMP_TYPE_TX_L2_COAL_CMP) + txr->tx_hw_cons =3D TX_CMP_SQ_CONS_IDX(txcmp); + else + txr->tx_hw_cons =3D TX_OPAQUE_PROD(bn, opaque); + tx_freed =3D ((txr->tx_hw_cons - txr->tx_cons) & + bn->tx_ring_mask); + /* return full budget so NAPI will complete. */ + if (unlikely(tx_freed >=3D bn->tx_wake_thresh)) { + rx_pkts =3D budget; + raw_cons =3D NEXT_RAW_CMP(raw_cons); + if (budget) + cpr->has_more_work =3D 1; + break; + } } else if (cmp_type >=3D CMP_TYPE_RX_L2_CMP && cmp_type <=3D CMP_TYPE_RX_L2_TPA_START_V3_CMP) { if (likely(budget)) @@ -613,3 +722,277 @@ int bnge_napi_poll(struct napi_struct *napi, int budg= et) poll_done: return work_done; } + +static u16 bnge_xmit_get_cfa_action(struct sk_buff *skb) +{ + struct metadata_dst *md_dst =3D skb_metadata_dst(skb); + + if (!md_dst || md_dst->type !=3D METADATA_HW_PORT_MUX) + return 0; + + return md_dst->u.port_info.port_id; +} + +static const u16 bnge_lhint_arr[] =3D { + TX_BD_FLAGS_LHINT_512_AND_SMALLER, + TX_BD_FLAGS_LHINT_512_TO_1023, + TX_BD_FLAGS_LHINT_1024_TO_2047, + TX_BD_FLAGS_LHINT_1024_TO_2047, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, + TX_BD_FLAGS_LHINT_2048_AND_LARGER, +}; + +static void bnge_txr_db_kick(struct bnge_net *bn, struct bnge_tx_ring_info= *txr, + u16 prod) +{ + /* Sync BD data before updating doorbell */ + wmb(); + bnge_db_write(bn->bd, &txr->tx_db, prod); + txr->kick_pending =3D 0; +} + +netdev_tx_t bnge_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + u32 len, free_size, vlan_tag_flags, cfa_action, flags; + struct bnge_net *bn =3D netdev_priv(dev); + struct bnge_tx_ring_info *txr; + struct bnge_dev *bd =3D bn->bd; + unsigned int length, pad =3D 0; + struct bnge_sw_tx_bd *tx_buf; + struct tx_bd *txbd, *txbd0; + struct netdev_queue *txq; + struct tx_bd_ext *txbd1; + u16 prod, last_frag; + dma_addr_t mapping; + __le32 lflags =3D 0; + skb_frag_t *frag; + int i; + + i =3D skb_get_queue_mapping(skb); + if (unlikely(i >=3D bd->tx_nr_rings)) { + dev_kfree_skb_any(skb); + 
dev_core_stats_tx_dropped_inc(dev); + return NETDEV_TX_OK; + } + + txq =3D netdev_get_tx_queue(dev, i); + txr =3D &bn->tx_ring[bn->tx_ring_map[i]]; + prod =3D txr->tx_prod; + +#if (MAX_SKB_FRAGS > TX_MAX_FRAGS) + if (skb_shinfo(skb)->nr_frags > TX_MAX_FRAGS) { + netdev_warn_once(dev, "SKB has too many (%d) fragments, max supported is= %d. SKB will be linearized.\n", + skb_shinfo(skb)->nr_frags, TX_MAX_FRAGS); + if (skb_linearize(skb)) { + dev_kfree_skb_any(skb); + dev_core_stats_tx_dropped_inc(dev); + return NETDEV_TX_OK; + } + } +#endif + free_size =3D bnge_tx_avail(bn, txr); + if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) { + /* We must have raced with NAPI cleanup */ + if (net_ratelimit() && txr->kick_pending) + netif_warn(bn, tx_err, dev, + "bnge: ring busy w/ flush pending!\n"); + if (!netif_txq_try_stop(txq, bnge_tx_avail(bn, txr), + bn->tx_wake_thresh)) + return NETDEV_TX_BUSY; + } + + if (unlikely(ipv6_hopopt_jumbo_remove(skb))) + goto tx_free; + + length =3D skb->len; + len =3D skb_headlen(skb); + last_frag =3D skb_shinfo(skb)->nr_frags; + + txbd =3D &txr->tx_desc_ring[TX_RING(bn, prod)][TX_IDX(prod)]; + + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, prod)]; + tx_buf->skb =3D skb; + tx_buf->nr_frags =3D last_frag; + + vlan_tag_flags =3D 0; + cfa_action =3D bnge_xmit_get_cfa_action(skb); + if (skb_vlan_tag_present(skb)) { + vlan_tag_flags =3D TX_BD_CFA_META_KEY_VLAN | + skb_vlan_tag_get(skb); + /* Currently supports 8021Q, 8021AD vlan offloads + * QINQ1, QINQ2, QINQ3 vlan headers are deprecated + */ + if (skb->vlan_proto =3D=3D htons(ETH_P_8021Q)) + vlan_tag_flags |=3D 1 << TX_BD_CFA_META_TPID_SHIFT; + } + + if (unlikely(skb->no_fcs)) + lflags |=3D cpu_to_le32(TX_BD_FLAGS_NO_CRC); + + if (length < BNGE_MIN_PKT_SIZE) { + pad =3D BNGE_MIN_PKT_SIZE - length; + if (skb_pad(skb, pad)) + /* SKB already freed. 
*/ + goto tx_kick_pending; + length =3D BNGE_MIN_PKT_SIZE; + } + + mapping =3D dma_map_single(bd->dev, skb->data, len, DMA_TO_DEVICE); + + if (unlikely(dma_mapping_error(bd->dev, mapping))) + goto tx_free; + + dma_unmap_addr_set(tx_buf, mapping, mapping); + flags =3D (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD | + TX_BD_CNT(last_frag + 2); + + txbd->tx_bd_haddr =3D cpu_to_le64(mapping); + txbd->tx_bd_opaque =3D SET_TX_OPAQUE(bn, txr, prod, 2 + last_frag); + + prod =3D NEXT_TX(prod); + txbd1 =3D (struct tx_bd_ext *) + &txr->tx_desc_ring[TX_RING(bn, prod)][TX_IDX(prod)]; + + txbd1->tx_bd_hsize_lflags =3D lflags; + if (skb_is_gso(skb)) { + bool udp_gso =3D !!(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4); + u32 hdr_len; + + if (skb->encapsulation) { + if (udp_gso) + hdr_len =3D skb_inner_transport_offset(skb) + + sizeof(struct udphdr); + else + hdr_len =3D skb_inner_tcp_all_headers(skb); + } else if (udp_gso) { + hdr_len =3D skb_transport_offset(skb) + + sizeof(struct udphdr); + } else { + hdr_len =3D skb_tcp_all_headers(skb); + } + + txbd1->tx_bd_hsize_lflags |=3D cpu_to_le32(TX_BD_FLAGS_LSO | + TX_BD_FLAGS_T_IPID | + (hdr_len << (TX_BD_HSIZE_SHIFT - 1))); + length =3D skb_shinfo(skb)->gso_size; + txbd1->tx_bd_mss =3D cpu_to_le32(length); + length +=3D hdr_len; + } else if (skb->ip_summed =3D=3D CHECKSUM_PARTIAL) { + txbd1->tx_bd_hsize_lflags |=3D + cpu_to_le32(TX_BD_FLAGS_TCP_UDP_CHKSUM); + txbd1->tx_bd_mss =3D 0; + } + + length >>=3D 9; + if (unlikely(length >=3D ARRAY_SIZE(bnge_lhint_arr))) { + dev_warn_ratelimited(bd->dev, "Dropped oversize %d bytes TX packet.\n", + skb->len); + i =3D 0; + goto tx_dma_error; + } + flags |=3D bnge_lhint_arr[length]; + txbd->tx_bd_len_flags_type =3D cpu_to_le32(flags); + + txbd1->tx_bd_cfa_meta =3D cpu_to_le32(vlan_tag_flags); + txbd1->tx_bd_cfa_action =3D + cpu_to_le32(cfa_action << TX_BD_CFA_ACTION_SHIFT); + txbd0 =3D txbd; + for (i =3D 0; i < last_frag; i++) { + frag =3D &skb_shinfo(skb)->frags[i]; + + prod =3D NEXT_TX(prod); + txbd =3D &txr->tx_desc_ring[TX_RING(bn, prod)][TX_IDX(prod)]; + + len =3D skb_frag_size(frag); + mapping =3D skb_frag_dma_map(bd->dev, frag, 0, len, + DMA_TO_DEVICE); + + if (unlikely(dma_mapping_error(bd->dev, mapping))) + goto tx_dma_error; + + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, prod)]; + netmem_dma_unmap_addr_set(skb_frag_netmem(frag), tx_buf, + mapping, mapping); + + txbd->tx_bd_haddr =3D cpu_to_le64(mapping); + + flags =3D len << TX_BD_LEN_SHIFT; + txbd->tx_bd_len_flags_type =3D cpu_to_le32(flags); + } + + flags &=3D ~TX_BD_LEN; + txbd->tx_bd_len_flags_type =3D + cpu_to_le32(((len + pad) << TX_BD_LEN_SHIFT) | flags | + TX_BD_FLAGS_PACKET_END); + + netdev_tx_sent_queue(txq, skb->len); + + prod =3D NEXT_TX(prod); + WRITE_ONCE(txr->tx_prod, prod); + + if (!netdev_xmit_more() || netif_xmit_stopped(txq)) { + bnge_txr_db_kick(bn, txr, prod); + } else { + if (free_size >=3D bn->tx_wake_thresh) + txbd0->tx_bd_len_flags_type |=3D + cpu_to_le32(TX_BD_FLAGS_NO_CMPL); + txr->kick_pending =3D 1; + } + + if (unlikely(bnge_tx_avail(bn, txr) <=3D MAX_SKB_FRAGS + 1)) { + if (netdev_xmit_more()) { + txbd0->tx_bd_len_flags_type &=3D + cpu_to_le32(~TX_BD_FLAGS_NO_CMPL); + bnge_txr_db_kick(bn, txr, prod); + } + + netif_txq_try_stop(txq, bnge_tx_avail(bn, txr), + bn->tx_wake_thresh); + } + return NETDEV_TX_OK; + +tx_dma_error: + last_frag =3D i; + + /* start back at beginning and unmap skb */ + prod =3D txr->tx_prod; + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, prod)]; + dma_unmap_single(bd->dev, dma_unmap_addr(tx_buf, mapping), + 
skb_headlen(skb), DMA_TO_DEVICE); + prod =3D NEXT_TX(prod); + + /* unmap remaining mapped pages */ + for (i =3D 0; i < last_frag; i++) { + prod =3D NEXT_TX(prod); + tx_buf =3D &txr->tx_buf_ring[RING_TX(bn, prod)]; + frag =3D &skb_shinfo(skb)->frags[i]; + netmem_dma_unmap_page_attrs(bd->dev, + dma_unmap_addr(tx_buf, mapping), + skb_frag_size(frag), + DMA_TO_DEVICE, 0); + } + +tx_free: + dev_kfree_skb_any(skb); + +tx_kick_pending: + if (txr->kick_pending) + bnge_txr_db_kick(bn, txr, txr->tx_prod); + txr->tx_buf_ring[RING_TX(bn, txr->tx_prod)].skb =3D NULL; + dev_core_stats_tx_dropped_inc(dev); + return NETDEV_TX_OK; +} + diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.h index b13081b0eb79..8cd980875a3b 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h @@ -7,6 +7,34 @@ #include #include "bnge_netdev.h" =20 +static inline u32 bnge_tx_avail(struct bnge_net *bn, + const struct bnge_tx_ring_info *txr) +{ + u32 used =3D READ_ONCE(txr->tx_prod) - READ_ONCE(txr->tx_cons); + + return bn->tx_ring_size - (used & bn->tx_ring_mask); +} + +static inline void bnge_writeq_relaxed(struct bnge_dev *bd, u64 val, + void __iomem *addr) +{ +#if BITS_PER_LONG =3D=3D 32 + spin_lock(&bd->db_lock); + lo_hi_writeq_relaxed(val, addr); + spin_unlock(&bd->db_lock); +#else + writeq_relaxed(val, addr); +#endif +} + +/* For TX and RX ring doorbells with no ordering guarantee*/ +static inline void bnge_db_write_relaxed(struct bnge_net *bn, + struct bnge_db_info *db, u32 idx) +{ + bnge_writeq_relaxed(bn->bd, db->db_key64 | DB_RING_IDX(db, idx), + db->doorbell); +} + #define BNGE_MIN_PKT_SIZE 52 =20 #define TX_OPAQUE_IDX_MASK 0x0000ffff @@ -26,6 +54,11 @@ TX_OPAQUE_BDS_SHIFT) #define TX_OPAQUE_PROD(bn, opq) ((TX_OPAQUE_IDX(opq) + TX_OPAQUE_BDS(opq))= &\ (bn)->tx_ring_mask) +#define TX_BD_CNT(n) (((n) << TX_BD_FLAGS_BD_CNT_SHIFT) & TX_BD_FLAGS_BD_C= NT) + +#define TX_MAX_BD_CNT 32 + +#define TX_MAX_FRAGS (TX_MAX_BD_CNT - 2) =20 /* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1. We need one ext= ra * BD because the first TX BD is always a long BD. 
@@ -85,6 +118,7 @@
 	RX_CMPL_CFA_CODE_MASK) >> RX_CMPL_CFA_CODE_SFT)
 
 irqreturn_t bnge_msix(int irq, void *dev_instance);
+netdev_tx_t bnge_start_xmit(struct sk_buff *skb, struct net_device *dev);
 void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *data);
 int bnge_napi_poll(struct napi_struct *napi, int budget);
 #endif /* _BNGE_TXRX_H_ */
-- 
2.47.3
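Before moving on to the aggregation patch, one note on the TX path just
added: the queue-stop/wake decisions all derive from a single free-space
calculation on a power-of-two ring, where the masked difference of the
16-bit producer and consumer indices gives the number of in-flight
descriptors. The following user-space sketch shows that arithmetic with
made-up ring and fragment sizes; the names and values are assumptions
for illustration only, not the driver's API.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a power-of-two descriptor ring where free space is
 * computed from 16-bit producer/consumer indices, as in the masked
 * "prod - cons" pattern used by the TX path above. */
#define RING_SIZE 256u			/* arbitrary power of two */
#define RING_MASK (RING_SIZE - 1)

static unsigned int tx_avail(uint16_t prod, uint16_t cons)
{
	uint16_t used = (uint16_t)(prod - cons);	/* wraps naturally */

	return RING_SIZE - (used & RING_MASK);
}

int main(void)
{
	uint16_t prod = 10, cons = 10;
	unsigned int nr_frags = 3;

	/* A packet needs one long BD, one extension BD and one BD per frag. */
	unsigned int bds_needed = nr_frags + 2;

	if (tx_avail(prod, cons) < bds_needed)
		printf("ring full, stop the queue\n");

	prod += bds_needed;		/* "send" the packet */
	printf("free descriptors after send: %u\n", tx_avail(prod, cons));
	return 0;
}

The same masked subtraction also explains the completion side: the
number of descriptors freed is the masked distance between the hardware
consumer index and the software consumer index.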
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
    vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
    Bhargava Marreddy, Rajashekar Hudumula
Subject: [v4, net-next 5/7] bng_en: Add support to handle AGG events
Date: Mon, 5 Jan 2026 12:51:41 +0530
Message-ID: <20260105072143.19447-6-bhargava.marreddy@broadcom.com>
In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>
References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com>

Add AGG event handling in the RX path to receive packet data on AGG
rings. This enables Jumbo and HDS functionality.
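As a mental model for the aggregation path before the diff: a frame that
does not fit the head buffer is delivered as one RX completion plus
several AGG completions, each naming a buffer and the number of bytes it
holds, and the receive code attaches each such buffer to the skb as a
page fragment. The sketch below (hypothetical types and sizes, plain
user-space C) illustrates just that length accounting and nothing else.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of head + aggregation-buffer reassembly: each AGG
 * completion contributes one fragment, and the final frame length is
 * the head length plus the sum of the fragment lengths. */
struct agg_cmp {
	uint16_t buf_id;	/* which aggregation buffer holds the data */
	uint16_t len;		/* bytes used in that buffer */
};

static unsigned int frame_len(unsigned int head_len,
			      const struct agg_cmp *aggs, unsigned int n)
{
	unsigned int total = head_len;

	for (unsigned int i = 0; i < n; i++)
		total += aggs[i].len;	/* one skb fragment per AGG buffer */
	return total;
}

int main(void)
{
	/* A 9000-byte jumbo frame: head buffer plus two AGG pages. */
	struct agg_cmp aggs[] = { { .buf_id = 7, .len = 4096 },
				  { .buf_id = 8, .len = 2856 } };

	printf("reassembled frame: %u bytes\n", frame_len(2048, aggs, 2));
	return 0;
}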
Signed-off-by: Bhargava Marreddy Reviewed-by: Vikas Gupta Reviewed-by: Rajashekar Hudumula --- .../net/ethernet/broadcom/bnge/bnge_hw_def.h | 13 ++ .../net/ethernet/broadcom/bnge/bnge_netdev.c | 17 +- .../net/ethernet/broadcom/bnge/bnge_netdev.h | 5 + .../net/ethernet/broadcom/bnge/bnge_txrx.c | 220 +++++++++++++++++- .../net/ethernet/broadcom/bnge/bnge_txrx.h | 1 + 5 files changed, 248 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h b/drivers/net= /ethernet/broadcom/bnge/bnge_hw_def.h index 4da4259095fa..cfc888a7f9ee 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h @@ -4,6 +4,19 @@ #ifndef _BNGE_HW_DEF_H_ #define _BNGE_HW_DEF_H_ =20 +struct rx_agg_cmp { + __le32 rx_agg_cmp_len_flags_type; + #define RX_AGG_CMP_TYPE (0x3f << 0) + #define RX_AGG_CMP_LEN (0xffff << 16) + #define RX_AGG_CMP_LEN_SHIFT 16 + u32 rx_agg_cmp_opaque; + __le32 rx_agg_cmp_v; + #define RX_AGG_CMP_V (1 << 0) + #define RX_AGG_CMP_AGG_ID (0xffff << 16) + #define RX_AGG_CMP_AGG_ID_SHIFT 16 + __le32 rx_agg_cmp_unused; +}; + struct tx_bd_ext { __le32 tx_bd_hsize_lflags; #define TX_BD_FLAGS_TCP_UDP_CHKSUM (1 << 0) diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.c index 54b487204f17..0f2700131237 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c @@ -10,6 +10,9 @@ #include #include #include +#include +#include +#include #include #include #include @@ -979,9 +982,9 @@ static netmem_ref __bnge_alloc_rx_netmem(struct bnge_ne= t *bn, return netmem; } =20 -static u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dma_addr_t *mapping, - struct bnge_rx_ring_info *rxr, - gfp_t gfp) +u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dma_addr_t *mapping, + struct bnge_rx_ring_info *rxr, + gfp_t gfp) { unsigned int offset; struct page *page; @@ -1048,7 +1051,7 @@ static int bnge_alloc_one_rx_ring_bufs(struct bnge_ne= t *bn, return 0; } =20 -static u16 bnge_find_next_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx) +u16 bnge_find_next_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx) { u16 next, max =3D rxr->rx_agg_bmap_size; =20 @@ -1058,9 +1061,9 @@ static u16 bnge_find_next_agg_idx(struct bnge_rx_ring= _info *rxr, u16 idx) return next; } =20 -static int bnge_alloc_rx_netmem(struct bnge_net *bn, - struct bnge_rx_ring_info *rxr, - u16 prod, gfp_t gfp) +int bnge_alloc_rx_netmem(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr, + u16 prod, gfp_t gfp) { struct bnge_sw_rx_agg_bd *rx_agg_buf; u16 sw_prod =3D rxr->rx_sw_agg_prod; diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.h index fba758cc8b04..8451d35d7b7e 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h @@ -514,4 +514,9 @@ u16 bnge_cp_ring_for_tx(struct bnge_tx_ring_info *txr); void bnge_fill_hw_rss_tbl(struct bnge_net *bn, struct bnge_vnic_info *vnic= ); int bnge_alloc_rx_data(struct bnge_net *bn, struct bnge_rx_ring_info *rxr, u16 prod, gfp_t gfp); +u16 bnge_find_next_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx); +u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dma_addr_t *mapping, + struct bnge_rx_ring_info *rxr, gfp_t gfp); +int bnge_alloc_rx_netmem(struct bnge_net *bn, struct bnge_rx_ring_info *rx= r, + u16 prod, gfp_t gfp); #endif /* _BNGE_NETDEV_H_ */ diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c 
b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.c index c7b89b1635a2..fb54a9b14a8d 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -43,6 +44,191 @@ irqreturn_t bnge_msix(int irq, void *dev_instance) return IRQ_HANDLED; } =20 +static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + u16 cp_cons, u16 curr) +{ + struct rx_agg_cmp *agg; + + cp_cons =3D RING_CMP(bn, ADV_RAW_CMP(cp_cons, curr)); + agg =3D (struct rx_agg_cmp *) + &cpr->desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; + return agg; +} + +static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx, + u16 start, u32 agg_bufs) +{ + struct bnge_napi *bnapi =3D cpr->bnapi; + struct bnge_net *bn =3D bnapi->bn; + struct bnge_rx_ring_info *rxr; + u16 prod, sw_prod; + u32 i; + + rxr =3D bnapi->rx_ring; + sw_prod =3D rxr->rx_sw_agg_prod; + prod =3D rxr->rx_agg_prod; + + for (i =3D 0; i < agg_bufs; i++) { + struct bnge_sw_rx_agg_bd *cons_rx_buf, *prod_rx_buf; + struct rx_agg_cmp *agg; + struct rx_bd *prod_bd; + netmem_ref netmem; + u16 cons; + + agg =3D bnge_get_agg(bn, cpr, idx, start + i); + cons =3D agg->rx_agg_cmp_opaque; + __clear_bit(cons, rxr->rx_agg_bmap); + + if (unlikely(test_bit(sw_prod, rxr->rx_agg_bmap))) + sw_prod =3D bnge_find_next_agg_idx(rxr, sw_prod); + + __set_bit(sw_prod, rxr->rx_agg_bmap); + prod_rx_buf =3D &rxr->rx_agg_buf_ring[sw_prod]; + cons_rx_buf =3D &rxr->rx_agg_buf_ring[cons]; + + /* It is possible for sw_prod to be equal to cons, so + * set cons_rx_buf->netmem to 0 first. + */ + netmem =3D cons_rx_buf->netmem; + cons_rx_buf->netmem =3D 0; + prod_rx_buf->netmem =3D netmem; + prod_rx_buf->offset =3D cons_rx_buf->offset; + + prod_rx_buf->mapping =3D cons_rx_buf->mapping; + + prod_bd =3D &rxr->rx_agg_desc_ring[RX_AGG_RING(bn, prod)] + [RX_IDX(prod)]; + + prod_bd->rx_bd_haddr =3D cpu_to_le64(cons_rx_buf->mapping); + prod_bd->rx_bd_opaque =3D sw_prod; + + prod =3D NEXT_RX_AGG(prod); + sw_prod =3D RING_RX_AGG(bn, NEXT_RX_AGG(sw_prod)); + } + rxr->rx_agg_prod =3D prod; + rxr->rx_sw_agg_prod =3D sw_prod; +} + +static int bnge_agg_bufs_valid(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + u8 agg_bufs, u32 *raw_cons) +{ + struct rx_agg_cmp *agg; + u16 last; + + *raw_cons =3D ADV_RAW_CMP(*raw_cons, agg_bufs); + last =3D RING_CMP(bn, *raw_cons); + agg =3D (struct rx_agg_cmp *) + &cpr->desc_ring[CP_RING(last)][CP_IDX(last)]; + return RX_AGG_CMP_VALID(bn, agg, *raw_cons); +} + +static int bnge_discard_rx(struct bnge_net *bn, struct bnge_cp_ring_info *= cpr, + u32 *raw_cons, void *cmp) +{ + u32 tmp_raw_cons =3D *raw_cons; + struct rx_cmp *rxcmp =3D cmp; + u8 cmp_type, agg_bufs =3D 0; + + cmp_type =3D RX_CMP_TYPE(rxcmp); + + if (cmp_type =3D=3D CMP_TYPE_RX_L2_CMP) { + agg_bufs =3D (le32_to_cpu(rxcmp->rx_cmp_misc_v1) & + RX_CMP_AGG_BUFS) >> + RX_CMP_AGG_BUFS_SHIFT; + } + + if (agg_bufs) { + if (!bnge_agg_bufs_valid(bn, cpr, agg_bufs, &tmp_raw_cons)) + return -EBUSY; + } + *raw_cons =3D tmp_raw_cons; + return 0; +} + +static u32 __bnge_rx_agg_netmems(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + u16 idx, u32 agg_bufs, + struct sk_buff *skb) +{ + struct bnge_napi *bnapi =3D cpr->bnapi; + struct skb_shared_info *shinfo; + struct bnge_rx_ring_info *rxr; + u32 i, total_frag_len =3D 0; + u16 prod; + + rxr =3D bnapi->rx_ring; + prod =3D rxr->rx_agg_prod; + shinfo =3D skb_shinfo(skb); + + for (i =3D 0; i < agg_bufs; i++) { 
+ struct bnge_sw_rx_agg_bd *cons_rx_buf; + struct rx_agg_cmp *agg; + u16 cons, frag_len; + netmem_ref netmem; + + agg =3D bnge_get_agg(bn, cpr, idx, i); + cons =3D agg->rx_agg_cmp_opaque; + frag_len =3D (le32_to_cpu(agg->rx_agg_cmp_len_flags_type) & + RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; + + cons_rx_buf =3D &rxr->rx_agg_buf_ring[cons]; + skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem, + cons_rx_buf->offset, + frag_len, BNGE_RX_PAGE_SIZE); + __clear_bit(cons, rxr->rx_agg_bmap); + + /* It is possible for bnge_alloc_rx_netmem() to allocate + * a sw_prod index that equals the cons index, so we + * need to clear the cons entry now. + */ + netmem =3D cons_rx_buf->netmem; + cons_rx_buf->netmem =3D 0; + + if (bnge_alloc_rx_netmem(bn, rxr, prod, GFP_ATOMIC) !=3D 0) { + skb->len -=3D frag_len; + skb->data_len -=3D frag_len; + skb->truesize -=3D BNGE_RX_PAGE_SIZE; + + --shinfo->nr_frags; + cons_rx_buf->netmem =3D netmem; + + /* Update prod since possibly some netmems have been + * allocated already. + */ + rxr->rx_agg_prod =3D prod; + bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i); + return 0; + } + + page_pool_dma_sync_netmem_for_cpu(rxr->page_pool, netmem, 0, + BNGE_RX_PAGE_SIZE); + + total_frag_len +=3D frag_len; + prod =3D NEXT_RX_AGG(prod); + } + rxr->rx_agg_prod =3D prod; + return total_frag_len; +} + +static struct sk_buff *bnge_rx_agg_netmems_skb(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + struct sk_buff *skb, u16 idx, + u32 agg_bufs) +{ + u32 total_frag_len; + + total_frag_len =3D __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs, skb); + if (!total_frag_len) { + skb_mark_for_recycle(skb); + dev_kfree_skb(skb); + return NULL; + } + + return skb; +} + static void bnge_sched_reset_rxr(struct bnge_net *bn, struct bnge_rx_ring_info *rxr) { @@ -233,6 +419,7 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge= _cp_ring_info *cpr, dma_addr_t dma_addr; struct sk_buff *skb; unsigned int len; + u8 agg_bufs; void *data; int rc =3D 0; =20 @@ -261,11 +448,15 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bn= ge_cp_ring_info *cpr, =20 cons =3D rxcmp->rx_cmp_opaque; if (unlikely(cons !=3D rxr->rx_next_cons)) { + int rc1 =3D bnge_discard_rx(bn, cpr, &tmp_raw_cons, rxcmp); + /* 0xffff is forced error, don't print it */ if (rxr->rx_next_cons !=3D 0xffff) netdev_warn(bn->netdev, "RX cons %x !=3D expected cons %x\n", cons, rxr->rx_next_cons); bnge_sched_reset_rxr(bn, rxr); + if (rc1) + return rc1; goto next_rx_no_prod_no_len; } rx_buf =3D &rxr->rx_buf_ring[cons]; @@ -274,11 +465,22 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bn= ge_cp_ring_info *cpr, prefetch(data_ptr); =20 misc =3D le32_to_cpu(rxcmp->rx_cmp_misc_v1); + agg_bufs =3D (misc & RX_CMP_AGG_BUFS) >> RX_CMP_AGG_BUFS_SHIFT; + + if (agg_bufs) { + if (!bnge_agg_bufs_valid(bn, cpr, agg_bufs, &tmp_raw_cons)) + return -EBUSY; + + cp_cons =3D NEXT_CMP(bn, cp_cons); + *event |=3D BNGE_AGG_EVENT; + } *event |=3D BNGE_RX_EVENT; =20 rx_buf->data =3D NULL; if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) { bnge_reuse_rx_data(rxr, cons, data); + if (agg_bufs) + bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs); rc =3D -EIO; goto next_rx_no_len; } @@ -290,8 +492,12 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bng= e_cp_ring_info *cpr, if (len <=3D bn->rx_copybreak) { skb =3D bnge_copy_skb(bnapi, data_ptr, len, dma_addr); bnge_reuse_rx_data(rxr, cons, data); - if (!skb) + if (!skb) { + if (agg_bufs) + bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, + agg_bufs); goto oom_next_rx; + } } else { u32 payload; =20 @@ 
-305,6 +511,13 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 			goto oom_next_rx;
 	}
 
+	if (agg_bufs) {
+		skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, cp_cons,
+					      agg_bufs);
+		if (!skb)
+			goto oom_next_rx;
+	}
+
 	if (RX_CMP_HASH_VALID(rxcmp)) {
 		enum pkt_hash_types type;
 
@@ -480,6 +693,11 @@ static void __bnge_poll_work_done(struct bnge_net *bn, struct bnge_napi *bnapi,
 		bnge_db_write(bn->bd, &rxr->rx_db, rxr->rx_prod);
 		bnapi->events &= ~BNGE_RX_EVENT;
 	}
+
+	if (bnapi->events & BNGE_AGG_EVENT) {
+		bnge_db_write(bn->bd, &rxr->rx_agg_db, rxr->rx_agg_prod);
+		bnapi->events &= ~BNGE_AGG_EVENT;
+	}
 }
 
 static void
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h
index 8cd980875a3b..7de718898181 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.h
@@ -109,6 +109,7 @@ static inline void bnge_db_write_relaxed(struct bnge_net *bn,
 #define ADV_RAW_CMP(idx, n)	((idx) + (n))
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
 #define RING_CMP(bn, idx)	((idx) & (bn)->cp_ring_mask)
+#define NEXT_CMP(bn, idx)	RING_CMP(bn, ADV_RAW_CMP(idx, 1))
 
 #define RX_CMP_ITYPES(rxcmp)					\
 	(le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_FLAGS_ITYPES_MASK)
-- 
2.47.3
From: Bhargava Marreddy
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
michael.chan@broadcom.com, pavan.chebbi@broadcom.com, vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com, Bhargava Marreddy , Rajashekar Hudumula Subject: [v4, net-next 6/7] bng_en: Add TPA related functions Date: Mon, 5 Jan 2026 12:51:42 +0530 Message-ID: <20260105072143.19447-7-bhargava.marreddy@broadcom.com> X-Mailer: git-send-email 2.47.3 In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com> References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-DetectorID-Processed: b00c1d49-9d2e-4205-b15f-d015386d3d5e Content-Type: text/plain; charset="utf-8" Add the functions to handle TPA events in RX path. This helps the next patch enable TPA functionality. Signed-off-by: Bhargava Marreddy Reviewed-by: Vikas Gupta Reviewed-by: Rajashekar Hudumula --- .../net/ethernet/broadcom/bnge/bnge_hw_def.h | 248 ++++++++++++++++++ .../net/ethernet/broadcom/bnge/bnge_netdev.c | 123 +++++++++ .../net/ethernet/broadcom/bnge/bnge_netdev.h | 47 ++++ 3 files changed, 418 insertions(+) diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h b/drivers/net= /ethernet/broadcom/bnge/bnge_hw_def.h index cfc888a7f9ee..a824e0566bef 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_hw_def.h @@ -208,4 +208,252 @@ struct rx_cmp_ext { #define HWRM_RING_ALLOC_AGG 0x4 #define HWRM_RING_ALLOC_CMPL 0x8 #define HWRM_RING_ALLOC_NQ 0x10 + +#define TPA_AGG_AGG_ID(rx_agg) \ + ((le32_to_cpu((rx_agg)->rx_agg_cmp_v) & \ + RX_AGG_CMP_AGG_ID) >> RX_AGG_CMP_AGG_ID_SHIFT) + +struct rx_tpa_start_cmp { + __le32 rx_tpa_start_cmp_len_flags_type; + #define RX_TPA_START_CMP_TYPE (0x3f << 0) + #define RX_TPA_START_CMP_FLAGS (0x3ff << 6) + #define RX_TPA_START_CMP_FLAGS_SHIFT 6 + #define RX_TPA_START_CMP_FLAGS_ERROR (0x1 << 6) + #define RX_TPA_START_CMP_FLAGS_PLACEMENT (0x7 << 7) + #define RX_TPA_START_CMP_FLAGS_PLACEMENT_SHIFT 7 + #define RX_TPA_START_CMP_FLAGS_PLACEMENT_JUMBO (0x1 << 7) + #define RX_TPA_START_CMP_FLAGS_PLACEMENT_HDS (0x2 << 7) + #define RX_TPA_START_CMP_FLAGS_PLACEMENT_GRO_JUMBO (0x5 << 7) + #define RX_TPA_START_CMP_FLAGS_PLACEMENT_GRO_HDS (0x6 << 7) + #define RX_TPA_START_CMP_FLAGS_RSS_VALID (0x1 << 10) + #define RX_TPA_START_CMP_FLAGS_TIMESTAMP (0x1 << 11) + #define RX_TPA_START_CMP_FLAGS_ITYPES (0xf << 12) + #define RX_TPA_START_CMP_FLAGS_ITYPES_SHIFT 12 + #define RX_TPA_START_CMP_FLAGS_ITYPE_TCP (0x2 << 12) + #define RX_TPA_START_CMP_LEN (0xffff << 16) + #define RX_TPA_START_CMP_LEN_SHIFT 16 + + u32 rx_tpa_start_cmp_opaque; + __le32 rx_tpa_start_cmp_misc_v1; + #define RX_TPA_START_CMP_V1 (0x1 << 0) + #define RX_TPA_START_CMP_RSS_HASH_TYPE (0x7f << 9) + #define RX_TPA_START_CMP_RSS_HASH_TYPE_SHIFT 9 + #define RX_TPA_START_CMP_V3_RSS_HASH_TYPE (0x1ff << 7) + #define RX_TPA_START_CMP_V3_RSS_HASH_TYPE_SHIFT 7 + #define RX_TPA_START_CMP_AGG_ID (0x7f << 25) + #define RX_TPA_START_CMP_AGG_ID_SHIFT 25 + #define RX_TPA_START_CMP_AGG_ID_P5 (0xffff << 16) + #define RX_TPA_START_CMP_AGG_ID_SHIFT_P5 16 + #define RX_TPA_START_CMP_METADATA1 (0xf << 28) + #define RX_TPA_START_CMP_METADATA1_SHIFT 28 + #define RX_TPA_START_METADATA1_TPID_SEL (0x7 << 28) + #define RX_TPA_START_METADATA1_TPID_8021Q (0x1 << 28) + #define RX_TPA_START_METADATA1_TPID_8021AD (0x0 << 28) + #define RX_TPA_START_METADATA1_VALID (0x8 << 28) + + __le32 rx_tpa_start_cmp_rss_hash; +}; + +#define 
TPA_START_HASH_VALID(rx_tpa_start) \ + ((rx_tpa_start)->rx_tpa_start_cmp_len_flags_type & \ + cpu_to_le32(RX_TPA_START_CMP_FLAGS_RSS_VALID)) + +#define TPA_START_HASH_TYPE(rx_tpa_start) \ + (((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) & \ + RX_TPA_START_CMP_RSS_HASH_TYPE) >> \ + RX_TPA_START_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK) + +#define TPA_START_V3_HASH_TYPE(rx_tpa_start) \ + (((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) & \ + RX_TPA_START_CMP_V3_RSS_HASH_TYPE) >> \ + RX_TPA_START_CMP_V3_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK) + +#define TPA_START_AGG_ID(rx_tpa_start) \ + ((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) & \ + RX_TPA_START_CMP_AGG_ID_P5) >> RX_TPA_START_CMP_AGG_ID_SHIFT_P5) + +#define TPA_START_ERROR(rx_tpa_start) \ + ((rx_tpa_start)->rx_tpa_start_cmp_len_flags_type & \ + cpu_to_le32(RX_TPA_START_CMP_FLAGS_ERROR)) + +#define TPA_START_VLAN_VALID(rx_tpa_start) \ + ((rx_tpa_start)->rx_tpa_start_cmp_misc_v1 & \ + cpu_to_le32(RX_TPA_START_METADATA1_VALID)) + +#define TPA_START_VLAN_TPID_SEL(rx_tpa_start) \ + (le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) & \ + RX_TPA_START_METADATA1_TPID_SEL) + +struct rx_tpa_start_cmp_ext { + __le32 rx_tpa_start_cmp_flags2; + #define RX_TPA_START_CMP_FLAGS2_IP_CS_CALC (0x1 << 0) + #define RX_TPA_START_CMP_FLAGS2_L4_CS_CALC (0x1 << 1) + #define RX_TPA_START_CMP_FLAGS2_T_IP_CS_CALC (0x1 << 2) + #define RX_TPA_START_CMP_FLAGS2_T_L4_CS_CALC (0x1 << 3) + #define RX_TPA_START_CMP_FLAGS2_IP_TYPE (0x1 << 8) + #define RX_TPA_START_CMP_FLAGS2_CSUM_CMPL_VALID (0x1 << 9) + #define RX_TPA_START_CMP_FLAGS2_EXT_META_FORMAT (0x3 << 10) + #define RX_TPA_START_CMP_FLAGS2_EXT_META_FORMAT_SHIFT 10 + #define RX_TPA_START_CMP_V3_FLAGS2_T_IP_TYPE (0x1 << 10) + #define RX_TPA_START_CMP_V3_FLAGS2_AGG_GRO (0x1 << 11) + #define RX_TPA_START_CMP_FLAGS2_CSUM_CMPL (0xffff << 16) + #define RX_TPA_START_CMP_FLAGS2_CSUM_CMPL_SHIFT 16 + + __le32 rx_tpa_start_cmp_metadata; + __le32 rx_tpa_start_cmp_cfa_code_v2; + #define RX_TPA_START_CMP_V2 (0x1 << 0) + #define RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_MASK (0x7 << 1) + #define RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_SHIFT 1 + #define RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_NO_BUFFER (0x0 << 1) + #define RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_BAD_FORMAT (0x3 << 1) + #define RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_FLUSH (0x5 << 1) + #define RX_TPA_START_CMP_CFA_CODE (0xffff << 16) + #define RX_TPA_START_CMPL_CFA_CODE_SHIFT 16 + #define RX_TPA_START_CMP_METADATA0_TCI_MASK (0xffff << 16) + #define RX_TPA_START_CMP_METADATA0_VID_MASK (0x0fff << 16) + #define RX_TPA_START_CMP_METADATA0_SFT 16 + __le32 rx_tpa_start_cmp_hdr_info; +}; + +#define TPA_START_CFA_CODE(rx_tpa_start) \ + ((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_cfa_code_v2) & \ + RX_TPA_START_CMP_CFA_CODE) >> RX_TPA_START_CMPL_CFA_CODE_SHIFT) + +#define TPA_START_IS_IPV6(rx_tpa_start) \ + (!!((rx_tpa_start)->rx_tpa_start_cmp_flags2 & \ + cpu_to_le32(RX_TPA_START_CMP_FLAGS2_IP_TYPE))) + +#define TPA_START_ERROR_CODE(rx_tpa_start) \ + ((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_cfa_code_v2) & \ + RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_MASK) >> \ + RX_TPA_START_CMP_ERRORS_BUFFER_ERROR_SHIFT) + +#define TPA_START_METADATA0_TCI(rx_tpa_start) \ + ((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_cfa_code_v2) & \ + RX_TPA_START_CMP_METADATA0_TCI_MASK) >> \ + RX_TPA_START_CMP_METADATA0_SFT) + +struct rx_tpa_end_cmp { + __le32 rx_tpa_end_cmp_len_flags_type; + #define RX_TPA_END_CMP_TYPE (0x3f << 0) + #define RX_TPA_END_CMP_FLAGS (0x3ff << 
6) + #define RX_TPA_END_CMP_FLAGS_SHIFT 6 + #define RX_TPA_END_CMP_FLAGS_PLACEMENT (0x7 << 7) + #define RX_TPA_END_CMP_FLAGS_PLACEMENT_SHIFT 7 + #define RX_TPA_END_CMP_FLAGS_PLACEMENT_JUMBO (0x1 << 7) + #define RX_TPA_END_CMP_FLAGS_PLACEMENT_HDS (0x2 << 7) + #define RX_TPA_END_CMP_FLAGS_PLACEMENT_GRO_JUMBO (0x5 << 7) + #define RX_TPA_END_CMP_FLAGS_PLACEMENT_GRO_HDS (0x6 << 7) + #define RX_TPA_END_CMP_FLAGS_RSS_VALID (0x1 << 10) + #define RX_TPA_END_CMP_FLAGS_ITYPES (0xf << 12) + #define RX_TPA_END_CMP_FLAGS_ITYPES_SHIFT 12 + #define RX_TPA_END_CMP_FLAGS_ITYPE_TCP (0x2 << 12) + #define RX_TPA_END_CMP_LEN (0xffff << 16) + #define RX_TPA_END_CMP_LEN_SHIFT 16 + + u32 rx_tpa_end_cmp_opaque; + __le32 rx_tpa_end_cmp_misc_v1; + #define RX_TPA_END_CMP_V1 (0x1 << 0) + #define RX_TPA_END_CMP_AGG_BUFS (0x3f << 1) + #define RX_TPA_END_CMP_AGG_BUFS_SHIFT 1 + #define RX_TPA_END_CMP_TPA_SEGS (0xff << 8) + #define RX_TPA_END_CMP_TPA_SEGS_SHIFT 8 + #define RX_TPA_END_CMP_PAYLOAD_OFFSET (0xff << 16) + #define RX_TPA_END_CMP_PAYLOAD_OFFSET_SHIFT 16 + #define RX_TPA_END_CMP_AGG_ID (0xffff << 16) + #define RX_TPA_END_CMP_AGG_ID_SHIFT 16 + + __le32 rx_tpa_end_cmp_tsdelta; + #define RX_TPA_END_GRO_TS (0x1 << 31) +}; + +#define TPA_END_AGG_ID(rx_tpa_end) \ + ((le32_to_cpu((rx_tpa_end)->rx_tpa_end_cmp_misc_v1) & \ + RX_TPA_END_CMP_AGG_ID) >> RX_TPA_END_CMP_AGG_ID_SHIFT) + +#define TPA_END_TPA_SEGS(rx_tpa_end) \ + ((le32_to_cpu((rx_tpa_end)->rx_tpa_end_cmp_misc_v1) & \ + RX_TPA_END_CMP_TPA_SEGS) >> RX_TPA_END_CMP_TPA_SEGS_SHIFT) + +#define RX_TPA_END_CMP_FLAGS_PLACEMENT_ANY_GRO \ + cpu_to_le32(RX_TPA_END_CMP_FLAGS_PLACEMENT_GRO_JUMBO & \ + RX_TPA_END_CMP_FLAGS_PLACEMENT_GRO_HDS) + +#define TPA_END_GRO(rx_tpa_end) \ + ((rx_tpa_end)->rx_tpa_end_cmp_len_flags_type & \ + RX_TPA_END_CMP_FLAGS_PLACEMENT_ANY_GRO) + +#define TPA_END_GRO_TS(rx_tpa_end) \ + (!!((rx_tpa_end)->rx_tpa_end_cmp_tsdelta & \ + cpu_to_le32(RX_TPA_END_GRO_TS))) + +struct rx_tpa_end_cmp_ext { + __le32 rx_tpa_end_cmp_dup_acks; + #define RX_TPA_END_CMP_TPA_DUP_ACKS (0xf << 0) + #define RX_TPA_END_CMP_PAYLOAD_OFFSET_P5 (0xff << 16) + #define RX_TPA_END_CMP_PAYLOAD_OFFSET_SHIFT_P5 16 + #define RX_TPA_END_CMP_AGG_BUFS_P5 (0xff << 24) + #define RX_TPA_END_CMP_AGG_BUFS_SHIFT_P5 24 + + __le32 rx_tpa_end_cmp_seg_len; + #define RX_TPA_END_CMP_TPA_SEG_LEN (0xffff << 0) + + __le32 rx_tpa_end_cmp_errors_v2; + #define RX_TPA_END_CMP_V2 (0x1 << 0) + #define RX_TPA_END_CMP_ERRORS (0x3 << 1) + #define RX_TPA_END_CMP_ERRORS_P5 (0x7 << 1) + #define RX_TPA_END_CMPL_ERRORS_SHIFT 1 + #define RX_TPA_END_CMP_ERRORS_BUFFER_ERROR_NO_BUFFER (0x0 << 1) + #define RX_TPA_END_CMP_ERRORS_BUFFER_ERROR_NOT_ON_CHIP (0x2 << 1) + #define RX_TPA_END_CMP_ERRORS_BUFFER_ERROR_BAD_FORMAT (0x3 << 1) + #define RX_TPA_END_CMP_ERRORS_BUFFER_ERROR_RSV_ERROR (0x4 << 1) + #define RX_TPA_END_CMP_ERRORS_BUFFER_ERROR_FLUSH (0x5 << 1) + + u32 rx_tpa_end_cmp_start_opaque; +}; + +#define TPA_END_ERRORS(rx_tpa_end_ext) \ + ((rx_tpa_end_ext)->rx_tpa_end_cmp_errors_v2 & \ + cpu_to_le32(RX_TPA_END_CMP_ERRORS)) + +#define TPA_END_PAYLOAD_OFF(rx_tpa_end_ext) \ + ((le32_to_cpu((rx_tpa_end_ext)->rx_tpa_end_cmp_dup_acks) & \ + RX_TPA_END_CMP_PAYLOAD_OFFSET_P5) >> \ + RX_TPA_END_CMP_PAYLOAD_OFFSET_SHIFT_P5) + +#define TPA_END_AGG_BUFS(rx_tpa_end_ext) \ + ((le32_to_cpu((rx_tpa_end_ext)->rx_tpa_end_cmp_dup_acks) & \ + RX_TPA_END_CMP_AGG_BUFS_P5) >> RX_TPA_END_CMP_AGG_BUFS_SHIFT_P5) + +#define EVENT_DATA1_RESET_NOTIFY_FATAL(data1) \ + (((data1) & \ + ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_MASK) =3D=3D\ + 
ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_FATAL) + +#define EVENT_DATA1_RESET_NOTIFY_FW_ACTIVATION(data1) \ + (((data1) & \ + ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_MASK) =3D=3D\ + ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_ACTIVATION) + +#define EVENT_DATA2_RESET_NOTIFY_FW_STATUS_CODE(data2) \ + ((data2) & \ + ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA2_FW_STATUS_CODE_MASK) + +#define EVENT_DATA1_RECOVERY_MASTER_FUNC(data1) \ + (!!((data1) & \ + ASYNC_EVENT_CMPL_ERROR_RECOVERY_EVENT_DATA1_FLAGS_MASTER_FUNC)) + +#define EVENT_DATA1_RECOVERY_ENABLED(data1) \ + (!!((data1) & \ + ASYNC_EVENT_CMPL_ERROR_RECOVERY_EVENT_DATA1_FLAGS_RECOVERY_ENABLED)) + +#define BNGE_EVENT_ERROR_REPORT_TYPE(data1) \ + (((data1) & \ + ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_MASK) >>\ + ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_SFT) + +#define BNGE_EVENT_INVALID_SIGNAL_DATA(data2) \ + (((data2) & \ + ASYNC_EVENT_CMPL_ERROR_REPORT_INVALID_SIGNAL_EVENT_DATA2_PIN_ID_MASK) >= >\ + ASYNC_EVENT_CMPL_ERROR_REPORT_INVALID_SIGNAL_EVENT_DATA2_PIN_ID_SFT) #endif /* _BNGE_HW_DEF_H_ */ diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.c index 0f2700131237..16b062d7688a 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c @@ -377,11 +377,37 @@ static void bnge_free_one_agg_ring_bufs(struct bnge_n= et *bn, } } =20 +static void bnge_free_one_tpa_info_data(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr) +{ + int i; + + for (i =3D 0; i < bn->max_tpa; i++) { + struct bnge_tpa_info *tpa_info =3D &rxr->rx_tpa[i]; + u8 *data =3D tpa_info->data; + + if (!data) + continue; + + tpa_info->data =3D NULL; + page_pool_free_va(rxr->head_pool, data, false); + } +} + static void bnge_free_one_rx_ring_pair_bufs(struct bnge_net *bn, struct bnge_rx_ring_info *rxr) { + struct bnge_tpa_idx_map *map; + + if (rxr->rx_tpa) + bnge_free_one_tpa_info_data(bn, rxr); + bnge_free_one_rx_ring_bufs(bn, rxr); bnge_free_one_agg_ring_bufs(bn, rxr); + + map =3D rxr->rx_tpa_idx_map; + if (map) + memset(map->agg_idx_bmap, 0, sizeof(map->agg_idx_bmap)); } =20 static void bnge_free_rx_ring_pair_bufs(struct bnge_net *bn) @@ -452,11 +478,70 @@ static void bnge_free_all_rings_bufs(struct bnge_net = *bn) bnge_free_tx_skbs(bn); } =20 +static void bnge_free_tpa_info(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i, j; + + for (i =3D 0; i < bd->rx_nr_rings; i++) { + struct bnge_rx_ring_info *rxr =3D &bn->rx_ring[i]; + + kfree(rxr->rx_tpa_idx_map); + rxr->rx_tpa_idx_map =3D NULL; + if (rxr->rx_tpa) { + for (j =3D 0; j < bn->max_tpa; j++) { + kfree(rxr->rx_tpa[j].agg_arr); + rxr->rx_tpa[j].agg_arr =3D NULL; + } + } + kfree(rxr->rx_tpa); + rxr->rx_tpa =3D NULL; + } +} + +static int bnge_alloc_tpa_info(struct bnge_net *bn) +{ + struct bnge_dev *bd =3D bn->bd; + int i, j; + + if (!bd->max_tpa_v2) + return 0; + + bn->max_tpa =3D max_t(u16, bd->max_tpa_v2, MAX_TPA); + for (i =3D 0; i < bd->rx_nr_rings; i++) { + struct bnge_rx_ring_info *rxr =3D &bn->rx_ring[i]; + + rxr->rx_tpa =3D kcalloc(bn->max_tpa, sizeof(struct bnge_tpa_info), + GFP_KERNEL); + if (!rxr->rx_tpa) + goto err_free_tpa_info; + + for (j =3D 0; j < bn->max_tpa; j++) { + struct rx_agg_cmp *agg; + + agg =3D kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL); + if (!agg) + goto err_free_tpa_info; + rxr->rx_tpa[j].agg_arr =3D agg; + } + rxr->rx_tpa_idx_map =3D 
kzalloc(sizeof(*rxr->rx_tpa_idx_map), + GFP_KERNEL); + if (!rxr->rx_tpa_idx_map) + goto err_free_tpa_info; + } + return 0; + +err_free_tpa_info: + bnge_free_tpa_info(bn); + return -ENOMEM; +} + static void bnge_free_rx_rings(struct bnge_net *bn) { struct bnge_dev *bd =3D bn->bd; int i; =20 + bnge_free_tpa_info(bn); for (i =3D 0; i < bd->rx_nr_rings; i++) { struct bnge_rx_ring_info *rxr =3D &bn->rx_ring[i]; struct bnge_ring_struct *ring; @@ -581,6 +666,12 @@ static int bnge_alloc_rx_rings(struct bnge_net *bn) goto err_free_rx_rings; } } + + if (bn->priv_flags & BNGE_NET_EN_TPA) { + rc =3D bnge_alloc_tpa_info(bn); + if (rc) + goto err_free_rx_rings; + } return rc; =20 err_free_rx_rings: @@ -1126,6 +1217,29 @@ static int bnge_alloc_one_agg_ring_bufs(struct bnge_= net *bn, return -ENOMEM; } =20 +static int bnge_alloc_one_tpa_info_data(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr) +{ + dma_addr_t mapping; + u8 *data; + int i; + + for (i =3D 0; i < bn->max_tpa; i++) { + data =3D __bnge_alloc_rx_frag(bn, &mapping, rxr, + GFP_KERNEL); + if (!data) + goto err_free_tpa_info_data; + + rxr->rx_tpa[i].data =3D data; + rxr->rx_tpa[i].data_ptr =3D data + bn->rx_offset; + rxr->rx_tpa[i].mapping =3D mapping; + } + return 0; +err_free_tpa_info_data: + bnge_free_one_tpa_info_data(bn, rxr); + return -ENOMEM; +} + static int bnge_alloc_one_rx_ring_pair_bufs(struct bnge_net *bn, int ring_= nr) { struct bnge_rx_ring_info *rxr =3D &bn->rx_ring[ring_nr]; @@ -1140,8 +1254,17 @@ static int bnge_alloc_one_rx_ring_pair_bufs(struct b= nge_net *bn, int ring_nr) if (rc) goto err_free_one_rx_ring_bufs; } + + if (rxr->rx_tpa) { + rc =3D bnge_alloc_one_tpa_info_data(bn, rxr); + if (rc) + goto err_free_one_agg_ring_bufs; + } + return 0; =20 +err_free_one_agg_ring_bufs: + bnge_free_one_agg_ring_bufs(bn, rxr); err_free_one_rx_ring_bufs: bnge_free_one_rx_ring_bufs(bn, rxr); return rc; diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.h index 8451d35d7b7e..335785041369 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h @@ -153,6 +153,46 @@ enum { =20 #define BNGE_NET_EN_TPA (BNGE_NET_EN_GRO | BNGE_NET_EN_LRO) =20 +#define BNGE_NO_FW_ACCESS(bd) (pci_channel_offline((bd)->pdev)) + +#define MAX_TPA 256 +#define MAX_TPA_MASK (MAX_TPA - 1) +#define MAX_TPA_SEGS 0x3f + +#define BNGE_AGG_IDX_BMAP_SIZE (MAX_TPA / BITS_PER_LONG) +struct bnge_tpa_idx_map { + u16 agg_id_tbl[1024]; + unsigned long agg_idx_bmap[BNGE_AGG_IDX_BMAP_SIZE]; +}; + +struct bnge_tpa_info { + void *data; + u8 *data_ptr; + dma_addr_t mapping; + u16 len; + unsigned short gso_type; + u32 flags2; + u32 metadata; + enum pkt_hash_types hash_type; + u32 rss_hash; + u32 hdr_info; + +#define BNGE_TPA_INNER_L3_OFF(hdr_info) \ + (((hdr_info) >> 18) & 0x1ff) + +#define BNGE_TPA_INNER_L2_OFF(hdr_info) \ + (((hdr_info) >> 9) & 0x1ff) + +#define BNGE_TPA_OUTER_L3_OFF(hdr_info) \ + ((hdr_info) & 0x1ff) + + u16 cfa_code; /* cfa_code in TPA start compl */ + u8 agg_count; + u8 vlan_valid:1; + u8 cfa_code_valid:1; + struct rx_agg_cmp *agg_arr; +}; + /* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1. We need one extra * BD because the first TX BD is always a long BD. 
*/ @@ -245,6 +285,10 @@ struct bnge_net { #define BNGE_STATE_NAPI_DISABLED 0 u32 msg_enable; + u16 max_tpa; + __be16 vxlan_port; + __be16 nge_port; + __be16 vxlan_gpe_port; }; #define BNGE_DEFAULT_RX_RING_SIZE 511 @@ -390,6 +434,9 @@ struct bnge_rx_ring_info { dma_addr_t rx_desc_mapping[MAX_RX_PAGES]; dma_addr_t rx_agg_desc_mapping[MAX_RX_AGG_PAGES]; + struct bnge_tpa_info *rx_tpa; + struct bnge_tpa_idx_map *rx_tpa_idx_map; + struct bnge_ring_struct rx_ring_struct; struct bnge_ring_struct rx_agg_ring_struct; struct page_pool *page_pool; -- 2.47.3 From nobody Sun Feb 8 05:35:12 2026 From: Bhargava Marreddy To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, michael.chan@broadcom.com, pavan.chebbi@broadcom.com, vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com, Bhargava Marreddy , Rajashekar Hudumula Subject: [v4, net-next 7/7] bng_en: Add support for TPA events Date: Mon, 5 Jan 2026 12:51:43 +0530 Message-ID: <20260105072143.19447-8-bhargava.marreddy@broadcom.com> X-Mailer: git-send-email 2.47.3 In-Reply-To: <20260105072143.19447-1-bhargava.marreddy@broadcom.com> References: <20260105072143.19447-1-bhargava.marreddy@broadcom.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8" Enable TPA functionality in the VNIC and add functions to handle TPA events, which help in processing LRO/GRO. Signed-off-by: Bhargava Marreddy Reviewed-by: Vikas Gupta Reviewed-by: Rajashekar Hudumula --- .../ethernet/broadcom/bnge/bnge_hwrm_lib.c | 65 +++ .../ethernet/broadcom/bnge/bnge_hwrm_lib.h | 2 + .../net/ethernet/broadcom/bnge/bnge_netdev.c | 27 ++ .../net/ethernet/broadcom/bnge/bnge_netdev.h | 3 +- .../net/ethernet/broadcom/bnge/bnge_txrx.c | 435 +++++++++++++++++- 5 files changed, 520 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c index 2994f10446a6..34a7fed92cc0 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c @@ -1183,3 +1183,68 @@ int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx) req->async_event_cr = cpu_to_le16(idx); return bnge_hwrm_req_send(bd, req); } + +#define BNGE_DFLT_TUNL_TPA_BMAP \ + (VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GRE | \ + VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV4 | \ + VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV6) + +static void bnge_hwrm_vnic_update_tunl_tpa(struct bnge_dev *bd, + struct hwrm_vnic_tpa_cfg_input *req) +{ + struct bnge_net *bn = netdev_priv(bd->netdev); + u32 tunl_tpa_bmap = BNGE_DFLT_TUNL_TPA_BMAP; + + if (!(bd->fw_cap & BNGE_FW_CAP_VNIC_TUNNEL_TPA)) + return; + + if (bn->vxlan_port) + tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN; + if (bn->vxlan_gpe_port) + tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN_GPE; + if (bn->nge_port) + tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GENEVE; + + req->enables |= cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_TNL_TPA_EN); + req->tnl_tpa_en_bitmap = cpu_to_le32(tunl_tpa_bmap); +} + +int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic, + u32 tpa_flags) +{ + struct bnge_net *bn = netdev_priv(bd->netdev); + struct hwrm_vnic_tpa_cfg_input *req; + int rc; + + if (vnic->fw_vnic_id == INVALID_HW_RING_ID) + return 0; + + rc = bnge_hwrm_req_init(bd, req, HWRM_VNIC_TPA_CFG); + if (rc) + return rc; + + if (tpa_flags) { + u32 flags; + + flags = VNIC_TPA_CFG_REQ_FLAGS_TPA | + VNIC_TPA_CFG_REQ_FLAGS_ENCAP_TPA | + VNIC_TPA_CFG_REQ_FLAGS_RSC_WND_UPDATE | + VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_ECN | + VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_SAME_GRE_SEQ; + if (tpa_flags & BNGE_NET_EN_GRO) + flags |= VNIC_TPA_CFG_REQ_FLAGS_GRO; + + req->flags = cpu_to_le32(flags); + req->enables = + cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_MAX_AGG_SEGS | + VNIC_TPA_CFG_REQ_ENABLES_MAX_AGGS | + VNIC_TPA_CFG_REQ_ENABLES_MIN_AGG_LEN); + req->max_agg_segs = cpu_to_le16(MAX_TPA_SEGS); + req->max_aggs = cpu_to_le16(bn->max_tpa); + req->min_agg_len = cpu_to_le32(512); + bnge_hwrm_vnic_update_tunl_tpa(bd, req); + } + req->vnic_id = cpu_to_le16(vnic->fw_vnic_id); + + return bnge_hwrm_req_send(bd, req); +} diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h index 042f28e84a05..38b046237feb 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h @@ -55,4 +55,6 @@ int hwrm_ring_alloc_send_msg(struct bnge_net *bn, struct bnge_ring_struct *ring, u32 ring_type, u32 map_index); int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx); +int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic, + u32 
tpa_flags); #endif /* _BNGE_HWRM_LIB_H_ */ diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.c index 16b062d7688a..2f8e98a0c2d4 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c @@ -2274,6 +2274,27 @@ static int bnge_request_irq(struct bnge_net *bn) return rc; } =20 +static int bnge_set_tpa(struct bnge_net *bn, bool set_tpa) +{ + u32 tpa_flags =3D 0; + int rc, i; + + if (set_tpa) + tpa_flags =3D bn->priv_flags & BNGE_NET_EN_TPA; + else if (BNGE_NO_FW_ACCESS(bn->bd)) + return 0; + for (i =3D 0; i < bn->nr_vnics; i++) { + rc =3D bnge_hwrm_vnic_set_tpa(bn->bd, &bn->vnic_info[i], + tpa_flags); + if (rc) { + netdev_err(bn->netdev, "hwrm vnic set tpa failure rc for vnic %d: %x\n", + i, rc); + return rc; + } + } + return 0; +} + static int bnge_init_chip(struct bnge_net *bn) { struct bnge_vnic_info *vnic =3D &bn->vnic_info[BNGE_VNIC_DEFAULT]; @@ -2308,6 +2329,12 @@ static int bnge_init_chip(struct bnge_net *bn) if (bd->rss_cap & BNGE_RSS_CAP_RSS_HASH_TYPE_DELTA) bnge_hwrm_update_rss_hash_cfg(bn); =20 + if (bn->priv_flags & BNGE_NET_EN_TPA) { + rc =3D bnge_set_tpa(bn, true); + if (rc) + goto err_out; + } + /* Filter for default vnic 0 */ rc =3D bnge_hwrm_set_vnic_filter(bn, 0, 0, bn->netdev->dev_addr); if (rc) { diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net= /ethernet/broadcom/bnge/bnge_netdev.h index 335785041369..6c206e6ff96c 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h @@ -159,10 +159,9 @@ enum { #define MAX_TPA_MASK (MAX_TPA - 1) #define MAX_TPA_SEGS 0x3f =20 -#define BNGE_AGG_IDX_BMAP_SIZE (MAX_TPA / BITS_PER_LONG) struct bnge_tpa_idx_map { u16 agg_id_tbl[1024]; - unsigned long agg_idx_bmap[BNGE_AGG_IDX_BMAP_SIZE]; + DECLARE_BITMAP(agg_idx_bmap, MAX_TPA); }; =20 struct bnge_tpa_info { diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/e= thernet/broadcom/bnge/bnge_txrx.c index fb54a9b14a8d..6586ba3d47d6 100644 --- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c +++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -44,6 +45,15 @@ irqreturn_t bnge_msix(int irq, void *dev_instance) return IRQ_HANDLED; } =20 +static struct rx_agg_cmp *bnge_get_tpa_agg(struct bnge_net *bn, + struct bnge_rx_ring_info *rxr, + u16 agg_id, u16 curr) +{ + struct bnge_tpa_info *tpa_info =3D &rxr->rx_tpa[agg_id]; + + return &tpa_info->agg_arr[curr]; +} + static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn, struct bnge_cp_ring_info *cpr, u16 cp_cons, u16 curr) @@ -57,7 +67,7 @@ static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *b= n, } =20 static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx, - u16 start, u32 agg_bufs) + u16 start, u32 agg_bufs, bool tpa) { struct bnge_napi *bnapi =3D cpr->bnapi; struct bnge_net *bn =3D bnapi->bn; @@ -76,7 +86,10 @@ static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_i= nfo *cpr, u16 idx, netmem_ref netmem; u16 cons; =20 - agg =3D bnge_get_agg(bn, cpr, idx, start + i); + if (tpa) + agg =3D bnge_get_tpa_agg(bn, rxr, idx, start + i); + else + agg =3D bnge_get_agg(bn, cpr, idx, start + i); cons =3D agg->rx_agg_cmp_opaque; __clear_bit(cons, rxr->rx_agg_bmap); =20 @@ -137,6 +150,8 @@ static int bnge_discard_rx(struct bnge_net *bn, struct = bnge_cp_ring_info *cpr, agg_bufs =3D (le32_to_cpu(rxcmp->rx_cmp_misc_v1) & 
RX_CMP_AGG_BUFS) >> RX_CMP_AGG_BUFS_SHIFT; + } else if (cmp_type =3D=3D CMP_TYPE_RX_L2_TPA_END_CMP) { + return 0; } =20 if (agg_bufs) { @@ -149,7 +164,7 @@ static int bnge_discard_rx(struct bnge_net *bn, struct = bnge_cp_ring_info *cpr, =20 static u32 __bnge_rx_agg_netmems(struct bnge_net *bn, struct bnge_cp_ring_info *cpr, - u16 idx, u32 agg_bufs, + u16 idx, u32 agg_bufs, bool tpa, struct sk_buff *skb) { struct bnge_napi *bnapi =3D cpr->bnapi; @@ -168,7 +183,10 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn, u16 cons, frag_len; netmem_ref netmem; =20 - agg =3D bnge_get_agg(bn, cpr, idx, i); + if (tpa) + agg =3D bnge_get_tpa_agg(bn, rxr, idx, i); + else + agg =3D bnge_get_agg(bn, cpr, idx, i); cons =3D agg->rx_agg_cmp_opaque; frag_len =3D (le32_to_cpu(agg->rx_agg_cmp_len_flags_type) & RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; @@ -198,7 +216,7 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn, * allocated already. */ rxr->rx_agg_prod =3D prod; - bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i); + bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i, tpa); return 0; } =20 @@ -215,11 +233,12 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn, static struct sk_buff *bnge_rx_agg_netmems_skb(struct bnge_net *bn, struct bnge_cp_ring_info *cpr, struct sk_buff *skb, u16 idx, - u32 agg_bufs) + u32 agg_bufs, bool tpa) { u32 total_frag_len; =20 - total_frag_len =3D __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs, skb); + total_frag_len =3D __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs, + tpa, skb); if (!total_frag_len) { skb_mark_for_recycle(skb); dev_kfree_skb(skb); @@ -253,6 +272,165 @@ static void bnge_sched_reset_txr(struct bnge_net *bn, /* TODO: Initiate reset task */ } =20 +static u16 bnge_tpa_alloc_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_i= d) +{ + struct bnge_tpa_idx_map *map =3D rxr->rx_tpa_idx_map; + u16 idx =3D agg_id & MAX_TPA_MASK; + + if (test_bit(idx, map->agg_idx_bmap)) { + idx =3D find_first_zero_bit(map->agg_idx_bmap, MAX_TPA); + if (idx >=3D MAX_TPA) + return INVALID_HW_RING_ID; + } + __set_bit(idx, map->agg_idx_bmap); + map->agg_id_tbl[agg_id] =3D idx; + return idx; +} + +static void bnge_free_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx) +{ + struct bnge_tpa_idx_map *map =3D rxr->rx_tpa_idx_map; + + __clear_bit(idx, map->agg_idx_bmap); +} + +static u16 bnge_lookup_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_id) +{ + struct bnge_tpa_idx_map *map =3D rxr->rx_tpa_idx_map; + + return map->agg_id_tbl[agg_id]; +} + +static void bnge_tpa_metadata(struct bnge_tpa_info *tpa_info, + struct rx_tpa_start_cmp *tpa_start, + struct rx_tpa_start_cmp_ext *tpa_start1) +{ + tpa_info->cfa_code_valid =3D 1; + tpa_info->cfa_code =3D TPA_START_CFA_CODE(tpa_start1); + tpa_info->vlan_valid =3D 0; + if (tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) { + tpa_info->vlan_valid =3D 1; + tpa_info->metadata =3D + le32_to_cpu(tpa_start1->rx_tpa_start_cmp_metadata); + } +} + +static void bnge_tpa_metadata_v2(struct bnge_tpa_info *tpa_info, + struct rx_tpa_start_cmp *tpa_start, + struct rx_tpa_start_cmp_ext *tpa_start1) +{ + tpa_info->vlan_valid =3D 0; + if (TPA_START_VLAN_VALID(tpa_start)) { + u32 tpid_sel =3D TPA_START_VLAN_TPID_SEL(tpa_start); + u32 vlan_proto =3D ETH_P_8021Q; + + tpa_info->vlan_valid =3D 1; + if (tpid_sel =3D=3D RX_TPA_START_METADATA1_TPID_8021AD) + vlan_proto =3D ETH_P_8021AD; + tpa_info->metadata =3D vlan_proto << 16 | + TPA_START_METADATA0_TCI(tpa_start1); + } +} + +static void bnge_tpa_start(struct bnge_net *bn, struct bnge_rx_ring_info *= rxr, + u8 cmp_type, 
struct rx_tpa_start_cmp *tpa_start, + struct rx_tpa_start_cmp_ext *tpa_start1) +{ + struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf; + struct bnge_tpa_info *tpa_info; + u16 cons, prod, agg_id; + struct rx_bd *prod_bd; + dma_addr_t mapping; + + agg_id =3D TPA_START_AGG_ID(tpa_start); + agg_id =3D bnge_tpa_alloc_agg_idx(rxr, agg_id); + if (unlikely(agg_id =3D=3D INVALID_HW_RING_ID)) { + netdev_warn(bn->netdev, "Unable to allocate agg ID for ring %d, agg 0x%x= \n", + rxr->bnapi->index, TPA_START_AGG_ID(tpa_start)); + bnge_sched_reset_rxr(bn, rxr); + return; + } + cons =3D tpa_start->rx_tpa_start_cmp_opaque; + prod =3D rxr->rx_prod; + cons_rx_buf =3D &rxr->rx_buf_ring[cons]; + prod_rx_buf =3D &rxr->rx_buf_ring[RING_RX(bn, prod)]; + tpa_info =3D &rxr->rx_tpa[agg_id]; + + if (unlikely(cons !=3D rxr->rx_next_cons || + TPA_START_ERROR(tpa_start))) { + netdev_warn(bn->netdev, "TPA cons %x, expected cons %x, error code %x\n", + cons, rxr->rx_next_cons, + TPA_START_ERROR_CODE(tpa_start1)); + bnge_sched_reset_rxr(bn, rxr); + return; + } + prod_rx_buf->data =3D tpa_info->data; + prod_rx_buf->data_ptr =3D tpa_info->data_ptr; + + mapping =3D tpa_info->mapping; + prod_rx_buf->mapping =3D mapping; + + prod_bd =3D &rxr->rx_desc_ring[RX_RING(bn, prod)][RX_IDX(prod)]; + + prod_bd->rx_bd_haddr =3D cpu_to_le64(mapping); + + tpa_info->data =3D cons_rx_buf->data; + tpa_info->data_ptr =3D cons_rx_buf->data_ptr; + cons_rx_buf->data =3D NULL; + tpa_info->mapping =3D cons_rx_buf->mapping; + + tpa_info->len =3D + le32_to_cpu(tpa_start->rx_tpa_start_cmp_len_flags_type) >> + RX_TPA_START_CMP_LEN_SHIFT; + if (likely(TPA_START_HASH_VALID(tpa_start))) { + tpa_info->hash_type =3D PKT_HASH_TYPE_L4; + if (TPA_START_IS_IPV6(tpa_start1)) + tpa_info->gso_type =3D SKB_GSO_TCPV6; + else + tpa_info->gso_type =3D SKB_GSO_TCPV4; + tpa_info->rss_hash =3D + le32_to_cpu(tpa_start->rx_tpa_start_cmp_rss_hash); + } else { + tpa_info->hash_type =3D PKT_HASH_TYPE_NONE; + tpa_info->gso_type =3D 0; + netif_warn(bn, rx_err, bn->netdev, "TPA packet without valid hash\n"); + } + tpa_info->flags2 =3D le32_to_cpu(tpa_start1->rx_tpa_start_cmp_flags2); + tpa_info->hdr_info =3D le32_to_cpu(tpa_start1->rx_tpa_start_cmp_hdr_info); + if (cmp_type =3D=3D CMP_TYPE_RX_L2_TPA_START_CMP) + bnge_tpa_metadata(tpa_info, tpa_start, tpa_start1); + else + bnge_tpa_metadata_v2(tpa_info, tpa_start, tpa_start1); + tpa_info->agg_count =3D 0; + + rxr->rx_prod =3D NEXT_RX(prod); + cons =3D RING_RX(bn, NEXT_RX(cons)); + rxr->rx_next_cons =3D RING_RX(bn, NEXT_RX(cons)); + cons_rx_buf =3D &rxr->rx_buf_ring[cons]; + + bnge_reuse_rx_data(rxr, cons, cons_rx_buf->data); + rxr->rx_prod =3D NEXT_RX(rxr->rx_prod); + cons_rx_buf->data =3D NULL; +} + +static void bnge_abort_tpa(struct bnge_cp_ring_info *cpr, u16 idx, u32 agg= _bufs) +{ + if (agg_bufs) + bnge_reuse_rx_agg_bufs(cpr, idx, 0, agg_bufs, true); +} + +static void bnge_tpa_agg(struct bnge_net *bn, struct bnge_rx_ring_info *rx= r, + struct rx_agg_cmp *rx_agg) +{ + u16 agg_id =3D TPA_AGG_AGG_ID(rx_agg); + struct bnge_tpa_info *tpa_info; + + agg_id =3D bnge_lookup_agg_idx(rxr, agg_id); + tpa_info =3D &rxr->rx_tpa[agg_id]; + + tpa_info->agg_arr[tpa_info->agg_count++] =3D *rx_agg; +} + void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *dat= a) { struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf; @@ -307,6 +485,208 @@ static struct sk_buff *bnge_copy_skb(struct bnge_napi= *bnapi, u8 *data, return skb; } =20 +#ifdef CONFIG_INET +static void bnge_gro_tunnel(struct sk_buff *skb, __be16 ip_proto) +{ + struct udphdr 
*uh =3D NULL; + + if (ip_proto =3D=3D htons(ETH_P_IP)) { + struct iphdr *iph =3D (struct iphdr *)skb->data; + + if (iph->protocol =3D=3D IPPROTO_UDP) + uh =3D (struct udphdr *)(iph + 1); + } else { + struct ipv6hdr *iph =3D (struct ipv6hdr *)skb->data; + + if (iph->nexthdr =3D=3D IPPROTO_UDP) + uh =3D (struct udphdr *)(iph + 1); + } + if (uh) { + if (uh->check) + skb_shinfo(skb)->gso_type |=3D SKB_GSO_UDP_TUNNEL_CSUM; + else + skb_shinfo(skb)->gso_type |=3D SKB_GSO_UDP_TUNNEL; + } +} + +static struct sk_buff *bnge_gro_func(struct bnge_tpa_info *tpa_info, + int payload_off, int tcp_ts, + struct sk_buff *skb) +{ + u16 outer_ip_off, inner_ip_off, inner_mac_off; + u32 hdr_info =3D tpa_info->hdr_info; + int iphdr_len, nw_off; + + inner_ip_off =3D BNGE_TPA_INNER_L3_OFF(hdr_info); + inner_mac_off =3D BNGE_TPA_INNER_L2_OFF(hdr_info); + outer_ip_off =3D BNGE_TPA_OUTER_L3_OFF(hdr_info); + + nw_off =3D inner_ip_off - ETH_HLEN; + skb_set_network_header(skb, nw_off); + iphdr_len =3D (tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_IP_TYPE) ? + sizeof(struct ipv6hdr) : sizeof(struct iphdr); + skb_set_transport_header(skb, nw_off + iphdr_len); + + if (inner_mac_off) { /* tunnel */ + __be16 proto =3D *((__be16 *)(skb->data + outer_ip_off - + ETH_HLEN - 2)); + + bnge_gro_tunnel(skb, proto); + } + + return skb; +} + +static struct sk_buff *bnge_gro_skb(struct bnge_net *bn, + struct bnge_tpa_info *tpa_info, + struct rx_tpa_end_cmp *tpa_end, + struct rx_tpa_end_cmp_ext *tpa_end1, + struct sk_buff *skb) +{ + int payload_off; + u16 segs; + + segs =3D TPA_END_TPA_SEGS(tpa_end); + if (segs =3D=3D 1) + return skb; + + NAPI_GRO_CB(skb)->count =3D segs; + skb_shinfo(skb)->gso_size =3D + le32_to_cpu(tpa_end1->rx_tpa_end_cmp_seg_len); + skb_shinfo(skb)->gso_type =3D tpa_info->gso_type; + payload_off =3D TPA_END_PAYLOAD_OFF(tpa_end1); + skb =3D bnge_gro_func(tpa_info, payload_off, + TPA_END_GRO_TS(tpa_end), skb); + if (likely(skb)) + tcp_gro_complete(skb); + + return skb; +} +#endif + +static struct sk_buff *bnge_tpa_end(struct bnge_net *bn, + struct bnge_cp_ring_info *cpr, + u32 *raw_cons, + struct rx_tpa_end_cmp *tpa_end, + struct rx_tpa_end_cmp_ext *tpa_end1, + u8 *event) +{ + struct bnge_napi *bnapi =3D cpr->bnapi; + struct net_device *dev =3D bn->netdev; + struct bnge_tpa_info *tpa_info; + struct bnge_rx_ring_info *rxr; + u8 *data_ptr, agg_bufs; + struct sk_buff *skb; + u16 idx =3D 0, agg_id; + dma_addr_t mapping; + unsigned int len; + void *data; + + rxr =3D bnapi->rx_ring; + agg_id =3D TPA_END_AGG_ID(tpa_end); + agg_id =3D bnge_lookup_agg_idx(rxr, agg_id); + agg_bufs =3D TPA_END_AGG_BUFS(tpa_end1); + tpa_info =3D &rxr->rx_tpa[agg_id]; + if (unlikely(agg_bufs !=3D tpa_info->agg_count)) { + netdev_warn(bn->netdev, "TPA end agg_buf %d !=3D expected agg_bufs %d\n", + agg_bufs, tpa_info->agg_count); + agg_bufs =3D tpa_info->agg_count; + } + tpa_info->agg_count =3D 0; + *event |=3D BNGE_AGG_EVENT; + bnge_free_agg_idx(rxr, agg_id); + idx =3D agg_id; + data =3D tpa_info->data; + data_ptr =3D tpa_info->data_ptr; + prefetch(data_ptr); + len =3D tpa_info->len; + mapping =3D tpa_info->mapping; + + if (unlikely(agg_bufs > MAX_SKB_FRAGS || TPA_END_ERRORS(tpa_end1))) { + bnge_abort_tpa(cpr, idx, agg_bufs); + if (agg_bufs > MAX_SKB_FRAGS) + netdev_warn(bn->netdev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n", + agg_bufs, (int)MAX_SKB_FRAGS); + return NULL; + } + + if (len <=3D bn->rx_copybreak) { + skb =3D bnge_copy_skb(bnapi, data_ptr, len, mapping); + if (!skb) { + bnge_abort_tpa(cpr, idx, agg_bufs); + return NULL; + } + } else { + 
dma_addr_t new_mapping; + u8 *new_data; + + new_data =3D __bnge_alloc_rx_frag(bn, &new_mapping, rxr, + GFP_ATOMIC); + if (!new_data) { + bnge_abort_tpa(cpr, idx, agg_bufs); + return NULL; + } + + tpa_info->data =3D new_data; + tpa_info->data_ptr =3D new_data + bn->rx_offset; + tpa_info->mapping =3D new_mapping; + + skb =3D napi_build_skb(data, bn->rx_buf_size); + dma_sync_single_for_cpu(bn->bd->dev, mapping, + bn->rx_buf_use_size, bn->rx_dir); + + if (!skb) { + page_pool_free_va(rxr->head_pool, data, true); + bnge_abort_tpa(cpr, idx, agg_bufs); + return NULL; + } + skb_mark_for_recycle(skb); + skb_reserve(skb, bn->rx_offset); + skb_put(skb, len); + } + + if (agg_bufs) { + skb =3D bnge_rx_agg_netmems_skb(bn, cpr, skb, idx, agg_bufs, + true); + /* Page reuse already handled by bnge_rx_agg_netmems_skb(). */ + if (!skb) + return NULL; + } + + skb->protocol =3D eth_type_trans(skb, dev); + + if (tpa_info->hash_type !=3D PKT_HASH_TYPE_NONE) + skb_set_hash(skb, tpa_info->rss_hash, tpa_info->hash_type); + + if (tpa_info->vlan_valid && + (dev->features & BNGE_HW_FEATURE_VLAN_ALL_RX)) { + __be16 vlan_proto =3D htons(tpa_info->metadata >> + RX_CMP_FLAGS2_METADATA_TPID_SFT); + u16 vtag =3D tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK; + + if (eth_type_vlan(vlan_proto)) { + __vlan_hwaccel_put_tag(skb, vlan_proto, vtag); + } else { + dev_kfree_skb(skb); + return NULL; + } + } + + skb_checksum_none_assert(skb); + if (likely(tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_L4_CS_CALC)) { + skb->ip_summed =3D CHECKSUM_UNNECESSARY; + skb->csum_level =3D + (tpa_info->flags2 & RX_CMP_FLAGS2_T_L4_CS_CALC) >> 3; + } + +#ifdef CONFIG_INET + if (bn->priv_flags & BNGE_NET_EN_GRO) + skb =3D bnge_gro_skb(bn, tpa_info, tpa_end, tpa_end1, skb); +#endif + + return skb; +} + static enum pkt_hash_types bnge_rss_ext_op(struct bnge_net *bn, struct rx_cmp *rxcmp) { @@ -400,6 +780,7 @@ static struct sk_buff *bnge_rx_skb(struct bnge_net *bn, =20 /* returns the following: * 1 - 1 packet successfully received + * 0 - successful TPA_START, packet not completed yet * -EBUSY - completion ring does not have all the agg buffers yet * -ENOMEM - packet aborted due to out of memory * -EIO - packet aborted due to hw error indicated in BD @@ -432,6 +813,11 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bng= e_cp_ring_info *cpr, =20 cmp_type =3D RX_CMP_TYPE(rxcmp); =20 + if (cmp_type =3D=3D CMP_TYPE_RX_TPA_AGG_CMP) { + bnge_tpa_agg(bn, rxr, (struct rx_agg_cmp *)rxcmp); + goto next_rx_no_prod_no_len; + } + tmp_raw_cons =3D NEXT_RAW_CMP(tmp_raw_cons); cp_cons =3D RING_CMP(bn, tmp_raw_cons); rxcmp1 =3D (struct rx_cmp_ext *) @@ -446,6 +832,28 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bng= e_cp_ring_info *cpr, dma_rmb(); prod =3D rxr->rx_prod; =20 + if (cmp_type =3D=3D CMP_TYPE_RX_L2_TPA_START_CMP || + cmp_type =3D=3D CMP_TYPE_RX_L2_TPA_START_V3_CMP) { + bnge_tpa_start(bn, rxr, cmp_type, + (struct rx_tpa_start_cmp *)rxcmp, + (struct rx_tpa_start_cmp_ext *)rxcmp1); + + *event |=3D BNGE_RX_EVENT; + goto next_rx_no_prod_no_len; + + } else if (cmp_type =3D=3D CMP_TYPE_RX_L2_TPA_END_CMP) { + skb =3D bnge_tpa_end(bn, cpr, &tmp_raw_cons, + (struct rx_tpa_end_cmp *)rxcmp, + (struct rx_tpa_end_cmp_ext *)rxcmp1, event); + rc =3D -ENOMEM; + if (likely(skb)) { + bnge_deliver_skb(bn, bnapi, skb); + rc =3D 1; + } + *event |=3D BNGE_RX_EVENT; + goto next_rx_no_prod_no_len; + } + cons =3D rxcmp->rx_cmp_opaque; if (unlikely(cons !=3D rxr->rx_next_cons)) { int rc1 =3D bnge_discard_rx(bn, cpr, &tmp_raw_cons, rxcmp); @@ -480,7 +888,8 @@ static 
int bnge_rx_pkt(struct bnge_cp_ring_info *cpr, if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) { bnge_reuse_rx_data(rxr, cons, data); if (agg_bufs) - bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs); + bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs, + false); rc = -EIO; goto next_rx_no_len; } @@ -495,7 +904,7 @@ static int bnge_rx_pkt(struct bnge_cp_ring_info *cpr, if (!skb) { if (agg_bufs) bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, - agg_bufs); + agg_bufs, false); goto oom_next_rx; } } else { @@ -513,7 +922,7 @@ static int bnge_rx_pkt(struct bnge_cp_ring_info *cpr, if (agg_bufs) { skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, cp_cons, - agg_bufs); + agg_bufs, false); if (!skb) goto oom_next_rx; } @@ -604,6 +1013,12 @@ static int bnge_force_rx_discard(struct bnge_net *bn, cmp_type == CMP_TYPE_RX_L2_V3_CMP) { rxcmp1->rx_cmp_cfa_code_errors_v2 |= cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR); + } else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) { + struct rx_tpa_end_cmp_ext *tpa_end1; + + tpa_end1 = (struct rx_tpa_end_cmp_ext *)rxcmp1; + tpa_end1->rx_tpa_end_cmp_errors_v2 |= + cpu_to_le32(RX_TPA_END_CMP_ERRORS); } rc = bnge_rx_pkt(bn, cpr, raw_cons, event); return rc; -- 2.47.3
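
With this series applied, the RX completion ring carries four kinds of entries: ordinary L2 completions, CMP_TYPE_RX_L2_TPA_START_CMP (an aggregation opens and the driver stashes hash, VLAN and length state, then swaps the ring buffer), CMP_TYPE_RX_TPA_AGG_CMP (one more buffer joins an open aggregation) and CMP_TYPE_RX_L2_TPA_END_CMP (the aggregation closes and one large skb goes up the stack). The stand-alone C program below only sketches that dispatch order; the sketch_ identifiers, enum values and printed actions are invented for illustration and are not driver code.

#include <stdio.h>

/* Simplified stand-ins for the completion types handled above; the real
 * values come from the hardware interface headers, not from this sketch. */
enum sketch_cmp_type {
	SKETCH_CMP_RX_L2,	/* ordinary single-buffer RX completion      */
	SKETCH_CMP_TPA_START,	/* aggregation opened: stash per-agg context */
	SKETCH_CMP_TPA_AGG,	/* one more buffer joins an open aggregation */
	SKETCH_CMP_TPA_END,	/* aggregation closed: build one large skb   */
};

static void sketch_handle(enum sketch_cmp_type type)
{
	switch (type) {
	case SKETCH_CMP_TPA_START:
		printf("TPA start: remember hash/VLAN/len, swap the ring buffer\n");
		break;
	case SKETCH_CMP_TPA_AGG:
		printf("TPA agg: queue this buffer under its aggregation ID\n");
		break;
	case SKETCH_CMP_TPA_END:
		printf("TPA end: attach the queued buffers, deliver one skb\n");
		break;
	default:
		printf("plain RX: one completion, one packet\n");
		break;
	}
}

int main(void)
{
	/* One aggregation of three buffers followed by a normal packet. */
	enum sketch_cmp_type ring[] = {
		SKETCH_CMP_TPA_START, SKETCH_CMP_TPA_AGG, SKETCH_CMP_TPA_AGG,
		SKETCH_CMP_TPA_END, SKETCH_CMP_RX_L2,
	};
	unsigned int i;

	for (i = 0; i < sizeof(ring) / sizeof(ring[0]); i++)
		sketch_handle(ring[i]);
	return 0;
}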
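
Hardware tags every TPA completion with an aggregation ID, and bnge_tpa_alloc_agg_idx(), bnge_lookup_agg_idx() and bnge_free_agg_idx() above map those IDs onto the driver's software TPA slots: a per-slot in-use bitmap plus a table remembering which slot a given hardware ID landed in. The self-contained user-space sketch below mirrors that scheme, assuming 256 slots and a 1024-entry lookup table as in struct bnge_tpa_idx_map; it uses a plain byte array where the driver packs the flags into a bitmap, and every sketch_/SKETCH_ name is invented here.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SKETCH_MAX_TPA		256	/* number of software TPA slots   */
#define SKETCH_AGG_ID_SPACE	1024	/* size of the hw-ID lookup table */
#define SKETCH_INVALID_IDX	0xffff

struct sketch_tpa_idx_map {
	uint16_t agg_id_tbl[SKETCH_AGG_ID_SPACE];	/* hw agg ID -> sw slot */
	unsigned char in_use[SKETCH_MAX_TPA];		/* a bitmap in the driver */
};

/* Prefer the slot given by the low bits of the hardware ID; on a
 * collision fall back to the first free slot. */
static uint16_t sketch_alloc_agg_idx(struct sketch_tpa_idx_map *map,
				     uint16_t agg_id)
{
	uint16_t idx = agg_id & (SKETCH_MAX_TPA - 1);

	if (map->in_use[idx]) {
		for (idx = 0; idx < SKETCH_MAX_TPA; idx++)
			if (!map->in_use[idx])
				break;
		if (idx >= SKETCH_MAX_TPA)
			return SKETCH_INVALID_IDX;	/* all slots busy */
	}
	map->in_use[idx] = 1;
	map->agg_id_tbl[agg_id] = idx;
	return idx;
}

static uint16_t sketch_lookup_agg_idx(struct sketch_tpa_idx_map *map,
				      uint16_t agg_id)
{
	return map->agg_id_tbl[agg_id];
}

static void sketch_free_agg_idx(struct sketch_tpa_idx_map *map, uint16_t idx)
{
	map->in_use[idx] = 0;
}

int main(void)
{
	struct sketch_tpa_idx_map map;

	memset(&map, 0, sizeof(map));
	printf("agg 0x300 -> slot %u\n", sketch_alloc_agg_idx(&map, 0x300));
	printf("agg 0x100 -> slot %u\n", sketch_alloc_agg_idx(&map, 0x100));
	printf("lookup 0x100 -> slot %u\n", sketch_lookup_agg_idx(&map, 0x100));
	sketch_free_agg_idx(&map, sketch_lookup_agg_idx(&map, 0x300));
	sketch_free_agg_idx(&map, sketch_lookup_agg_idx(&map, 0x100));
	return 0;
}

The preferred slot is simply the low bits of the hardware ID, so the linear scan only runs when two live aggregations collide on those bits.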
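
bnge_tpa_end() above also applies the rx_copybreak threshold: a payload no longer than the threshold is copied into a small new buffer via bnge_copy_skb() so the TPA buffer stays in place, while a larger payload is handed to the stack as-is and the TPA slot is refilled with a freshly allocated replacement. The user-space sketch below shows only that copy-versus-consume decision; the sketch_ names and the 256-byte threshold are made up for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_COPYBREAK 256	/* made-up threshold; the real one is per-device */

/* Return the buffer that goes up the "stack".  When the payload is small
 * it is copied, so the ring buffer is not consumed and can be reused. */
static void *sketch_receive(unsigned char *ring_buf, size_t len,
			    int *ring_buf_consumed)
{
	void *pkt;

	if (len <= SKETCH_COPYBREAK) {
		pkt = malloc(len);		/* small: copy out, keep ring buffer */
		if (pkt)
			memcpy(pkt, ring_buf, len);
		*ring_buf_consumed = 0;
	} else {
		pkt = ring_buf;			/* large: pass the buffer itself up  */
		*ring_buf_consumed = 1;		/* ring slot must be refilled        */
	}
	return pkt;
}

int main(void)
{
	unsigned char buf[2048] = { 0 };
	int consumed;
	void *pkt;

	pkt = sketch_receive(buf, 128, &consumed);
	printf("128 bytes: ring buffer consumed? %s\n", consumed ? "yes" : "no");
	if (!consumed)
		free(pkt);

	pkt = sketch_receive(buf, 1500, &consumed);
	printf("1500 bytes: ring buffer consumed? %s\n", consumed ? "yes" : "no");
	return 0;
}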