From: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	ajit.khaparde@broadcom.com,
	Bhargava Marreddy <bhargava.marreddy@broadcom.com>,
	Rajashekar Hudumula
Subject: [v5, net-next 8/8] bng_en: Add support for TPA events
Date: Sat, 17 Jan 2026 01:07:32 +0530
Message-ID: <20260116193732.157898-9-bhargava.marreddy@broadcom.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260116193732.157898-1-bhargava.marreddy@broadcom.com>
References: <20260116193732.157898-1-bhargava.marreddy@broadcom.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Enable TPA (Transparent Packet Aggregation) in the VNIC and add the
functions that handle TPA completion events, which are used for
LRO/GRO processing.
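Background for reviewers new to this scheme: the completion path below
keys TPA_AGG and TPA_END completions to a hardware aggregation ID, and
bnge_tpa_alloc_agg_idx() remaps that ID onto a small per-ring index
pool (agg_id_tbl plus the agg_idx_bmap occupancy bitmap). The
standalone userspace sketch below models only that indirection; the
MAX_TPA value of 64, the bool bitmap, and main() are illustrative
assumptions for this sketch, not driver code.

/*
 * Illustrative model of the agg-ID indirection: hardware agg IDs are
 * hashed into a small, ring-local index pool so at most MAX_TPA
 * aggregations are tracked at once. Sketch only; values are assumed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_TPA		64		/* assumed pool size */
#define MAX_TPA_MASK	(MAX_TPA - 1)
#define INVALID_IDX	0xffff

struct tpa_idx_map {
	uint16_t agg_id_tbl[1024];	/* hw agg_id -> pool idx */
	bool	 agg_idx_bmap[MAX_TPA];	/* pool occupancy */
};

/* Prefer the low bits of the hw ID; fall back to a linear scan. */
static uint16_t tpa_alloc_agg_idx(struct tpa_idx_map *map, uint16_t agg_id)
{
	uint16_t idx = agg_id & MAX_TPA_MASK;

	if (map->agg_idx_bmap[idx]) {
		for (idx = 0; idx < MAX_TPA; idx++)
			if (!map->agg_idx_bmap[idx])
				break;
		if (idx >= MAX_TPA)
			return INVALID_IDX;	/* pool exhausted */
	}
	map->agg_idx_bmap[idx] = true;
	map->agg_id_tbl[agg_id] = idx;	/* later AGG/END events look this up */
	return idx;
}

static uint16_t tpa_lookup_agg_idx(struct tpa_idx_map *map, uint16_t agg_id)
{
	return map->agg_id_tbl[agg_id];
}

static void tpa_free_agg_idx(struct tpa_idx_map *map, uint16_t idx)
{
	map->agg_idx_bmap[idx] = false;
}

int main(void)
{
	struct tpa_idx_map map = { { 0 } };
	uint16_t a = tpa_alloc_agg_idx(&map, 0x205); /* low bits -> 5 */
	uint16_t b = tpa_alloc_agg_idx(&map, 0x305); /* collides, rehomed */

	printf("agg 0x205 -> %u, agg 0x305 -> %u\n",
	       a, tpa_lookup_agg_idx(&map, 0x305));
	tpa_free_agg_idx(&map, tpa_lookup_agg_idx(&map, 0x205));
	tpa_free_agg_idx(&map, b);
	return 0;
}

Running this prints "agg 0x205 -> 5, agg 0x305 -> 0": the second ID
collides on the low bits and is rehomed to the first free slot, the
same fallback the driver gets from find_first_zero_bit().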
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rajashekar Hudumula
---
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.c    |  65 +++
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.h    |   2 +
 .../net/ethernet/broadcom/bnge/bnge_netdev.c  |  27 ++
 .../net/ethernet/broadcom/bnge/bnge_netdev.h  |   3 +-
 .../net/ethernet/broadcom/bnge/bnge_txrx.c    | 435 +++++++++++++++++-
 5 files changed, 520 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index 198f49b40dbf..d4b1c0d2c44c 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -1183,3 +1183,68 @@ int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx)
 	req->async_event_cr = cpu_to_le16(idx);
 	return bnge_hwrm_req_send(bd, req);
 }
+
+#define BNGE_DFLT_TUNL_TPA_BMAP				\
+	(VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GRE |	\
+	 VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV4 |	\
+	 VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV6)
+
+static void bnge_hwrm_vnic_update_tunl_tpa(struct bnge_dev *bd,
+					   struct hwrm_vnic_tpa_cfg_input *req)
+{
+	struct bnge_net *bn = netdev_priv(bd->netdev);
+	u32 tunl_tpa_bmap = BNGE_DFLT_TUNL_TPA_BMAP;
+
+	if (!(bd->fw_cap & BNGE_FW_CAP_VNIC_TUNNEL_TPA))
+		return;
+
+	if (bn->vxlan_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN;
+	if (bn->vxlan_gpe_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN_GPE;
+	if (bn->nge_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GENEVE;
+
+	req->enables |= cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_TNL_TPA_EN);
+	req->tnl_tpa_en_bitmap = cpu_to_le32(tunl_tpa_bmap);
+}
+
+int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
+			   u32 tpa_flags)
+{
+	struct bnge_net *bn = netdev_priv(bd->netdev);
+	struct hwrm_vnic_tpa_cfg_input *req;
+	int rc;
+
+	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
+		return 0;
+
+	rc = bnge_hwrm_req_init(bd, req, HWRM_VNIC_TPA_CFG);
+	if (rc)
+		return rc;
+
+	if (tpa_flags) {
+		u32 flags;
+
+		flags = VNIC_TPA_CFG_REQ_FLAGS_TPA |
+			VNIC_TPA_CFG_REQ_FLAGS_ENCAP_TPA |
+			VNIC_TPA_CFG_REQ_FLAGS_RSC_WND_UPDATE |
+			VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_ECN |
+			VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_SAME_GRE_SEQ;
+		if (tpa_flags & BNGE_NET_EN_GRO)
+			flags |= VNIC_TPA_CFG_REQ_FLAGS_GRO;
+
+		req->flags = cpu_to_le32(flags);
+		req->enables =
+			cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_MAX_AGG_SEGS |
+				    VNIC_TPA_CFG_REQ_ENABLES_MAX_AGGS |
+				    VNIC_TPA_CFG_REQ_ENABLES_MIN_AGG_LEN);
+		req->max_agg_segs = cpu_to_le16(MAX_TPA_SEGS);
+		req->max_aggs = cpu_to_le16(bn->max_tpa);
+		req->min_agg_len = cpu_to_le32(512);
+		bnge_hwrm_vnic_update_tunl_tpa(bd, req);
+	}
+	req->vnic_id = cpu_to_le16(vnic->fw_vnic_id);
+
+	return bnge_hwrm_req_send(bd, req);
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
index 042f28e84a05..38b046237feb 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
@@ -55,4 +55,6 @@ int hwrm_ring_alloc_send_msg(struct bnge_net *bn,
 			     struct bnge_ring_struct *ring,
 			     u32 ring_type, u32 map_index);
 int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx);
+int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
+			   u32 tpa_flags);
 #endif /* _BNGE_HWRM_LIB_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 7800c870d6d3..0f28aae29bad 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -2278,6 +2278,27 @@ static int bnge_request_irq(struct bnge_net *bn)
 	return rc;
 }
 
+static int bnge_set_tpa(struct bnge_net *bn, bool set_tpa)
+{
+	u32 tpa_flags = 0;
+	int rc, i;
+
+	if (set_tpa)
+		tpa_flags = bn->priv_flags & BNGE_NET_EN_TPA;
+	else if (BNGE_NO_FW_ACCESS(bn->bd))
+		return 0;
+	for (i = 0; i < bn->nr_vnics; i++) {
+		rc = bnge_hwrm_vnic_set_tpa(bn->bd, &bn->vnic_info[i],
+					    tpa_flags);
+		if (rc) {
+			netdev_err(bn->netdev, "hwrm vnic set tpa failure rc for vnic %d: %x\n",
+				   i, rc);
+			return rc;
+		}
+	}
+	return 0;
+}
+
 static int bnge_init_chip(struct bnge_net *bn)
 {
 	struct bnge_vnic_info *vnic = &bn->vnic_info[BNGE_VNIC_DEFAULT];
@@ -2312,6 +2333,12 @@ static int bnge_init_chip(struct bnge_net *bn)
 	if (bd->rss_cap & BNGE_RSS_CAP_RSS_HASH_TYPE_DELTA)
 		bnge_hwrm_update_rss_hash_cfg(bn);
 
+	if (bn->priv_flags & BNGE_NET_EN_TPA) {
+		rc = bnge_set_tpa(bn, true);
+		if (rc)
+			goto err_out;
+	}
+
 	/* Filter for default vnic 0 */
 	rc = bnge_hwrm_set_vnic_filter(bn, 0, 0, bn->netdev->dev_addr);
 	if (rc) {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 1d700e74d106..658620727957 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -159,10 +159,9 @@ enum {
 #define MAX_TPA_MASK	(MAX_TPA - 1)
 #define MAX_TPA_SEGS	0x3f
 
-#define BNGE_AGG_IDX_BMAP_SIZE	(MAX_TPA / BITS_PER_LONG)
 struct bnge_tpa_idx_map {
 	u16 agg_id_tbl[1024];
-	unsigned long agg_idx_bmap[BNGE_AGG_IDX_BMAP_SIZE];
+	DECLARE_BITMAP(agg_idx_bmap, MAX_TPA);
 };
 
 struct bnge_tpa_info {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
index d6c8557fcb19..bc5a387aae48 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -44,6 +45,15 @@ irqreturn_t bnge_msix(int irq, void *dev_instance)
 	return IRQ_HANDLED;
 }
 
+static struct rx_agg_cmp *bnge_get_tpa_agg(struct bnge_net *bn,
+					   struct bnge_rx_ring_info *rxr,
+					   u16 agg_id, u16 curr)
+{
+	struct bnge_tpa_info *tpa_info = &rxr->rx_tpa[agg_id];
+
+	return &tpa_info->agg_arr[curr];
+}
+
 static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn,
 				       struct bnge_cp_ring_info *cpr,
 				       u16 cp_cons, u16 curr)
@@ -57,7 +67,7 @@ static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn,
 }
 
 static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx,
-				   u16 start, u32 agg_bufs)
+				   u16 start, u32 agg_bufs, bool tpa)
 {
 	struct bnge_napi *bnapi = cpr->bnapi;
 	struct bnge_net *bn = bnapi->bn;
@@ -76,7 +86,10 @@ static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx,
 		netmem_ref netmem;
 		u16 cons;
 
-		agg = bnge_get_agg(bn, cpr, idx, start + i);
+		if (tpa)
+			agg = bnge_get_tpa_agg(bn, rxr, idx, start + i);
+		else
+			agg = bnge_get_agg(bn, cpr, idx, start + i);
 		cons = agg->rx_agg_cmp_opaque;
 		__clear_bit(cons, rxr->rx_agg_bmap);
 
@@ -137,6 +150,8 @@ static int bnge_discard_rx(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 		agg_bufs = (le32_to_cpu(rxcmp->rx_cmp_misc_v1) &
 			    RX_CMP_AGG_BUFS) >>
 			   RX_CMP_AGG_BUFS_SHIFT;
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		return 0;
 	}
 
 	if (agg_bufs) {
@@ -149,7 +164,7 @@ static int bnge_discard_rx(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 
 static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 				 struct bnge_cp_ring_info *cpr,
-				 u16 idx, u32 agg_bufs,
+				 u16 idx, u32 agg_bufs, bool tpa,
 				 struct sk_buff *skb)
 {
 	struct bnge_napi *bnapi = cpr->bnapi;
@@ -168,7 +183,10 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 		u16 cons, frag_len;
 		netmem_ref netmem;
 
-		agg = bnge_get_agg(bn, cpr, idx, i);
+		if (tpa)
+			agg = bnge_get_tpa_agg(bn, rxr, idx, i);
+		else
+			agg = bnge_get_agg(bn, cpr, idx, i);
 		cons = agg->rx_agg_cmp_opaque;
 		frag_len = (le32_to_cpu(agg->rx_agg_cmp_len_flags_type) &
 			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
@@ -198,7 +216,7 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 		 * allocated already.
 		 */
 		rxr->rx_agg_prod = prod;
-		bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i);
+		bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i, tpa);
 		return 0;
 	}
 
@@ -215,11 +233,12 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 static struct sk_buff *bnge_rx_agg_netmems_skb(struct bnge_net *bn,
 					       struct bnge_cp_ring_info *cpr,
 					       struct sk_buff *skb, u16 idx,
-					       u32 agg_bufs)
+					       u32 agg_bufs, bool tpa)
 {
 	u32 total_frag_len;
 
-	total_frag_len = __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs, skb);
+	total_frag_len = __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs,
+					       tpa, skb);
 	if (!total_frag_len) {
 		skb_mark_for_recycle(skb);
 		dev_kfree_skb(skb);
@@ -253,6 +272,165 @@ static void bnge_sched_reset_txr(struct bnge_net *bn,
 	/* TODO: Initiate reset task */
 }
 
+static u16 bnge_tpa_alloc_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_id)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+	u16 idx = agg_id & MAX_TPA_MASK;
+
+	if (test_bit(idx, map->agg_idx_bmap)) {
+		idx = find_first_zero_bit(map->agg_idx_bmap, MAX_TPA);
+		if (idx >= MAX_TPA)
+			return INVALID_HW_RING_ID;
+	}
+	__set_bit(idx, map->agg_idx_bmap);
+	map->agg_id_tbl[agg_id] = idx;
+	return idx;
+}
+
+static void bnge_free_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+
+	__clear_bit(idx, map->agg_idx_bmap);
+}
+
+static u16 bnge_lookup_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_id)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+
+	return map->agg_id_tbl[agg_id];
+}
+
+static void bnge_tpa_metadata(struct bnge_tpa_info *tpa_info,
+			      struct rx_tpa_start_cmp *tpa_start,
+			      struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	tpa_info->cfa_code_valid = 1;
+	tpa_info->cfa_code = TPA_START_CFA_CODE(tpa_start1);
+	tpa_info->vlan_valid = 0;
+	if (tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) {
+		tpa_info->vlan_valid = 1;
+		tpa_info->metadata =
+			le32_to_cpu(tpa_start1->rx_tpa_start_cmp_metadata);
+	}
+}
+
+static void bnge_tpa_metadata_v2(struct bnge_tpa_info *tpa_info,
+				 struct rx_tpa_start_cmp *tpa_start,
+				 struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	tpa_info->vlan_valid = 0;
+	if (TPA_START_VLAN_VALID(tpa_start)) {
+		u32 tpid_sel = TPA_START_VLAN_TPID_SEL(tpa_start);
+		u32 vlan_proto = ETH_P_8021Q;
+
+		tpa_info->vlan_valid = 1;
+		if (tpid_sel == RX_TPA_START_METADATA1_TPID_8021AD)
+			vlan_proto = ETH_P_8021AD;
+		tpa_info->metadata = vlan_proto << 16 |
+				     TPA_START_METADATA0_TCI(tpa_start1);
+	}
+}
+
+static void bnge_tpa_start(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
+			   u8 cmp_type, struct rx_tpa_start_cmp *tpa_start,
+			   struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf;
+	struct bnge_tpa_info *tpa_info;
+	u16 cons, prod, agg_id;
+	struct rx_bd *prod_bd;
+	dma_addr_t mapping;
+
+	agg_id = TPA_START_AGG_ID(tpa_start);
+	agg_id = bnge_tpa_alloc_agg_idx(rxr, agg_id);
+	if (unlikely(agg_id == INVALID_HW_RING_ID)) {
+		netdev_warn(bn->netdev, "Unable to allocate agg ID for ring %d, agg 0x%x\n",
+			    rxr->bnapi->index, TPA_START_AGG_ID(tpa_start));
+		bnge_sched_reset_rxr(bn, rxr);
+		return;
+	}
+	cons = tpa_start->rx_tpa_start_cmp_opaque;
+	prod = rxr->rx_prod;
+	cons_rx_buf = &rxr->rx_buf_ring[cons];
+	prod_rx_buf = &rxr->rx_buf_ring[RING_RX(bn, prod)];
+	tpa_info = &rxr->rx_tpa[agg_id];
+
+	if (unlikely(cons != rxr->rx_next_cons ||
+		     TPA_START_ERROR(tpa_start))) {
+		netdev_warn(bn->netdev, "TPA cons %x, expected cons %x, error code %x\n",
+			    cons, rxr->rx_next_cons,
+			    TPA_START_ERROR_CODE(tpa_start1));
+		bnge_sched_reset_rxr(bn, rxr);
+		return;
+	}
+	prod_rx_buf->data = tpa_info->data;
+	prod_rx_buf->data_ptr = tpa_info->data_ptr;
+
+	mapping = tpa_info->mapping;
+	prod_rx_buf->mapping = mapping;
+
+	prod_bd = &rxr->rx_desc_ring[RX_RING(bn, prod)][RX_IDX(prod)];
+
+	prod_bd->rx_bd_haddr = cpu_to_le64(mapping);
+
+	tpa_info->data = cons_rx_buf->data;
+	tpa_info->data_ptr = cons_rx_buf->data_ptr;
+	cons_rx_buf->data = NULL;
+	tpa_info->mapping = cons_rx_buf->mapping;
+
+	tpa_info->len =
+		le32_to_cpu(tpa_start->rx_tpa_start_cmp_len_flags_type) >>
+		RX_TPA_START_CMP_LEN_SHIFT;
+	if (likely(TPA_START_HASH_VALID(tpa_start))) {
+		tpa_info->hash_type = PKT_HASH_TYPE_L4;
+		if (TPA_START_IS_IPV6(tpa_start1))
+			tpa_info->gso_type = SKB_GSO_TCPV6;
+		else
+			tpa_info->gso_type = SKB_GSO_TCPV4;
+		tpa_info->rss_hash =
+			le32_to_cpu(tpa_start->rx_tpa_start_cmp_rss_hash);
+	} else {
+		tpa_info->hash_type = PKT_HASH_TYPE_NONE;
+		tpa_info->gso_type = 0;
+		netif_warn(bn, rx_err, bn->netdev, "TPA packet without valid hash\n");
+	}
+	tpa_info->flags2 = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_flags2);
+	tpa_info->hdr_info = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_hdr_info);
+	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP)
+		bnge_tpa_metadata(tpa_info, tpa_start, tpa_start1);
+	else
+		bnge_tpa_metadata_v2(tpa_info, tpa_start, tpa_start1);
+	tpa_info->agg_count = 0;
+
+	rxr->rx_prod = NEXT_RX(prod);
+	cons = RING_RX(bn, NEXT_RX(cons));
+	rxr->rx_next_cons = RING_RX(bn, NEXT_RX(cons));
+	cons_rx_buf = &rxr->rx_buf_ring[cons];
+
+	bnge_reuse_rx_data(rxr, cons, cons_rx_buf->data);
+	rxr->rx_prod = NEXT_RX(rxr->rx_prod);
+	cons_rx_buf->data = NULL;
+}
+
+static void bnge_abort_tpa(struct bnge_cp_ring_info *cpr, u16 idx, u32 agg_bufs)
+{
+	if (agg_bufs)
+		bnge_reuse_rx_agg_bufs(cpr, idx, 0, agg_bufs, true);
+}
+
+static void bnge_tpa_agg(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
+			 struct rx_agg_cmp *rx_agg)
+{
+	u16 agg_id = TPA_AGG_AGG_ID(rx_agg);
+	struct bnge_tpa_info *tpa_info;
+
+	agg_id = bnge_lookup_agg_idx(rxr, agg_id);
+	tpa_info = &rxr->rx_tpa[agg_id];
+
+	tpa_info->agg_arr[tpa_info->agg_count++] = *rx_agg;
+}
+
 void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *data)
 {
 	struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf;
@@ -303,6 +481,208 @@ static struct sk_buff *bnge_copy_skb(struct bnge_napi *bnapi, u8 *data,
 	return skb;
 }
 
+#ifdef CONFIG_INET
+static void bnge_gro_tunnel(struct sk_buff *skb, __be16 ip_proto)
+{
+	struct udphdr *uh = NULL;
+
+	if (ip_proto == htons(ETH_P_IP)) {
+		struct iphdr *iph = (struct iphdr *)skb->data;
+
+		if (iph->protocol == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	} else {
+		struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
+
+		if (iph->nexthdr == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	}
+	if (uh) {
+		if (uh->check)
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
+		else
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
+	}
+}
+
+static struct sk_buff *bnge_gro_func(struct bnge_tpa_info *tpa_info,
+				     int payload_off, int tcp_ts,
+				     struct sk_buff *skb)
+{
+	u16 outer_ip_off, inner_ip_off, inner_mac_off;
+	u32 hdr_info = tpa_info->hdr_info;
+	int iphdr_len, nw_off;
+
+	inner_ip_off = BNGE_TPA_INNER_L3_OFF(hdr_info);
+	inner_mac_off = BNGE_TPA_INNER_L2_OFF(hdr_info);
+	outer_ip_off = BNGE_TPA_OUTER_L3_OFF(hdr_info);
+
+	nw_off = inner_ip_off - ETH_HLEN;
+	skb_set_network_header(skb, nw_off);
+	iphdr_len = (tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_IP_TYPE) ?
+		    sizeof(struct ipv6hdr) : sizeof(struct iphdr);
+	skb_set_transport_header(skb, nw_off + iphdr_len);
+
+	if (inner_mac_off) { /* tunnel */
+		__be16 proto = *((__be16 *)(skb->data + outer_ip_off -
+					    ETH_HLEN - 2));
+
+		bnge_gro_tunnel(skb, proto);
+	}
+
+	return skb;
+}
+
+static struct sk_buff *bnge_gro_skb(struct bnge_net *bn,
+				    struct bnge_tpa_info *tpa_info,
+				    struct rx_tpa_end_cmp *tpa_end,
+				    struct rx_tpa_end_cmp_ext *tpa_end1,
+				    struct sk_buff *skb)
+{
+	int payload_off;
+	u16 segs;
+
+	segs = TPA_END_TPA_SEGS(tpa_end);
+	if (segs == 1)
+		return skb;
+
+	NAPI_GRO_CB(skb)->count = segs;
+	skb_shinfo(skb)->gso_size =
+		le32_to_cpu(tpa_end1->rx_tpa_end_cmp_seg_len);
+	skb_shinfo(skb)->gso_type = tpa_info->gso_type;
+	payload_off = TPA_END_PAYLOAD_OFF(tpa_end1);
+	skb = bnge_gro_func(tpa_info, payload_off,
+			    TPA_END_GRO_TS(tpa_end), skb);
+	if (likely(skb))
+		tcp_gro_complete(skb);
+
+	return skb;
+}
+#endif
+
+static struct sk_buff *bnge_tpa_end(struct bnge_net *bn,
+				    struct bnge_cp_ring_info *cpr,
+				    u32 *raw_cons,
+				    struct rx_tpa_end_cmp *tpa_end,
+				    struct rx_tpa_end_cmp_ext *tpa_end1,
+				    u8 *event)
+{
+	struct bnge_napi *bnapi = cpr->bnapi;
+	struct net_device *dev = bn->netdev;
+	struct bnge_tpa_info *tpa_info;
+	struct bnge_rx_ring_info *rxr;
+	u8 *data_ptr, agg_bufs;
+	struct sk_buff *skb;
+	u16 idx = 0, agg_id;
+	dma_addr_t mapping;
+	unsigned int len;
+	void *data;
+
+	rxr = bnapi->rx_ring;
+	agg_id = TPA_END_AGG_ID(tpa_end);
+	agg_id = bnge_lookup_agg_idx(rxr, agg_id);
+	agg_bufs = TPA_END_AGG_BUFS(tpa_end1);
+	tpa_info = &rxr->rx_tpa[agg_id];
+	if (unlikely(agg_bufs != tpa_info->agg_count)) {
+		netdev_warn(bn->netdev, "TPA end agg_buf %d != expected agg_bufs %d\n",
+			    agg_bufs, tpa_info->agg_count);
+		agg_bufs = tpa_info->agg_count;
+	}
+	tpa_info->agg_count = 0;
+	*event |= BNGE_AGG_EVENT;
+	bnge_free_agg_idx(rxr, agg_id);
+	idx = agg_id;
+	data = tpa_info->data;
+	data_ptr = tpa_info->data_ptr;
+	prefetch(data_ptr);
+	len = tpa_info->len;
+	mapping = tpa_info->mapping;
+
+	if (unlikely(agg_bufs > MAX_SKB_FRAGS || TPA_END_ERRORS(tpa_end1))) {
+		bnge_abort_tpa(cpr, idx, agg_bufs);
+		if (agg_bufs > MAX_SKB_FRAGS)
+			netdev_warn(bn->netdev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
+				    agg_bufs, (int)MAX_SKB_FRAGS);
+		return NULL;
+	}
+
+	if (len <= bn->rx_copybreak) {
+		skb = bnge_copy_skb(bnapi, data_ptr, len, mapping);
+		if (!skb) {
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+	} else {
+		dma_addr_t new_mapping;
+		u8 *new_data;
+
+		new_data = __bnge_alloc_rx_frag(bn, &new_mapping, rxr,
+						GFP_ATOMIC);
+		if (!new_data) {
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+
+		tpa_info->data = new_data;
+		tpa_info->data_ptr = new_data + bn->rx_offset;
+		tpa_info->mapping = new_mapping;
+
+		skb = napi_build_skb(data, bn->rx_buf_size);
+		dma_sync_single_for_cpu(bn->bd->dev, mapping,
+					bn->rx_buf_use_size, bn->rx_dir);
+
+		if (!skb) {
+			page_pool_free_va(rxr->head_pool, data, true);
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+		skb_mark_for_recycle(skb);
+		skb_reserve(skb, bn->rx_offset);
+		skb_put(skb, len);
+	}
+
+	if (agg_bufs) {
+		skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, idx, agg_bufs,
+					      true);
+		/* Page reuse already handled by bnge_rx_agg_netmems_skb(). */
+		if (!skb)
+			return NULL;
+	}
+
+	skb->protocol = eth_type_trans(skb, dev);
+
+	if (tpa_info->hash_type != PKT_HASH_TYPE_NONE)
+		skb_set_hash(skb, tpa_info->rss_hash, tpa_info->hash_type);
+
+	if (tpa_info->vlan_valid &&
+	    (dev->features & BNGE_HW_FEATURE_VLAN_ALL_RX)) {
+		__be16 vlan_proto = htons(tpa_info->metadata >>
+					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+		u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+
+		if (eth_type_vlan(vlan_proto)) {
+			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
+		} else {
+			dev_kfree_skb(skb);
+			return NULL;
+		}
+	}
+
+	skb_checksum_none_assert(skb);
+	if (likely(tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_L4_CS_CALC)) {
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		skb->csum_level =
+			(tpa_info->flags2 & RX_CMP_FLAGS2_T_L4_CS_CALC) >> 3;
+	}
+
+#ifdef CONFIG_INET
+	if (bn->priv_flags & BNGE_NET_EN_GRO)
+		skb = bnge_gro_skb(bn, tpa_info, tpa_end, tpa_end1, skb);
+#endif
+
+	return skb;
+}
+
 static enum pkt_hash_types bnge_rss_ext_op(struct bnge_net *bn,
 					   struct rx_cmp *rxcmp)
 {
@@ -395,6 +775,7 @@ static struct sk_buff *bnge_rx_skb(struct bnge_net *bn,
 
 /* returns the following:
  * 1       - 1 packet successfully received
+ * 0       - successful TPA_START, packet not completed yet
  * -EBUSY  - completion ring does not have all the agg buffers yet
  * -ENOMEM - packet aborted due to out of memory
  * -EIO    - packet aborted due to hw error indicated in BD
@@ -427,6 +808,11 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 
 	cmp_type = RX_CMP_TYPE(rxcmp);
 
+	if (cmp_type == CMP_TYPE_RX_TPA_AGG_CMP) {
+		bnge_tpa_agg(bn, rxr, (struct rx_agg_cmp *)rxcmp);
+		goto next_rx_no_prod_no_len;
+	}
+
 	tmp_raw_cons = NEXT_RAW_CMP(tmp_raw_cons);
 	cp_cons = RING_CMP(bn, tmp_raw_cons);
 	rxcmp1 = (struct rx_cmp_ext *)
@@ -441,6 +827,28 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 	dma_rmb();
 	prod = rxr->rx_prod;
 
+	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP ||
+	    cmp_type == CMP_TYPE_RX_L2_TPA_START_V3_CMP) {
+		bnge_tpa_start(bn, rxr, cmp_type,
+			       (struct rx_tpa_start_cmp *)rxcmp,
+			       (struct rx_tpa_start_cmp_ext *)rxcmp1);
+
+		*event |= BNGE_RX_EVENT;
+		goto next_rx_no_prod_no_len;
+
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		skb = bnge_tpa_end(bn, cpr, &tmp_raw_cons,
+				   (struct rx_tpa_end_cmp *)rxcmp,
+				   (struct rx_tpa_end_cmp_ext *)rxcmp1, event);
+		rc = -ENOMEM;
+		if (likely(skb)) {
+			bnge_deliver_skb(bn, bnapi, skb);
+			rc = 1;
+		}
+		*event |= BNGE_RX_EVENT;
+		goto next_rx_no_prod_no_len;
+	}
+
 	cons = rxcmp->rx_cmp_opaque;
 	if (unlikely(cons != rxr->rx_next_cons)) {
 		int rc1 = bnge_discard_rx(bn, cpr, &tmp_raw_cons, rxcmp);
@@ -475,7 +883,8 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 	if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
 		bnge_reuse_rx_data(rxr, cons, data);
 		if (agg_bufs)
-			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs);
+			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs,
+					       false);
 		rc = -EIO;
 		goto next_rx_no_len;
 	}
@@ -490,7 +899,7 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 		if (!skb) {
 			if (agg_bufs)
 				bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0,
-						       agg_bufs);
+						       agg_bufs, false);
 			goto oom_next_rx;
 		}
 	} else {
@@ -501,7 +910,7 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 
 	if (agg_bufs) {
 		skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, cp_cons,
-					      agg_bufs);
+					      agg_bufs, false);
 		if (!skb)
 			goto oom_next_rx;
 	}
@@ -592,6 +1001,12 @@ static int bnge_force_rx_discard(struct bnge_net *bn,
 		   cmp_type == CMP_TYPE_RX_L2_V3_CMP) {
 		rxcmp1->rx_cmp_cfa_code_errors_v2 |=
 			cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR);
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		struct rx_tpa_end_cmp_ext *tpa_end1;
+
+		tpa_end1 = (struct rx_tpa_end_cmp_ext *)rxcmp1;
+		tpa_end1->rx_tpa_end_cmp_errors_v2 |=
+			cpu_to_le32(RX_TPA_END_CMP_ERRORS);
 	}
 	rc = bnge_rx_pkt(bn, cpr, raw_cons, event);
 	return rc;
-- 
2.47.3