From: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	vsrama-krishna.nemani@broadcom.com, vikas.gupta@broadcom.com,
	ajit.khaparde@broadcom.com, Bhargava Marreddy, Rajashekar Hudumula
Subject: [v6, net-next 8/8] bng_en: Add support for TPA events
Date: Sat, 24 Jan 2026 01:05:04 +0530
Message-ID: <20260123193504.285573-9-bhargava.marreddy@broadcom.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260123193504.285573-1-bhargava.marreddy@broadcom.com>
References: <20260123193504.285573-1-bhargava.marreddy@broadcom.com>

Enable TPA functionality in the VNIC and add functions to handle TPA
events, which are used for LRO/GRO processing.
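
For reviewers new to TPA: the hardware reports one aggregated packet as
a TPA_START completion (headers, RSS hash, VLAN metadata), zero or more
TPA_AGG completions (one per aggregation buffer), and a TPA_END
completion, at which point the driver assembles a single large SKB and,
when GRO is enabled, finishes it with tcp_gro_complete(). As a rough
illustration only, here is a minimal userspace model of that per-agg_id
state machine; tpa_ctx, handle_cmpl() and the sizes below are invented
for the example and are not part of this patch:

#include <stdint.h>
#include <stdio.h>

enum tpa_cmpl_type { TPA_START, TPA_AGG, TPA_END };

struct tpa_ctx {			/* models one rx_tpa[] slot */
	int active;			/* aggregation in progress */
	unsigned int len;		/* bytes accumulated so far */
	unsigned int bufs;		/* aggregation buffers queued */
};

static struct tpa_ctx ctx[8];		/* one context per agg ID */

static void handle_cmpl(enum tpa_cmpl_type type, uint16_t agg_id,
			unsigned int len)
{
	struct tpa_ctx *t = &ctx[agg_id];

	switch (type) {
	case TPA_START:		/* open the context; packet not done yet */
		t->active = 1;
		t->len = len;
		t->bufs = 0;
		break;
	case TPA_AGG:		/* stash one aggregation buffer */
		t->bufs++;
		t->len += len;
		break;
	case TPA_END:		/* deliver one large SKB to the stack */
		printf("agg_id %u: deliver %u bytes from %u agg bufs\n",
		       agg_id, t->len, t->bufs);
		t->active = 0;
		break;
	}
}

int main(void)
{
	handle_cmpl(TPA_START, 3, 128);		/* header buffer */
	handle_cmpl(TPA_AGG, 3, 1448);
	handle_cmpl(TPA_AGG, 3, 1448);
	handle_cmpl(TPA_END, 3, 0);
	return 0;
}
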
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rajashekar Hudumula
---
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.c    |  65 +++
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.h    |   2 +
 .../net/ethernet/broadcom/bnge/bnge_netdev.c  |  27 ++
 .../net/ethernet/broadcom/bnge/bnge_txrx.c    | 436 +++++++++++++++++-
 4 files changed, 520 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index 198f49b40dbf..d4b1c0d2c44c 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -1183,3 +1183,68 @@ int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx)
 	req->async_event_cr = cpu_to_le16(idx);
 	return bnge_hwrm_req_send(bd, req);
 }
+
+#define BNGE_DFLT_TUNL_TPA_BMAP				\
+	(VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GRE |	\
+	 VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV4 |	\
+	 VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV6)
+
+static void bnge_hwrm_vnic_update_tunl_tpa(struct bnge_dev *bd,
+					   struct hwrm_vnic_tpa_cfg_input *req)
+{
+	struct bnge_net *bn = netdev_priv(bd->netdev);
+	u32 tunl_tpa_bmap = BNGE_DFLT_TUNL_TPA_BMAP;
+
+	if (!(bd->fw_cap & BNGE_FW_CAP_VNIC_TUNNEL_TPA))
+		return;
+
+	if (bn->vxlan_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN;
+	if (bn->vxlan_gpe_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN_GPE;
+	if (bn->nge_port)
+		tunl_tpa_bmap |= VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GENEVE;
+
+	req->enables |= cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_TNL_TPA_EN);
+	req->tnl_tpa_en_bitmap = cpu_to_le32(tunl_tpa_bmap);
+}
+
+int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
+			   u32 tpa_flags)
+{
+	struct bnge_net *bn = netdev_priv(bd->netdev);
+	struct hwrm_vnic_tpa_cfg_input *req;
+	int rc;
+
+	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
+		return 0;
+
+	rc = bnge_hwrm_req_init(bd, req, HWRM_VNIC_TPA_CFG);
+	if (rc)
+		return rc;
+
+	if (tpa_flags) {
+		u32 flags;
+
+		flags = VNIC_TPA_CFG_REQ_FLAGS_TPA |
+			VNIC_TPA_CFG_REQ_FLAGS_ENCAP_TPA |
+			VNIC_TPA_CFG_REQ_FLAGS_RSC_WND_UPDATE |
+			VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_ECN |
+			VNIC_TPA_CFG_REQ_FLAGS_AGG_WITH_SAME_GRE_SEQ;
+		if (tpa_flags & BNGE_NET_EN_GRO)
+			flags |= VNIC_TPA_CFG_REQ_FLAGS_GRO;
+
+		req->flags = cpu_to_le32(flags);
+		req->enables =
+			cpu_to_le32(VNIC_TPA_CFG_REQ_ENABLES_MAX_AGG_SEGS |
+				    VNIC_TPA_CFG_REQ_ENABLES_MAX_AGGS |
+				    VNIC_TPA_CFG_REQ_ENABLES_MIN_AGG_LEN);
+		req->max_agg_segs = cpu_to_le16(MAX_TPA_SEGS);
+		req->max_aggs = cpu_to_le16(bn->max_tpa);
+		req->min_agg_len = cpu_to_le32(512);
+		bnge_hwrm_vnic_update_tunl_tpa(bd, req);
+	}
+	req->vnic_id = cpu_to_le16(vnic->fw_vnic_id);
+
+	return bnge_hwrm_req_send(bd, req);
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
index 042f28e84a05..38b046237feb 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
@@ -55,4 +55,6 @@ int hwrm_ring_alloc_send_msg(struct bnge_net *bn,
 			     struct bnge_ring_struct *ring,
 			     u32 ring_type, u32 map_index);
 int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx);
+int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
+			   u32 tpa_flags);
 #endif /* _BNGE_HWRM_LIB_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 6e240f48d89b..abb9b6bc91e0 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -2279,6 +2279,27 @@ static int bnge_request_irq(struct bnge_net *bn)
 	return rc;
 }
 
+static int bnge_set_tpa(struct bnge_net *bn, bool set_tpa)
+{
+	u32 tpa_flags = 0;
+	int rc, i;
+
+	if (set_tpa)
+		tpa_flags = bn->priv_flags & BNGE_NET_EN_TPA;
+	else if (BNGE_NO_FW_ACCESS(bn->bd))
+		return 0;
+	for (i = 0; i < bn->nr_vnics; i++) {
+		rc = bnge_hwrm_vnic_set_tpa(bn->bd, &bn->vnic_info[i],
+					    tpa_flags);
+		if (rc) {
+			netdev_err(bn->netdev, "hwrm vnic set tpa failure rc for vnic %d: %x\n",
+				   i, rc);
+			return rc;
+		}
+	}
+	return 0;
+}
+
 static int bnge_init_chip(struct bnge_net *bn)
 {
 	struct bnge_vnic_info *vnic = &bn->vnic_info[BNGE_VNIC_DEFAULT];
@@ -2313,6 +2334,12 @@ static int bnge_init_chip(struct bnge_net *bn)
 	if (bd->rss_cap & BNGE_RSS_CAP_RSS_HASH_TYPE_DELTA)
 		bnge_hwrm_update_rss_hash_cfg(bn);
 
+	if (bn->priv_flags & BNGE_NET_EN_TPA) {
+		rc = bnge_set_tpa(bn, true);
+		if (rc)
+			goto err_out;
+	}
+
 	/* Filter for default vnic 0 */
 	rc = bnge_hwrm_set_vnic_filter(bn, 0, 0, bn->netdev->dev_addr);
 	if (rc) {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
index 363f835438f4..3c41e439369d 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -44,6 +45,15 @@ irqreturn_t bnge_msix(int irq, void *dev_instance)
 	return IRQ_HANDLED;
 }
 
+static struct rx_agg_cmp *bnge_get_tpa_agg(struct bnge_net *bn,
+					   struct bnge_rx_ring_info *rxr,
+					   u16 agg_id, u16 curr)
+{
+	struct bnge_tpa_info *tpa_info = &rxr->rx_tpa[agg_id];
+
+	return &tpa_info->agg_arr[curr];
+}
+
 static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn,
 				       struct bnge_cp_ring_info *cpr,
 				       u16 cp_cons, u16 curr)
@@ -57,7 +67,7 @@ static struct rx_agg_cmp *bnge_get_agg(struct bnge_net *bn,
 }
 
 static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx,
-				   u16 start, u32 agg_bufs)
+				   u16 start, u32 agg_bufs, bool tpa)
 {
 	struct bnge_napi *bnapi = cpr->bnapi;
 	struct bnge_net *bn = bnapi->bn;
@@ -76,7 +86,10 @@ static void bnge_reuse_rx_agg_bufs(struct bnge_cp_ring_info *cpr, u16 idx,
 		netmem_ref netmem;
 		u16 cons;
 
-		agg = bnge_get_agg(bn, cpr, idx, start + i);
+		if (tpa)
+			agg = bnge_get_tpa_agg(bn, rxr, idx, start + i);
+		else
+			agg = bnge_get_agg(bn, cpr, idx, start + i);
 		cons = agg->rx_agg_cmp_opaque;
 		__clear_bit(cons, rxr->rx_agg_bmap);
 
@@ -137,6 +150,8 @@ static int bnge_discard_rx(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 		agg_bufs = (le32_to_cpu(rxcmp->rx_cmp_misc_v1) &
 			    RX_CMP_AGG_BUFS) >>
 			   RX_CMP_AGG_BUFS_SHIFT;
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		return 0;
 	}
 
 	if (agg_bufs) {
@@ -149,7 +164,7 @@
 
 static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 				 struct bnge_cp_ring_info *cpr,
-				 u16 idx, u32 agg_bufs,
+				 u16 idx, u32 agg_bufs, bool tpa,
 				 struct sk_buff *skb)
 {
 	struct bnge_napi *bnapi = cpr->bnapi;
@@ -168,7 +183,10 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 		u16 cons, frag_len;
 		netmem_ref netmem;
 
-		agg = bnge_get_agg(bn, cpr, idx, i);
+		if (tpa)
+			agg = bnge_get_tpa_agg(bn, rxr, idx, i);
+		else
+			agg = bnge_get_agg(bn, cpr, idx, i);
 		cons = agg->rx_agg_cmp_opaque;
 		frag_len = (le32_to_cpu(agg->rx_agg_cmp_len_flags_type) &
 			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
@@ -198,7 +216,7 @@
 		 * allocated already.
 		 */
 		rxr->rx_agg_prod = prod;
-		bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i);
+		bnge_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i, tpa);
 		return 0;
 	}
 
@@ -215,11 +233,12 @@ static u32 __bnge_rx_agg_netmems(struct bnge_net *bn,
 static struct sk_buff *bnge_rx_agg_netmems_skb(struct bnge_net *bn,
 					       struct bnge_cp_ring_info *cpr,
 					       struct sk_buff *skb, u16 idx,
-					       u32 agg_bufs)
+					       u32 agg_bufs, bool tpa)
 {
 	u32 total_frag_len;
 
-	total_frag_len = __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs, skb);
+	total_frag_len = __bnge_rx_agg_netmems(bn, cpr, idx, agg_bufs,
+					       tpa, skb);
 	if (!total_frag_len) {
 		skb_mark_for_recycle(skb);
 		dev_kfree_skb(skb);
@@ -253,6 +272,165 @@ static void bnge_sched_reset_txr(struct bnge_net *bn,
 	/* TODO: Initiate reset task */
 }
 
+static u16 bnge_tpa_alloc_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_id)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+	u16 idx = agg_id & MAX_TPA_MASK;
+
+	if (test_bit(idx, map->agg_idx_bmap)) {
+		idx = find_first_zero_bit(map->agg_idx_bmap, MAX_TPA);
+		if (idx >= MAX_TPA)
+			return INVALID_HW_RING_ID;
+	}
+	__set_bit(idx, map->agg_idx_bmap);
+	map->agg_id_tbl[agg_id] = idx;
+	return idx;
+}
+
+static void bnge_free_agg_idx(struct bnge_rx_ring_info *rxr, u16 idx)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+
+	__clear_bit(idx, map->agg_idx_bmap);
+}
+
+static u16 bnge_lookup_agg_idx(struct bnge_rx_ring_info *rxr, u16 agg_id)
+{
+	struct bnge_tpa_idx_map *map = rxr->rx_tpa_idx_map;
+
+	return map->agg_id_tbl[agg_id];
+}
+
+static void bnge_tpa_metadata(struct bnge_tpa_info *tpa_info,
+			      struct rx_tpa_start_cmp *tpa_start,
+			      struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	tpa_info->cfa_code_valid = 1;
+	tpa_info->cfa_code = TPA_START_CFA_CODE(tpa_start1);
+	tpa_info->vlan_valid = 0;
+	if (tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) {
+		tpa_info->vlan_valid = 1;
+		tpa_info->metadata =
+			le32_to_cpu(tpa_start1->rx_tpa_start_cmp_metadata);
+	}
+}
+
+static void bnge_tpa_metadata_v2(struct bnge_tpa_info *tpa_info,
+				 struct rx_tpa_start_cmp *tpa_start,
+				 struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	tpa_info->vlan_valid = 0;
+	if (TPA_START_VLAN_VALID(tpa_start)) {
+		u32 tpid_sel = TPA_START_VLAN_TPID_SEL(tpa_start);
+		u32 vlan_proto = ETH_P_8021Q;
+
+		tpa_info->vlan_valid = 1;
+		if (tpid_sel == RX_TPA_START_METADATA1_TPID_8021AD)
+			vlan_proto = ETH_P_8021AD;
+		tpa_info->metadata = vlan_proto << 16 |
+				     TPA_START_METADATA0_TCI(tpa_start1);
+	}
+}
+
+static void bnge_tpa_start(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
+			   u8 cmp_type, struct rx_tpa_start_cmp *tpa_start,
+			   struct rx_tpa_start_cmp_ext *tpa_start1)
+{
+	struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf;
+	struct bnge_tpa_info *tpa_info;
+	u16 cons, prod, agg_id;
+	struct rx_bd *prod_bd;
+	dma_addr_t mapping;
+
+	agg_id = TPA_START_AGG_ID(tpa_start);
+	agg_id = bnge_tpa_alloc_agg_idx(rxr, agg_id);
+	if (unlikely(agg_id == INVALID_HW_RING_ID)) {
+		netdev_warn(bn->netdev, "Unable to allocate agg ID for ring %d, agg 0x%lx\n",
+			    rxr->bnapi->index, TPA_START_AGG_ID(tpa_start));
+		bnge_sched_reset_rxr(bn, rxr);
+		return;
+	}
+	cons = tpa_start->rx_tpa_start_cmp_opaque;
+	prod = rxr->rx_prod;
+	cons_rx_buf = &rxr->rx_buf_ring[cons];
+	prod_rx_buf = &rxr->rx_buf_ring[RING_RX(bn, prod)];
+	tpa_info = &rxr->rx_tpa[agg_id];
+
+	if (unlikely(cons != rxr->rx_next_cons ||
+		     TPA_START_ERROR(tpa_start))) {
+		netdev_warn(bn->netdev, "TPA cons %x, expected cons %x, error code %lx\n",
+			    cons, rxr->rx_next_cons,
+			    TPA_START_ERROR_CODE(tpa_start1));
+		bnge_sched_reset_rxr(bn, rxr);
+		return;
+	}
+	prod_rx_buf->data = tpa_info->data;
+	prod_rx_buf->data_ptr = tpa_info->data_ptr;
+
+	mapping = tpa_info->mapping;
+	prod_rx_buf->mapping = mapping;
+
+	prod_bd = &rxr->rx_desc_ring[RX_RING(bn, prod)][RX_IDX(prod)];
+
+	prod_bd->rx_bd_haddr = cpu_to_le64(mapping);
+
+	tpa_info->data = cons_rx_buf->data;
+	tpa_info->data_ptr = cons_rx_buf->data_ptr;
+	cons_rx_buf->data = NULL;
+	tpa_info->mapping = cons_rx_buf->mapping;
+
+	tpa_info->len =
+		le32_to_cpu(tpa_start->rx_tpa_start_cmp_len_flags_type) >>
+		RX_TPA_START_CMP_LEN_SHIFT;
+	if (likely(TPA_START_HASH_VALID(tpa_start))) {
+		tpa_info->hash_type = PKT_HASH_TYPE_L4;
+		if (TPA_START_IS_IPV6(tpa_start1))
+			tpa_info->gso_type = SKB_GSO_TCPV6;
+		else
+			tpa_info->gso_type = SKB_GSO_TCPV4;
+		tpa_info->rss_hash =
+			le32_to_cpu(tpa_start->rx_tpa_start_cmp_rss_hash);
+	} else {
+		tpa_info->hash_type = PKT_HASH_TYPE_NONE;
+		tpa_info->gso_type = 0;
+		netif_warn(bn, rx_err, bn->netdev, "TPA packet without valid hash\n");
+	}
+	tpa_info->flags2 = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_flags2);
+	tpa_info->hdr_info = le32_to_cpu(tpa_start1->rx_tpa_start_cmp_hdr_info);
+	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP)
+		bnge_tpa_metadata(tpa_info, tpa_start, tpa_start1);
+	else
+		bnge_tpa_metadata_v2(tpa_info, tpa_start, tpa_start1);
+	tpa_info->agg_count = 0;
+
+	rxr->rx_prod = NEXT_RX(prod);
+	cons = RING_RX(bn, NEXT_RX(cons));
+	rxr->rx_next_cons = RING_RX(bn, NEXT_RX(cons));
+	cons_rx_buf = &rxr->rx_buf_ring[cons];
+
+	bnge_reuse_rx_data(rxr, cons, cons_rx_buf->data);
+	rxr->rx_prod = NEXT_RX(rxr->rx_prod);
+	cons_rx_buf->data = NULL;
+}
+
+static void bnge_abort_tpa(struct bnge_cp_ring_info *cpr, u16 idx, u32 agg_bufs)
+{
+	if (agg_bufs)
+		bnge_reuse_rx_agg_bufs(cpr, idx, 0, agg_bufs, true);
+}
+
+static void bnge_tpa_agg(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
+			 struct rx_agg_cmp *rx_agg)
+{
+	u16 agg_id = TPA_AGG_AGG_ID(rx_agg);
+	struct bnge_tpa_info *tpa_info;
+
+	agg_id = bnge_lookup_agg_idx(rxr, agg_id);
+	tpa_info = &rxr->rx_tpa[agg_id];
+
+	tpa_info->agg_arr[tpa_info->agg_count++] = *rx_agg;
+}
+
 void bnge_reuse_rx_data(struct bnge_rx_ring_info *rxr, u16 cons, void *data)
 {
 	struct bnge_sw_rx_bd *cons_rx_buf, *prod_rx_buf;
@@ -305,6 +483,208 @@ static struct sk_buff *bnge_copy_skb(struct bnge_napi *bnapi, u8 *data,
 	return skb;
 }
 
+#ifdef CONFIG_INET
+static void bnge_gro_tunnel(struct sk_buff *skb, __be16 ip_proto)
+{
+	struct udphdr *uh = NULL;
+
+	if (ip_proto == htons(ETH_P_IP)) {
+		struct iphdr *iph = (struct iphdr *)skb->data;
+
+		if (iph->protocol == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	} else {
+		struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
+
+		if (iph->nexthdr == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	}
+	if (uh) {
+		if (uh->check)
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
+		else
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
+	}
+}
+
+static struct sk_buff *bnge_gro_func(struct bnge_tpa_info *tpa_info,
+				     int payload_off, int tcp_ts,
+				     struct sk_buff *skb)
+{
+	u16 outer_ip_off, inner_ip_off, inner_mac_off;
+	u32 hdr_info = tpa_info->hdr_info;
+	int iphdr_len, nw_off;
+
+	inner_ip_off = BNGE_TPA_INNER_L3_OFF(hdr_info);
+	inner_mac_off = BNGE_TPA_INNER_L2_OFF(hdr_info);
+	outer_ip_off = BNGE_TPA_OUTER_L3_OFF(hdr_info);
+
+	nw_off = inner_ip_off - ETH_HLEN;
+	skb_set_network_header(skb, nw_off);
+	iphdr_len = (tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_IP_TYPE) ?
+		    sizeof(struct ipv6hdr) : sizeof(struct iphdr);
+	skb_set_transport_header(skb, nw_off + iphdr_len);
+
+	if (inner_mac_off) { /* tunnel */
+		__be16 proto = *((__be16 *)(skb->data + outer_ip_off -
+					    ETH_HLEN - 2));
+
+		bnge_gro_tunnel(skb, proto);
+	}
+
+	return skb;
+}
+
+static struct sk_buff *bnge_gro_skb(struct bnge_net *bn,
+				    struct bnge_tpa_info *tpa_info,
+				    struct rx_tpa_end_cmp *tpa_end,
+				    struct rx_tpa_end_cmp_ext *tpa_end1,
+				    struct sk_buff *skb)
+{
+	int payload_off;
+	u16 segs;
+
+	segs = TPA_END_TPA_SEGS(tpa_end);
+	if (segs == 1)
+		return skb;
+
+	NAPI_GRO_CB(skb)->count = segs;
+	skb_shinfo(skb)->gso_size =
+		le32_to_cpu(tpa_end1->rx_tpa_end_cmp_seg_len);
+	skb_shinfo(skb)->gso_type = tpa_info->gso_type;
+	payload_off = TPA_END_PAYLOAD_OFF(tpa_end1);
+	skb = bnge_gro_func(tpa_info, payload_off,
+			    TPA_END_GRO_TS(tpa_end), skb);
+	if (likely(skb))
+		tcp_gro_complete(skb);
+
+	return skb;
+}
+#endif
+
+static struct sk_buff *bnge_tpa_end(struct bnge_net *bn,
+				    struct bnge_cp_ring_info *cpr,
+				    u32 *raw_cons,
+				    struct rx_tpa_end_cmp *tpa_end,
+				    struct rx_tpa_end_cmp_ext *tpa_end1,
+				    u8 *event)
+{
+	struct bnge_napi *bnapi = cpr->bnapi;
+	struct net_device *dev = bn->netdev;
+	struct bnge_tpa_info *tpa_info;
+	struct bnge_rx_ring_info *rxr;
+	u8 *data_ptr, agg_bufs;
+	struct sk_buff *skb;
+	u16 idx = 0, agg_id;
+	dma_addr_t mapping;
+	unsigned int len;
+	void *data;
+
+	rxr = bnapi->rx_ring;
+	agg_id = TPA_END_AGG_ID(tpa_end);
+	agg_id = bnge_lookup_agg_idx(rxr, agg_id);
+	agg_bufs = TPA_END_AGG_BUFS(tpa_end1);
+	tpa_info = &rxr->rx_tpa[agg_id];
+	if (unlikely(agg_bufs != tpa_info->agg_count)) {
+		netdev_warn(bn->netdev, "TPA end agg_buf %d != expected agg_bufs %d\n",
+			    agg_bufs, tpa_info->agg_count);
+		agg_bufs = tpa_info->agg_count;
+	}
+	tpa_info->agg_count = 0;
+	*event |= BNGE_AGG_EVENT;
+	bnge_free_agg_idx(rxr, agg_id);
+	idx = agg_id;
+	data = tpa_info->data;
+	data_ptr = tpa_info->data_ptr;
+	prefetch(data_ptr);
+	len = tpa_info->len;
+	mapping = tpa_info->mapping;
+
+	if (unlikely(agg_bufs > MAX_SKB_FRAGS || TPA_END_ERRORS(tpa_end1))) {
+		bnge_abort_tpa(cpr, idx, agg_bufs);
+		if (agg_bufs > MAX_SKB_FRAGS)
+			netdev_warn(bn->netdev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
+				    agg_bufs, (int)MAX_SKB_FRAGS);
+		return NULL;
+	}
+
+	if (len <= bn->rx_copybreak) {
+		skb = bnge_copy_skb(bnapi, data_ptr, len, mapping);
+		if (!skb) {
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+	} else {
+		dma_addr_t new_mapping;
+		u8 *new_data;
+
+		new_data = __bnge_alloc_rx_frag(bn, &new_mapping, rxr,
+						GFP_ATOMIC);
+		if (!new_data) {
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+
+		tpa_info->data = new_data;
+		tpa_info->data_ptr = new_data + bn->rx_offset;
+		tpa_info->mapping = new_mapping;
+
+		skb = napi_build_skb(data, bn->rx_buf_size);
+		dma_sync_single_for_cpu(bn->bd->dev, mapping,
+					bn->rx_buf_use_size, bn->rx_dir);
+
+		if (!skb) {
+			page_pool_free_va(rxr->head_pool, data, true);
+			bnge_abort_tpa(cpr, idx, agg_bufs);
+			return NULL;
+		}
+		skb_mark_for_recycle(skb);
+		skb_reserve(skb, bn->rx_offset);
+		skb_put(skb, len);
+	}
+
+	if (agg_bufs) {
+		skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, idx, agg_bufs,
+					      true);
+		/* Page reuse already handled by bnge_rx_agg_netmems_skb(). */
+		if (!skb)
+			return NULL;
+	}
+
+	skb->protocol = eth_type_trans(skb, dev);
+
+	if (tpa_info->hash_type != PKT_HASH_TYPE_NONE)
+		skb_set_hash(skb, tpa_info->rss_hash, tpa_info->hash_type);
+
+	if (tpa_info->vlan_valid &&
+	    (dev->features & BNGE_HW_FEATURE_VLAN_ALL_RX)) {
+		__be16 vlan_proto = htons(tpa_info->metadata >>
+					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+		u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+
+		if (eth_type_vlan(vlan_proto)) {
+			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
+		} else {
+			dev_kfree_skb(skb);
+			return NULL;
+		}
+	}
+
+	skb_checksum_none_assert(skb);
+	if (likely(tpa_info->flags2 & RX_TPA_START_CMP_FLAGS2_L4_CS_CALC)) {
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		skb->csum_level =
+			(tpa_info->flags2 & RX_CMP_FLAGS2_T_L4_CS_CALC) >> 3;
+	}
+
+#ifdef CONFIG_INET
+	if (bn->priv_flags & BNGE_NET_EN_GRO)
+		skb = bnge_gro_skb(bn, tpa_info, tpa_end, tpa_end1, skb);
+#endif
+
+	return skb;
+}
+
 static enum pkt_hash_types bnge_rss_ext_op(struct bnge_net *bn,
 					   struct rx_cmp *rxcmp)
 {
@@ -397,6 +777,7 @@ static struct sk_buff *bnge_rx_skb(struct bnge_net *bn,
 
 /* returns the following:
  * 1       - 1 packet successfully received
+ * 0       - successful TPA_START, packet not completed yet
  * -EBUSY  - completion ring does not have all the agg buffers yet
  * -ENOMEM - packet aborted due to out of memory
  * -EIO    - packet aborted due to hw error indicated in BD
@@ -429,6 +810,11 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 
 	cmp_type = RX_CMP_TYPE(rxcmp);
 
+	if (cmp_type == CMP_TYPE_RX_TPA_AGG_CMP) {
+		bnge_tpa_agg(bn, rxr, (struct rx_agg_cmp *)rxcmp);
+		goto next_rx_no_prod_no_len;
+	}
+
 	tmp_raw_cons = NEXT_RAW_CMP(tmp_raw_cons);
 	cp_cons = RING_CMP(bn, tmp_raw_cons);
 	rxcmp1 = (struct rx_cmp_ext *)
@@ -443,6 +829,28 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 	dma_rmb();
 	prod = rxr->rx_prod;
 
+	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP ||
+	    cmp_type == CMP_TYPE_RX_L2_TPA_START_V3_CMP) {
+		bnge_tpa_start(bn, rxr, cmp_type,
+			       (struct rx_tpa_start_cmp *)rxcmp,
+			       (struct rx_tpa_start_cmp_ext *)rxcmp1);
+
+		*event |= BNGE_RX_EVENT;
+		goto next_rx_no_prod_no_len;
+
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		skb = bnge_tpa_end(bn, cpr, &tmp_raw_cons,
+				   (struct rx_tpa_end_cmp *)rxcmp,
+				   (struct rx_tpa_end_cmp_ext *)rxcmp1, event);
+		rc = -ENOMEM;
+		if (likely(skb)) {
+			bnge_deliver_skb(bn, bnapi, skb);
+			rc = 1;
+		}
+		*event |= BNGE_RX_EVENT;
+		goto next_rx_no_prod_no_len;
+	}
+
 	cons = rxcmp->rx_cmp_opaque;
 	if (unlikely(cons != rxr->rx_next_cons)) {
 		int rc1 = bnge_discard_rx(bn, cpr, &tmp_raw_cons, rxcmp);
@@ -477,7 +885,8 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 	if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
 		bnge_reuse_rx_data(rxr, cons, data);
 		if (agg_bufs)
-			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs);
+			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs,
+					       false);
 		rc = -EIO;
 		goto next_rx_no_len;
 	}
@@ -495,13 +904,14 @@ static int bnge_rx_pkt(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
 
 	if (!skb) {
 		if (agg_bufs)
-			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0, agg_bufs);
+			bnge_reuse_rx_agg_bufs(cpr, cp_cons, 0,
+					       agg_bufs, false);
 		goto oom_next_rx;
 	}
 
 	if (agg_bufs) {
 		skb = bnge_rx_agg_netmems_skb(bn, cpr, skb, cp_cons,
-					      agg_bufs);
+					      agg_bufs, false);
 		if (!skb)
 			goto oom_next_rx;
 	}
@@ -592,6 +1002,12 @@ static int bnge_force_rx_discard(struct bnge_net *bn,
 		   cmp_type == CMP_TYPE_RX_L2_V3_CMP) {
 		rxcmp1->rx_cmp_cfa_code_errors_v2 |=
 			cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR);
+	} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+		struct rx_tpa_end_cmp_ext *tpa_end1;
+
+		tpa_end1 = (struct rx_tpa_end_cmp_ext *)rxcmp1;
+		tpa_end1->rx_tpa_end_cmp_errors_v2 |=
+			cpu_to_le32(RX_TPA_END_CMP_ERRORS);
 	}
 	rc = bnge_rx_pkt(bn, cpr, raw_cons, event);
 	return rc;
-- 
2.47.3