From: Larysa Zaremba
To: Tony Nguyen, intel-wired-lan@lists.osuosl.org
Cc: Larysa Zaremba, Przemek Kitszel, Andrew Lunn, "David S. Miller",
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexander Lobakin , Simon Horman , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Stanislav Fomichev , Aleksandr Loktionov , Natalia Wochtman , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org Subject: [PATCH iwl-next v3 06/10] ixgbevf: XDP_TX in multi-buffer through libeth Date: Wed, 4 Mar 2026 17:03:38 +0100 Message-ID: <20260304160345.1340940-7-larysa.zaremba@intel.com> X-Mailer: git-send-email 2.52.0 In-Reply-To: <20260304160345.1340940-1-larysa.zaremba@intel.com> References: <20260304160345.1340940-1-larysa.zaremba@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use libeth to support XDP_TX action for segmented packets. Reviewed-by: Alexander Lobakin Signed-off-by: Larysa Zaremba --- drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 14 +- .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 294 ++++++++++++------ 2 files changed, 200 insertions(+), 108 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/eth= ernet/intel/ixgbevf/ixgbevf.h index 2626af039361..a27081ee764b 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h @@ -81,20 +81,22 @@ struct ixgbevf_ring { struct net_device *netdev; struct bpf_prog __rcu *xdp_prog; union { - struct page_pool *pp; /* Rx ring */ + struct page_pool *pp; /* Rx and XDP rings */ struct device *dev; /* Tx ring */ }; void *desc; /* descriptor ring memory */ - dma_addr_t dma; /* phys. address of descriptor ring */ - unsigned int size; /* length in bytes */ - u32 truesize; /* Rx buffer full size */ + union { + u32 truesize; /* Rx buffer full size */ + u32 pending; /* Sent-not-completed descriptors */ + }; u16 count; /* amount of descriptors */ - u16 next_to_use; u16 next_to_clean; + u32 next_to_use; =20 union { struct libeth_fqe *rx_fqes; struct ixgbevf_tx_buffer *tx_buffer_info; + struct libeth_sqe *xdp_sqes; }; unsigned long state; struct ixgbevf_stats stats; @@ -114,6 +116,8 @@ struct ixgbevf_ring { int queue_index; /* needed for multiqueue queue management */ u32 rx_buf_len; struct libeth_xdp_buff_stash xdp_stash; + unsigned int dma_size; /* length in bytes */ + dma_addr_t dma; /* phys. address of descriptor ring */ } ____cacheline_internodealigned_in_smp; =20 /* How many Rx Buffers do we bundle into one write to the hardware ? */ diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/ne= t/ethernet/intel/ixgbevf/ixgbevf_main.c index 27cab542d3bb..177eb141e22d 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -306,10 +306,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vect= or *q_vector, total_ipsec++; =20 /* free the skb */ - if (ring_is_xdp(tx_ring)) - libeth_xdp_return_va(tx_buffer->data, true); - else - napi_consume_skb(tx_buffer->skb, napi_budget); + napi_consume_skb(tx_buffer->skb, napi_budget); =20 /* unmap skb header data */ dma_unmap_single(tx_ring->dev, @@ -392,9 +389,8 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vecto= r *q_vector, eop_desc, (eop_desc ? 
 			   tx_ring->tx_buffer_info[i].time_stamp, jiffies);

-		if (!ring_is_xdp(tx_ring))
-			netif_stop_subqueue(tx_ring->netdev,
-					    tx_ring->queue_index);
+		netif_stop_subqueue(tx_ring->netdev,
+				    tx_ring->queue_index);

 		/* schedule immediate reset if we believe we hung */
 		ixgbevf_tx_timeout_reset(adapter);
@@ -402,9 +398,6 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
 		return true;
 	}

-	if (ring_is_xdp(tx_ring))
-		return !!budget;
-
 #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
 	if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
 		     (ixgbevf_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
@@ -660,44 +653,83 @@ static inline void ixgbevf_irq_enable_queues(struct ixgbevf_adapter *adapter,
 #define IXGBEVF_XDP_CONSUMED 1
 #define IXGBEVF_XDP_TX 2

-static int ixgbevf_xmit_xdp_ring(struct ixgbevf_ring *ring,
-				 struct xdp_buff *xdp)
+static void ixgbevf_clean_xdp_num(struct ixgbevf_ring *xdp_ring, bool in_napi,
+				  u16 to_clean)
+{
+	struct libeth_xdpsq_napi_stats stats = { };
+	u32 ntc = xdp_ring->next_to_clean;
+	struct xdp_frame_bulk cbulk;
+	struct libeth_cq_pp cp = {
+		.bq	= &cbulk,
+		.dev	= xdp_ring->dev,
+		.xss	= &stats,
+		.napi	= in_napi,
+	};
+
+	xdp_frame_bulk_init(&cbulk);
+	xdp_ring->pending -= to_clean;
+
+	while (likely(to_clean--)) {
+		libeth_xdp_complete_tx(&xdp_ring->xdp_sqes[ntc], &cp);
+		ntc++;
+		ntc = unlikely(ntc == xdp_ring->count) ? 0 : ntc;
+	}
+
+	xdp_ring->next_to_clean = ntc;
+	xdp_flush_frame_bulk(&cbulk);
+}
+
+static u16 ixgbevf_tx_get_num_sent(struct ixgbevf_ring *xdp_ring)
 {
-	struct ixgbevf_tx_buffer *tx_buffer;
-	union ixgbe_adv_tx_desc *tx_desc;
-	u32 len, cmd_type;
-	dma_addr_t dma;
-	u16 i;
+	u16 ntc = xdp_ring->next_to_clean;
+	u16 to_clean = 0;

-	len = xdp->data_end - xdp->data;
+	while (likely(to_clean < xdp_ring->pending)) {
+		u32 idx = xdp_ring->xdp_sqes[ntc].rs_idx;
+		union ixgbe_adv_tx_desc *rs_desc;

-	if (unlikely(!ixgbevf_desc_unused(ring)))
-		return IXGBEVF_XDP_CONSUMED;
+		if (!idx--)
+			break;

-	dma = dma_map_single(ring->dev, xdp->data, len, DMA_TO_DEVICE);
-	if (dma_mapping_error(ring->dev, dma))
-		return IXGBEVF_XDP_CONSUMED;
+		rs_desc = IXGBEVF_TX_DESC(xdp_ring, idx);

-	/* record the location of the first descriptor for this packet */
-	i = ring->next_to_use;
-	tx_buffer = &ring->tx_buffer_info[i];
-
-	dma_unmap_len_set(tx_buffer, len, len);
-	dma_unmap_addr_set(tx_buffer, dma, dma);
-	tx_buffer->data = xdp->data;
-	tx_buffer->bytecount = len;
-	tx_buffer->gso_segs = 1;
-	tx_buffer->protocol = 0;
-
-	/* Populate minimal context descriptor that will provide for the
-	 * fact that we are expected to process Ethernet frames.
-	 */
-	if (!test_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state)) {
+		if (!(rs_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
+			break;
+
+		xdp_ring->xdp_sqes[ntc].rs_idx = 0;
+
+		to_clean +=
+			(idx >= ntc ? idx : idx + xdp_ring->count) - ntc + 1;
+
+		ntc = (idx + 1 == xdp_ring->count) ? 0 : idx + 1;
+	}
+
+	return to_clean;
+}
+
+static void ixgbevf_clean_xdp_ring(struct ixgbevf_ring *xdp_ring)
+{
+	ixgbevf_clean_xdp_num(xdp_ring, false, xdp_ring->pending);
+}
+
+static u32 ixgbevf_prep_xdp_sq(void *xdpsq, struct libeth_xdpsq *sq)
+{
+	struct ixgbevf_ring *xdp_ring = xdpsq;
+
+	if (unlikely(ixgbevf_desc_unused(xdp_ring) < LIBETH_XDP_TX_BULK)) {
+		u16 to_clean = ixgbevf_tx_get_num_sent(xdp_ring);
+
+		if (likely(to_clean))
+			ixgbevf_clean_xdp_num(xdp_ring, true, to_clean);
+	}
+
+	if (unlikely(!test_bit(__IXGBEVF_TX_XDP_RING_PRIMED,
+			       &xdp_ring->state))) {
 		struct ixgbe_adv_tx_context_desc *context_desc;

-		set_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);
+		set_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &xdp_ring->state);

-		context_desc = IXGBEVF_TX_CTXTDESC(ring, 0);
+		context_desc = IXGBEVF_TX_CTXTDESC(xdp_ring, 0);
 		context_desc->vlan_macip_lens	=
 			cpu_to_le32(ETH_HLEN << IXGBE_ADVTXD_MACLEN_SHIFT);
 		context_desc->fceof_saidx	= 0;
@@ -706,48 +738,98 @@ static int ixgbevf_xmit_xdp_ring(struct ixgbevf_ring *ring,
 					  IXGBE_ADVTXD_DTYP_CTXT);
 		context_desc->mss_l4len_idx	= 0;

-		i = 1;
+		xdp_ring->next_to_use = 1;
+		xdp_ring->pending = 1;
+
+		/* Finish descriptor writes before bumping tail */
+		wmb();
+		ixgbevf_write_tail(xdp_ring, 1);
 	}

-	/* put descriptor type bits */
-	cmd_type = IXGBE_ADVTXD_DTYP_DATA |
-		   IXGBE_ADVTXD_DCMD_DEXT |
-		   IXGBE_ADVTXD_DCMD_IFCS;
-	cmd_type |= len | IXGBE_TXD_CMD;
+	*sq = (struct libeth_xdpsq) {
+		.count		= xdp_ring->count,
+		.descs		= xdp_ring->desc,
+		.lock		= NULL,
+		.ntu		= &xdp_ring->next_to_use,
+		.pending	= &xdp_ring->pending,
+		.pool		= NULL,
+		.sqes		= xdp_ring->xdp_sqes,
+	};
+
+	return ixgbevf_desc_unused(xdp_ring);
+}

-	tx_desc = IXGBEVF_TX_DESC(ring, i);
-	tx_desc->read.buffer_addr = cpu_to_le64(dma);
+static void ixgbevf_xdp_xmit_desc(struct libeth_xdp_tx_desc desc, u32 i,
+				  const struct libeth_xdpsq *sq,
+				  u64 priv)
+{
+	union ixgbe_adv_tx_desc *tx_desc =
+		&((union ixgbe_adv_tx_desc *)sq->descs)[i];

-	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
-	tx_desc->read.olinfo_status =
-		cpu_to_le32((len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
+	u32 cmd_type = IXGBE_ADVTXD_DTYP_DATA |
+		       IXGBE_ADVTXD_DCMD_DEXT |
+		       IXGBE_ADVTXD_DCMD_IFCS |
+		       desc.len;
+
+	if (desc.flags & LIBETH_XDP_TX_LAST)
+		cmd_type |= IXGBE_TXD_CMD_EOP;
+
+	if (desc.flags & LIBETH_XDP_TX_FIRST) {
+		struct skb_shared_info *sinfo = sq->sqes[i].sinfo;
+		u16 full_len = desc.len + sinfo->xdp_frags_size;
+
+		tx_desc->read.olinfo_status =
+			cpu_to_le32((full_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
 				    IXGBE_ADVTXD_CC);
+	}

-	/* Avoid any potential race with cleanup */
-	smp_wmb();
+	tx_desc->read.buffer_addr = cpu_to_le64(desc.addr);
+	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+}

-	/* set next_to_watch value indicating a packet is present */
-	i++;
-	if (i == ring->count)
-		i = 0;
+LIBETH_XDP_DEFINE_START();
+LIBETH_XDP_DEFINE_FLUSH_TX(static ixgbevf_xdp_flush_tx, ixgbevf_prep_xdp_sq,
+			   ixgbevf_xdp_xmit_desc);
+LIBETH_XDP_DEFINE_END();

-	tx_buffer->next_to_watch = tx_desc;
-	ring->next_to_use = i;
+static void ixgbevf_xdp_set_rs(struct ixgbevf_ring *xdp_ring, u32 cached_ntu)
+{
+	u32 ltu = (xdp_ring->next_to_use ? : xdp_ring->count) - 1;
+	union ixgbe_adv_tx_desc *desc;

-	return IXGBEVF_XDP_TX;
+	desc = IXGBEVF_TX_DESC(xdp_ring, ltu);
+	xdp_ring->xdp_sqes[cached_ntu].rs_idx = ltu + 1;
+	desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD);
 }

-static int ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,
-			   struct ixgbevf_ring *rx_ring,
+static void ixgbevf_rx_finalize_xdp(struct libeth_xdp_tx_bulk *tx_bulk,
+				    bool xdp_xmit, u32 cached_ntu)
+{
+	struct ixgbevf_ring *xdp_ring = tx_bulk->xdpsq;
+
+	if (!xdp_xmit)
+		goto unlock;
+
+	if (tx_bulk->count)
+		ixgbevf_xdp_flush_tx(tx_bulk, LIBETH_XDP_TX_DROP);
+
+	ixgbevf_xdp_set_rs(xdp_ring, cached_ntu);
+
+	/* Finish descriptor writes before bumping tail */
+	wmb();
+	ixgbevf_write_tail(xdp_ring, xdp_ring->next_to_use);
+unlock:
+	rcu_read_unlock();
+}
+
+static int ixgbevf_run_xdp(struct libeth_xdp_tx_bulk *tx_bulk,
 			   struct libeth_xdp_buff *xdp)
 {
 	int result = IXGBEVF_XDP_PASS;
-	struct ixgbevf_ring *xdp_ring;
-	struct bpf_prog *xdp_prog;
+	const struct bpf_prog *xdp_prog;
 	u32 act;

-	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
-
+	xdp_prog = tx_bulk->prog;
 	if (!xdp_prog)
 		goto xdp_out;

@@ -756,17 +838,16 @@ static int ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		xdp_ring = adapter->xdp_ring[rx_ring->queue_index];
-		result = ixgbevf_xmit_xdp_ring(xdp_ring, &xdp->base);
-		if (result == IXGBEVF_XDP_CONSUMED)
-			goto out_failure;
+		result = IXGBEVF_XDP_TX;
+		if (!libeth_xdp_tx_queue_bulk(tx_bulk, xdp,
+					      ixgbevf_xdp_flush_tx))
+			result = IXGBEVF_XDP_CONSUMED;
 		break;
 	default:
-		bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act);
+		bpf_warn_invalid_xdp_action(tx_bulk->dev, xdp_prog, act);
 		fallthrough;
 	case XDP_ABORTED:
-out_failure:
-		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
+		trace_xdp_exception(tx_bulk->dev, xdp_prog, act);
 		fallthrough; /* handle aborts by dropping packet */
 	case XDP_DROP:
 		result = IXGBEVF_XDP_CONSUMED;
@@ -784,11 +865,19 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	struct ixgbevf_adapter *adapter = q_vector->adapter;
 	u16 cleaned_count = ixgbevf_desc_unused(rx_ring);
+	LIBETH_XDP_ONSTACK_BULK(xdp_tx_bulk);
 	LIBETH_XDP_ONSTACK_BUFF(xdp);
+	u32 cached_ntu;
 	bool xdp_xmit = false;
 	int xdp_res = 0;

 	libeth_xdp_init_buff(xdp, &rx_ring->xdp_stash, &rx_ring->xdp_rxq);
+	libeth_xdp_tx_init_bulk(&xdp_tx_bulk, rx_ring->xdp_prog,
+				adapter->netdev, adapter->xdp_ring,
+				adapter->num_xdp_queues);
+	if (xdp_tx_bulk.prog)
+		cached_ntu =
+			((struct ixgbevf_ring *)xdp_tx_bulk.xdpsq)->next_to_use;

 	while (likely(total_rx_packets < budget)) {
 		union ixgbe_adv_rx_desc *rx_desc;
@@ -821,11 +910,12 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 		if (ixgbevf_is_non_eop(rx_ring, rx_desc))
 			continue;

-		xdp_res = ixgbevf_run_xdp(adapter, rx_ring, xdp);
+		xdp_res = ixgbevf_run_xdp(&xdp_tx_bulk, xdp);
 		if (xdp_res) {
 			if (xdp_res == IXGBEVF_XDP_TX)
 				xdp_xmit = true;

+			xdp->data = NULL;
 			total_rx_packets++;
 			total_rx_bytes += xdp_get_buff_len(&xdp->base);
 			continue;
@@ -870,16 +960,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 	/* place incomplete frames back on ring for completion */
 	libeth_xdp_save_buff(&rx_ring->xdp_stash, xdp);

-	if (xdp_xmit) {
-		struct ixgbevf_ring *xdp_ring =
-			adapter->xdp_ring[rx_ring->queue_index];
-
-		/* Force memory writes to complete before letting h/w
-		 * know there are new descriptors to fetch.
-		 */
-		wmb();
-		ixgbevf_write_tail(xdp_ring, xdp_ring->next_to_use);
-	}
+	ixgbevf_rx_finalize_xdp(&xdp_tx_bulk, xdp_xmit, cached_ntu);

 	u64_stats_update_begin(&rx_ring->syncp);
 	rx_ring->stats.packets += total_rx_packets;
@@ -909,6 +990,8 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)
 	bool clean_complete = true;

 	ixgbevf_for_each_ring(ring, q_vector->tx) {
+		if (ring_is_xdp(ring))
+			continue;
 		if (!ixgbevf_clean_tx_irq(q_vector, ring, budget))
 			clean_complete = false;
 	}
@@ -1348,6 +1431,7 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,
 	/* reset ntu and ntc to place SW in sync with hardwdare */
 	ring->next_to_clean = 0;
 	ring->next_to_use = 0;
+	ring->pending = 0;

 	/* In order to avoid issues WTHRESH + PTHRESH should always be equal
 	 * to or less than the number of on chip descriptors, which is
@@ -1360,8 +1444,12 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,
 		32;	/* PTHRESH = 32 */

 	/* reinitialize tx_buffer_info */
-	memset(ring->tx_buffer_info, 0,
-	       sizeof(struct ixgbevf_tx_buffer) * ring->count);
+	if (!ring_is_xdp(ring))
+		memset(ring->tx_buffer_info, 0,
+		       sizeof(struct ixgbevf_tx_buffer) * ring->count);
+	else
+		memset(ring->xdp_sqes, 0,
+		       sizeof(struct libeth_sqe) * ring->count);

 	clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);
 	clear_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);
@@ -2016,10 +2104,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)
 		union ixgbe_adv_tx_desc *eop_desc, *tx_desc;

 		/* Free all the Tx ring sk_buffs */
-		if (ring_is_xdp(tx_ring))
-			libeth_xdp_return_va(tx_buffer->data, false);
-		else
-			dev_kfree_skb_any(tx_buffer->skb);
+		dev_kfree_skb_any(tx_buffer->skb);

 		/* unmap skb header data */
 		dma_unmap_single(tx_ring->dev,
@@ -2088,7 +2173,7 @@ static void ixgbevf_clean_all_tx_rings(struct ixgbevf_adapter *adapter)
 	for (i = 0; i < adapter->num_tx_queues; i++)
 		ixgbevf_clean_tx_ring(adapter->tx_ring[i]);
 	for (i = 0; i < adapter->num_xdp_queues; i++)
-		ixgbevf_clean_tx_ring(adapter->xdp_ring[i]);
+		ixgbevf_clean_xdp_ring(adapter->xdp_ring[i]);
 }

 void ixgbevf_down(struct ixgbevf_adapter *adapter)
@@ -2834,8 +2919,6 @@ static void ixgbevf_check_hang_subtask(struct ixgbevf_adapter *adapter)
 	if (netif_carrier_ok(adapter->netdev)) {
 		for (i = 0; i < adapter->num_tx_queues; i++)
 			set_check_for_tx_hang(adapter->tx_ring[i]);
-		for (i = 0; i < adapter->num_xdp_queues; i++)
-			set_check_for_tx_hang(adapter->xdp_ring[i]);
 	}

 	/* get one bit for every active Tx/Rx interrupt vector */
@@ -2979,7 +3062,10 @@ static void ixgbevf_service_task(struct work_struct *work)
 **/
 void ixgbevf_free_tx_resources(struct ixgbevf_ring *tx_ring)
 {
-	ixgbevf_clean_tx_ring(tx_ring);
+	if (!ring_is_xdp(tx_ring))
+		ixgbevf_clean_tx_ring(tx_ring);
+	else
+		ixgbevf_clean_xdp_ring(tx_ring);

 	vfree(tx_ring->tx_buffer_info);
 	tx_ring->tx_buffer_info = NULL;
@@ -2988,7 +3074,7 @@ void ixgbevf_free_tx_resources(struct ixgbevf_ring *tx_ring)
 	if (!tx_ring->desc)
 		return;

-	dma_free_coherent(tx_ring->dev, tx_ring->size, tx_ring->desc,
+	dma_free_coherent(tx_ring->dev, tx_ring->dma_size, tx_ring->desc,
 			  tx_ring->dma);

 	tx_ring->desc = NULL;
@@ -3023,7 +3109,9 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_ring *tx_ring)
 	struct ixgbevf_adapter *adapter = netdev_priv(tx_ring->netdev);
 	int size;

-	size = sizeof(struct ixgbevf_tx_buffer) * tx_ring->count;
+	size = (!ring_is_xdp(tx_ring) ? sizeof(struct ixgbevf_tx_buffer) :
+		sizeof(struct libeth_sqe)) * tx_ring->count;
+
 	tx_ring->tx_buffer_info = vmalloc(size);
 	if (!tx_ring->tx_buffer_info)
 		goto err;
@@ -3031,10 +3119,10 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_ring *tx_ring)
 	u64_stats_init(&tx_ring->syncp);

 	/* round up to nearest 4K */
-	tx_ring->size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);
-	tx_ring->size = ALIGN(tx_ring->size, 4096);
+	tx_ring->dma_size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);
+	tx_ring->dma_size = ALIGN(tx_ring->dma_size, 4096);

-	tx_ring->desc = dma_alloc_coherent(tx_ring->dev, tx_ring->size,
+	tx_ring->desc = dma_alloc_coherent(tx_ring->dev, tx_ring->dma_size,
 					   &tx_ring->dma, GFP_KERNEL);
 	if (!tx_ring->desc)
 		goto err;
@@ -3123,10 +3211,10 @@ int ixgbevf_setup_rx_resources(struct ixgbevf_adapter *adapter,
 	u64_stats_init(&rx_ring->syncp);

 	/* Round up to nearest 4K */
-	rx_ring->size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);
-	rx_ring->size = ALIGN(rx_ring->size, 4096);
+	rx_ring->dma_size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);
+	rx_ring->dma_size = ALIGN(rx_ring->dma_size, 4096);

-	rx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->size,
+	rx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->dma_size,
 					   &rx_ring->dma, GFP_KERNEL);

 	if (!rx_ring->desc) {
@@ -3202,7 +3290,7 @@ void ixgbevf_free_rx_resources(struct ixgbevf_ring *rx_ring)
 	xdp_rxq_info_detach_mem_model(&rx_ring->xdp_rxq);
 	xdp_rxq_info_unreg(&rx_ring->xdp_rxq);

-	dma_free_coherent(fq.pp->p.dev, rx_ring->size, rx_ring->desc,
+	dma_free_coherent(fq.pp->p.dev, rx_ring->dma_size, rx_ring->desc,
 			  rx_ring->dma);
 	rx_ring->desc = NULL;

-- 
2.52.0
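
[Editor's note] For readers unfamiliar with the libeth XDP helpers this patch adopts, the sketch below condenses how the pieces added above fit together. All libeth_*/LIBETH_*/ixgbevf_* identifiers are taken from the diff itself; the example_* wrapper and the simplified control flow are illustrative assumptions, not code from the patch or a drop-in implementation.

/* Two driver callbacks feed the libeth-generated flush routine:
 *  - ixgbevf_prep_xdp_sq() reclaims completed slots and exports the
 *    queue state (count/descs/ntu/pending/sqes) via struct libeth_xdpsq;
 *  - ixgbevf_xdp_xmit_desc() converts one libeth_xdp_tx_desc (DMA
 *    address, length, FIRST/LAST flags) into a HW Tx descriptor, so a
 *    multi-buffer frame spans several descriptors with EOP on LAST.
 */
LIBETH_XDP_DEFINE_START();
LIBETH_XDP_DEFINE_FLUSH_TX(static example_xdp_flush_tx, ixgbevf_prep_xdp_sq,
			   ixgbevf_xdp_xmit_desc);
LIBETH_XDP_DEFINE_END();

/* Simplified Rx-poll flow (hypothetical example_ wrapper): */
static void example_rx_poll(struct ixgbevf_ring *rx_ring,
			    struct ixgbevf_adapter *adapter)
{
	LIBETH_XDP_ONSTACK_BULK(bulk);
	LIBETH_XDP_ONSTACK_BUFF(xdp);
	bool xdp_xmit = false;
	u32 cached_ntu = 0;

	/* Bind the buffer stash and the per-queue XDP Tx ring */
	libeth_xdp_init_buff(xdp, &rx_ring->xdp_stash, &rx_ring->xdp_rxq);
	libeth_xdp_tx_init_bulk(&bulk, rx_ring->xdp_prog, adapter->netdev,
				adapter->xdp_ring, adapter->num_xdp_queues);
	if (bulk.prog)
		cached_ntu = ((struct ixgbevf_ring *)bulk.xdpsq)->next_to_use;

	/* Per XDP_TX verdict: queue into the bulk instead of kicking HW;
	 * the flush callback drains the bulk whenever it needs room.
	 */
	if (libeth_xdp_tx_queue_bulk(&bulk, xdp, example_xdp_flush_tx))
		xdp_xmit = true;

	/* Once per poll: flush leftovers, set RS on the last used
	 * descriptor, then do a single wmb() + tail bump and drop the RCU
	 * reader lock held for the prog -- see ixgbevf_rx_finalize_xdp().
	 */
	ixgbevf_rx_finalize_xdp(&bulk, xdp_xmit, cached_ntu);
}

The win over the old per-frame ixgbevf_xmit_xdp_ring() path is that descriptor writes are batched and the doorbell is rung once per NAPI poll rather than once per frame, while completion cleaning (ixgbevf_clean_xdp_num() driven by the rs_idx markers) happens lazily from the prep callback instead of from the regular Tx IRQ path.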