From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 1/5] libeth: pass Rx queue index to PP when creating a fill queue
Date: Tue, 25 Nov 2025 18:35:59 +0100
Message-ID: <20251125173603.3834486-2-aleksander.lobakin@intel.com>
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

page_pool_create() recently started accepting an optional stack index
of the Rx queue which the pool is created for. It can then be used on
the control path for features such as memory providers.

Add the same field to libeth_fq and pass the index from all the
drivers that use libeth to manage Rx, to simplify implementing MP
support later.

idpf has one libeth_fq per buffer/fill queue and each Rx queue has two
fill queues, but since fill queues are never shared, we can store the
corresponding Rx queue index there during initialization and pass it
to libeth.
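For illustration, a driver-side fill queue set up with the new field
might look as follows (a sketch only: the ring and napi fields are
hypothetical placeholders, while .idx, LIBETH_FQE_MTU and
libeth_rx_fq_create() are taken from the patch):

	/* hypothetical driver ring, using the new libeth_fq::idx field */
	struct libeth_fq fq = {
		.count	= ring->desc_count,	/* number of descriptors */
		.type	= LIBETH_FQE_MTU,	/* regular payload buffers */
		.nid	= NUMA_NO_NODE,
		.idx	= ring->queue_index,	/* stack Rx queue index */
	};
	int err;

	err = libeth_rx_fq_create(&fq, &ring->napi);
	if (err)
		return err;

The index ends up in page_pool_params::queue_idx, which the Page Pool
core and memory providers can then use to look up the corresponding
netdev Rx queue.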
Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  2 ++
 include/net/libeth/rx.h                     |  2 ++
 drivers/net/ethernet/intel/iavf/iavf_txrx.c |  1 +
 drivers/net/ethernet/intel/ice/ice_base.c   |  2 ++
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 13 +++++++++++++
 drivers/net/ethernet/intel/libeth/rx.c      |  1 +
 6 files changed, 21 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 75b977094741..1f368c4e0a76 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -744,6 +744,7 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
  * @q_id: Queue id
  * @size: Length of descriptor ring in bytes
  * @dma: Physical address of ring
+ * @rxq_idx: stack index of the corresponding Rx queue
  * @q_vector: Backreference to associated vector
  * @rx_buffer_low_watermark: RX buffer low watermark
  * @rx_hbuf_size: Header buffer size
@@ -788,6 +789,7 @@ struct idpf_buf_queue {
 	dma_addr_t dma;
 
 	struct idpf_q_vector *q_vector;
+	u16 rxq_idx;
 
 	u16 rx_buffer_low_watermark;
 	u16 rx_hbuf_size;
diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
index 5d991404845e..3b3d7acd13c9 100644
--- a/include/net/libeth/rx.h
+++ b/include/net/libeth/rx.h
@@ -71,6 +71,7 @@ enum libeth_fqe_type {
 * @xdp: flag indicating whether XDP is enabled
 * @buf_len: HW-writeable length per each buffer
 * @nid: ID of the closest NUMA node with memory
+ * @idx: stack index of the corresponding Rx queue
 */
 struct libeth_fq {
 	struct_group_tagged(libeth_fq_fp, fp,
@@ -88,6 +89,7 @@ struct libeth_fq {
 
 	u32 buf_len;
 	int nid;
+	u32 idx;
 };
 
 int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 363c42bf3dcf..d3c68659162b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -771,6 +771,7 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 		.count = rx_ring->count,
 		.buf_len = LIBIE_MAX_RX_BUF_LEN,
 		.nid = NUMA_NO_NODE,
+		.idx = rx_ring->queue_index,
 	};
 	int ret;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index eadb1e3d12b3..1aa40f13947e 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -607,6 +607,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
 	struct libeth_fq fq = {
 		.count = rq->count,
 		.nid = NUMA_NO_NODE,
+		.idx = rq->q_index,
 		.hsplit = rq->vsi->hsplit,
 		.xdp = ice_is_xdp_ena_vsi(rq->vsi),
 		.buf_len = LIBIE_MAX_RX_BUF_LEN,
@@ -629,6 +630,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
 		.count = rq->count,
 		.type = LIBETH_FQE_HDR,
 		.nid = NUMA_NO_NODE,
+		.idx = rq->q_index,
 		.xdp = ice_is_xdp_ena_vsi(rq->vsi),
 	};
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 828f7c444d30..5e397560a515 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -557,6 +557,7 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
 		.type = LIBETH_FQE_HDR,
 		.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
 		.nid = idpf_q_vector_to_mem(bufq->q_vector),
+		.idx = bufq->rxq_idx,
 	};
 	int ret;
 
@@ -698,6 +699,7 @@ static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
 		.count = rxq->desc_count,
 		.type = LIBETH_FQE_MTU,
 		.nid = idpf_q_vector_to_mem(rxq->q_vector),
+		.idx = rxq->idx,
 	};
 	int ret;
 
@@ -757,6 +759,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
 		.hsplit = idpf_queue_has(HSPLIT_EN, bufq),
 		.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
 		.nid = idpf_q_vector_to_mem(bufq->q_vector),
+		.idx = bufq->rxq_idx,
 	};
 	int ret;
 
@@ -1900,6 +1903,16 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
 					LIBETH_RX_LL_LEN;
 			idpf_rxq_set_descids(vport, q);
 		}
+
+		if (!idpf_is_queue_model_split(vport->rxq_model))
+			continue;
+
+		for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+			struct idpf_buf_queue *bufq;
+
+			bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
+			bufq->rxq_idx = rx_qgrp->splitq.rxq_sets[0]->rxq.idx;
+		}
 	}
 
 err_alloc:
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 62521a1f4ec9..8874b714cdcc 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -156,6 +156,7 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
 		.order = LIBETH_RX_PAGE_ORDER,
 		.pool_size = fq->count,
 		.nid = fq->nid,
+		.queue_idx = fq->idx,
 		.dev = napi->dev->dev.parent,
 		.netdev = napi->dev,
 		.napi = napi,
--
2.51.1

From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 2/5] libeth: handle creating pools with unreadable buffers
Date: Tue, 25 Nov 2025 18:36:00 +0100
Message-ID: <20251125173603.3834486-3-aleksander.lobakin@intel.com>
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

libeth has been using netmems for quite some time already, so in order
to support unreadable frags / memory providers, it only needs to set
PP_FLAG_ALLOW_UNREADABLE_NETMEM when needed.

Also add a couple of sanity checks to make sure the driver didn't mess
up the configuration options and, when an MP is installed, always
return a truesize equal to PAGE_SIZE, so that libeth_rx_alloc() never
tries to allocate frags. Memory providers manage buffers on their own
and expect a 1:1 buffer / HW Rx descriptor association.

Bonus: mention in the libeth_sqe_type description that
LIBETH_SQE_EMPTY should also be used for netmem Tx SQEs -- they don't
need DMA unmapping.
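To illustrate the net effect of the rx.c changes (a condensed sketch
of the hunks below, not a new API): with an unreadable MP bound to the
queue, a ZC payload pool ends up configured as

	pp.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
	fq->truesize = pp.max_len;	/* one whole buffer per Rx descriptor,
					 * so libeth_rx_alloc() never frags it
					 */

while header pools are left fully readable and XDP-enabled queues are
rejected outright.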
Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
 include/net/libeth/tx.h                |  2 +-
 drivers/net/ethernet/intel/libeth/rx.c | 42 ++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h
index c3db5c6f1641..a66fc2b3a114 100644
--- a/include/net/libeth/tx.h
+++ b/include/net/libeth/tx.h
@@ -12,7 +12,7 @@
 
 /**
  * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion
- * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX/XSk frame, no action required
+ * @LIBETH_SQE_EMPTY: empty OR netmem/XDP_TX/XSk frame, no action required
  * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required
  * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree()
  * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 8874b714cdcc..11e6e8f353ef 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -6,6 +6,7 @@
 #include
 
 #include
+#include
 
 /* Rx buffer management */
 
@@ -139,9 +140,47 @@ static bool libeth_rx_page_pool_params_zc(struct libeth_fq *fq,
 	fq->buf_len = clamp(mtu, LIBETH_RX_BUF_STRIDE, max);
 	fq->truesize = fq->buf_len;
 
+	/*
+	 * Allow frags only for kernel pages. `fq->truesize == pp->max_len`
+	 * will always fall back to regular page_pool_alloc_netmems()
+	 * regardless of the MTU / FQ buffer size.
+	 */
+	if (pp->flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM)
+		fq->truesize = pp->max_len;
+
 	return true;
 }
 
+/**
+ * libeth_rx_page_pool_check_unread - check input params for unreadable MPs
+ * @fq: buffer queue to check
+ * @pp: &page_pool_params for the queue
+ *
+ * Make sure we don't create an invalid pool with full-frame unreadable
+ * buffers, bidirectional unreadable buffers or so, and configure the
+ * ZC payload pool accordingly.
+ *
+ * Return: true on success, false on invalid input params.
+ */
+static bool libeth_rx_page_pool_check_unread(const struct libeth_fq *fq,
+					     struct page_pool_params *pp)
+{
+	if (!netif_rxq_has_unreadable_mp(pp->netdev, pp->queue_idx))
+		return true;
+
+	/* For now, the core stack doesn't allow XDP with unreadable frags */
+	if (fq->xdp)
+		return false;
+
+	/* It should be either a header pool or a ZC payload pool */
+	if (fq->type == LIBETH_FQE_HDR)
+		return !fq->hsplit;
+
+	pp->flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+
+	return fq->hsplit;
+}
+
 /**
  * libeth_rx_fq_create - create a PP with the default libeth settings
  * @fq: buffer queue struct to fill
@@ -165,6 +204,9 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
 	struct page_pool *pool;
 	int ret;
 
+	if (!libeth_rx_page_pool_check_unread(fq, &pp))
+		return -EINVAL;
+
 	pp.dma_dir = fq->xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 
 	if (!fq->hsplit)
--
2.51.1

From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 3/5] ice: migrate to netdev ops lock
Date: Tue, 25 Nov 2025 18:36:01 +0100
Message-ID: <20251125173603.3834486-4-aleksander.lobakin@intel.com>
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

Queue management ops unconditionally enable netdev locking. The same
lock is taken by default by several NAPI configuration functions, such
as napi_enable() and netif_napi_set_irq(). Request ops locking in
advance and make sure the _locked counterparts of those functions are
used to avoid deadlocks, taking the lock manually where needed
(suspend/resume, queue rebuild, and resets).
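The resulting locking rule can be sketched as follows (the function is
hypothetical; netdev_lock()/netdev_unlock() and the _locked NAPI
variants are the real core helpers): net_device_ops callbacks already
run under the instance lock once ops locking is requested, so only
they may call the _locked variants directly, while every other context
takes the lock first:

	static void example_enable_napi(struct net_device *netdev,
					struct napi_struct *napi)
	{
		netdev_lock(netdev);		/* not in an ndo callback: take the lock */
		napi_enable_locked(napi);	/* plain napi_enable() would deadlock here */
		netdev_unlock(netdev);
	}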
Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/ice/ice_lib.h    |  6 ++-
 drivers/net/ethernet/intel/ice/ice_lib.c    | 56 +++++++++++++++++----
 drivers/net/ethernet/intel/ice/ice_main.c   | 49 ++++++++++--------
 drivers/net/ethernet/intel/ice/ice_sf_eth.c |  1 +
 drivers/net/ethernet/intel/ice/ice_xsk.c    |  4 +-
 5 files changed, 82 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index 2cb1eb98b9da..d9c94c06c657 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -53,9 +53,11 @@ struct ice_vsi *
 ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
 
 void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
-void ice_napi_add(struct ice_vsi *vsi);
-
+void ice_vsi_set_napi_queues_locked(struct ice_vsi *vsi);
 void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
+void ice_vsi_clear_napi_queues_locked(struct ice_vsi *vsi);
+
+void ice_napi_add(struct ice_vsi *vsi);
 
 int ice_vsi_release(struct ice_vsi *vsi);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 15621707fbf8..8f79dd022e91 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2695,7 +2695,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
 	if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
 		ice_down(vsi);
 
-	ice_vsi_clear_napi_queues(vsi);
+	ice_vsi_clear_napi_queues_locked(vsi);
 	ice_vsi_free_irq(vsi);
 	ice_vsi_free_tx_rings(vsi);
 	ice_vsi_free_rx_rings(vsi);
@@ -2764,12 +2764,13 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
 }
 
 /**
- * ice_vsi_set_napi_queues - associate netdev queues with napi
+ * ice_vsi_set_napi_queues_locked - associate netdev queues with napi
  * @vsi: VSI pointer
  *
  * Associate queue[s] with napi for all vectors.
+ * Must be called only with the netdev_lock taken.
  */
-void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
+void ice_vsi_set_napi_queues_locked(struct ice_vsi *vsi)
 {
 	struct net_device *netdev = vsi->netdev;
 	int q_idx, v_idx;
@@ -2777,7 +2778,6 @@ void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
 	if (!netdev)
 		return;
 
-	ASSERT_RTNL();
 	ice_for_each_rxq(vsi, q_idx)
 		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX,
 				     &vsi->rx_rings[q_idx]->q_vector->napi);
@@ -2789,17 +2789,37 @@ void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
 	ice_for_each_q_vector(vsi, v_idx) {
 		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
 
-		netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
+		netif_napi_set_irq_locked(&q_vector->napi, q_vector->irq.virq);
 	}
 }
 
 /**
- * ice_vsi_clear_napi_queues - dissociate netdev queues from napi
+ * ice_vsi_set_napi_queues - associate VSI queues with NAPIs
  * @vsi: VSI pointer
  *
+ * Version of ice_vsi_set_napi_queues_locked() that takes the netdev_lock,
+ * to use it outside of the net_device_ops context.
+ */
+void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
+{
+	struct net_device *netdev = vsi->netdev;
+
+	if (!netdev)
+		return;
+
+	netdev_lock(netdev);
+	ice_vsi_set_napi_queues_locked(vsi);
+	netdev_unlock(netdev);
+}
+
+/**
+ * ice_vsi_clear_napi_queues_locked - dissociate netdev queues from napi
+ * @vsi: VSI to process
+ *
 * Clear the association between all VSI queues queue[s] and napi.
+ * Must be called only with the netdev_lock taken.
 */
-void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
+void ice_vsi_clear_napi_queues_locked(struct ice_vsi *vsi)
 {
 	struct net_device *netdev = vsi->netdev;
 	int q_idx, v_idx;
@@ -2807,12 +2827,11 @@ void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
 	if (!netdev)
 		return;
 
-	ASSERT_RTNL();
 	/* Clear the NAPI's interrupt number */
 	ice_for_each_q_vector(vsi, v_idx) {
 		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
 
-		netif_napi_set_irq(&q_vector->napi, -1);
+		netif_napi_set_irq_locked(&q_vector->napi, -1);
 	}
 
 	ice_for_each_txq(vsi, q_idx)
@@ -2822,6 +2841,25 @@ void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
 		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL);
 }
 
+/**
+ * ice_vsi_clear_napi_queues - dissociate VSI queues from NAPIs
+ * @vsi: VSI to process
+ *
+ * Version of ice_vsi_clear_napi_queues_locked() that takes the netdev lock,
+ * to use it outside of the net_device_ops context.
+ */
+void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
+{
+	struct net_device *netdev = vsi->netdev;
+
+	if (!netdev)
+		return;
+
+	netdev_lock(netdev);
+	ice_vsi_clear_napi_queues_locked(vsi);
+	netdev_unlock(netdev);
+}
+
 /**
  * ice_napi_add - register NAPI handler for the VSI
  * @vsi: VSI for which NAPI handler is to be registered
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 2533876f1a2f..c0432182b482 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3523,6 +3523,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
 	}
 
 	netdev->netdev_ops = &ice_netdev_ops;
+	netdev->request_ops_lock = true;
 	netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
 	netdev->xdp_metadata_ops = &ice_xdp_md_ops;
 	ice_set_ethtool_ops(netdev);
@@ -5533,16 +5534,17 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
 
 	/* Remap vectors and rings, after successful re-init interrupts */
 	ice_for_each_vsi(pf, v) {
-		if (!pf->vsi[v])
+		struct ice_vsi *vsi = pf->vsi[v];
+
+		if (!vsi)
 			continue;
 
-		ret = ice_vsi_alloc_q_vectors(pf->vsi[v]);
+		ret = ice_vsi_alloc_q_vectors(vsi);
 		if (ret)
 			goto err_reinit;
-		ice_vsi_map_rings_to_vectors(pf->vsi[v]);
-		rtnl_lock();
-		ice_vsi_set_napi_queues(pf->vsi[v]);
-		rtnl_unlock();
+
+		ice_vsi_map_rings_to_vectors(vsi);
+		ice_vsi_set_napi_queues(vsi);
 	}
 
 	ret = ice_req_irq_msix_misc(pf);
@@ -5555,13 +5557,15 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
 	return 0;
 
 err_reinit:
-	while (v--)
-		if (pf->vsi[v]) {
-			rtnl_lock();
-			ice_vsi_clear_napi_queues(pf->vsi[v]);
-			rtnl_unlock();
-			ice_vsi_free_q_vectors(pf->vsi[v]);
-		}
+	while (v--) {
+		struct ice_vsi *vsi = pf->vsi[v];
+
+		if (!vsi)
+			continue;
+
+		ice_vsi_clear_napi_queues(vsi);
+		ice_vsi_free_q_vectors(vsi);
+	}
 
 	return ret;
 }
@@ -5623,14 +5627,17 @@ static int ice_suspend(struct device *dev)
 	 * to CPU0.
	 */
 	ice_free_irq_msix_misc(pf);
+
 	ice_for_each_vsi(pf, v) {
-		if (!pf->vsi[v])
+		struct ice_vsi *vsi = pf->vsi[v];
+
+		if (!vsi)
 			continue;
-		rtnl_lock();
-		ice_vsi_clear_napi_queues(pf->vsi[v]);
-		rtnl_unlock();
-		ice_vsi_free_q_vectors(pf->vsi[v]);
+
+		ice_vsi_clear_napi_queues(vsi);
+		ice_vsi_free_q_vectors(vsi);
 	}
+
 	ice_clear_interrupt_scheme(pf);
 
 	pci_save_state(pdev);
@@ -6760,7 +6767,7 @@ static void ice_napi_enable_all(struct ice_vsi *vsi)
 		ice_init_moderation(q_vector);
 
 		if (q_vector->rx.rx_ring || q_vector->tx.tx_ring)
-			napi_enable(&q_vector->napi);
+			napi_enable_locked(&q_vector->napi);
 	}
 }
 
@@ -7204,7 +7211,7 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
 		struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
 
 		if (q_vector->rx.rx_ring || q_vector->tx.tx_ring)
-			napi_disable(&q_vector->napi);
+			napi_disable_locked(&q_vector->napi);
 
 		cancel_work_sync(&q_vector->tx.dim.work);
 		cancel_work_sync(&q_vector->rx.dim.work);
@@ -7504,7 +7511,7 @@ int ice_vsi_open(struct ice_vsi *vsi)
 		if (err)
 			goto err_set_qs;
 
-		ice_vsi_set_napi_queues(vsi);
+		ice_vsi_set_napi_queues_locked(vsi);
 	}
 
 	err = ice_up_complete(vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 1a2c94375ca7..2c3db1b03055 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -58,6 +58,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
 	eth_hw_addr_set(netdev, dyn_port->hw_addr);
 	ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
 	netdev->netdev_ops = &ice_sf_netdev_ops;
+	netdev->request_ops_lock = true;
 	SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
 
 	err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 989ff1fd9110..4168cd58d4d8 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -33,9 +33,9 @@ ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector,
 		return;
 
 	if (enable)
-		napi_enable(&q_vector->napi);
+		napi_enable_locked(&q_vector->napi);
 	else
-		napi_disable(&q_vector->napi);
+		napi_disable_locked(&q_vector->napi);
 }
 
 /**
--
2.51.1

From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 4/5] ice: implement Rx queue management ops
Date: Tue, 25 Nov 2025 18:36:02 +0100
Message-ID: <20251125173603.3834486-5-aleksander.lobakin@intel.com>
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

Now ice is ready for queue_mgmt_ops support. It already has an API to
disable/reconfigure/enable one particular queue (for XSk). Reuse as
much of that code as possible to implement the Rx queue management
callbacks and vice versa -- ice_queue_mem_{alloc,free}() can be reused
during ifup/ifdown to avoid code duplication.

With this, ice passes the io_uring zcrx selftests, meaning the Rx part
of netmem/MP support is done.
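For context, the core drives these callbacks roughly in the following
order when restarting a queue (e.g. to bind a memory provider); a
simplified sketch of the generic restart logic, with allocation and
error handling omitted:

	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;	/* ndo_queue_mem_size bytes each */

	ops->ndo_queue_mem_alloc(dev, new_mem, idx);	/* prefill new resources */
	ops->ndo_queue_stop(dev, old_mem, idx);		/* quiesce and detach the old ones */
	ops->ndo_queue_start(dev, new_mem, idx);	/* attach and re-enable the queue */
	ops->ndo_queue_mem_free(dev, old_mem);		/* release the detached resources */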
Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/ice/ice_lib.h    |   5 +
 drivers/net/ethernet/intel/ice/ice_txrx.h   |   2 +
 drivers/net/ethernet/intel/ice/ice_base.c   | 192 ++++++++++++++------
 drivers/net/ethernet/intel/ice/ice_main.c   |   2 +-
 drivers/net/ethernet/intel/ice/ice_sf_eth.c |   2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c   |  26 ++-
 6 files changed, 163 insertions(+), 66 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index d9c94c06c657..781319f70118 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -4,6 +4,8 @@
 #ifndef _ICE_LIB_H_
 #define _ICE_LIB_H_
 
+#include
+
 #include "ice.h"
 #include "ice_vlan.h"
 
@@ -126,4 +128,7 @@ void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f);
 void ice_init_feature_support(struct ice_pf *pf);
 bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi);
 void ice_vsi_update_l2tsel(struct ice_vsi *vsi, enum ice_l2tsel l2tsel);
+
+extern const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops;
+
 #endif /* !_ICE_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index e440c55d9e9f..f741301c28b6 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -409,6 +409,8 @@ u16
 ice_select_queue(struct net_device *dev, struct sk_buff *skb,
 		 struct net_device *sb_dev);
 void ice_clean_tx_ring(struct ice_tx_ring *tx_ring);
+void ice_queue_mem_free(struct net_device *dev, void *per_queue_mem);
+void ice_zero_rx_ring(struct ice_rx_ring *rx_ring);
 void ice_clean_rx_ring(struct ice_rx_ring *rx_ring);
 int ice_setup_tx_ring(struct ice_tx_ring *tx_ring);
 int ice_setup_rx_ring(struct ice_rx_ring *rx_ring);
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 1aa40f13947e..77d09dc6a48d 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -651,6 +651,42 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
 	return err;
 }
 
+static int ice_queue_mem_alloc(struct net_device *dev, void *per_queue_mem,
+			       int idx)
+{
+	const struct ice_netdev_priv *priv = netdev_priv(dev);
+	const struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+	struct ice_rx_ring *new = per_queue_mem;
+	int ret;
+
+	new->count = real->count;
+	new->netdev = real->netdev;
+	new->q_index = real->q_index;
+	new->q_vector = real->q_vector;
+	new->vsi = real->vsi;
+
+	ret = ice_rxq_pp_create(new);
+	if (ret)
+		return ret;
+
+	if (!netif_running(dev))
+		return 0;
+
+	ret = __xdp_rxq_info_reg(&new->xdp_rxq, new->netdev, new->q_index,
+				 new->q_vector->napi.napi_id, new->rx_buf_len);
+	if (ret)
+		goto err_destroy_fq;
+
+	xdp_rxq_info_attach_page_pool(&new->xdp_rxq, new->pp);
+
+	return 0;
+
+err_destroy_fq:
+	ice_rxq_pp_destroy(new);
+
+	return ret;
+}
+
 /**
  * ice_vsi_cfg_rxq - Configure an Rx queue
  * @ring: the ring being configured
@@ -665,23 +701,12 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 	int err;
 
 	if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF) {
-		if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
-			err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
-						 ring->q_index,
-						 ring->q_vector->napi.napi_id,
-						 ring->rx_buf_len);
-			if (err)
-				return err;
-		}
-
 		ice_rx_xsk_pool(ring);
 		err = ice_realloc_rx_xdp_bufs(ring, ring->xsk_pool);
 		if (err)
 			return err;
 
 		if (ring->xsk_pool) {
-			xdp_rxq_info_unreg(&ring->xdp_rxq);
-
 			rx_buf_len = xsk_pool_get_rx_frame_size(ring->xsk_pool);
 			err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
@@ -700,20 +725,10 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 			dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
 				 ring->q_index);
 		} else {
-			err = ice_rxq_pp_create(ring);
+			err = ice_queue_mem_alloc(ring->netdev, ring,
+						  ring->q_index);
 			if (err)
 				return err;
-
-			if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
-				err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
-							 ring->q_index,
-							 ring->q_vector->napi.napi_id,
-							 ring->rx_buf_len);
-				if (err)
-					goto err_destroy_fq;
-			}
-
-			xdp_rxq_info_attach_page_pool(&ring->xdp_rxq,
-						      ring->pp);
 		}
 	}
 
@@ -722,7 +737,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 	if (err) {
 		dev_err(dev, "ice_setup_rx_ctx failed for RxQ %d, err %d\n",
			ring->q_index, err);
-		goto err_destroy_fq;
+		goto err_clean_rq;
 	}
 
 	if (ring->xsk_pool) {
@@ -753,12 +768,12 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 	err = ice_alloc_rx_bufs(ring, num_bufs);
 
 	if (err)
-		goto err_destroy_fq;
+		goto err_clean_rq;
 
 	return 0;
 
-err_destroy_fq:
-	ice_rxq_pp_destroy(ring);
+err_clean_rq:
+	ice_clean_rx_ring(ring);
 
 	return err;
 }
@@ -1425,27 +1440,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
 	       sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats));
 }
 
-/**
- * ice_qp_clean_rings - Cleans all the rings of a given index
- * @vsi: VSI that contains rings of interest
- * @q_idx: ring index in array
- */
-static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
-{
-	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
-	if (vsi->xdp_rings)
-		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
-	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
-}
-
-/**
- * ice_qp_dis - Disables a queue pair
- * @vsi: VSI of interest
- * @q_idx: ring index in array
- *
- * Returns 0 on success, negative on failure.
- */
-int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+static int __ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 {
 	struct ice_txq_meta txq_meta = { };
 	struct ice_q_vector *q_vector;
@@ -1484,23 +1479,35 @@ int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	}
 
 	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
-	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);
 
+	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+	if (vsi->xdp_rings)
+		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
+
 	return fail;
 }
 
 /**
- * ice_qp_ena - Enables a queue pair
+ * ice_qp_dis - Disables a queue pair
  * @vsi: VSI of interest
  * @q_idx: ring index in array
  *
  * Returns 0 on success, negative on failure.
  */
-int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+{
+	int ret;
+
+	ret = __ice_qp_dis(vsi, q_idx);
+	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+
+	return ret;
+}
+
+static int __ice_qp_ena(struct ice_vsi *vsi, u16 q_idx, int fail)
 {
 	struct ice_q_vector *q_vector;
-	int fail = 0;
 	bool link_up;
 	int err;
 
@@ -1518,10 +1525,6 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 		ice_tx_xsk_pool(vsi, q_idx);
 	}
 
-	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
-	if (!fail)
-		fail = err;
-
 	q_vector = vsi->rx_rings[q_idx]->q_vector;
 	ice_qvec_cfg_msix(vsi, q_vector, q_idx);
 
@@ -1542,3 +1545,80 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 
 	return fail;
 }
+
+/**
+ * ice_qp_ena - Enables a queue pair
+ * @vsi: VSI of interest
+ * @q_idx: ring index in array
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+{
+	return __ice_qp_ena(vsi, q_idx, ice_vsi_cfg_single_rxq(vsi, q_idx));
+}
+
+static int ice_queue_start(struct net_device *dev, void *per_queue_mem,
+			   int idx)
+{
+	const struct ice_netdev_priv *priv = netdev_priv(dev);
+	struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+	struct ice_rx_ring *new = per_queue_mem;
+	struct napi_struct *napi;
+	int ret;
+
+	real->pp = new->pp;
+	real->rx_fqes = new->rx_fqes;
+	real->hdr_fqes = new->hdr_fqes;
+	real->hdr_pp = new->hdr_pp;
+
+	real->hdr_truesize = new->hdr_truesize;
+	real->truesize = new->truesize;
+	real->rx_hdr_len = new->rx_hdr_len;
+	real->rx_buf_len = new->rx_buf_len;
+
+	memcpy(&real->xdp_rxq, &new->xdp_rxq, sizeof(new->xdp_rxq));
+
+	ret = ice_setup_rx_ctx(real);
+	if (ret)
+		return ret;
+
+	napi = &real->q_vector->napi;
+
+	page_pool_enable_direct_recycling(real->pp, napi);
+	if (real->hdr_pp)
+		page_pool_enable_direct_recycling(real->hdr_pp, napi);
+
+	ret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real));
+
+	return __ice_qp_ena(priv->vsi, idx, ret);
+}
+
+static int ice_queue_stop(struct net_device *dev, void *per_queue_mem,
+			  int idx)
+{
+	const struct ice_netdev_priv *priv = netdev_priv(dev);
+	struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+	int ret;
+
+	ret = __ice_qp_dis(priv->vsi, idx);
+	if (ret)
+		return ret;
+
+	page_pool_disable_direct_recycling(real->pp);
+	if (real->hdr_pp)
+		page_pool_disable_direct_recycling(real->hdr_pp);
+
+	ice_zero_rx_ring(real);
+	memcpy(per_queue_mem, real, sizeof(*real));
+
+	return 0;
+}
+
+const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops = {
+	.ndo_queue_mem_alloc	= ice_queue_mem_alloc,
+	.ndo_queue_mem_free	= ice_queue_mem_free,
+	.ndo_queue_mem_size	= sizeof(struct ice_rx_ring),
+	.ndo_queue_start	= ice_queue_start,
+	.ndo_queue_stop		= ice_queue_stop,
+};
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index c0432182b482..9eb27a0d984b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3523,7 +3523,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
 	}
 
 	netdev->netdev_ops = &ice_netdev_ops;
-	netdev->request_ops_lock = true;
+	netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
 	netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
 	netdev->xdp_metadata_ops = &ice_xdp_md_ops;
 	ice_set_ethtool_ops(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 2c3db1b03055..41e1606a8222 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -58,7 +58,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
 	eth_hw_addr_set(netdev, dyn_port->hw_addr);
 	ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
 	netdev->netdev_ops = &ice_sf_netdev_ops;
-	netdev->request_ops_lock = true;
+	netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
 	SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
 
 	err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index ad76768a4232..b00fa436472d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -530,17 +530,13 @@ void ice_rxq_pp_destroy(struct ice_rx_ring *rq)
 	rq->hdr_pp = NULL;
 }
 
-/**
- * ice_clean_rx_ring - Free Rx buffers
- * @rx_ring: ring to be cleaned
- */
-void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
+void ice_queue_mem_free(struct net_device *dev, void *per_queue_mem)
 {
-	u32 size;
+	struct ice_rx_ring *rx_ring = per_queue_mem;
 
 	if (rx_ring->xsk_pool) {
 		ice_xsk_clean_rx_ring(rx_ring);
-		goto rx_skip_free;
+		return;
 	}
 
 	/* ring already cleared, nothing to do */
@@ -567,8 +563,12 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
 	}
 
 	ice_rxq_pp_destroy(rx_ring);
+}
+
+void ice_zero_rx_ring(struct ice_rx_ring *rx_ring)
+{
+	size_t size;
 
-rx_skip_free:
 	/* Zero out the descriptor ring */
 	size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
 		     PAGE_SIZE);
@@ -579,6 +579,16 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
 	rx_ring->next_to_use = 0;
 }
 
+/**
+ * ice_clean_rx_ring - Free Rx buffers
+ * @rx_ring: ring to be cleaned
+ */
+void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
+{
+	ice_queue_mem_free(rx_ring->netdev, rx_ring);
+	ice_zero_rx_ring(rx_ring);
+}
+
 /**
  * ice_free_rx_ring - Free Rx resources
  * @rx_ring: ring to clean the resources from
--
2.51.1

From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 5/5] ice: add support for transmitting unreadable frags
Date: Tue, 25 Nov 2025 18:36:03 +0100
Message-ID: <20251125173603.3834486-6-aleksander.lobakin@intel.com>
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

Advertise netmem Tx support in ice.

The only change needed is to set ICE_TX_BUF_FRAG conditionally, only
when skb_frag_is_net_iov() is false. Otherwise, the Tx buffer type
stays ICE_TX_BUF_EMPTY and the driver skips the DMA unmapping
operation.
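In other words (paraphrasing the ice_tx_map() hunk below): net_iov
frags carry DMA addresses owned by the memory provider, so the driver
must not unmap them on completion. Leaving the buffer type at
ICE_TX_BUF_EMPTY makes all cleanup paths skip the unmap:

	if (!skb_frag_is_net_iov(frag))
		tx_buf->type = ICE_TX_BUF_FRAG;	/* driver-mapped: unmap on completion */
	/* else: type stays ICE_TX_BUF_EMPTY, no dma_unmap_page() later */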
Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/ice/ice_main.c   |  1 +
 drivers/net/ethernet/intel/ice/ice_sf_eth.c |  1 +
 drivers/net/ethernet/intel/ice/ice_txrx.c   | 17 +++++++++++++----
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 9eb27a0d984b..0ac28b45e0fa 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3524,6 +3524,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
 
 	netdev->netdev_ops = &ice_netdev_ops;
 	netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
+	netdev->netmem_tx = true;
 	netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
 	netdev->xdp_metadata_ops = &ice_xdp_md_ops;
 	ice_set_ethtool_ops(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 41e1606a8222..51ad13c9d7f9 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -59,6 +59,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
 	ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
 	netdev->netdev_ops = &ice_sf_netdev_ops;
 	netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
+	netdev->netmem_tx = true;
 	SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
 
 	err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index b00fa436472d..494bcfed75af 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -113,11 +113,17 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc,
 static void
 ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
 {
-	if (tx_buf->type != ICE_TX_BUF_XDP_TX && dma_unmap_len(tx_buf, len))
+	switch (tx_buf->type) {
+	case ICE_TX_BUF_DUMMY:
+	case ICE_TX_BUF_FRAG:
+	case ICE_TX_BUF_SKB:
+	case ICE_TX_BUF_XDP_XMIT:
 		dma_unmap_page(ring->dev,
 			       dma_unmap_addr(tx_buf, dma),
 			       dma_unmap_len(tx_buf, len),
 			       DMA_TO_DEVICE);
+		break;
+	}
 
 	switch (tx_buf->type) {
 	case ICE_TX_BUF_DUMMY:
@@ -337,12 +343,14 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget)
 		}
 
 		/* unmap any remaining paged data */
-		if (dma_unmap_len(tx_buf, len)) {
+		if (tx_buf->type != ICE_TX_BUF_EMPTY) {
 			dma_unmap_page(tx_ring->dev,
 				       dma_unmap_addr(tx_buf, dma),
 				       dma_unmap_len(tx_buf, len),
 				       DMA_TO_DEVICE);
+
 			dma_unmap_len_set(tx_buf, len, 0);
+			tx_buf->type = ICE_TX_BUF_EMPTY;
 		}
 	}
 	ice_trace(clean_tx_irq_unmap_eop, tx_ring, tx_desc, tx_buf);
@@ -1492,7 +1500,8 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first,
 				       DMA_TO_DEVICE);
 
 		tx_buf = &tx_ring->tx_buf[i];
-		tx_buf->type = ICE_TX_BUF_FRAG;
+		if (!skb_frag_is_net_iov(frag))
+			tx_buf->type = ICE_TX_BUF_FRAG;
 	}
 
 	/* record SW timestamp if HW timestamp is not available */
@@ -2367,7 +2376,7 @@ void ice_clean_ctrl_tx_irq(struct ice_tx_ring *tx_ring)
 	}
 
 	/* unmap the data header */
-	if (dma_unmap_len(tx_buf, len))
+	if (tx_buf->type != ICE_TX_BUF_EMPTY)
 		dma_unmap_single(tx_ring->dev,
 				 dma_unmap_addr(tx_buf, dma),
 				 dma_unmap_len(tx_buf, len),
--
2.51.1