From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman, Jacob Keller, Aleksandr Loktionov, nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next v2 1/5] libeth: pass Rx queue index to PP when creating a fill queue
Date: Thu, 4 Dec 2025 16:51:29 +0100
Message-ID: <20251204155133.2437621-2-aleksander.lobakin@intel.com>
In-Reply-To: <20251204155133.2437621-1-aleksander.lobakin@intel.com>
References: <20251204155133.2437621-1-aleksander.lobakin@intel.com>

page_pool_create() recently gained an optional stack index of the Rx queue
the pool is created for. The index can then be used on the control path,
for example by memory providers. Add the same field to libeth_fq and pass
the index from all drivers that use libeth to manage Rx, which will
simplify implementing MP support later.

idpf has one libeth_fq per buffer/fill queue, and each Rx queue has two
fill queues. Since fill queues are never shared, the corresponding Rx
queue index can be stored in the buffer queue during initialization and
passed on to libeth from there.

Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
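For reference, below is a minimal sketch of how a libeth-based driver is
expected to wire the new field. The foo_* names and their members are
hypothetical and only illustrate the flow; struct libeth_fq, its .idx
member, LIBETH_FQE_MTU, NUMA_NO_NODE and libeth_rx_fq_create() are the
API actually touched by this patch.

#include <net/libeth/rx.h>

/* Hypothetical per-queue state of a libeth-based driver (illustration only) */
struct foo_rx_queue {
	struct napi_struct *napi;	/* queue's NAPI context */
	u32 desc_count;			/* ring length in descriptors */
	u32 idx;			/* stack (netdev) Rx queue index */
	struct libeth_fq fq;		/* fill queue managed by libeth */
};

static int foo_rxq_pp_create(struct foo_rx_queue *rxq)
{
	struct libeth_fq fq = {
		.count	= rxq->desc_count,
		.type	= LIBETH_FQE_MTU,
		.nid	= NUMA_NO_NODE,
		/* new field: which stack Rx queue this pool backs */
		.idx	= rxq->idx,
	};
	int ret;

	/* libeth forwards fq.idx to the page_pool's queue index */
	ret = libeth_rx_fq_create(&fq, rxq->napi);
	if (ret)
		return ret;

	rxq->fq = fq;

	return 0;
}

libeth then copies fq->idx into page_pool_params::queue_idx (see the
libeth/rx.c hunk below), so the control path, e.g. a memory provider,
can tell which stack queue a given pool serves.
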
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  2 ++
 include/net/libeth/rx.h                     |  2 ++
 drivers/net/ethernet/intel/iavf/iavf_txrx.c |  1 +
 drivers/net/ethernet/intel/ice/ice_base.c   |  2 ++
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 13 +++++++++++++
 drivers/net/ethernet/intel/libeth/rx.c      |  1 +
 6 files changed, 21 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 6796f010e382..0eaebac8ceae 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -748,6 +748,7 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
  * @size: Length of descriptor ring in bytes
  * @dma: Physical address of ring
  * @q_vector: Backreference to associated vector
+ * @rxq_idx: stack index of the corresponding Rx queue
  * @rx_buffer_low_watermark: RX buffer low watermark
  * @rx_hbuf_size: Header buffer size
  * @rx_buf_size: Buffer size
@@ -791,6 +792,7 @@ struct idpf_buf_queue {
 	dma_addr_t dma;
 
 	struct idpf_q_vector *q_vector;
+	u16 rxq_idx;
 
 	u16 rx_buffer_low_watermark;
 	u16 rx_hbuf_size;
diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
index 0e736846c5e8..db838ef7f9bb 100644
--- a/include/net/libeth/rx.h
+++ b/include/net/libeth/rx.h
@@ -72,6 +72,7 @@ enum libeth_fqe_type {
  * @no_napi: the queue is not a data queue and does not have NAPI
  * @buf_len: HW-writeable length per each buffer
  * @nid: ID of the closest NUMA node with memory
+ * @idx: stack index of the corresponding Rx queue
  */
 struct libeth_fq {
 	struct_group_tagged(libeth_fq_fp, fp,
@@ -90,6 +91,7 @@ struct libeth_fq {
 
 	u32 buf_len;
 	int nid;
+	u32 idx;
 };
 
 int libeth_rx_fq_create(struct libeth_fq *fq, void *napi_dev);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 275b11dd0c60..3d938d7ab2cc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -771,6 +771,7 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 		.count = rx_ring->count,
 		.buf_len = LIBIE_MAX_RX_BUF_LEN,
 		.nid = NUMA_NO_NODE,
+		.idx = rx_ring->queue_index,
 	};
 	int ret;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 6fb7051aa463..7097324c38f3 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -607,6 +607,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
 	struct libeth_fq fq = {
 		.count = rq->count,
 		.nid = NUMA_NO_NODE,
+		.idx = rq->q_index,
 		.hsplit = rq->vsi->hsplit,
 		.xdp = ice_is_xdp_ena_vsi(rq->vsi),
 		.buf_len = LIBIE_MAX_RX_BUF_LEN,
@@ -629,6 +630,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
 		.count = rq->count,
 		.type = LIBETH_FQE_HDR,
 		.nid = NUMA_NO_NODE,
+		.idx = rq->q_index,
 		.xdp = ice_is_xdp_ena_vsi(rq->vsi),
 	};
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 72215612b460..5dc41b7ba609 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -561,6 +561,7 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
 		.type = LIBETH_FQE_HDR,
 		.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
 		.nid = idpf_q_vector_to_mem(bufq->q_vector),
+		.idx = bufq->rxq_idx,
 	};
 	int ret;
 
@@ -703,6 +704,7 @@ static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
 		.type = LIBETH_FQE_MTU,
 		.buf_len = IDPF_RX_MAX_BUF_SZ,
 		.nid = idpf_q_vector_to_mem(rxq->q_vector),
+		.idx = rxq->idx,
 	};
 	int ret;
 
@@ -763,6 +765,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
 		.hsplit = idpf_queue_has(HSPLIT_EN, bufq),
 		.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
 		.nid = idpf_q_vector_to_mem(bufq->q_vector),
+		.idx = bufq->rxq_idx,
 	};
 	int ret;
 
@@ -1922,6 +1925,16 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport,
 						 LIBETH_RX_LL_LEN;
 			idpf_rxq_set_descids(rsrc, q);
 		}
+
+		if (!idpf_is_queue_model_split(rsrc->rxq_model))
+			continue;
+
+		for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
+			struct idpf_buf_queue *bufq;
+
+			bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
+			bufq->rxq_idx = rx_qgrp->splitq.rxq_sets[0]->rxq.idx;
+		}
 	}
 
 err_alloc:
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 1d8248a31037..9ac3a1448b2f 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -157,6 +157,7 @@ int libeth_rx_fq_create(struct libeth_fq *fq, void *napi_dev)
 		.order = LIBETH_RX_PAGE_ORDER,
 		.pool_size = fq->count,
 		.nid = fq->nid,
+		.queue_idx = fq->idx,
 		.dev = napi ? napi->dev->dev.parent : napi_dev,
 		.netdev = napi ? napi->dev : NULL,
 		.napi = napi,
-- 
2.52.0