From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Jacob Keller, Aleksandr Loktionov,
 nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 2/5] libeth: handle creating pools with unreadable buffers
Date: Tue, 25 Nov 2025 18:36:00 +0100
Message-ID: <20251125173603.3834486-3-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com>
References: <20251125173603.3834486-1-aleksander.lobakin@intel.com>

libeth has been using netmems for quite some time already, so to
support unreadable frags / memory providers, it only needs to set
PP_FLAG_ALLOW_UNREADABLE_NETMEM when required.

Also add a couple of sanity checks to make sure the driver didn't
mess up the configuration options and, when an MP is installed,
always return a truesize equal to PAGE_SIZE, so that
libeth_rx_alloc() never tries to allocate frags. Memory providers
manage buffers on their own and expect a 1:1 buffer / HW Rx
descriptor association.

Bonus: mention in the libeth_sqe_type description that
LIBETH_SQE_EMPTY should also be used for netmem Tx SQEs -- they
don't need DMA unmapping.

Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin
---
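A quick illustration of the Tx side of this (an editorial sketch only,
not part of the patch): with the updated LIBETH_SQE_EMPTY semantics, a
driver's Tx mapping path can tag unreadable netmem frags as empty SQEs
and skip all DMA bookkeeping for them. drv_map_frag() below is a
made-up helper; LIBETH_SQE_* and the libeth_sqe unmap fields come from
<net/libeth/tx.h>, and skb_frag_is_net_iov() is the core-stack helper
for detecting net_iov frags.

/*
 * Illustrative sketch only: SQE bookkeeping for Tx frags.
 * drv_map_frag() is hypothetical; filling the HW descriptor with the
 * provider-supplied DMA address is out of scope here.
 */
static int drv_map_frag(struct device *dev, const skb_frag_t *frag,
			struct libeth_sqe *sqe)
{
	dma_addr_t dma;

	/*
	 * Unreadable netmem is managed (and premapped) by its memory
	 * provider: there is nothing to unmap on completion, so mark
	 * the SQE as EMPTY instead of FRAG.
	 */
	if (skb_frag_is_net_iov(frag)) {
		sqe->type = LIBETH_SQE_EMPTY;
		return 0;
	}

	/* Regular kernel page: map it and let the completion path
	 * unmap it via the LIBETH_SQE_FRAG handling.
	 */
	dma = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
			       DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	dma_unmap_addr_set(sqe, dma, dma);
	dma_unmap_len_set(sqe, len, skb_frag_size(frag));
	sqe->type = LIBETH_SQE_FRAG;

	return 0;
}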
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Simon Horman , Jacob Keller , Aleksandr Loktionov , nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 2/5] libeth: handle creating pools with unreadable buffers Date: Tue, 25 Nov 2025 18:36:00 +0100 Message-ID: <20251125173603.3834486-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251125173603.3834486-1-aleksander.lobakin@intel.com> References: <20251125173603.3834486-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" libeth uses netmems for quite some time already, so in order to support unreadable frags / memory providers, it only needs to set PP_FLAG_ALLOW_UNREADABLE_NETMEM when needed. Also add a couple sanity checks to make sure the driver didn't mess up the configuration options and, in case when an MP is installed, return the truesize always equal to PAGE_SIZE, so that libeth_rx_alloc() will never try to allocate frags. Memory providers manage buffers on their own and expect 1:1 buffer / HW Rx descriptor association. Bonus: mention in the libeth_sqe_type description that LIBETH_SQE_EMPTY should also be used for netmem Tx SQEs -- they don't need DMA unmapping. Reviewed-by: Jacob Keller Reviewed-by: Aleksandr Loktionov Signed-off-by: Alexander Lobakin --- include/net/libeth/tx.h | 2 +- drivers/net/ethernet/intel/libeth/rx.c | 42 ++++++++++++++++++++++++++ 2 files changed, 43 insertions(+), 1 deletion(-) diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index c3db5c6f1641..a66fc2b3a114 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -12,7 +12,7 @@ =20 /** * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion - * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX/XSk frame, no action required + * @LIBETH_SQE_EMPTY: empty OR netmem/XDP_TX/XSk frame, no action required * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree() * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/= intel/libeth/rx.c index 8874b714cdcc..11e6e8f353ef 100644 --- a/drivers/net/ethernet/intel/libeth/rx.c +++ b/drivers/net/ethernet/intel/libeth/rx.c @@ -6,6 +6,7 @@ #include =20 #include +#include =20 /* Rx buffer management */ =20 @@ -139,9 +140,47 @@ static bool libeth_rx_page_pool_params_zc(struct libet= h_fq *fq, fq->buf_len =3D clamp(mtu, LIBETH_RX_BUF_STRIDE, max); fq->truesize =3D fq->buf_len; =20 + /* + * Allow frags only for kernel pages. `fq->truesize =3D=3D pp->max_len` + * will always fall back to regular page_pool_alloc_netmems() + * regardless of the MTU / FQ buffer size. + */ + if (pp->flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) + fq->truesize =3D pp->max_len; + return true; } =20 +/** + * libeth_rx_page_pool_check_unread - check input params for unreadable MPs + * @fq: buffer queue to check + * @pp: &page_pool_params for the queue + * + * Make sure we don't create an invalid pool with full-frame unreadable + * buffers, bidirectional unreadable buffers or so, and configure the + * ZC payload pool accordingly. + * + * Return: true on success, false on invalid input params. 
+ */
+static bool libeth_rx_page_pool_check_unread(const struct libeth_fq *fq,
+					     struct page_pool_params *pp)
+{
+	if (!netif_rxq_has_unreadable_mp(pp->netdev, pp->queue_idx))
+		return true;
+
+	/* For now, the core stack doesn't allow XDP with unreadable frags */
+	if (fq->xdp)
+		return false;
+
+	/* It should be either a header pool or a ZC payload pool */
+	if (fq->type == LIBETH_FQE_HDR)
+		return !fq->hsplit;
+
+	pp->flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+
+	return fq->hsplit;
+}
+
 /**
  * libeth_rx_fq_create - create a PP with the default libeth settings
  * @fq: buffer queue struct to fill
@@ -165,6 +204,9 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
 	struct page_pool *pool;
 	int ret;
 
+	if (!libeth_rx_page_pool_check_unread(fq, &pp))
+		return -EINVAL;
+
 	pp.dma_dir = fq->xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 
 	if (!fq->hsplit)
-- 
2.51.1
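
For reference, a hedged sketch of how a driver's Rx setup is expected
to interact with the new check (not code from this series): struct
drv_rxq and drv_create_fqs() are invented for illustration, while
struct libeth_fq, LIBETH_FQE_HDR / LIBETH_FQE_MTU, and
libeth_rx_fq_create() / libeth_rx_fq_destroy() are the existing
libeth API.

#include <net/libeth/rx.h>

/* Hypothetical per-queue driver state, for illustration only */
struct drv_rxq {
	struct libeth_fq hdr_fq;
	struct libeth_fq pl_fq;
	u32 desc_count;
};

static int drv_create_fqs(struct drv_rxq *rxq, struct napi_struct *napi,
			  bool hsplit)
{
	struct libeth_fq hdr_fq = {
		.count	= rxq->desc_count,
		.type	= LIBETH_FQE_HDR,
		/*
		 * .hsplit stays false on the header pool itself,
		 * otherwise libeth_rx_page_pool_check_unread()
		 * rejects it when an MP is installed.
		 */
	};
	struct libeth_fq pl_fq = {
		.count	= rxq->desc_count,
		.type	= LIBETH_FQE_MTU,
		/* Must be set under an MP: only ZC payload pools may
		 * carry unreadable buffers.
		 */
		.hsplit	= hsplit,
	};
	int err;

	/*
	 * With an MP bound to the queue, libeth forces the payload
	 * truesize to the pool's max_len, so each buffer maps 1:1 to
	 * a HW Rx descriptor and frags are never allocated.
	 */
	err = libeth_rx_fq_create(&pl_fq, napi);
	if (err)
		return err;

	if (hsplit) {
		err = libeth_rx_fq_create(&hdr_fq, napi);
		if (err) {
			libeth_rx_fq_destroy(&pl_fq);
			return err;
		}
	}

	rxq->hdr_fq = hdr_fq;
	rxq->pl_fq = pl_fq;

	return 0;
}

The key point mirrors the commit message: under an MP, only the hsplit
payload pool may carry unreadable buffers, the header pool must stay a
regular readable pool, and XDP queues are rejected outright.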