From nobody Wed Nov 27 00:39:19 2024
From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , =?UTF-8?q?Toke=20H=C3=B8iland-J=C3=B8rgensen?= , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Stanislav Fomichev , Magnus Karlsson , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 15/18] xsk: add generic XSk &xdp_buff -> skb conversion Date: Tue, 15 Oct 2024 16:53:47 +0200 Message-ID: <20241015145350.4077765-16-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241015145350.4077765-1-aleksander.lobakin@intel.com> References: <20241015145350.4077765-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Same as with converting &xdp_buff to skb on Rx, the code which allocates a new skb and copies the XSk frame there is identical across the drivers, so make it generic. This includes copying all the frags if they are present in the original buff. System percpu Page Pools help here a lot: when available, allocate pages from there instead of the MM layer. This greatly improves XDP_PASS performance on XSk: instead of page_alloc() + page_free(), the net core recycles the same pages, so the only overhead left is memcpy()s. Note that the passed buff gets freed if the conversion is done w/o any error, assuming you don't need this buffer after you convert it to an skb. Signed-off-by: Alexander Lobakin Reviewed-by: Maciej Fijalkowski --- include/net/xdp.h | 1 + net/core/xdp.c | 138 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 139 insertions(+) diff --git a/include/net/xdp.h b/include/net/xdp.h index 83e3f4648caa..69728b2d75d5 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -331,6 +331,7 @@ void xdp_warn(const char *msg, const char *func, const = int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) =20 struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp); +struct sk_buff *xdp_build_skb_from_zc(struct xdp_buff *xdp); struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, diff --git a/net/core/xdp.c b/net/core/xdp.c index 371c26c203b2..116153b88d26 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -22,6 +22,8 @@ #include #include =20 +#include "dev.h" + #define REG_STATE_NEW 0x0 #define REG_STATE_REGISTERED 0x1 #define REG_STATE_UNREGISTERED 0x2 @@ -682,6 +684,142 @@ struct sk_buff *xdp_build_skb_from_buff(const struct = xdp_buff *xdp) } EXPORT_SYMBOL_GPL(xdp_build_skb_from_buff); =20 +/** + * xdp_copy_frags_from_zc - copy the frags from an XSk buff to an skb + * @skb: skb to copy frags to + * @xdp: XSk &xdp_buff from which the frags will be copied + * @pp: &page_pool backing page allocation, if available + * + * Copy all frags from an XSk &xdp_buff to an skb to pass it up the stack. + * Allocate a new page / page frag for each frag, copy it and attach to + * the skb. + * + * Return: true on success, false on page allocation fail. 
 include/net/xdp.h |   1 +
 net/core/xdp.c    | 138 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 139 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 83e3f4648caa..69728b2d75d5 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -331,6 +331,7 @@ void xdp_warn(const char *msg, const char *func, const int line);
 #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__)
 
 struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp);
+struct sk_buff *xdp_build_skb_from_zc(struct xdp_buff *xdp);
 struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp);
 struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 371c26c203b2..116153b88d26 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -22,6 +22,8 @@
 #include
 #include
 
+#include "dev.h"
+
 #define REG_STATE_NEW		0x0
 #define REG_STATE_REGISTERED	0x1
 #define REG_STATE_UNREGISTERED	0x2
@@ -682,6 +684,142 @@ struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp)
 }
 EXPORT_SYMBOL_GPL(xdp_build_skb_from_buff);
 
+/**
+ * xdp_copy_frags_from_zc - copy the frags from an XSk buff to an skb
+ * @skb: skb to copy frags to
+ * @xdp: XSk &xdp_buff from which the frags will be copied
+ * @pp: &page_pool backing page allocation, if available
+ *
+ * Copy all frags from an XSk &xdp_buff to an skb to pass it up the stack.
+ * Allocate a new page / page frag for each frag, copy it and attach to
+ * the skb.
+ *
+ * Return: true on success, false on page allocation fail.
+ */
+static noinline bool xdp_copy_frags_from_zc(struct sk_buff *skb,
+					    const struct xdp_buff *xdp,
+					    struct page_pool *pp)
+{
+	const struct skb_shared_info *xinfo;
+	struct skb_shared_info *sinfo;
+	u32 nr_frags, ts;
+
+	xinfo = xdp_get_shared_info_from_buff(xdp);
+	nr_frags = xinfo->nr_frags;
+	sinfo = skb_shinfo(skb);
+
+#if IS_ENABLED(CONFIG_PAGE_POOL)
+	ts = 0;
+#else
+	ts = xinfo->xdp_frags_truesize ? : nr_frags * xdp->frame_sz;
+#endif
+
+	for (u32 i = 0; i < nr_frags; i++) {
+		u32 len = skb_frag_size(&xinfo->frags[i]);
+		void *data;
+#if IS_ENABLED(CONFIG_PAGE_POOL)
+		u32 truesize = len;
+
+		data = page_pool_dev_alloc_va(pp, &truesize);
+		ts += truesize;
+#else
+		data = napi_alloc_frag(len);
+#endif
+		if (unlikely(!data))
+			return false;
+
+		memcpy(data, skb_frag_address(&xinfo->frags[i]),
+		       LARGEST_ALIGN(len));
+		__skb_fill_page_desc(skb, sinfo->nr_frags++,
+				     virt_to_page(data),
+				     offset_in_page(data), len);
+	}
+
+	xdp_update_skb_shared_info(skb, nr_frags, xinfo->xdp_frags_size,
+				   ts, false);
+
+	return true;
+}
+
+/**
+ * xdp_build_skb_from_zc - create an skb from an XSk &xdp_buff
+ * @xdp: source XSk buff
+ *
+ * Similar to xdp_build_skb_from_buff(), but for XSk frames. Allocate an skb
+ * head, new page for the head, copy the data and initialize the skb fields.
+ * If there are frags, allocate new pages for them and copy.
+ * If Page Pool is available, the function allocates memory from the system
+ * percpu pools to try recycling the pages, otherwise it uses the NAPI page
+ * frag caches.
+ * If new skb was built successfully, @xdp is returned to XSk pool's freelist.
+ * On error, it remains untouched and the caller must take care of this.
+ *
+ * Return: new &sk_buff on success, %NULL on error.
+ */
+struct sk_buff *xdp_build_skb_from_zc(struct xdp_buff *xdp)
+{
+	const struct xdp_rxq_info *rxq = xdp->rxq;
+	u32 len = xdp->data_end - xdp->data_meta;
+	struct page_pool *pp;
+	struct sk_buff *skb;
+	int metalen;
+#if IS_ENABLED(CONFIG_PAGE_POOL)
+	u32 truesize;
+	void *data;
+
+	pp = this_cpu_read(system_page_pool);
+	truesize = xdp->frame_sz;
+
+	data = page_pool_dev_alloc_va(pp, &truesize);
+	if (unlikely(!data))
+		return NULL;
+
+	skb = napi_build_skb(data, truesize);
+	if (unlikely(!skb)) {
+		page_pool_free_va(pp, data, true);
+		return NULL;
+	}
+
+	skb_mark_for_recycle(skb);
+	skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+#else /* !CONFIG_PAGE_POOL */
+	struct napi_struct *napi;
+
+	pp = NULL;
+	napi = napi_by_id(rxq->napi_id);
+	if (likely(napi))
+		skb = napi_alloc_skb(napi, len);
+	else
+		skb = __netdev_alloc_skb_ip_align(rxq->dev, len,
+						  GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!skb))
+		return NULL;
+#endif /* !CONFIG_PAGE_POOL */
+
+	memcpy(__skb_put(skb, len), xdp->data_meta, LARGEST_ALIGN(len));
+
+	metalen = xdp->data - xdp->data_meta;
+	if (metalen > 0) {
+		skb_metadata_set(skb, metalen);
+		__skb_pull(skb, metalen);
+	}
+
+	skb_record_rx_queue(skb, rxq->queue_index);
+
+	if (unlikely(xdp_buff_has_frags(xdp)) &&
+	    unlikely(!xdp_copy_frags_from_zc(skb, xdp, pp))) {
+		napi_consume_skb(skb, true);
+		return NULL;
+	}
+
+	xsk_buff_free(xdp);
+
+	skb->protocol = eth_type_trans(skb, rxq->dev);
+
+	return skb;
+}
+EXPORT_SYMBOL_GPL(xdp_build_skb_from_zc);
+
 struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
-- 
2.46.2