From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , =?UTF-8?q?Toke=20H=C3=B8iland-J=C3=B8rgensen?= , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Maciej Fijalkowski , Stanislav Fomichev , Magnus Karlsson , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v4 15/19] xsk: make xsk_buff_add_frag really add a frag via __xdp_buff_add_frag() Date: Thu, 7 Nov 2024 17:10:22 +0100 Message-ID: <20241107161026.2903044-16-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.0 In-Reply-To: <20241107161026.2903044-1-aleksander.lobakin@intel.com> References: <20241107161026.2903044-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Currently, xsk_buff_add_frag() only adds a frag to the pool linked list, not doing anything with the &xdp_buff. The drivers do that manually and the logic is the same. Make it really add an skb frag, just like xdp_buff_add_frag() does that, and freeing frags on error if needed. This allows to remove repeating code from i40e and ice and not add the same code again and again. Acked-by: Maciej Fijalkowski Signed-off-by: Alexander Lobakin --- include/net/xdp_sock_drv.h | 18 ++++++++++-- drivers/net/ethernet/intel/i40e/i40e_xsk.c | 30 ++------------------ drivers/net/ethernet/intel/ice/ice_xsk.c | 32 ++-------------------- 3 files changed, 20 insertions(+), 60 deletions(-) diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h index f3175a5d28f7..6aae95b83645 100644 --- a/include/net/xdp_sock_drv.h +++ b/include/net/xdp_sock_drv.h @@ -136,11 +136,21 @@ static inline void xsk_buff_free(struct xdp_buff *xdp) xp_free(xskb); } =20 -static inline void xsk_buff_add_frag(struct xdp_buff *xdp) +static inline bool xsk_buff_add_frag(struct xdp_buff *head, + struct xdp_buff *xdp) { - struct xdp_buff_xsk *frag =3D container_of(xdp, struct xdp_buff_xsk, xdp); + const void *data =3D xdp->data; + struct xdp_buff_xsk *frag; + + if (!__xdp_buff_add_frag(head, virt_to_page(data), + offset_in_page(data), xdp->data_end - data, + xdp->frame_sz, false)) + return false; =20 + frag =3D container_of(xdp, struct xdp_buff_xsk, xdp); list_add_tail(&frag->list_node, &frag->pool->xskb_list); + + return true; } =20 static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *fi= rst) @@ -357,8 +367,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp) { } =20 -static inline void xsk_buff_add_frag(struct xdp_buff *xdp) +static inline bool xsk_buff_add_frag(struct xdp_buff *head, + struct xdp_buff *xdp) { + return false; } =20 static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *fi= rst) diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ether= net/intel/i40e/i40e_xsk.c index 4e885df789ef..e28f1905a4a0 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -395,32 +395,6 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring= *rx_ring, WARN_ON_ONCE(1); } =20 -static int -i40e_add_xsk_frag(struct i40e_ring *rx_ring, struct xdp_buff *first, - struct xdp_buff *xdp, const unsigned int size) -{ - struct skb_shared_info *sinfo =3D xdp_get_shared_info_from_buff(first); - - if (!xdp_buff_has_frags(first)) { - sinfo->nr_frags =3D 0; - sinfo->xdp_frags_size =3D 
diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index f3175a5d28f7..6aae95b83645 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -136,11 +136,21 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
 	xp_free(xskb);
 }
 
-static inline void xsk_buff_add_frag(struct xdp_buff *xdp)
+static inline bool xsk_buff_add_frag(struct xdp_buff *head,
+				     struct xdp_buff *xdp)
 {
-	struct xdp_buff_xsk *frag = container_of(xdp, struct xdp_buff_xsk, xdp);
+	const void *data = xdp->data;
+	struct xdp_buff_xsk *frag;
+
+	if (!__xdp_buff_add_frag(head, virt_to_page(data),
+				 offset_in_page(data), xdp->data_end - data,
+				 xdp->frame_sz, false))
+		return false;
 
+	frag = container_of(xdp, struct xdp_buff_xsk, xdp);
 	list_add_tail(&frag->list_node, &frag->pool->xskb_list);
+
+	return true;
 }
 
 static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *first)
@@ -357,8 +367,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
 {
 }
 
-static inline void xsk_buff_add_frag(struct xdp_buff *xdp)
+static inline bool xsk_buff_add_frag(struct xdp_buff *head,
+				     struct xdp_buff *xdp)
 {
+	return false;
 }
 
 static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *first)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 4e885df789ef..e28f1905a4a0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -395,32 +395,6 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring *rx_ring,
 	WARN_ON_ONCE(1);
 }
 
-static int
-i40e_add_xsk_frag(struct i40e_ring *rx_ring, struct xdp_buff *first,
-		  struct xdp_buff *xdp, const unsigned int size)
-{
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(first);
-
-	if (!xdp_buff_has_frags(first)) {
-		sinfo->nr_frags = 0;
-		sinfo->xdp_frags_size = 0;
-		xdp_buff_set_frags_flag(first);
-	}
-
-	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
-		xsk_buff_free(first);
-		return -ENOMEM;
-	}
-
-	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
-				   virt_to_page(xdp->data_hard_start),
-				   XDP_PACKET_HEADROOM, size);
-	sinfo->xdp_frags_size += size;
-	xsk_buff_add_frag(xdp);
-
-	return 0;
-}
-
 /**
  * i40e_clean_rx_irq_zc - Consumes Rx packets from the hardware ring
  * @rx_ring: Rx ring
@@ -486,8 +460,10 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 
 		if (!first)
 			first = bi;
-		else if (i40e_add_xsk_frag(rx_ring, first, bi, size))
+		else if (!xsk_buff_add_frag(first, bi)) {
+			xsk_buff_free(first);
 			break;
+		}
 
 		if (++next_to_process == count)
 			next_to_process = 0;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 334ae945d640..8975d2971bc3 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -801,35 +801,6 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	return result;
 }
 
-static int
-ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first,
-		 struct xdp_buff *xdp, const unsigned int size)
-{
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(first);
-
-	if (!size)
-		return 0;
-
-	if (!xdp_buff_has_frags(first)) {
-		sinfo->nr_frags = 0;
-		sinfo->xdp_frags_size = 0;
-		xdp_buff_set_frags_flag(first);
-	}
-
-	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
-		xsk_buff_free(first);
-		return -ENOMEM;
-	}
-
-	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
-				   virt_to_page(xdp->data_hard_start),
-				   XDP_PACKET_HEADROOM, size);
-	sinfo->xdp_frags_size += size;
-	xsk_buff_add_frag(xdp);
-
-	return 0;
-}
-
 /**
  * ice_clean_rx_irq_zc - consumes packets from the hardware ring
  * @rx_ring: AF_XDP Rx ring
@@ -895,7 +866,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
 
 		if (!first) {
 			first = xdp;
-		} else if (ice_add_xsk_frag(rx_ring, first, xdp, size)) {
+		} else if (likely(size) && !xsk_buff_add_frag(first, xdp)) {
+			xsk_buff_free(first);
 			break;
 		}
 
-- 
2.47.0