Date: Wed, 3 Apr 2024 08:28:40 -0700
In-Reply-To: <20240403152844.4061814-1-almasrymina@google.com>
References: <20240403152844.4061814-1-almasrymina@google.com>
Message-ID: <20240403152844.4061814-2-almasrymina@google.com>
Subject: [PATCH net-next v4 1/3] net: make napi_frag_unref reuse skb_page_unref
From: Mina Almasry <almasrymina@google.com>
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Steffen Klassert, Herbert Xu, David Ahern,
 Boris Pismenny, John Fastabend, Tariq Toukan, Dragos Tatulea, Simon Horman,
 Sabrina Dubroca, Ahelenia Ziemiańska, Pavan Chebbi, Christophe JAILLET,
 Yunsheng Lin, Florian Westphal, David Howells, Alexander Lobakin,
 Lorenzo Bianconi, Johannes Berg

The implementations of these two functions are almost identical. Remove
the body of napi_frag_unref() and make it a call into skb_page_unref(),
so the implementation is not duplicated.

Signed-off-by: Mina Almasry
Reviewed-by: Eric Dumazet
---
 include/linux/skbuff.h | 12 +++---------
 net/ipv4/esp4.c        |  2 +-
 net/ipv6/esp6.c        |  2 +-
 3 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 03ea36a82cdd..7dcbd27e1497 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3513,10 +3513,10 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 bool napi_pp_put_page(struct page *page);
 
 static inline void
-skb_page_unref(const struct sk_buff *skb, struct page *page)
+skb_page_unref(struct page *page, bool recycle)
 {
 #ifdef CONFIG_PAGE_POOL
-        if (skb->pp_recycle && napi_pp_put_page(page))
+        if (recycle && napi_pp_put_page(page))
                 return;
 #endif
         put_page(page);
@@ -3525,13 +3525,7 @@ skb_page_unref(const struct sk_buff *skb, struct page *page)
 static inline void
 napi_frag_unref(skb_frag_t *frag, bool recycle)
 {
-        struct page *page = skb_frag_page(frag);
-
-#ifdef CONFIG_PAGE_POOL
-        if (recycle && napi_pp_put_page(page))
-                return;
-#endif
-        put_page(page);
+        skb_page_unref(skb_frag_page(frag), recycle);
 }
 
 /**
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 3d647c9a7a21..40330253f076 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -114,7 +114,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
          */
         if (req->src != req->dst)
                 for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-                        skb_page_unref(skb, sg_page(sg));
+                        skb_page_unref(sg_page(sg), skb->pp_recycle);
 }
 
 #ifdef CONFIG_INET_ESPINTCP
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index fe8d53f5a5ee..fb431d0a3475 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -131,7 +131,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
          */
         if (req->src != req->dst)
                 for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-                        skb_page_unref(skb, sg_page(sg));
+                        skb_page_unref(sg_page(sg), skb->pp_recycle);
 }
 
 #ifdef CONFIG_INET6_ESPINTCP
-- 
2.44.0.478.gd926399ef9-goog
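
For readers who want to exercise the new calling convention outside the
kernel tree, the snippet below is a minimal userspace model of the dispatch
that skb_page_unref() now centralizes. The struct page layout and the
napi_pp_put_page()/put_page() stubs are invented for illustration only; just
the branch structure mirrors the helper in the patch.

#include <stdbool.h>
#include <stdio.h>

struct page { bool pp; int refcount; };

/* Stub: pretend the page pool takes back pool pages and refuses the rest. */
static bool napi_pp_put_page(struct page *page)
{
        if (!page->pp)
                return false;
        printf("returned to page pool\n");
        return true;
}

/* Stub for the plain refcount drop. */
static void put_page(struct page *page)
{
        page->refcount--;
        printf("put_page, refcount now %d\n", page->refcount);
}

/* Same shape as the consolidated helper: recycle replaces skb->pp_recycle. */
static void skb_page_unref(struct page *page, bool recycle)
{
        if (recycle && napi_pp_put_page(page))
                return;
        put_page(page);
}

int main(void)
{
        struct page pool_page = { .pp = true, .refcount = 1 };
        struct page slab_page = { .pp = false, .refcount = 1 };

        skb_page_unref(&pool_page, true);  /* recycled via the pool stub */
        skb_page_unref(&slab_page, true);  /* non-pp page falls back to put_page() */
        skb_page_unref(&pool_page, false); /* caller opted out of recycling */
        return 0;
}

Passing recycle explicitly is what lets callers such as esp_ssg_unref()
hand in skb->pp_recycle without the helper needing the skb itself.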
Date: Wed, 3 Apr 2024 08:28:41 -0700
In-Reply-To: <20240403152844.4061814-1-almasrymina@google.com>
References: <20240403152844.4061814-1-almasrymina@google.com>
Message-ID: <20240403152844.4061814-3-almasrymina@google.com>
Subject: [PATCH net-next v4 2/3] net: mirror skb frag ref/unref helpers
From: Mina Almasry <almasrymina@google.com>
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Steffen Klassert, Herbert Xu, David Ahern,
 Boris Pismenny, John Fastabend, Tariq Toukan, Dragos Tatulea, Simon Horman,
 Sabrina Dubroca, Ahelenia Ziemiańska, Pavan Chebbi, Christophe JAILLET,
 Yunsheng Lin, Florian Westphal, David Howells, Alexander Lobakin,
 Lorenzo Bianconi, Johannes Berg

Refactor some of the skb frag ref/unref helpers for improved clarity.

Implement napi_pp_get_page() as the mirror counterpart of
napi_pp_put_page(). Implement skb_page_ref() as the mirror of
skb_page_unref(). Improve __skb_frag_ref() to become a mirror counterpart
of __skb_frag_unref(). Previously unref could handle pp & non-pp pages,
while ref could only handle non-pp pages. Now both the ref & unref helpers
correctly handle both pp & non-pp pages.

Now that __skb_frag_ref() can handle both pp & non-pp pages, remove
skb_pp_frag_ref() and use __skb_frag_ref() instead. This lets us remove
the pp-specific handling from skb_try_coalesce().

Additionally, since __skb_frag_ref() can now handle both pp & non-pp
pages, a latent issue in skb_shift() should now be fixed. Previously this
function would take a non-pp ref but drop a pp ref on potential pp frags
(fragfrom). After this patch, skb_shift() should correctly do a pp
ref/unref on pp frags.

Signed-off-by: Mina Almasry
Reviewed-by: Dragos Tatulea
---

v4:
- Pass skb->pp_recycle instead of 'false' to __skb_frag_ref() in
  chcr_ktls.c & cassini.c.
- Add some details on the skb_shift() changes to the commit message.

v3:
- Fix build errors reported by patchwork.
- Fix the drivers/net/veth.c & tls_device_fallback.c call sites I had
  missed updating.
- Fix page_pool_ref_page(head_page) -> page_pool_ref_page(page)

---
 .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c |  2 +-
 drivers/net/ethernet/sun/cassini.c            |  4 +-
 drivers/net/veth.c                            |  2 +-
 include/linux/skbuff.h                        | 22 ++++++--
 net/core/skbuff.c                             | 53 ++++++-------------
 net/tls/tls_device_fallback.c                 |  2 +-
 6 files changed, 39 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
index 6482728794dd..d7e8deafddf1 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
@@ -1658,7 +1658,7 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb,
         for (i = 0; i < record->num_frags; i++) {
                 skb_shinfo(nskb)->frags[i] = record->frags[i];
                 /* increase the frag ref count */
-                __skb_frag_ref(&skb_shinfo(nskb)->frags[i]);
+                __skb_frag_ref(&skb_shinfo(nskb)->frags[i], nskb->pp_recycle);
         }
 
         skb_shinfo(nskb)->nr_frags = record->num_frags;
diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
index bfb903506367..31878256feee 100644
--- a/drivers/net/ethernet/sun/cassini.c
+++ b/drivers/net/ethernet/sun/cassini.c
@@ -1999,7 +1999,7 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
                 skb->len      += hlen - swivel;
 
                 skb_frag_fill_page_desc(frag, page->buffer, off, hlen - swivel);
-                __skb_frag_ref(frag);
+                __skb_frag_ref(frag, skb->pp_recycle);
 
                 /* any more data? */
                 if ((words[0] & RX_COMP1_SPLIT_PKT) && ((dlen -= hlen) > 0)) {
@@ -2023,7 +2023,7 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
                         frag++;
 
                         skb_frag_fill_page_desc(frag, page->buffer, 0, hlen);
-                        __skb_frag_ref(frag);
+                        __skb_frag_ref(frag, skb->pp_recycle);
                         RX_USED_ADD(page, hlen + cp->crc_size);
                 }
 
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index bcdfbf61eb66..6160a3e8d341 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -716,7 +716,7 @@ static void veth_xdp_get(struct xdp_buff *xdp)
                 return;
 
         for (i = 0; i < sinfo->nr_frags; i++)
-                __skb_frag_ref(&sinfo->frags[i]);
+                __skb_frag_ref(&sinfo->frags[i], false);
 }
 
 static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7dcbd27e1497..71caeee061ca 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3483,15 +3483,29 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
         return netmem_to_page(frag->netmem);
 }
 
+bool napi_pp_get_page(struct page *page);
+
+static inline void skb_page_ref(struct page *page, bool recycle)
+{
+#ifdef CONFIG_PAGE_POOL
+        if (recycle && napi_pp_get_page(page))
+                return;
+#endif
+        get_page(page);
+}
+
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: skb->pp_recycle param of the parent skb. False if no parent skb.
  *
- * Takes an additional reference on the paged fragment @frag.
+ * Takes an additional reference on the paged fragment @frag. Obtains the
+ * correct reference count depending on whether skb->pp_recycle is set and
+ * whether the frag is a page pool frag.
  */
-static inline void __skb_frag_ref(skb_frag_t *frag)
+static inline void __skb_frag_ref(skb_frag_t *frag, bool recycle)
 {
-        get_page(skb_frag_page(frag));
+        skb_page_ref(skb_frag_page(frag), recycle);
 }
 
 /**
@@ -3503,7 +3517,7 @@ static inline void __skb_frag_ref(skb_frag_t *frag)
  */
 static inline void skb_frag_ref(struct sk_buff *skb, int f)
 {
-        __skb_frag_ref(&skb_shinfo(skb)->frags[f]);
+        __skb_frag_ref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2a5ce6667bbb..ff7e450ec5ea 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1004,6 +1004,18 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 EXPORT_SYMBOL(skb_cow_data_for_xdp);
 
 #if IS_ENABLED(CONFIG_PAGE_POOL)
+bool napi_pp_get_page(struct page *page)
+{
+        page = compound_head(page);
+
+        if (!is_pp_page(page))
+                return false;
+
+        page_pool_ref_page(page);
+        return true;
+}
+EXPORT_SYMBOL(napi_pp_get_page);
+
 bool napi_pp_put_page(struct page *page)
 {
         page = compound_head(page);
@@ -1032,37 +1044,6 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data)
         return napi_pp_put_page(virt_to_page(data));
 }
 
-/**
- * skb_pp_frag_ref() - Increase fragment references of a page pool aware skb
- * @skb: page pool aware skb
- *
- * Increase the fragment reference count (pp_ref_count) of a skb. This is
- * intended to gain fragment references only for page pool aware skbs,
- * i.e. when skb->pp_recycle is true, and not for fragments in a
- * non-pp-recycling skb. It has a fallback to increase references on normal
- * pages, as page pool aware skbs may also have normal page fragments.
- */
-static int skb_pp_frag_ref(struct sk_buff *skb)
-{
-        struct skb_shared_info *shinfo;
-        struct page *head_page;
-        int i;
-
-        if (!skb->pp_recycle)
-                return -EINVAL;
-
-        shinfo = skb_shinfo(skb);
-
-        for (i = 0; i < shinfo->nr_frags; i++) {
-                head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
-                if (likely(is_pp_page(head_page)))
-                        page_pool_ref_page(head_page);
-                else
-                        page_ref_inc(head_page);
-        }
-        return 0;
-}
-
 static void skb_kfree_head(void *head, unsigned int end_offset)
 {
         if (end_offset == SKB_SMALL_HEAD_HEADROOM)
@@ -4169,7 +4150,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
                         to++;
 
                 } else {
-                        __skb_frag_ref(fragfrom);
+                        __skb_frag_ref(fragfrom, skb->pp_recycle);
                         skb_frag_page_copy(fragto, fragfrom);
                         skb_frag_off_copy(fragto, fragfrom);
                         skb_frag_size_set(fragto, todo);
@@ -4819,7 +4800,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
                 }
 
                 *nskb_frag = (i < 0) ? skb_head_frag_to_page_desc(frag_skb) : *frag;
-                __skb_frag_ref(nskb_frag);
+                __skb_frag_ref(nskb_frag, nskb->pp_recycle);
                 size = skb_frag_size(nskb_frag);
 
                 if (pos < offset) {
@@ -5950,10 +5931,8 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
         /* if the skb is not cloned this does nothing
          * since we set nr_frags to 0.
          */
-        if (skb_pp_frag_ref(from)) {
-                for (i = 0; i < from_shinfo->nr_frags; i++)
-                        __skb_frag_ref(&from_shinfo->frags[i]);
-        }
+        for (i = 0; i < from_shinfo->nr_frags; i++)
+                __skb_frag_ref(&from_shinfo->frags[i], from->pp_recycle);
 
         to->truesize += delta;
         to->len += len;
diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index 4e7228f275fa..d4000b4a1f7d 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -277,7 +277,7 @@ static int fill_sg_in(struct scatterlist *sg_in,
         for (i = 0; remaining > 0; i++) {
                 skb_frag_t *frag = &record->frags[i];
 
-                __skb_frag_ref(frag);
+                __skb_frag_ref(frag, false);
                 sg_set_page(sg_in + i, skb_frag_page(frag),
                             skb_frag_size(frag), skb_frag_off(frag));
 
-- 
2.44.0.478.gd926399ef9-goog
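
Before moving on to the final patch, a self-contained sketch of the symmetry
this patch establishes may help: ref and unref now make the same recycle
decision, so a pp frag that is ref'd and later unref'd touches the same
counter. The struct fields and the pool stubs below are invented stand-ins,
not the kernel's definitions; only the branch shape follows
skb_page_ref()/skb_page_unref() above.

#include <assert.h>
#include <stdbool.h>

struct page { bool pp; int page_ref; int pp_ref; };

static bool napi_pp_get_page(struct page *page)
{
        if (!page->pp)
                return false;
        page->pp_ref++;         /* stands in for page_pool_ref_page() */
        return true;
}

static bool napi_pp_put_page(struct page *page)
{
        if (!page->pp)
                return false;
        page->pp_ref--;         /* stands in for the page pool release path */
        return true;
}

static void get_page(struct page *page) { page->page_ref++; }
static void put_page(struct page *page) { page->page_ref--; }

/* Mirror pair: both sides honour the caller's pp_recycle decision. */
static void skb_page_ref(struct page *page, bool recycle)
{
        if (recycle && napi_pp_get_page(page))
                return;
        get_page(page);
}

static void skb_page_unref(struct page *page, bool recycle)
{
        if (recycle && napi_pp_put_page(page))
                return;
        put_page(page);
}

int main(void)
{
        struct page pp_frag = { .pp = true, .page_ref = 1, .pp_ref = 1 };
        struct page plain_frag = { .pp = false, .page_ref = 1, .pp_ref = 0 };

        /* pp-aware skb: ref and unref hit the same counter, no imbalance */
        skb_page_ref(&pp_frag, true);
        skb_page_unref(&pp_frag, true);
        assert(pp_frag.pp_ref == 1 && pp_frag.page_ref == 1);

        /* non-pp frag: both sides fall back to the plain page refcount */
        skb_page_ref(&plain_frag, false);
        skb_page_unref(&plain_frag, false);
        assert(plain_frag.page_ref == 1);
        return 0;
}

The skb_shift() imbalance described in the commit message came from taking
the get_page() path on the ref side while the unref side went through the
page pool; with both sides keyed on the same recycle flag, the counts stay
paired.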
Date: Wed, 3 Apr 2024 08:28:42 -0700
In-Reply-To: <20240403152844.4061814-1-almasrymina@google.com>
References: <20240403152844.4061814-1-almasrymina@google.com>
Message-ID: <20240403152844.4061814-4-almasrymina@google.com>
Subject: [PATCH net-next v4 3/3] net: remove napi_frag_unref
From: Mina Almasry <almasrymina@google.com>
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Steffen Klassert, Herbert Xu, David Ahern,
 Boris Pismenny, John Fastabend, Tariq Toukan, Dragos Tatulea, Simon Horman,
 Sabrina Dubroca, Ahelenia Ziemiańska, Pavan Chebbi, Christophe JAILLET,
 Yunsheng Lin, Florian Westphal, David Howells, Alexander Lobakin,
 Lorenzo Bianconi, Johannes Berg

With the changes in the previous patches, napi_frag_unref() is now
redundant. Remove it and use skb_page_unref() directly.

Signed-off-by: Mina Almasry
Reviewed-by: Dragos Tatulea
Reviewed-by: Eric Dumazet
---
 include/linux/skbuff.h | 8 +-------
 net/core/skbuff.c      | 2 +-
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 71caeee061ca..eb3d70e57166 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3536,12 +3536,6 @@ skb_page_unref(struct page *page, bool recycle)
         put_page(page);
 }
 
-static inline void
-napi_frag_unref(skb_frag_t *frag, bool recycle)
-{
-        skb_page_unref(skb_frag_page(frag), recycle);
-}
-
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
@@ -3552,7 +3546,7 @@ napi_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-        napi_frag_unref(frag, recycle);
+        skb_page_unref(skb_frag_page(frag), recycle);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ff7e450ec5ea..9aa1b40d1693 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1082,7 +1082,7 @@ static void skb_release_data(struct sk_buff *skb, enum skb_drop_reason reason)
         }
 
         for (i = 0; i < shinfo->nr_frags; i++)
-                napi_frag_unref(&shinfo->frags[i], skb->pp_recycle);
+                __skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 free_head:
         if (shinfo->frag_list)
-- 
2.44.0.478.gd926399ef9-goog
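
Taken together, the series leaves one layered pair of helpers: page-level
skb_page_ref()/skb_page_unref() and frag-level
__skb_frag_ref()/__skb_frag_unref(), with napi_frag_unref() gone. The
snippet below is an illustrative, self-contained model of that final
surface, not a copy of include/linux/skbuff.h; skb_frag_t, struct page and
the printing stubs are simplified stand-ins.

#include <stdbool.h>
#include <stdio.h>

struct page { bool pp; };
typedef struct { struct page *page; } skb_frag_t;

static struct page *skb_frag_page(const skb_frag_t *frag)
{
        return frag->page;
}

/* Page-level pair (modelled; see patch 2/3 for the real helpers). */
static void skb_page_ref(struct page *page, bool recycle)
{
        printf("%s ref\n", (recycle && page->pp) ? "page pool" : "plain");
}

static void skb_page_unref(struct page *page, bool recycle)
{
        printf("%s unref\n", (recycle && page->pp) ? "page pool" : "plain");
}

/* Frag-level wrappers are now thin forwards; napi_frag_unref() is gone. */
static void __skb_frag_ref(skb_frag_t *frag, bool recycle)
{
        skb_page_ref(skb_frag_page(frag), recycle);
}

static void __skb_frag_unref(skb_frag_t *frag, bool recycle)
{
        skb_page_unref(skb_frag_page(frag), recycle);
}

int main(void)
{
        struct page p = { .pp = true };
        skb_frag_t frag = { .page = &p };

        __skb_frag_ref(&frag, true);    /* takes a pool-aware reference */
        __skb_frag_unref(&frag, true);  /* releases it the same way */
        return 0;
}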