Date: Wed, 27 Mar 2024 14:45:19 -0700
In-Reply-To: <20240327214523.2182174-1-almasrymina@google.com>
References: <20240327214523.2182174-1-almasrymina@google.com>
Message-ID: <20240327214523.2182174-2-almasrymina@google.com>
Subject: [PATCH net-next v2 1/3] net: make napi_frag_unref reuse skb_page_unref
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Mirko Lindner, Stephen Hemminger, Tariq Toukan, Steffen Klassert, Herbert Xu, David Ahern, Boris Pismenny, John Fastabend, Dragos Tatulea

The implementations of these two functions are almost identical. Remove
the body of napi_frag_unref() and make it a call into skb_page_unref()
so the implementation is not duplicated.

Signed-off-by: Mina Almasry
---
 include/linux/skbuff.h | 12 +++---------
 net/ipv4/esp4.c        |  2 +-
 net/ipv6/esp6.c        |  2 +-
 3 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b945af8a6208..bafa5c9ff59a 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3524,10 +3524,10 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 bool napi_pp_put_page(struct page *page, bool napi_safe);

 static inline void
-skb_page_unref(const struct sk_buff *skb, struct page *page, bool napi_safe)
+skb_page_unref(struct page *page, bool recycle, bool napi_safe)
 {
 #ifdef CONFIG_PAGE_POOL
-	if (skb->pp_recycle && napi_pp_put_page(page, napi_safe))
+	if (recycle && napi_pp_put_page(page, napi_safe))
 		return;
 #endif
 	put_page(page);
@@ -3536,13 +3536,7 @@ skb_page_unref(const struct sk_buff *skb, struct page *page, bool napi_safe)
 static inline void
 napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
 {
-	struct page *page = skb_frag_page(frag);
-
-#ifdef CONFIG_PAGE_POOL
-	if (recycle && napi_pp_put_page(page, napi_safe))
-		return;
-#endif
-	put_page(page);
+	skb_page_unref(skb_frag_page(frag), recycle, napi_safe);
 }

 /**
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index d33d12421814..3d2c252c5570 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -114,7 +114,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
 	 */
 	if (req->src != req->dst)
 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-			skb_page_unref(skb, sg_page(sg), false);
+			skb_page_unref(sg_page(sg), skb->pp_recycle, false);
 }

 #ifdef CONFIG_INET_ESPINTCP
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 7371886d4f9f..4fe4f97f5420 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -131,7 +131,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
 	 */
 	if (req->src != req->dst)
 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-			skb_page_unref(skb, sg_page(sg), false);
+			skb_page_unref(sg_page(sg), skb->pp_recycle, false);
 }

 #ifdef CONFIG_INET6_ESPINTCP
-- 
2.44.0.396.g6e790dbe36-goog
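For readers following the series, here is a minimal stand-alone sketch of
the calling convention this patch introduces. The struct page and helper
bodies below are user-space toy stand-ins, not the kernel implementation;
the only point illustrated is that callers such as esp_ssg_unref() now pass
the skb's pp_recycle bit explicitly instead of the skb itself.

#include <stdbool.h>
#include <stdio.h>

/* toy stand-ins for struct page and the page pool helpers */
struct page { int refcount; bool from_page_pool; };

static bool napi_pp_put_page(struct page *page, bool napi_safe)
{
	(void)napi_safe;
	/* the real helper recycles the page; here we only report success */
	return page->from_page_pool;
}

static void put_page(struct page *page)
{
	page->refcount--;
}

/* mirrors the new signature: recycle is the caller's skb->pp_recycle bit */
static void skb_page_unref(struct page *page, bool recycle, bool napi_safe)
{
	if (recycle && napi_pp_put_page(page, napi_safe))
		return;
	put_page(page);
}

int main(void)
{
	struct page pp_page = { .refcount = 1, .from_page_pool = true };
	struct page normal  = { .refcount = 1, .from_page_pool = false };
	bool pp_recycle = true;	/* what the caller reads from its skb */

	skb_page_unref(&pp_page, pp_recycle, false);	/* page pool path claims it */
	skb_page_unref(&normal, pp_recycle, false);	/* falls back to put_page() */

	printf("pp_page ref=%d, normal ref=%d\n", pp_page.refcount, normal.refcount);
	return 0;
}

Built as plain C, the first call is claimed by the page pool path and leaves
the refcount untouched, while the second falls through to put_page().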
Date: Wed, 27 Mar 2024 14:45:20 -0700
In-Reply-To: <20240327214523.2182174-1-almasrymina@google.com>
References: <20240327214523.2182174-1-almasrymina@google.com>
Message-ID: <20240327214523.2182174-3-almasrymina@google.com>
Subject: [PATCH net-next v2 2/3] net: mirror skb frag ref/unref helpers
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Mirko Lindner, Stephen Hemminger, Tariq Toukan, Steffen Klassert, Herbert Xu, David Ahern, Boris Pismenny, John Fastabend, Dragos Tatulea

Refactor some of the skb frag ref/unref helpers for improved clarity.

Implement napi_pp_get_page() as the mirror counterpart of
napi_pp_put_page(). Implement skb_page_ref() as the mirror of
skb_page_unref(). Improve __skb_frag_ref() to become the mirror
counterpart of __skb_frag_unref(): previously unref could handle both pp
and non-pp pages, while ref could only handle non-pp pages. Now both the
ref and unref helpers correctly handle both kinds of pages.

Now that __skb_frag_ref() can handle both pp and non-pp pages, remove
skb_pp_frag_ref() and use __skb_frag_ref() instead. This lets us remove
the pp-specific handling from skb_try_coalesce().

Signed-off-by: Mina Almasry
Reviewed-by: Dragos Tatulea
---
 .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c |  2 +-
 drivers/net/ethernet/sun/cassini.c            |  4 +-
 include/linux/skbuff.h                        | 22 ++++++--
 net/core/skbuff.c                             | 54 ++++++-------------
 4 files changed, 38 insertions(+), 44 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
index 6482728794dd..f9b0a9533985 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
@@ -1658,7 +1658,7 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb,
 	for (i = 0; i < record->num_frags; i++) {
 		skb_shinfo(nskb)->frags[i] = record->frags[i];
 		/* increase the frag ref count */
-		__skb_frag_ref(&skb_shinfo(nskb)->frags[i]);
+		__skb_frag_ref(&skb_shinfo(nskb)->frags[i], false);
 	}

 	skb_shinfo(nskb)->nr_frags = record->num_frags;
diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
index bfb903506367..fabba729e1b8 100644
--- a/drivers/net/ethernet/sun/cassini.c
+++ b/drivers/net/ethernet/sun/cassini.c
@@ -1999,7 +1999,7 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 		skb->len      += hlen - swivel;

 		skb_frag_fill_page_desc(frag, page->buffer, off, hlen - swivel);
-		__skb_frag_ref(frag);
+		__skb_frag_ref(frag, false);

 		/* any more data? */
 		if ((words[0] & RX_COMP1_SPLIT_PKT) && ((dlen -= hlen) > 0)) {
@@ -2023,7 +2023,7 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 			frag++;

 			skb_frag_fill_page_desc(frag, page->buffer, 0, hlen);
-			__skb_frag_ref(frag);
+			__skb_frag_ref(frag, false);
 			RX_USED_ADD(page, hlen + cp->crc_size);
 		}

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bafa5c9ff59a..058d72a2a250 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3494,15 +3494,29 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
 	return netmem_to_page(frag->netmem);
 }

+bool napi_pp_get_page(struct page *page);
+
+static inline void skb_page_ref(struct page *page, bool recycle)
+{
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && napi_pp_get_page(page))
+		return;
+#endif
+	get_page(page);
+}
+
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: skb->pp_recycle param of the parent skb. False if no parent skb.
  *
- * Takes an additional reference on the paged fragment @frag.
+ * Takes an additional reference on the paged fragment @frag. Obtains the
+ * correct reference count depending on whether skb->pp_recycle is set and
+ * whether the frag is a page pool frag.
  */
-static inline void __skb_frag_ref(skb_frag_t *frag)
+static inline void __skb_frag_ref(skb_frag_t *frag, bool recycle)
 {
-	get_page(skb_frag_page(frag));
+	skb_page_ref(skb_frag_page(frag), recycle);
 }

 /**
@@ -3514,7 +3528,7 @@ static inline void __skb_frag_ref(skb_frag_t *frag)
  */
 static inline void skb_frag_ref(struct sk_buff *skb, int f)
 {
-	__skb_frag_ref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_ref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }

 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 17617c29be2d..5c86ecaceb6c 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1005,6 +1005,19 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 EXPORT_SYMBOL(skb_cow_data_for_xdp);

 #if IS_ENABLED(CONFIG_PAGE_POOL)
+bool napi_pp_get_page(struct page *page)
+{
+
+	page = compound_head(page);
+
+	if (!is_pp_page(page))
+		return false;
+
+	page_pool_ref_page(page);
+	return true;
+}
+EXPORT_SYMBOL(napi_pp_get_page);
+
 bool napi_pp_put_page(struct page *page, bool napi_safe)
 {
 	bool allow_direct = false;
@@ -1057,37 +1070,6 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe)
 	return napi_pp_put_page(virt_to_page(data), napi_safe);
 }

-/**
- * skb_pp_frag_ref() - Increase fragment references of a page pool aware skb
- * @skb: page pool aware skb
- *
- * Increase the fragment reference count (pp_ref_count) of a skb. This is
- * intended to gain fragment references only for page pool aware skbs,
- * i.e. when skb->pp_recycle is true, and not for fragments in a
- * non-pp-recycling skb. It has a fallback to increase references on normal
- * pages, as page pool aware skbs may also have normal page fragments.
- */
-static int skb_pp_frag_ref(struct sk_buff *skb)
-{
-	struct skb_shared_info *shinfo;
-	struct page *head_page;
-	int i;
-
-	if (!skb->pp_recycle)
-		return -EINVAL;
-
-	shinfo = skb_shinfo(skb);
-
-	for (i = 0; i < shinfo->nr_frags; i++) {
-		head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
-		if (likely(is_pp_page(head_page)))
-			page_pool_ref_page(head_page);
-		else
-			page_ref_inc(head_page);
-	}
-	return 0;
-}
-
 static void skb_kfree_head(void *head, unsigned int end_offset)
 {
 	if (end_offset == SKB_SMALL_HEAD_HEADROOM)
@@ -4196,7 +4178,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 			to++;

 		} else {
-			__skb_frag_ref(fragfrom);
+			__skb_frag_ref(fragfrom, skb->pp_recycle);
 			skb_frag_page_copy(fragto, fragfrom);
 			skb_frag_off_copy(fragto, fragfrom);
 			skb_frag_size_set(fragto, todo);
@@ -4846,7 +4828,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 		}

 		*nskb_frag = (i < 0) ? skb_head_frag_to_page_desc(frag_skb) : *frag;
-		__skb_frag_ref(nskb_frag);
+		__skb_frag_ref(nskb_frag, nskb->pp_recycle);
 		size = skb_frag_size(nskb_frag);

 		if (pos < offset) {
@@ -5977,10 +5959,8 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	/* if the skb is not cloned this does nothing
 	 * since we set nr_frags to 0.
 	 */
-	if (skb_pp_frag_ref(from)) {
-		for (i = 0; i < from_shinfo->nr_frags; i++)
-			__skb_frag_ref(&from_shinfo->frags[i]);
-	}
+	for (i = 0; i < from_shinfo->nr_frags; i++)
+		__skb_frag_ref(&from_shinfo->frags[i], from->pp_recycle);

 	to->truesize += delta;
 	to->len += len;
-- 
2.44.0.396.g6e790dbe36-goog
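To put the new ref/unref symmetry side by side, here is a matching
stand-alone sketch of the ref path added by this patch. The types and
helper bodies are again toy user-space stand-ins rather than kernel code;
the point is that a pp-aware skb may carry both page pool and normal
frags, and skb_page_ref()/__skb_frag_ref() now take the right kind of
reference for each, which is what allows skb_try_coalesce() to drop
skb_pp_frag_ref().

#include <stdbool.h>
#include <stdio.h>

/* toy page: pp_ref models pp_ref_count, refcount models the page refcount */
struct page { int refcount; int pp_ref; bool is_pp; };

static bool is_pp_page(struct page *page)        { return page->is_pp; }
static void page_pool_ref_page(struct page *p)   { p->pp_ref++; }
static void get_page(struct page *p)             { p->refcount++; }

/* mirror of napi_pp_put_page(): take a page pool reference if possible */
static bool napi_pp_get_page(struct page *page)
{
	if (!is_pp_page(page))
		return false;
	page_pool_ref_page(page);
	return true;
}

/* mirror of skb_page_unref(): recycle is the parent skb's pp_recycle bit */
static void skb_page_ref(struct page *page, bool recycle)
{
	if (recycle && napi_pp_get_page(page))
		return;
	get_page(page);
}

int main(void)
{
	struct page pp_frag    = { .refcount = 1, .pp_ref = 1, .is_pp = true };
	struct page plain_frag = { .refcount = 1, .pp_ref = 0, .is_pp = false };

	/* a pp-aware skb (pp_recycle == true) may carry both kinds of frags */
	skb_page_ref(&pp_frag, true);	/* bumps pp_ref, as skb_pp_frag_ref() did */
	skb_page_ref(&plain_frag, true);	/* falls back to get_page() */

	printf("pp_frag ref=%d pp_ref=%d, plain_frag ref=%d pp_ref=%d\n",
	       pp_frag.refcount, pp_frag.pp_ref,
	       plain_frag.refcount, plain_frag.pp_ref);
	return 0;
}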
Date: Wed, 27 Mar 2024 14:45:21 -0700
In-Reply-To: <20240327214523.2182174-1-almasrymina@google.com>
References: <20240327214523.2182174-1-almasrymina@google.com>
Message-ID: <20240327214523.2182174-4-almasrymina@google.com>
Subject: [PATCH net-next v2 3/3] net: remove napi_frag_unref
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Mirko Lindner, Stephen Hemminger, Tariq Toukan, Steffen Klassert, Herbert Xu, David Ahern, Boris Pismenny, John Fastabend, Dragos Tatulea

With the changes in the previous patches, napi_frag_unref() is now
redundant. Remove it and use skb_page_unref() directly.

Signed-off-by: Mina Almasry
Reviewed-by: Dragos Tatulea
---
 drivers/net/ethernet/marvell/sky2.c        |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c |  2 +-
 include/linux/skbuff.h                     | 14 +++++---------
 net/core/skbuff.c                          |  4 ++--
 net/tls/tls_device.c                       |  2 +-
 net/tls/tls_strp.c                         |  2 +-
 6 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 07720841a8d7..8e00a5856856 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2501,7 +2501,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,

 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag, false);
+			__skb_frag_unref(frag, false, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index eac49657bd07..4dbf29b46979 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 058d72a2a250..c3edb4a3450a 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3547,23 +3547,19 @@ skb_page_unref(struct page *page, bool recycle, bool napi_safe)
 	put_page(page);
 }

-static inline void
-napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
-{
-	skb_page_unref(skb_frag_page(frag), recycle, napi_safe);
-}
-
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
  * @recycle: recycle the page if allocated via page_pool
+ * @napi_safe: set to true if running in the same napi context as where the
+ *	       consumer would run.
  *
  * Releases a reference on the paged fragment @frag
  * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
 {
-	napi_frag_unref(frag, recycle, false);
+	skb_page_unref(skb_frag_page(frag), recycle, napi_safe);
 }

 /**
@@ -3578,7 +3574,7 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f)
 	struct skb_shared_info *shinfo = skb_shinfo(skb);

 	if (!skb_zcopy_managed(skb))
-		__skb_frag_unref(&shinfo->frags[f], skb->pp_recycle);
+		__skb_frag_unref(&shinfo->frags[f], skb->pp_recycle, false);
 }

 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 5c86ecaceb6c..a6dbba56e047 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1109,7 +1109,7 @@ static void skb_release_data(struct sk_buff *skb, enum skb_drop_reason reason,
 	}

 	for (i = 0; i < shinfo->nr_frags; i++)
-		napi_frag_unref(&shinfo->frags[i], skb->pp_recycle, napi_safe);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle, napi_safe);

 free_head:
 	if (shinfo->frag_list)
@@ -4200,7 +4200,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];

 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, skb->pp_recycle);
+		__skb_frag_unref(fragfrom, skb->pp_recycle, false);
 	}

 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index bf8ed36b1ad6..5dc6381f34fb 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -140,7 +140,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;

 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i], false);
+		__skb_frag_unref(&record->frags[i], false, false);
 	kfree(record);
 }

diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index ca1e0e198ceb..85b41f226978 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -196,7 +196,7 @@ static void tls_strp_flush_anchor_copy(struct tls_strparser *strp)
 	DEBUG_NET_WARN_ON_ONCE(atomic_read(&shinfo->dataref) != 1);

 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], false, false);
 	shinfo->nr_frags = 0;
 	if (strp->copy_mode) {
 		kfree_skb_list(shinfo->frag_list);
-- 
2.44.0.396.g6e790dbe36-goog
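Finally, a stand-alone sketch of the end state after this patch, again
with toy user-space types rather than kernel code: __skb_frag_unref() is
the single remaining frag unref helper, and callers such as
skb_release_data() pass napi_safe explicitly instead of going through
napi_frag_unref().

#include <stdbool.h>
#include <stdio.h>

/* toy stand-ins; the real code operates on skb_frag_t and struct page */
struct page { int refcount; bool is_pp; };
struct frag { struct page *page; };

static bool napi_pp_put_page(struct page *page, bool napi_safe)
{
	/* napi_safe would let the page pool recycle into its direct cache */
	(void)napi_safe;
	return page->is_pp;
}

static void put_page(struct page *page) { page->refcount--; }

static void skb_page_unref(struct page *page, bool recycle, bool napi_safe)
{
	if (recycle && napi_pp_put_page(page, napi_safe))
		return;
	put_page(page);
}

/* the single remaining frag helper: napi_safe is now an explicit argument */
static void __skb_frag_unref(struct frag *frag, bool recycle, bool napi_safe)
{
	skb_page_unref(frag->page, recycle, napi_safe);
}

int main(void)
{
	struct page pages[2] = { { 1, true }, { 1, false } };
	struct frag frags[2] = { { &pages[0] }, { &pages[1] } };
	bool pp_recycle = true;	/* skb->pp_recycle */
	bool napi_safe = true;	/* skb_release_data() passes this through */
	int i;

	for (i = 0; i < 2; i++)
		__skb_frag_unref(&frags[i], pp_recycle, napi_safe);

	printf("page0 ref=%d, page1 ref=%d\n", pages[0].refcount, pages[1].refcount);
	return 0;
}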