From: Mina Almasry
Date: Wed, 13 Dec 2023 18:05:24 -0800
Subject: [RFC PATCH net-next v1 1/4] vsock/virtio: use skb_frag_page() helper
Message-ID: <20231214020530.2267499-2-almasrymina@google.com>
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Mina Almasry, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Greg Kroah-Hartman, "Rafael J. Wysocki", Sumit Semwal,
    Christian König, Michael Chan, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Wei Fang, Shenwei Wang, Clark Wang,
    NXP Linux Team, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand,
    Yisen Zhuang, Salil Mehta, Jesse Brandeburg, Tony Nguyen, Thomas Petazzoni,
    Marcin Wojtas, Russell King, Sunil Goutham, Geetha sowjanya,
    Subbaraya Sundeep, hariprasad, Felix Fietkau, John Crispin, Sean Wang,
    Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno,
    Saeed Mahameed, Leon Romanovsky, Horatiu Vultur, UNGLinuxDriver@microchip.com,
    "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Jassi Brar,
    Ilias Apalodimas, Alexandre Torgue, Jose Abreu, Maxime Coquelin,
    Siddharth Vadapalli, Ravi Gunasekaran, Roger Quadros, Jiawen Wu,
    Mengyuan Lou, Ronak Doshi, VMware PV-Drivers Reviewers, Ryder Lee,
    Shayne Chen, Kalle Valo, Juergen Gross, Stefano Stabellini,
    Oleksandr Tyshchenko, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Stefan Hajnoczi, Stefano Garzarella, Shuah Khan, Mickaël Salaün,
    Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
    Jason Gunthorpe, Shakeel Butt, Yunsheng Lin, Willem de Bruijn

Minor fix for virtio: code wanting to access the page inside the skb
should use the skb_frag_page() helper instead of accessing bv_page
directly. This allows for extensions where the underlying memory is not
a page.

Signed-off-by: Mina Almasry
Acked-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index af5bab1acee1..bd0b413dfa3f 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -153,7 +153,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
			 * 'virt_to_phys()' later to fill the buffer descriptor.
			 * We don't touch memory at "virtual" address of this page.
			 */
-			va = page_to_virt(skb_frag->bv_page);
+			va = page_to_virt(skb_frag_page(skb_frag));
 			sg_init_one(sgs[out_sg], va + skb_frag->bv_offset,
				    skb_frag->bv_len);
-- 
2.43.0.472.g3155946c3a-goog
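For readers less familiar with the skb frag accessors, the pattern the patch
above moves to looks roughly like this sketch (illustrative only, not part of
the patch; the function name is invented, and it assumes the frag page is
mapped into the kernel linear map, as in the vsock transmit path):

#include <linux/mm.h>
#include <linux/skbuff.h>

/* Return the kernel virtual address of the first frag's data, derived the
 * same way the vsock patch above does it, but via skb_frag_page() instead
 * of reaching into frag->bv_page.
 */
static void *example_frag0_virt(struct sk_buff *skb)
{
	skb_frag_t *frag;

	if (!skb_shinfo(skb)->nr_frags)
		return NULL;

	frag = &skb_shinfo(skb)->frags[0];
	return page_to_virt(skb_frag_page(frag)) + skb_frag_off(frag);
}

Funneling every access through skb_frag_page() is what lets the later patches
in this series change what bv_page actually holds without touching callers
like this one.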
From: Mina Almasry
Date: Wed, 13 Dec 2023 18:05:25 -0800
Subject: [RFC PATCH net-next v1 2/4] net: introduce abstraction for network memory
Message-ID: <20231214020530.2267499-3-almasrymina@google.com>
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Mina Almasry, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, ... (same Cc list as patch 1/4)

Add the netmem_t type, an abstraction for network memory.

To add support for new memory types to the net stack, we must first
abstract the current memory type from the net stack. Currently parts of
the net stack use struct page directly:

- page_pool
- drivers
- skb_frag_t

Originally the plan was to reuse struct page* for the new memory types,
and to set the LSB on the page* to indicate that it's not really a page.
However, for compiler type checking we need to introduce a new type.

netmem_t is introduced to abstract the underlying memory type. Currently
it's a no-op abstraction that is always a struct page underneath. In
parallel there is an ongoing effort to add support for devmem to the net
stack:

https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/

Signed-off-by: Mina Almasry
---
 include/net/netmem.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 include/net/netmem.h

diff --git a/include/net/netmem.h b/include/net/netmem.h
new file mode 100644
index 000000000000..e4309242d8be
--- /dev/null
+++ b/include/net/netmem.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ *	netmem.h
+ *	Author:	Mina Almasry
+ *	Copyright (C) 2023 Google LLC
+ */
+
+#ifndef _NET_NETMEM_H
+#define _NET_NETMEM_H
+
+struct netmem {
+	union {
+		struct page page;
+
+		/* Stub to prevent compiler implicitly converting from page*
+		 * to netmem_t* and vice versa.
+		 *
+		 * Other memory type(s) net stack would like to support
+		 * can be added to this union.
+		 */
+		void *addr;
+	};
+};
+
+static inline struct page *netmem_to_page(struct netmem *netmem)
+{
+	return &netmem->page;
+}
+
+static inline struct netmem *page_to_netmem(struct page *page)
+{
+	return (struct netmem *)page;
+}
+
+#endif /* _NET_NETMEM_H */
-- 
2.43.0.472.g3155946c3a-goog
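As a rough sketch of how the new type is meant to be used at a boundary that
is still page-only (illustrative only, not part of the patch; the example_*
names are invented):

#include <linux/gfp.h>
#include <net/netmem.h>

/* A hypothetical page-backed allocator boundary: internally this is still
 * a plain struct page, but callers only ever see struct netmem *, so they
 * cannot silently assume page semantics.
 */
static struct netmem *example_alloc_netmem(gfp_t gfp)
{
	struct page *page = alloc_page(gfp);

	return page ? page_to_netmem(page) : NULL;
}

static void example_free_netmem(struct netmem *netmem)
{
	/* Today every netmem is page-backed, so unwrapping is a no-op cast. */
	__free_page(netmem_to_page(netmem));
}

Because netmem_to_page()/page_to_netmem() compile down to pointer casts, the
indirection costs nothing while the union only contains a page; the value is
purely the compiler type checking the commit message describes.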
From: Mina Almasry
Date: Wed, 13 Dec 2023 18:05:26 -0800
Subject: [RFC PATCH net-next v1 3/4] net: add netmem_t to skb_frag_t
Message-ID: <20231214020530.2267499-4-almasrymina@google.com>
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Mina Almasry, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, ... (same Cc list as patch 1/4)

Use netmem_t instead of page directly in skb_frag_t. Currently netmem_t
is always a struct page underneath, but the abstraction allows efforts
to add support for skb frags not backed by pages.

There is unfortunately 1 instance where the skb_frag_t is assumed to be
a bio_vec in kcm. For this case, add a debug assert that the skb frag is
indeed backed by a page, and do a cast.
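The cast described above is only safe while the new struct skb_frag stays
field-for-field compatible with struct bio_vec for page-backed frags. A
compile-time assertion along the following lines could document that
assumption (a sketch, not part of the patch; the helper name is invented):

#include <linux/build_bug.h>
#include <linux/bvec.h>
#include <linux/skbuff.h>
#include <linux/stddef.h>

/* Illustrative only: assert that skb_frag_t remains layout-compatible with
 * struct bio_vec, which is what the (const struct bio_vec *) cast in
 * kcm_write_msgs() relies on when the frags are page-backed.
 */
static inline void example_assert_frag_layout(void)
{
	BUILD_BUG_ON(sizeof(skb_frag_t) != sizeof(struct bio_vec));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_page) !=
		     offsetof(struct bio_vec, bv_page));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_len) !=
		     offsetof(struct bio_vec, bv_len));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_offset) !=
		     offsetof(struct bio_vec, bv_offset));
}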
Signed-off-by: Mina Almasry
---
 include/linux/skbuff.h | 11 ++++++++---
 net/kcm/kcmsock.c      |  9 +++++++--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..6d681c40213c 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #endif
 #include
 #include
+#include <net/netmem.h>
 
 /**
  * DOC: skb checksums
@@ -359,7 +360,11 @@ extern int sysctl_max_skb_frags;
  */
 #define GSO_BY_FRAGS	0xFFFF
 
-typedef struct bio_vec skb_frag_t;
+typedef struct skb_frag {
+	struct netmem *bv_page;
+	unsigned int bv_len;
+	unsigned int bv_offset;
+} skb_frag_t;
 
 /**
  * skb_frag_size() - Returns the size of a skb fragment
@@ -2435,7 +2440,7 @@ static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
 					   struct page *page,
 					   int off, int size)
 {
-	frag->bv_page = page;
+	frag->bv_page = page_to_netmem(page);
 	frag->bv_offset = off;
 	skb_frag_size_set(frag, size);
 }
@@ -3422,7 +3427,7 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	return netmem_to_page(frag->bv_page);
 }
 
 /**
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 65d1f6755f98..926349eeeaf6 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -636,9 +636,14 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 			msize += skb_shinfo(skb)->frags[i].bv_len;
 
+		/* The cast to struct bio_vec* here assumes the frags are
+		 * struct page based.
+		 */
+		DEBUG_NET_WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]));
+
 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE,
-			      skb_shinfo(skb)->frags, skb_shinfo(skb)->nr_frags,
-			      msize);
+			      (const struct bio_vec *)skb_shinfo(skb)->frags,
+			      skb_shinfo(skb)->nr_frags, msize);
 		iov_iter_advance(&msg.msg_iter, txm->frag_offset);
 
 		do {
-- 
2.43.0.472.g3155946c3a-goog

From: Mina Almasry
Date: Wed, 13 Dec 2023 18:05:27 -0800
Subject: [RFC PATCH net-next v1 4/4] net: page_pool: use netmem_t instead of struct page in API
Message-ID: <20231214020530.2267499-5-almasrymina@google.com>
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Mina Almasry, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, ... (same Cc list as patch 1/4)

Replace struct page in the page_pool API with the new netmem_t. Currently
the changes are to the API layer only. The internals of the page_pool &
drivers still convert the netmem_t to a page and use it regularly.

Drivers that don't support other memory types than page can still use
netmem_t as page only. Drivers that add support for other memory types
such as devmem TCP will need to be modified to use the generic netmem_t
rather than assuming the underlying memory is always a page.

Similarly, the page_pool (and future pools) that add support for non-page
memory will need to use the generic netmem_t. page_pools that only support
one memory type (page or otherwise) can use that memory type internally,
and convert it to netmem_t before delivering it to the driver for a more
consistent API exposed to the drivers.

Signed-off-by: Mina Almasry
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 15 ++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  8 ++-
 drivers/net/ethernet/engleder/tsnep_main.c    | 22 +++---
 drivers/net/ethernet/freescale/fec_main.c     | 33 ++++++---
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   | 14 ++--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   |  2 +-
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   | 15 ++--
 drivers/net/ethernet/marvell/mvneta.c         | 24 ++++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 18 +++--
 .../marvell/octeontx2/nic/otx2_common.c       |  8 ++-
 drivers/net/ethernet/mediatek/mtk_eth_soc.c   | 22 +++---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 27 ++++---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 28 ++++----
 .../ethernet/microchip/lan966x/lan966x_fdma.c | 16 +++--
 drivers/net/ethernet/microsoft/mana/mana_en.c | 10 +--
 drivers/net/ethernet/socionext/netsec.c       | 25 ++++---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 48 ++++++++-----
 drivers/net/ethernet/ti/cpsw.c                | 11 +--
 drivers/net/ethernet/ti/cpsw_new.c            | 11 +--
 drivers/net/ethernet/ti/cpsw_priv.c           | 12 ++--
 drivers/net/ethernet/wangxun/libwx/wx_lib.c   | 18 +++--
 drivers/net/veth.c                            |  5 +-
 drivers/net/vmxnet3/vmxnet3_drv.c             |  7 +-
 drivers/net/vmxnet3/vmxnet3_xdp.c             | 20 +++---
 drivers/net/wireless/mediatek/mt76/dma.c      |  4 +-
 drivers/net/wireless/mediatek/mt76/mt76.h     |  5 +-
 .../net/wireless/mediatek/mt76/mt7915/mmio.c  |  4 +-
 drivers/net/xen-netfront.c                    |  4 +-
 include/net/page_pool/helpers.h               | 72 ++++++++++---------
 include/net/page_pool/types.h                 |  9 +--
 net/bpf/test_run.c                            |  2 +-
 net/core/page_pool.c                          | 39 +++++-----
 net/core/skbuff.c                             |  2 +-
 net/core/xdp.c                                |  3 +-
 34 files changed, 330 insertions(+), 233 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index be3fa0545fdc..9e37da8ed389 100644
---
a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -807,16 +807,17 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt = *bp, dma_addr_t *mapping, struct page *page; =20 if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { - page =3D page_pool_dev_alloc_frag(rxr->page_pool, offset, - BNXT_RX_PAGE_SIZE); + page =3D netmem_to_page(page_pool_dev_alloc_frag(rxr->page_pool, + offset, + BNXT_RX_PAGE_SIZE)); } else { - page =3D page_pool_dev_alloc_pages(rxr->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rxr->page_pool)); *offset =3D 0; } if (!page) return NULL; =20 - *mapping =3D page_pool_get_dma_addr(page) + *offset; + *mapping =3D page_pool_get_dma_addr(page_to_netmem(page)) + *offset; return page; } =20 @@ -1040,7 +1041,7 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct = bnxt *bp, bp->rx_dir); skb =3D napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE); if (!skb) { - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); return NULL; } skb_mark_for_recycle(skb); @@ -1078,7 +1079,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *= bp, =20 skb =3D napi_alloc_skb(&rxr->bnapi->napi, payload); if (!skb) { - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); return NULL; } =20 @@ -3283,7 +3284,7 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *b= p, int ring_nr) rx_agg_buf->page =3D NULL; __clear_bit(i, rxr->rx_agg_bmap); =20 - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); } =20 skip_rx_agg_free: diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/et= hernet/broadcom/bnxt/bnxt_xdp.c index 037624f17aea..3b6b09f835e4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -161,7 +161,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi = *bnapi, int budget) for (j =3D 0; j < frags; j++) { tx_cons =3D NEXT_TX(tx_cons); tx_buf =3D &txr->tx_buf_ring[RING_TX(bp, tx_cons)]; - page_pool_recycle_direct(rxr->page_pool, tx_buf->page); + page_pool_recycle_direct(rxr->page_pool, + page_to_netmem(tx_buf->page)); } } else { bnxt_sched_reset_txr(bp, txr, tx_cons); @@ -219,7 +220,7 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info = *rxr, for (i =3D 0; i < shinfo->nr_frags; i++) { struct page *page =3D skb_frag_page(&shinfo->frags[i]); =20 - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); } shinfo->nr_frags =3D 0; } @@ -320,7 +321,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_i= nfo *rxr, u16 cons, =20 if (xdp_do_redirect(bp->dev, &xdp, xdp_prog)) { trace_xdp_exception(bp->dev, xdp_prog, act); - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, + page_to_netmem(page)); return true; } =20 diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ether= net/engleder/tsnep_main.c index df40c720e7b2..ce32dcf7c6f8 100644 --- a/drivers/net/ethernet/engleder/tsnep_main.c +++ b/drivers/net/ethernet/engleder/tsnep_main.c @@ -641,7 +641,7 @@ static int tsnep_xdp_tx_map(struct xdp_frame *xdpf, str= uct tsnep_tx *tx, } else { page =3D unlikely(frag) ? 
skb_frag_page(frag) : virt_to_page(xdpf->data); - dma =3D page_pool_get_dma_addr(page); + dma =3D page_pool_get_dma_addr(page_to_netmem(page)); if (unlikely(frag)) dma +=3D skb_frag_off(frag); else @@ -940,7 +940,8 @@ static void tsnep_rx_ring_cleanup(struct tsnep_rx *rx) for (i =3D 0; i < TSNEP_RING_SIZE; i++) { entry =3D &rx->entry[i]; if (!rx->xsk_pool && entry->page) - page_pool_put_full_page(rx->page_pool, entry->page, + page_pool_put_full_page(rx->page_pool, + page_to_netmem(entry->page), false); if (rx->xsk_pool && entry->xdp) xsk_buff_free(entry->xdp); @@ -1066,7 +1067,8 @@ static void tsnep_rx_free_page_buffer(struct tsnep_rx= *rx) */ page =3D rx->page_buffer; while (*page) { - page_pool_put_full_page(rx->page_pool, *page, false); + page_pool_put_full_page(rx->page_pool, page_to_netmem(*page), + false); *page =3D NULL; page++; } @@ -1080,7 +1082,8 @@ static int tsnep_rx_alloc_page_buffer(struct tsnep_rx= *rx) * be filled completely */ for (i =3D 0; i < TSNEP_RING_SIZE - 1; i++) { - rx->page_buffer[i] =3D page_pool_dev_alloc_pages(rx->page_pool); + rx->page_buffer[i] =3D + netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (!rx->page_buffer[i]) { tsnep_rx_free_page_buffer(rx); =20 @@ -1096,7 +1099,7 @@ static void tsnep_rx_set_page(struct tsnep_rx *rx, st= ruct tsnep_rx_entry *entry, { entry->page =3D page; entry->len =3D TSNEP_MAX_RX_BUF_SIZE; - entry->dma =3D page_pool_get_dma_addr(entry->page); + entry->dma =3D page_pool_get_dma_addr(page_to_netmem(entry->page)); entry->desc->rx =3D __cpu_to_le64(entry->dma + TSNEP_RX_OFFSET); } =20 @@ -1105,7 +1108,7 @@ static int tsnep_rx_alloc_buffer(struct tsnep_rx *rx,= int index) struct tsnep_rx_entry *entry =3D &rx->entry[index]; struct page *page; =20 - page =3D page_pool_dev_alloc_pages(rx->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (unlikely(!page)) return -ENOMEM; tsnep_rx_set_page(rx, entry, page); @@ -1296,7 +1299,8 @@ static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, s= truct bpf_prog *prog, sync =3D xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM; sync =3D max(sync, length); - page_pool_put_page(rx->page_pool, virt_to_head_page(xdp->data), + page_pool_put_page(rx->page_pool, + page_to_netmem(virt_to_head_page(xdp->data)), sync, true); return true; } @@ -1400,7 +1404,7 @@ static void tsnep_rx_page(struct tsnep_rx *rx, struct= napi_struct *napi, =20 napi_gro_receive(napi, skb); } else { - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); =20 rx->dropped++; } @@ -1599,7 +1603,7 @@ static int tsnep_rx_poll_zc(struct tsnep_rx *rx, stru= ct napi_struct *napi, } } =20 - page =3D page_pool_dev_alloc_pages(rx->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (page) { memcpy(page_address(page) + TSNEP_RX_OFFSET, entry->xdp->data - TSNEP_RX_INLINE_METADATA_SIZE, diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethern= et/freescale/fec_main.c index bae9536de767..4da3e6161a73 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -996,7 +996,9 @@ static void fec_enet_bd_init(struct net_device *dev) struct page *page =3D txq->tx_buf[i].buf_p; =20 if (page) - page_pool_put_page(page->pp, page, 0, false); + page_pool_put_page(page->pp, + page_to_netmem(page), + 0, false); } =20 txq->tx_buf[i].buf_p =3D NULL; @@ -1520,7 +1522,8 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_= id, int budget) 
xdp_return_frame_rx_napi(xdpf); } else { /* recycle pages of XDP_TX frames */ /* The dma_sync_size =3D 0 as XDP_TX has already synced DMA for_device = */ - page_pool_put_page(page->pp, page, 0, true); + page_pool_put_page(page->pp, page_to_netmem(page), 0, + true); } =20 txq->tx_buf[index].buf_p =3D NULL; @@ -1568,12 +1571,13 @@ static void fec_enet_update_cbd(struct fec_enet_pri= v_rx_q *rxq, struct page *new_page; dma_addr_t phys_addr; =20 - new_page =3D page_pool_dev_alloc_pages(rxq->page_pool); + new_page =3D netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); WARN_ON(!new_page); rxq->rx_skb_info[index].page =3D new_page; =20 rxq->rx_skb_info[index].offset =3D FEC_ENET_XDP_HEADROOM; - phys_addr =3D page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM; + phys_addr =3D page_pool_get_dma_addr(page_to_netmem(new_page)) + + FEC_ENET_XDP_HEADROOM; bdp->cbd_bufaddr =3D cpu_to_fec32(phys_addr); } =20 @@ -1633,7 +1637,8 @@ fec_enet_run_xdp(struct fec_enet_private *fep, struct= bpf_prog *prog, xdp_err: ret =3D FEC_ENET_XDP_CONSUMED; page =3D virt_to_head_page(xdp->data); - page_pool_put_page(rxq->page_pool, page, sync, true); + page_pool_put_page(rxq->page_pool, page_to_netmem(page), sync, + true); if (act !=3D XDP_DROP) trace_xdp_exception(fep->netdev, prog, act); break; @@ -1761,7 +1766,8 @@ fec_enet_rx_queue(struct net_device *ndev, int budget= , u16 queue_id) */ skb =3D build_skb(page_address(page), PAGE_SIZE); if (unlikely(!skb)) { - page_pool_recycle_direct(rxq->page_pool, page); + page_pool_recycle_direct(rxq->page_pool, + page_to_netmem(page)); ndev->stats.rx_dropped++; =20 netdev_err_once(ndev, "build_skb failed!\n"); @@ -3264,7 +3270,9 @@ static void fec_enet_free_buffers(struct net_device *= ndev) for (q =3D 0; q < fep->num_rx_queues; q++) { rxq =3D fep->rx_queue[q]; for (i =3D 0; i < rxq->bd.ring_size; i++) - page_pool_put_full_page(rxq->page_pool, rxq->rx_skb_info[i].page, false= ); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(rxq->rx_skb_info[i].page), + false); =20 for (i =3D 0; i < XDP_STATS_TOTAL; i++) rxq->stats[i] =3D 0; @@ -3293,7 +3301,9 @@ static void fec_enet_free_buffers(struct net_device *= ndev) } else { struct page *page =3D txq->tx_buf[i].buf_p; =20 - page_pool_put_page(page->pp, page, 0, false); + page_pool_put_page(page->pp, + page_to_netmem(page), 0, + false); } =20 txq->tx_buf[i].buf_p =3D NULL; @@ -3390,11 +3400,12 @@ fec_enet_alloc_rxq_buffers(struct net_device *ndev,= unsigned int queue) } =20 for (i =3D 0; i < rxq->bd.ring_size; i++) { - page =3D page_pool_dev_alloc_pages(rxq->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); if (!page) goto err_alloc; =20 - phys_addr =3D page_pool_get_dma_addr(page) + FEC_ENET_XDP_HEADROOM; + phys_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + + FEC_ENET_XDP_HEADROOM; bdp->cbd_bufaddr =3D cpu_to_fec32(phys_addr); =20 rxq->rx_skb_info[i].page =3D page; @@ -3856,7 +3867,7 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_pr= ivate *fep, struct page *page; =20 page =3D virt_to_page(xdpb->data); - dma_addr =3D page_pool_get_dma_addr(page) + + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + (xdpb->data - xdpb->data_hard_start); dma_sync_single_for_device(&fep->pdev->dev, dma_addr, dma_sync_len, DMA_BIDIRECTIONAL); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/= ethernet/hisilicon/hns3/hns3_enet.c index b618797a7e8d..0ab015cb1b51 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ 
b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -3371,15 +3371,15 @@ static int hns3_alloc_buffer(struct hns3_enet_ring = *ring, struct page *p; =20 if (ring->page_pool) { - p =3D page_pool_dev_alloc_frag(ring->page_pool, - &cb->page_offset, - hns3_buf_size(ring)); + p =3D netmem_to_page(page_pool_dev_alloc_frag(ring->page_pool, + &cb->page_offset, + hns3_buf_size(ring))); if (unlikely(!p)) return -ENOMEM; =20 cb->priv =3D p; cb->buf =3D page_address(p); - cb->dma =3D page_pool_get_dma_addr(p); + cb->dma =3D page_pool_get_dma_addr(page_to_netmem(p)); cb->type =3D DESC_TYPE_PP_FRAG; cb->reuse_flag =3D 0; return 0; @@ -3411,7 +3411,8 @@ static void hns3_free_buffer(struct hns3_enet_ring *r= ing, if (cb->type & DESC_TYPE_PAGE && cb->pagecnt_bias) __page_frag_cache_drain(cb->priv, cb->pagecnt_bias); else if (cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, cb->priv, + page_pool_put_full_page(ring->page_pool, + page_to_netmem(cb->priv), false); } memset(cb, 0, sizeof(*cb)); @@ -4058,7 +4059,8 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring= , unsigned int length, if (dev_page_is_reusable(desc_cb->priv)) desc_cb->reuse_flag =3D 1; else if (desc_cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, desc_cb->priv, + page_pool_put_full_page(ring->page_pool, + page_to_netmem(desc_cb->priv), false); else /* This page cannot be reused so discard it */ __page_frag_cache_drain(desc_cb->priv, diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 1f728a9004d9..bcef8b49652a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -336,7 +336,7 @@ static void idpf_rx_page_rel(struct idpf_queue *rxq, st= ruct idpf_rx_buf *rx_buf) if (unlikely(!rx_buf->page)) return; =20 - page_pool_put_full_page(rxq->pp, rx_buf->page, false); + page_pool_put_full_page(rxq->pp, page_to_netmem(rx_buf->page), false); =20 rx_buf->page =3D NULL; rx_buf->page_offset =3D 0; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.h index df76493faa75..5efe4920326b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -932,18 +932,19 @@ static inline dma_addr_t idpf_alloc_page(struct page_= pool *pool, unsigned int buf_size) { if (buf_size =3D=3D IDPF_RX_BUF_2048) - buf->page =3D page_pool_dev_alloc_frag(pool, &buf->page_offset, - buf_size); + buf->page =3D netmem_to_page(page_pool_dev_alloc_frag(pool, + &buf->page_offset, + buf_size)); else - buf->page =3D page_pool_dev_alloc_pages(pool); + buf->page =3D netmem_to_page(page_pool_dev_alloc_pages(pool)); =20 if (!buf->page) return DMA_MAPPING_ERROR; =20 buf->truesize =3D buf_size; =20 - return page_pool_get_dma_addr(buf->page) + buf->page_offset + - pool->p.offset; + return page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset + pool->p.offset; } =20 /** @@ -952,7 +953,7 @@ static inline dma_addr_t idpf_alloc_page(struct page_po= ol *pool, */ static inline void idpf_rx_put_page(struct idpf_rx_buf *rx_buf) { - page_pool_put_page(rx_buf->page->pp, rx_buf->page, + page_pool_put_page(rx_buf->page->pp, page_to_netmem(rx_buf->page), rx_buf->truesize, true); rx_buf->page =3D NULL; } @@ -968,7 +969,7 @@ static inline void idpf_rx_sync_for_cpu(struct idpf_rx_= buf *rx_buf, u32 len) struct page_pool *pp =3D page->pp; =20 dma_sync_single_range_for_cpu(pp->p.dev, - page_pool_get_dma_addr(page), + 
page_pool_get_dma_addr(page_to_netmem(page)), rx_buf->page_offset + pp->p.offset, len, page_pool_get_dma_dir(pp)); } diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/m= arvell/mvneta.c index 29aac327574d..f20c09fa6764 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1940,12 +1940,13 @@ static int mvneta_rx_refill(struct mvneta_port *pp, dma_addr_t phys_addr; struct page *page; =20 - page =3D page_pool_alloc_pages(rxq->page_pool, - gfp_mask | __GFP_NOWARN); + page =3D netmem_to_page(page_pool_alloc_pages(rxq->page_pool, + gfp_mask | __GFP_NOWARN)); if (!page) return -ENOMEM; =20 - phys_addr =3D page_pool_get_dma_addr(page) + pp->rx_offset_correction; + phys_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + + pp->rx_offset_correction; mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq); =20 return 0; @@ -2013,7 +2014,8 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *= pp, if (!data || !(rx_desc->buf_phys_addr)) continue; =20 - page_pool_put_full_page(rxq->page_pool, data, false); + page_pool_put_full_page(rxq->page_pool, page_to_netmem(data), + false); } if (xdp_rxq_info_is_reg(&rxq->xdp_rxq)) xdp_rxq_info_unreg(&rxq->xdp_rxq); @@ -2080,10 +2082,12 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct = mvneta_rx_queue *rxq, =20 for (i =3D 0; i < sinfo->nr_frags; i++) page_pool_put_full_page(rxq->page_pool, - skb_frag_page(&sinfo->frags[i]), true); + page_to_netmem(skb_frag_page(&sinfo->frags[i])), + true); =20 out: - page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), + page_pool_put_page(rxq->page_pool, + page_to_netmem(virt_to_head_page(xdp->data)), sync_len, true); } =20 @@ -2132,7 +2136,7 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struc= t mvneta_tx_queue *txq, } else { page =3D unlikely(frag) ? 
skb_frag_page(frag) : virt_to_page(xdpf->data); - dma_addr =3D page_pool_get_dma_addr(page); + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)); if (unlikely(frag)) dma_addr +=3D skb_frag_off(frag); else @@ -2386,7 +2390,8 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, if (page_is_pfmemalloc(page)) xdp_buff_set_frag_pfmemalloc(xdp); } else { - page_pool_put_full_page(rxq->page_pool, page, true); + page_pool_put_full_page(rxq->page_pool, page_to_netmem(page), + true); } *size -=3D len; } @@ -2471,7 +2476,8 @@ static int mvneta_rx_swbm(struct napi_struct *napi, } else { if (unlikely(!xdp_buf.data_hard_start)) { rx_desc->buf_phys_addr =3D 0; - page_pool_put_full_page(rxq->page_pool, page, + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), true); goto next; } diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/= ethernet/marvell/mvpp2/mvpp2_main.c index 93137606869e..32ae784b1484 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -361,7 +361,7 @@ static void *mvpp2_frag_alloc(const struct mvpp2_bm_poo= l *pool, struct page_pool *page_pool) { if (page_pool) - return page_pool_dev_alloc_pages(page_pool); + return netmem_to_page(page_pool_dev_alloc_pages(page_pool)); =20 if (likely(pool->frag_size <=3D PAGE_SIZE)) return netdev_alloc_frag(pool->frag_size); @@ -373,7 +373,9 @@ static void mvpp2_frag_free(const struct mvpp2_bm_pool = *pool, struct page_pool *page_pool, void *data) { if (page_pool) - page_pool_put_full_page(page_pool, virt_to_head_page(data), false); + page_pool_put_full_page(page_pool, + page_to_netmem(virt_to_head_page(data)), + false); else if (likely(pool->frag_size <=3D PAGE_SIZE)) skb_free_frag(data); else @@ -750,7 +752,7 @@ static void *mvpp2_buf_alloc(struct mvpp2_port *port, =20 if (page_pool) { page =3D (struct page *)data; - dma_addr =3D page_pool_get_dma_addr(page); + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)); data =3D page_to_virt(page); } else { dma_addr =3D dma_map_single(port->dev->dev.parent, data, @@ -3687,7 +3689,7 @@ mvpp2_xdp_submit_frame(struct mvpp2_port *port, u16 t= xq_id, /* XDP_TX */ struct page *page =3D virt_to_page(xdpf->data); =20 - dma_addr =3D page_pool_get_dma_addr(page) + + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + sizeof(*xdpf) + xdpf->headroom; dma_sync_single_for_device(port->dev->dev.parent, dma_addr, xdpf->len, DMA_BIDIRECTIONAL); @@ -3809,7 +3811,8 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_pro= g *prog, if (unlikely(err)) { ret =3D MVPP2_XDP_DROPPED; page =3D virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, + true); } else { ret =3D MVPP2_XDP_REDIR; stats->xdp_redirect++; @@ -3819,7 +3822,8 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_pro= g *prog, ret =3D mvpp2_xdp_xmit_back(port, xdp); if (ret !=3D MVPP2_XDP_TX) { page =3D virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, + true); } break; default: @@ -3830,7 +3834,7 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_pro= g *prog, fallthrough; case XDP_DROP: page =3D virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, true); ret =3D MVPP2_XDP_DROPPED; stats->xdp_drop++; break; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/dri= 
vers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 7ca6941ea0b9..bbff52a24cab 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -530,11 +530,12 @@ static int otx2_alloc_pool_buf(struct otx2_nic *pfvf,= struct otx2_pool *pool, sz =3D SKB_DATA_ALIGN(pool->rbsize); sz =3D ALIGN(sz, OTX2_ALIGN); =20 - page =3D page_pool_alloc_frag(pool->page_pool, &offset, sz, GFP_ATOMIC); + page =3D netmem_to_page(page_pool_alloc_frag(pool->page_pool, + &offset, sz, GFP_ATOMIC)); if (unlikely(!page)) return -ENOMEM; =20 - *dma =3D page_pool_get_dma_addr(page) + offset; + *dma =3D page_pool_get_dma_addr(page_to_netmem(page)) + offset; return 0; } =20 @@ -1208,7 +1209,8 @@ void otx2_free_bufs(struct otx2_nic *pfvf, struct otx= 2_pool *pool, page =3D virt_to_head_page(phys_to_virt(pa)); =20 if (pool->page_pool) { - page_pool_put_full_page(pool->page_pool, page, true); + page_pool_put_full_page(pool->page_pool, page_to_netmem(page), + true); } else { dma_unmap_page_attrs(pfvf->dev, iova, size, DMA_FROM_DEVICE, diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethe= rnet/mediatek/mtk_eth_soc.c index a6e91573f8da..68146071a919 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1735,11 +1735,13 @@ static void *mtk_page_pool_get_buff(struct page_poo= l *pp, dma_addr_t *dma_addr, { struct page *page; =20 - page =3D page_pool_alloc_pages(pp, gfp_mask | __GFP_NOWARN); + page =3D netmem_to_page(page_pool_alloc_pages(pp, + gfp_mask | __GFP_NOWARN)); if (!page) return NULL; =20 - *dma_addr =3D page_pool_get_dma_addr(page) + MTK_PP_HEADROOM; + *dma_addr =3D + page_pool_get_dma_addr(page_to_netmem(page)) + MTK_PP_HEADROOM; return page_address(page); } =20 @@ -1747,7 +1749,8 @@ static void mtk_rx_put_buff(struct mtk_rx_ring *ring,= void *data, bool napi) { if (ring->page_pool) page_pool_put_full_page(ring->page_pool, - virt_to_head_page(data), napi); + page_to_netmem(virt_to_head_page(data)), + napi); else skb_free_frag(data); } @@ -1771,7 +1774,7 @@ static int mtk_xdp_frame_map(struct mtk_eth *eth, str= uct net_device *dev, } else { struct page *page =3D virt_to_head_page(data); =20 - txd_info->addr =3D page_pool_get_dma_addr(page) + + txd_info->addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + sizeof(struct xdp_frame) + headroom; dma_sync_single_for_device(eth->dma_dev, txd_info->addr, txd_info->size, DMA_BIDIRECTIONAL); @@ -1985,7 +1988,8 @@ static u32 mtk_xdp_run(struct mtk_eth *eth, struct mt= k_rx_ring *ring, } =20 page_pool_put_full_page(ring->page_pool, - virt_to_head_page(xdp->data), true); + page_to_netmem(virt_to_head_page(xdp->data)), + true); =20 update_stats: u64_stats_update_begin(&hw_stats->syncp); @@ -2074,8 +2078,9 @@ static int mtk_poll_rx(struct napi_struct *napi, int = budget, } =20 dma_sync_single_for_cpu(eth->dma_dev, - page_pool_get_dma_addr(page) + MTK_PP_HEADROOM, - pktlen, page_pool_get_dma_dir(ring->page_pool)); + page_pool_get_dma_addr(page_to_netmem(page)) + + MTK_PP_HEADROOM, + pktlen, page_pool_get_dma_dir(ring->page_pool)); =20 xdp_init_buff(&xdp, PAGE_SIZE, &ring->xdp_q); xdp_prepare_buff(&xdp, data, MTK_PP_HEADROOM, pktlen, @@ -2092,7 +2097,8 @@ static int mtk_poll_rx(struct napi_struct *napi, int = budget, skb =3D build_skb(data, PAGE_SIZE); if (unlikely(!skb)) { page_pool_put_full_page(ring->page_pool, - page, true); + page_to_netmem(page), + true); netdev->stats.rx_dropped++; goto skip_rx; } diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net= /ethernet/mellanox/mlx5/core/en/xdp.c index e2e7d82cfca4..c8275e4b6cae 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -122,7 +122,8 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5= e_rq *rq, * mode. */ =20 - dma_addr =3D page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + + (xdpf->data - (void *)xdpf); dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd->len, DMA_BIDIRECTI= ONAL); =20 if (xdptxd->has_frags) { @@ -134,8 +135,8 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5= e_rq *rq, dma_addr_t addr; u32 len; =20 - addr =3D page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + addr =3D page_pool_get_dma_addr(page_to_netmem(skb_frag_page(frag))) + + skb_frag_off(frag); len =3D skb_frag_size(frag); dma_sync_single_for_device(sq->pdev, addr, len, DMA_BIDIRECTIONAL); @@ -458,9 +459,12 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, str= uct mlx5e_xmit_data *xdptx =20 tmp.data =3D skb_frag_address(frag); tmp.len =3D skb_frag_size(frag); - tmp.dma_addr =3D xdptxdf->dma_arr ? xdptxdf->dma_arr[0] : - page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + tmp.dma_addr =3D + xdptxdf->dma_arr ? + xdptxdf->dma_arr[0] : + page_pool_get_dma_addr(page_to_netmem( + skb_frag_page(frag))) + + skb_frag_off(frag); p =3D &tmp; } } @@ -607,9 +611,11 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct ml= x5e_xmit_data *xdptxd, skb_frag_t *frag =3D &xdptxdf->sinfo->frags[i]; dma_addr_t addr; =20 - addr =3D xdptxdf->dma_arr ? xdptxdf->dma_arr[i] : - page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + addr =3D xdptxdf->dma_arr ? + xdptxdf->dma_arr[i] : + page_pool_get_dma_addr(page_to_netmem( + skb_frag_page(frag))) + + skb_frag_off(frag); =20 dseg->addr =3D cpu_to_be64(addr); dseg->byte_count =3D cpu_to_be32(skb_frag_size(frag)); @@ -699,7 +705,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *s= q, /* No need to check ((page->pp_magic & ~0x3UL) =3D=3D PP_SIGNATURE) * as we know this is a page_pool page. 
*/ - page_pool_recycle_direct(page->pp, page); + page_pool_recycle_direct(page->pp, + page_to_netmem(page)); } while (++n < num); =20 break; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/= ethernet/mellanox/mlx5/core/en_rx.c index 8d9743a5e42c..73d41dc2b47e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -278,11 +278,11 @@ static int mlx5e_page_alloc_fragmented(struct mlx5e_r= q *rq, { struct page *page; =20 - page =3D page_pool_dev_alloc_pages(rq->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rq->page_pool)); if (unlikely(!page)) return -ENOMEM; =20 - page_pool_fragment_page(page, MLX5E_PAGECNT_BIAS_MAX); + page_pool_fragment_page(page_to_netmem(page), MLX5E_PAGECNT_BIAS_MAX); =20 *frag_page =3D (struct mlx5e_frag_page) { .page =3D page, @@ -298,8 +298,9 @@ static void mlx5e_page_release_fragmented(struct mlx5e_= rq *rq, u16 drain_count =3D MLX5E_PAGECNT_BIAS_MAX - frag_page->frags; struct page *page =3D frag_page->page; =20 - if (page_pool_defrag_page(page, drain_count) =3D=3D 0) - page_pool_put_defragged_page(rq->page_pool, page, -1, true); + if (page_pool_defrag_page(page_to_netmem(page), drain_count) =3D=3D 0) + page_pool_put_defragged_page(rq->page_pool, + page_to_netmem(page), -1, true); } =20 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq, @@ -358,7 +359,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, stru= ct mlx5e_rx_wqe_cyc *wqe, frag->flags &=3D ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); =20 headroom =3D i =3D=3D 0 ? rq->buff.headroom : 0; - addr =3D page_pool_get_dma_addr(frag->frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag->frag_page->page)); wqe->data[i].addr =3D cpu_to_be64(addr + frag->offset + headroom); } =20 @@ -501,7 +502,8 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, str= uct skb_shared_info *sinf { skb_frag_t *frag; =20 - dma_addr_t addr =3D page_pool_get_dma_addr(frag_page->page); + dma_addr_t addr =3D + page_pool_get_dma_addr(page_to_netmem(frag_page->page)); =20 dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_d= ir); if (!xdp_buff_has_frags(xdp)) { @@ -526,7 +528,7 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff = *skb, struct page *page, u32 frag_offset, u32 len, unsigned int truesize) { - dma_addr_t addr =3D page_pool_get_dma_addr(page); + dma_addr_t addr =3D page_pool_get_dma_addr(page_to_netmem(page)); =20 dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); @@ -674,7 +676,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *r= q, if (unlikely(err)) goto err_unmap; =20 - addr =3D page_pool_get_dma_addr(frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag_page->page)); =20 dma_info->addr =3D addr; dma_info->frag_page =3D frag_page; @@ -786,7 +788,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u1= 6 ix) err =3D mlx5e_page_alloc_fragmented(rq, frag_page); if (unlikely(err)) goto err_unmap; - addr =3D page_pool_get_dma_addr(frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag_page->page)); umr_wqe->inline_mtts[i] =3D (struct mlx5_mtt) { .ptag =3D cpu_to_be64(addr | MLX5_EN_WR), }; @@ -1685,7 +1687,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct= mlx5e_wqe_frag_info *wi, data =3D va + rx_headroom; frag_size =3D MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); =20 - addr =3D page_pool_get_dma_addr(frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag_page->page)); 
dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, frag_size, rq->buff.map_dir); net_prefetch(data); @@ -1738,7 +1740,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, str= uct mlx5e_wqe_frag_info *wi va =3D page_address(frag_page->page) + wi->offset; frag_consumed_bytes =3D min_t(u32, frag_info->frag_size, cqe_bcnt); =20 - addr =3D page_pool_get_dma_addr(frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, rq->buff.frame0_sz, rq->buff.map_dir); net_prefetchw(va); /* xdp_frame data area */ @@ -2124,7 +2126,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *r= q, struct mlx5e_mpw_info *w while (++pagep < frag_page); } /* copy header */ - addr =3D page_pool_get_dma_addr(head_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(head_page->page)); mlx5e_copy_skb_header(rq, skb, head_page->page, addr, head_offset, head_offset, headlen); /* skb linear part was allocated with headlen and aligned to long */ @@ -2159,7 +2161,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, = struct mlx5e_mpw_info *wi, data =3D va + rx_headroom; frag_size =3D MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); =20 - addr =3D page_pool_get_dma_addr(frag_page->page); + addr =3D page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset, frag_size, rq->buff.map_dir); net_prefetch(data); diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/driver= s/net/ethernet/microchip/lan966x/lan966x_fdma.c index 3960534ac2ad..fdd4a9ccafd4 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c @@ -16,11 +16,12 @@ static struct page *lan966x_fdma_rx_alloc_page(struct l= an966x_rx *rx, { struct page *page; =20 - page =3D page_pool_dev_alloc_pages(rx->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (unlikely(!page)) return NULL; =20 - db->dataptr =3D page_pool_get_dma_addr(page) + XDP_PACKET_HEADROOM; + db->dataptr =3D page_pool_get_dma_addr(page_to_netmem(page)) + + XDP_PACKET_HEADROOM; =20 return page; } @@ -32,7 +33,8 @@ static void lan966x_fdma_rx_free_pages(struct lan966x_rx = *rx) for (i =3D 0; i < FDMA_DCB_MAX; ++i) { for (j =3D 0; j < FDMA_RX_DCB_MAX_DBS; ++j) page_pool_put_full_page(rx->page_pool, - rx->page[i][j], false); + page_to_netmem(rx->page[i][j]), + false); } } =20 @@ -44,7 +46,7 @@ static void lan966x_fdma_rx_free_page(struct lan966x_rx *= rx) if (unlikely(!page)) return; =20 - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); } =20 static void lan966x_fdma_rx_add_dcb(struct lan966x_rx *rx, @@ -435,7 +437,7 @@ static void lan966x_fdma_tx_clear_buf(struct lan966x *l= an966x, int weight) xdp_return_frame_bulk(dcb_buf->data.xdpf, &bq); else page_pool_recycle_direct(rx->page_pool, - dcb_buf->data.page); + page_to_netmem(dcb_buf->data.page)); } =20 clear =3D true; @@ -537,7 +539,7 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct= lan966x_rx *rx, return skb; =20 free_page: - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); =20 return NULL; } @@ -765,7 +767,7 @@ int lan966x_fdma_xmit_xdpf(struct lan966x_port *port, v= oid *ptr, u32 len) lan966x_ifh_set_bypass(ifh, 1); lan966x_ifh_set_port(ifh, BIT_ULL(port->chip_port)); =20 - dma_addr =3D page_pool_get_dma_addr(page); + dma_addr =3D 
page_pool_get_dma_addr(page_to_netmem(page)); dma_sync_single_for_device(lan966x->dev, dma_addr + XDP_PACKET_HEADROOM, len + IFH_LEN_BYTES, diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/et= hernet/microsoft/mana/mana_en.c index cb7b9d8ef618..7172041076d8 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -1587,7 +1587,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool, drop: if (from_pool) { page_pool_recycle_direct(rxq->page_pool, - virt_to_head_page(buf_va)); + page_to_netmem(virt_to_head_page(buf_va))); } else { WARN_ON_ONCE(rxq->xdp_save_va); /* Save for reuse */ @@ -1627,7 +1627,7 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, st= ruct device *dev, return NULL; } } else { - page =3D page_pool_dev_alloc_pages(rxq->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); if (!page) return NULL; =20 @@ -1639,7 +1639,8 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, st= ruct device *dev, DMA_FROM_DEVICE); if (dma_mapping_error(dev, *da)) { if (*from_pool) - page_pool_put_full_page(rxq->page_pool, page, false); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), false); else put_page(virt_to_head_page(va)); =20 @@ -2027,7 +2028,8 @@ static void mana_destroy_rxq(struct mana_port_context= *apc, page =3D virt_to_head_page(rx_oob->buf_va); =20 if (rx_oob->from_pool) - page_pool_put_full_page(rxq->page_pool, page, false); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), false); else put_page(page); =20 diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet= /socionext/netsec.c index 5ab8b81b84e6..a573d1dead67 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -739,7 +739,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *p= riv, struct netsec_desc_ring *dring =3D &priv->desc_ring[NETSEC_RING_RX]; struct page *page; =20 - page =3D page_pool_dev_alloc_pages(dring->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(dring->page_pool)); if (!page) return NULL; =20 @@ -747,7 +747,8 @@ static void *netsec_alloc_rx_data(struct netsec_priv *p= riv, * page_pool API will map the whole page, skip what's needed for * network payloads and/or XDP */ - *dma_handle =3D page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM; + *dma_handle =3D page_pool_get_dma_addr(page_to_netmem(page)) + + NETSEC_RXBUF_HEADROOM; /* Make sure the incoming payload fits in the page for XDP and non-XDP * cases and reserve enough space for headroom + skb_shared_info */ @@ -862,8 +863,8 @@ static u32 netsec_xdp_queue_one(struct netsec_priv *pri= v, enum dma_data_direction dma_dir =3D page_pool_get_dma_dir(rx_ring->page_pool); =20 - dma_handle =3D page_pool_get_dma_addr(page) + xdpf->headroom + - sizeof(*xdpf); + dma_handle =3D page_pool_get_dma_addr(page_to_netmem(page)) + + xdpf->headroom + sizeof(*xdpf); dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len, dma_dir); tx_desc.buf_type =3D TYPE_NETSEC_XDP_TX; @@ -919,7 +920,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, str= uct bpf_prog *prog, ret =3D netsec_xdp_xmit_back(priv, xdp); if (ret !=3D NETSEC_XDP_TX) { page =3D virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), sync, true); } break; case XDP_REDIRECT: @@ -929,7 +931,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, str= uct bpf_prog *prog, } else { 
ret =3D NETSEC_XDP_CONSUMED; page =3D virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), sync, true); } break; default: @@ -941,7 +944,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, str= uct bpf_prog *prog, case XDP_DROP: ret =3D NETSEC_XDP_CONSUMED; page =3D virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, page_to_netmem(page), sync, + true); break; } =20 @@ -1038,8 +1042,8 @@ static int netsec_process_rx(struct netsec_priv *priv= , int budget) * cache state. Since we paid the allocation cost if * building an skb fails try to put the page into cache */ - page_pool_put_page(dring->page_pool, page, pkt_len, - true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), pkt_len, true); netif_err(priv, drv, priv->ndev, "rx failed to build skb\n"); break; @@ -1212,7 +1216,8 @@ static void netsec_uninit_pkt_dring(struct netsec_pri= v *priv, int id) if (id =3D=3D NETSEC_RING_RX) { struct page *page =3D virt_to_page(desc->addr); =20 - page_pool_put_full_page(dring->page_pool, page, false); + page_pool_put_full_page(dring->page_pool, + page_to_netmem(page), false); } else if (id =3D=3D NETSEC_RING_TX) { dma_unmap_single(priv->dev, desc->dma_addr, desc->len, DMA_TO_DEVICE); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/ne= t/ethernet/stmicro/stmmac/stmmac_main.c index 47de466e432c..7680db4b54b6 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1455,25 +1455,29 @@ static int stmmac_init_rx_buffers(struct stmmac_pri= v *priv, gfp |=3D GFP_DMA32; =20 if (!buf->page) { - buf->page =3D page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->page =3D netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->page) return -ENOMEM; buf->page_offset =3D stmmac_rx_offset(priv); } =20 if (priv->sph && !buf->sec_page) { - buf->sec_page =3D page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->sec_page =3D netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->sec_page) return -ENOMEM; =20 - buf->sec_addr =3D page_pool_get_dma_addr(buf->sec_page); + buf->sec_addr =3D + page_pool_get_dma_addr(page_to_netmem(buf->sec_page)); stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true); } else { buf->sec_page =3D NULL; stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false); } =20 - buf->addr =3D page_pool_get_dma_addr(buf->page) + buf->page_offset; + buf->addr =3D page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset; =20 stmmac_set_desc_addr(priv, p, buf->addr); if (dma_conf->dma_buf_sz =3D=3D BUF_SIZE_16KiB) @@ -1495,11 +1499,13 @@ static void stmmac_free_rx_buffer(struct stmmac_pri= v *priv, struct stmmac_rx_buffer *buf =3D &rx_q->buf_pool[i]; =20 if (buf->page) - page_pool_put_full_page(rx_q->page_pool, buf->page, false); + page_pool_put_full_page(rx_q->page_pool, + page_to_netmem(buf->page), false); buf->page =3D NULL; =20 if (buf->sec_page) - page_pool_put_full_page(rx_q->page_pool, buf->sec_page, false); + page_pool_put_full_page(rx_q->page_pool, + page_to_netmem(buf->sec_page), false); buf->sec_page =3D NULL; } =20 @@ -4739,20 +4745,23 @@ static inline void stmmac_rx_refill(struct stmmac_p= riv *priv, u32 queue) p =3D rx_q->dma_rx + entry; =20 if (!buf->page) { - buf->page =3D page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->page =3D 
netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->page) break; } =20 if (priv->sph && !buf->sec_page) { - buf->sec_page =3D page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->sec_page =3D netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->sec_page) break; =20 - buf->sec_addr =3D page_pool_get_dma_addr(buf->sec_page); + buf->sec_addr =3D page_pool_get_dma_addr(page_to_netmem(buf->sec_page)); } =20 - buf->addr =3D page_pool_get_dma_addr(buf->page) + buf->page_offset; + buf->addr =3D page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset; =20 stmmac_set_desc_addr(priv, p, buf->addr); if (priv->sph) @@ -4861,8 +4870,8 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *p= riv, int queue, } else { struct page *page =3D virt_to_page(xdpf->data); =20 - dma_addr =3D page_pool_get_dma_addr(page) + sizeof(*xdpf) + - xdpf->headroom; + dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + + sizeof(*xdpf) + xdpf->headroom; dma_sync_single_for_device(priv->device, dma_addr, xdpf->len, DMA_BIDIRECTIONAL); =20 @@ -5432,7 +5441,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int li= mit, u32 queue) if (priv->extend_desc) stmmac_rx_extended_status(priv, &priv->xstats, rx_q->dma_erx + entry); if (unlikely(status =3D=3D discard_frame)) { - page_pool_recycle_direct(rx_q->page_pool, buf->page); + page_pool_recycle_direct(rx_q->page_pool, + page_to_netmem(buf->page)); buf->page =3D NULL; error =3D 1; if (!priv->hwts_rx_en) @@ -5500,9 +5510,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int l= imit, u32 queue) unsigned int xdp_res =3D -PTR_ERR(skb); =20 if (xdp_res & STMMAC_XDP_CONSUMED) { - page_pool_put_page(rx_q->page_pool, - virt_to_head_page(ctx.xdp.data), - sync_len, true); + page_pool_put_page( + rx_q->page_pool, + page_to_netmem( + virt_to_head_page( + ctx.xdp.data)), + sync_len, true); buf->page =3D NULL; rx_dropped++; =20 @@ -5543,7 +5556,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int li= mit, u32 queue) skb_put(skb, buf1_len); =20 /* Data payload copied into SKB, page ready for recycle */ - page_pool_recycle_direct(rx_q->page_pool, buf->page); + page_pool_recycle_direct(rx_q->page_pool, + page_to_netmem(buf->page)); buf->page =3D NULL; } else if (buf1_len) { dma_sync_single_for_cpu(priv->device, buf->addr, diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index ea85c6dd5484..ea9f1fe492e6 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -380,11 +380,11 @@ static void cpsw_rx_handler(void *token, int len, int= status) } =20 /* the interface is going down, pages are purged */ - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); return; } =20 - new_page =3D page_pool_dev_alloc_pages(pool); + new_page =3D netmem_to_page(page_pool_dev_alloc_pages(pool)); if (unlikely(!new_page)) { new_page =3D page; ndev->stats.rx_dropped++; @@ -417,7 +417,7 @@ static void cpsw_rx_handler(void *token, int len, int s= tatus) skb =3D build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); if (!skb) { ndev->stats.rx_dropped++; - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); goto requeue; } =20 @@ -442,12 +442,13 @@ static void cpsw_rx_handler(void *token, int len, int= status) xmeta->ndev =3D ndev; xmeta->ch =3D ch; =20 - dma =3D page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; + dma =3D page_pool_get_dma_addr(page_to_netmem(new_page)) + + CPSW_HEADROOM_NA; ret =3D 
cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, pkt_size, 0); if (ret < 0) { WARN_ON(ret =3D=3D -ENOMEM); - page_pool_recycle_direct(pool, new_page); + page_pool_recycle_direct(pool, page_to_netmem(new_page)); } } =20 diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/c= psw_new.c index 498c50c6d1a7..d02b29aedddf 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -325,11 +325,11 @@ static void cpsw_rx_handler(void *token, int len, int= status) } =20 /* the interface is going down, pages are purged */ - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); return; } =20 - new_page =3D page_pool_dev_alloc_pages(pool); + new_page =3D netmem_to_page(page_pool_dev_alloc_pages(pool)); if (unlikely(!new_page)) { new_page =3D page; ndev->stats.rx_dropped++; @@ -361,7 +361,7 @@ static void cpsw_rx_handler(void *token, int len, int s= tatus) skb =3D build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); if (!skb) { ndev->stats.rx_dropped++; - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); goto requeue; } =20 @@ -387,12 +387,13 @@ static void cpsw_rx_handler(void *token, int len, int= status) xmeta->ndev =3D ndev; xmeta->ch =3D ch; =20 - dma =3D page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; + dma =3D page_pool_get_dma_addr(page_to_netmem(new_page)) + + CPSW_HEADROOM_NA; ret =3D cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, pkt_size, 0); if (ret < 0) { WARN_ON(ret =3D=3D -ENOMEM); - page_pool_recycle_direct(pool, new_page); + page_pool_recycle_direct(pool, page_to_netmem(new_page)); } } =20 diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/= cpsw_priv.c index 764ed298b570..222b2bd3dc47 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.c +++ b/drivers/net/ethernet/ti/cpsw_priv.c @@ -1113,7 +1113,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) pool =3D cpsw->page_pool[ch]; ch_buf_num =3D cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch); for (i =3D 0; i < ch_buf_num; i++) { - page =3D page_pool_dev_alloc_pages(pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(pool)); if (!page) { cpsw_err(priv, ifup, "allocate rx page err\n"); return -ENOMEM; @@ -1123,7 +1123,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) xmeta->ndev =3D priv->ndev; xmeta->ch =3D ch; =20 - dma =3D page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA; + dma =3D page_pool_get_dma_addr(page_to_netmem(page)) + + CPSW_HEADROOM_NA; ret =3D cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch, page, dma, cpsw->rx_packet_max, @@ -1132,7 +1133,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) cpsw_err(priv, ifup, "cannot submit page to channel %d rx, error %d\n", ch, ret); - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, + page_to_netmem(page)); return ret; } } @@ -1303,7 +1305,7 @@ int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct = xdp_frame *xdpf, txch =3D cpsw->txv[0].ch; =20 if (page) { - dma =3D page_pool_get_dma_addr(page); + dma =3D page_pool_get_dma_addr(page_to_netmem(page)); dma +=3D xdpf->headroom + sizeof(struct xdp_frame); ret =3D cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf), dma, xdpf->len, port); @@ -1379,7 +1381,7 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, stru= ct xdp_buff *xdp, out: return ret; drop: - page_pool_recycle_direct(cpsw->page_pool[ch], page); + page_pool_recycle_direct(cpsw->page_pool[ch], page_to_netmem(page)); return ret; } =20 diff --git 
a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethe= rnet/wangxun/libwx/wx_lib.c index a5a50b5a8816..57291cbf774b 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c +++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c @@ -228,7 +228,8 @@ static void wx_dma_sync_frag(struct wx_ring *rx_ring, =20 /* If the page was released, just unmap it. */ if (unlikely(WX_CB(skb)->page_released)) - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), false); } =20 static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring, @@ -288,7 +289,9 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring, /* the page has been released from the ring */ WX_CB(skb)->page_released =3D true; else - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), + false); =20 __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); @@ -375,9 +378,9 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_rin= g, if (likely(page)) return true; =20 - page =3D page_pool_dev_alloc_pages(rx_ring->page_pool); + page =3D netmem_to_page(page_pool_dev_alloc_pages(rx_ring->page_pool)); WARN_ON(!page); - dma =3D page_pool_get_dma_addr(page); + dma =3D page_pool_get_dma_addr(page_to_netmem(page)); =20 bi->page_dma =3D dma; bi->page =3D page; @@ -2232,7 +2235,9 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring) struct sk_buff *skb =3D rx_buffer->skb; =20 if (WX_CB(skb)->page_released) - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), + false); =20 dev_kfree_skb(skb); } @@ -2247,7 +2252,8 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring) DMA_FROM_DEVICE); =20 /* free resources associated with mapping */ - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), false); __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); =20 diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 977861c46b1f..c93c199224da 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -781,8 +781,9 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq = *rq, size =3D min_t(u32, len, PAGE_SIZE); truesize =3D size; =20 - page =3D page_pool_dev_alloc(rq->page_pool, &page_offset, - &truesize); + page =3D netmem_to_page(page_pool_dev_alloc(rq->page_pool, + &page_offset, + &truesize)); if (!page) { consume_skb(nskb); goto drop; diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet= 3_drv.c index 0578864792b6..063a5c2c948d 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1349,11 +1349,12 @@ vmxnet3_pp_get_buff(struct page_pool *pp, dma_addr_= t *dma_addr, { struct page *page; =20 - page =3D page_pool_alloc_pages(pp, gfp_mask | __GFP_NOWARN); + page =3D netmem_to_page(page_pool_alloc_pages(pp, + gfp_mask | __GFP_NOWARN)); if (unlikely(!page)) return NULL; =20 - *dma_addr =3D page_pool_get_dma_addr(page) + pp->p.offset; + *dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + pp->p.offset; =20 return page_address(page); } @@ -1931,7 +1932,7 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq, if (rxd->btype =3D=3D VMXNET3_RXD_BTYPE_HEAD && rbi->page && rbi->buf_type =3D=3D VMXNET3_RX_BUF_XDP) { page_pool_recycle_direct(rq->page_pool, - rbi->page); + 
page_to_netmem(rbi->page)); rbi->page =3D NULL; } else if (rxd->btype =3D=3D VMXNET3_RXD_BTYPE_HEAD && rbi->skb) { diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet= 3_xdp.c index 80ddaff759d4..71f3c278a960 100644 --- a/drivers/net/vmxnet3/vmxnet3_xdp.c +++ b/drivers/net/vmxnet3/vmxnet3_xdp.c @@ -147,7 +147,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter, tbi->map_type |=3D VMXNET3_MAP_SINGLE; } else { /* XDP buffer from page pool */ page =3D virt_to_page(xdpf->data); - tbi->dma_addr =3D page_pool_get_dma_addr(page) + + tbi->dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)) + VMXNET3_XDP_HEADROOM; dma_sync_single_for_device(&adapter->pdev->dev, tbi->dma_addr, buf_size, @@ -269,7 +269,8 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp= _buff *xdp, rq->stats.xdp_redirects++; } else { rq->stats.xdp_drops++; - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, + page_to_netmem(page)); } return act; case XDP_TX: @@ -277,7 +278,8 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp= _buff *xdp, if (unlikely(!xdpf || vmxnet3_xdp_xmit_back(rq->adapter, xdpf))) { rq->stats.xdp_drops++; - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, + page_to_netmem(page)); } else { rq->stats.xdp_tx++; } @@ -294,7 +296,7 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp= _buff *xdp, break; } =20 - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, page_to_netmem(page)); =20 return act; } @@ -307,7 +309,7 @@ vmxnet3_build_skb(struct vmxnet3_rx_queue *rq, struct p= age *page, =20 skb =3D build_skb(page_address(page), PAGE_SIZE); if (unlikely(!skb)) { - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, page_to_netmem(page)); rq->stats.rx_buf_alloc_failure++; return NULL; } @@ -332,7 +334,7 @@ vmxnet3_process_xdp_small(struct vmxnet3_adapter *adapt= er, struct page *page; int act; =20 - page =3D page_pool_alloc_pages(rq->page_pool, GFP_ATOMIC); + page =3D netmem_to_page(page_pool_alloc_pages(rq->page_pool, GFP_ATOMIC)); if (unlikely(!page)) { rq->stats.rx_buf_alloc_failure++; return XDP_DROP; @@ -381,9 +383,9 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter, =20 page =3D rbi->page; dma_sync_single_for_cpu(&adapter->pdev->dev, - page_pool_get_dma_addr(page) + - rq->page_pool->p.offset, rcd->len, - page_pool_get_dma_dir(rq->page_pool)); + page_pool_get_dma_addr(page_to_netmem(page)) + + rq->page_pool->p.offset, + rcd->len, page_pool_get_dma_dir(rq->page_pool)); =20 xdp_init_buff(&xdp, rbi->len, &rq->xdp_rxq); xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset, diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireles= s/mediatek/mt76/dma.c index 511fe7e6e744..64972792fa4b 100644 --- a/drivers/net/wireless/mediatek/mt76/dma.c +++ b/drivers/net/wireless/mediatek/mt76/dma.c @@ -616,7 +616,9 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queu= e *q, if (!buf) break; =20 - addr =3D page_pool_get_dma_addr(virt_to_head_page(buf)) + offset; + addr =3D page_pool_get_dma_addr( + page_to_netmem(virt_to_head_page(buf))) + + offset; dir =3D page_pool_get_dma_dir(q->page_pool); dma_sync_single_for_device(dev->dma_dev, addr, len, dir); =20 diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wirele= ss/mediatek/mt76/mt76.h index ea828ba0b83a..a559d870312a 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76.h +++ 
b/drivers/net/wireless/mediatek/mt76/mt76.h @@ -1565,7 +1565,7 @@ static inline void mt76_put_page_pool_buf(void *buf, = bool allow_direct) { struct page *page =3D virt_to_head_page(buf); =20 - page_pool_put_full_page(page->pp, page, allow_direct); + page_pool_put_full_page(page->pp, page_to_netmem(page), allow_direct); } =20 static inline void * @@ -1573,7 +1573,8 @@ mt76_get_page_pool_buf(struct mt76_queue *q, u32 *off= set, u32 size) { struct page *page; =20 - page =3D page_pool_dev_alloc_frag(q->page_pool, offset, size); + page =3D netmem_to_page( + page_pool_dev_alloc_frag(q->page_pool, offset, size)); if (!page) return NULL; =20 diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net= /wireless/mediatek/mt76/mt7915/mmio.c index e7d8e03f826f..452d3018adc7 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c @@ -616,7 +616,9 @@ static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_d= evice *wed, int size) if (!buf) goto unmap; =20 - addr =3D page_pool_get_dma_addr(virt_to_head_page(buf)) + offset; + addr =3D page_pool_get_dma_addr( + page_to_netmem(virt_to_head_page(buf))) + + offset; dir =3D page_pool_get_dma_dir(q->page_pool); dma_sync_single_for_device(dev->mt76.dma_dev, addr, len, dir); =20 diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index ad29f370034e..2b07b56fde54 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -278,8 +278,8 @@ static struct sk_buff *xennet_alloc_one_rx_buffer(struc= t netfront_queue *queue) if (unlikely(!skb)) return NULL; =20 - page =3D page_pool_alloc_pages(queue->page_pool, - GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO); + page =3D netmem_to_page(page_pool_alloc_pages( + queue->page_pool, GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO)); if (unlikely(!page)) { kfree_skb(skb); return NULL; diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helper= s.h index 7dc65774cde5..153a3313562c 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -85,7 +85,7 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data,= void *stats) * * Get a page from the page allocator or page_pool caches. */ -static inline struct page *page_pool_dev_alloc_pages(struct page_pool *poo= l) +static inline struct netmem *page_pool_dev_alloc_pages(struct page_pool *p= ool) { gfp_t gfp =3D (GFP_ATOMIC | __GFP_NOWARN); =20 @@ -103,18 +103,18 @@ static inline struct page *page_pool_dev_alloc_pages(= struct page_pool *pool) * Return: * Return allocated page fragment, otherwise return NULL. 
*/ -static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, - unsigned int *offset, - unsigned int size) +static inline struct netmem *page_pool_dev_alloc_frag(struct page_pool *po= ol, + unsigned int *offset, + unsigned int size) { gfp_t gfp =3D (GFP_ATOMIC | __GFP_NOWARN); =20 return page_pool_alloc_frag(pool, offset, size, gfp); } =20 -static inline struct page *page_pool_alloc(struct page_pool *pool, - unsigned int *offset, - unsigned int *size, gfp_t gfp) +static inline struct netmem *page_pool_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size, gfp_t gfp) { unsigned int max_size =3D PAGE_SIZE << pool->p.order; struct page *page; @@ -125,7 +125,7 @@ static inline struct page *page_pool_alloc(struct page_= pool *pool, return page_pool_alloc_pages(pool, gfp); } =20 - page =3D page_pool_alloc_frag(pool, offset, *size, gfp); + page =3D netmem_to_page(page_pool_alloc_frag(pool, offset, *size, gfp)); if (unlikely(!page)) return NULL; =20 @@ -138,7 +138,7 @@ static inline struct page *page_pool_alloc(struct page_= pool *pool, pool->frag_offset =3D max_size; } =20 - return page; + return page_to_netmem(page); } =20 /** @@ -154,9 +154,9 @@ static inline struct page *page_pool_alloc(struct page_= pool *pool, * Return: * Return allocated page or page fragment, otherwise return NULL. */ -static inline struct page *page_pool_dev_alloc(struct page_pool *pool, - unsigned int *offset, - unsigned int *size) +static inline struct netmem *page_pool_dev_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size) { gfp_t gfp =3D (GFP_ATOMIC | __GFP_NOWARN); =20 @@ -170,7 +170,8 @@ static inline void *page_pool_alloc_va(struct page_pool= *pool, struct page *page; =20 /* Mask off __GFP_HIGHMEM to ensure we can use page_address() */ - page =3D page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM); + page =3D netmem_to_page( + page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM)); if (unlikely(!page)) return NULL; =20 @@ -220,13 +221,14 @@ inline enum dma_data_direction page_pool_get_dma_dir(= struct page_pool *pool) * refcnt is 1 or return it back to the memory allocator and destroy any * mappings we have. 
*/ -static inline void page_pool_fragment_page(struct page *page, long nr) +static inline void page_pool_fragment_page(struct netmem *netmem, long nr) { - atomic_long_set(&page->pp_frag_count, nr); + atomic_long_set(&netmem_to_page(netmem)->pp_frag_count, nr); } =20 -static inline long page_pool_defrag_page(struct page *page, long nr) +static inline long page_pool_defrag_page(struct netmem *netmem, long nr) { + struct page *page =3D netmem_to_page(netmem); long ret; =20 /* If nr =3D=3D pp_frag_count then we have cleared all remaining @@ -269,16 +271,16 @@ static inline long page_pool_defrag_page(struct page = *page, long nr) return ret; } =20 -static inline bool page_pool_is_last_frag(struct page *page) +static inline bool page_pool_is_last_frag(struct netmem *netmem) { /* If page_pool_defrag_page() returns 0, we were the last user */ - return page_pool_defrag_page(page, 1) =3D=3D 0; + return page_pool_defrag_page(netmem, 1) =3D=3D 0; } =20 /** * page_pool_put_page() - release a reference to a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * @dma_sync_size: how much of the page may have been touched by the device * @allow_direct: released by the consumer, allow lockless caching * @@ -288,8 +290,7 @@ static inline bool page_pool_is_last_frag(struct page *= page) * caches. If PP_FLAG_DMA_SYNC_DEV is set, the page will be synced for_dev= ice * using dma_sync_single_range_for_device(). */ -static inline void page_pool_put_page(struct page_pool *pool, - struct page *page, +static inline void page_pool_put_page(struct page_pool *pool, struct netme= m *netmem, unsigned int dma_sync_size, bool allow_direct) { @@ -297,40 +298,40 @@ static inline void page_pool_put_page(struct page_poo= l *pool, * allow registering MEM_TYPE_PAGE_POOL, but shield linker. */ #ifdef CONFIG_PAGE_POOL - if (!page_pool_is_last_frag(page)) + if (!page_pool_is_last_frag(netmem)) return; =20 - page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct); + page_pool_put_defragged_page(pool, netmem, dma_sync_size, allow_direct); #endif } =20 /** * page_pool_put_full_page() - release a reference on a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * @allow_direct: released by the consumer, allow lockless caching * * Similar to page_pool_put_page(), but will DMA sync the entire memory ar= ea * as configured in &page_pool_params.max_len. */ static inline void page_pool_put_full_page(struct page_pool *pool, - struct page *page, bool allow_direct) + struct netmem *netmem, bool allow_direct) { - page_pool_put_page(pool, page, -1, allow_direct); + page_pool_put_page(pool, netmem, -1, allow_direct); } =20 /** * page_pool_recycle_direct() - release a reference on a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * * Similar to page_pool_put_full_page() but caller must guarantee safe con= text * (e.g NAPI), since it will recycle the page directly into the pool fast = cache. 
*/ static inline void page_pool_recycle_direct(struct page_pool *pool, - struct page *page) + struct netmem *netmem) { - page_pool_put_full_page(pool, page, true); + page_pool_put_full_page(pool, netmem, true); } =20 #define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \ @@ -347,19 +348,20 @@ static inline void page_pool_recycle_direct(struct pa= ge_pool *pool, static inline void page_pool_free_va(struct page_pool *pool, void *va, bool allow_direct) { - page_pool_put_page(pool, virt_to_head_page(va), -1, allow_direct); + page_pool_put_page(pool, page_to_netmem(virt_to_head_page(va)), -1, + allow_direct); } =20 /** * page_pool_get_dma_addr() - Retrieve the stored DMA address. - * @page: page allocated from a page pool + * @netmem: netmem allocated from a page pool * * Fetch the DMA address of the page. The page pool to which the page belo= ngs * must had been created with PP_FLAG_DMA_MAP. */ -static inline dma_addr_t page_pool_get_dma_addr(struct page *page) +static inline dma_addr_t page_pool_get_dma_addr(struct netmem *netmem) { - dma_addr_t ret =3D page->dma_addr; + dma_addr_t ret =3D netmem_to_page(netmem)->dma_addr; =20 if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) ret <<=3D PAGE_SHIFT; @@ -367,8 +369,10 @@ static inline dma_addr_t page_pool_get_dma_addr(struct= page *page) return ret; } =20 -static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t ad= dr) +static inline bool page_pool_set_dma_addr(struct netmem *netmem, dma_addr_= t addr) { + struct page *page =3D netmem_to_page(netmem); + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { page->dma_addr =3D addr >> PAGE_SHIFT; =20 diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index ac286ea8ce2d..0faa5207a394 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -6,6 +6,7 @@ #include #include #include +#include =20 #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA * map/unmap @@ -199,9 +200,9 @@ struct page_pool { } user; }; =20 -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); -struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *of= fset, - unsigned int size, gfp_t gfp); +struct netmem *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); +struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *= offset, + unsigned int size, gfp_t gfp); struct page_pool *page_pool_create(const struct page_pool_params *params); =20 struct xdp_mem_info; @@ -234,7 +235,7 @@ static inline void page_pool_put_page_bulk(struct page_= pool *pool, void **data, } #endif =20 -void page_pool_put_defragged_page(struct page_pool *pool, struct page *pag= e, +void page_pool_put_defragged_page(struct page_pool *pool, struct netmem *n= etmem, unsigned int dma_sync_size, bool allow_direct); =20 diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 711cf5d59816..32e3fbc17e65 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -296,7 +296,7 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp= , struct bpf_prog *prog, xdp_set_return_frame_no_direct(); =20 for (i =3D 0; i < batch_sz; i++) { - page =3D page_pool_dev_alloc_pages(xdp->pp); + page =3D netmem_to_page(page_pool_dev_alloc_pages(xdp->pp)); if (!page) { err =3D -ENOMEM; goto out; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c2e7c9a6efbe..e8ab7944e291 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -360,7 +360,7 @@ static void page_pool_dma_sync_for_device(struct page_p= ool *pool, struct page *page, unsigned int dma_sync_size) { - 
dma_addr_t dma_addr =3D page_pool_get_dma_addr(page); + dma_addr_t dma_addr =3D page_pool_get_dma_addr(page_to_netmem(page)); =20 dma_sync_size =3D min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -384,7 +384,7 @@ static bool page_pool_dma_map(struct page_pool *pool, s= truct page *page) if (dma_mapping_error(pool->p.dev, dma)) return false; =20 - if (page_pool_set_dma_addr(page, dma)) + if (page_pool_set_dma_addr(page_to_netmem(page), dma)) goto unmap_failed; =20 if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) @@ -412,7 +412,7 @@ static void page_pool_set_pp_info(struct page_pool *poo= l, * is dirtying the same cache line as the page->pp_magic above, so * the overhead is negligible. */ - page_pool_fragment_page(page, 1); + page_pool_fragment_page(page_to_netmem(page), 1); if (pool->has_init_callback) pool->slow.init_callback(page, pool->slow.init_arg); } @@ -509,18 +509,18 @@ static struct page *__page_pool_alloc_pages_slow(stru= ct page_pool *pool, /* For using page_pool replace: alloc_pages() API calls, but provide * synchronization guarantee for allocation side. */ -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +struct netmem *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) { struct page *page; =20 /* Fast-path: Get a page from cache */ page =3D __page_pool_get_cached(pool); if (page) - return page; + return page_to_netmem(page); =20 /* Slow-path: cache empty, do real allocation */ page =3D __page_pool_alloc_pages_slow(pool, gfp); - return page; + return page_to_netmem(page); } EXPORT_SYMBOL(page_pool_alloc_pages); =20 @@ -564,13 +564,13 @@ static void page_pool_return_page(struct page_pool *p= ool, struct page *page) */ goto skip_dma_unmap; =20 - dma =3D page_pool_get_dma_addr(page); + dma =3D page_pool_get_dma_addr(page_to_netmem(page)); =20 /* When page is unmapped, it cannot be returned to our pool */ dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING); - page_pool_set_dma_addr(page, 0); + page_pool_set_dma_addr(page_to_netmem(page), 0); skip_dma_unmap: page_pool_clear_pp_info(page); =20 @@ -677,9 +677,11 @@ __page_pool_put_page(struct page_pool *pool, struct pa= ge *page, return NULL; } =20 -void page_pool_put_defragged_page(struct page_pool *pool, struct page *pag= e, +void page_pool_put_defragged_page(struct page_pool *pool, struct netmem *n= etmem, unsigned int dma_sync_size, bool allow_direct) { + struct page *page =3D netmem_to_page(netmem); + page =3D __page_pool_put_page(pool, page, dma_sync_size, allow_direct); if (page && !page_pool_recycle_in_ring(pool, page)) { /* Cache full, fallback to free pages */ @@ -714,7 +716,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, vo= id **data, struct page *page =3D virt_to_head_page(data[i]); =20 /* It is not the last user for the page frag case */ - if (!page_pool_is_last_frag(page)) + if (!page_pool_is_last_frag(page_to_netmem(page))) continue; =20 page =3D __page_pool_put_page(pool, page, -1, false); @@ -756,7 +758,7 @@ static struct page *page_pool_drain_frag(struct page_po= ol *pool, long drain_count =3D BIAS_MAX - pool->frag_users; =20 /* Some user is still using the page frag */ - if (likely(page_pool_defrag_page(page, drain_count))) + if (likely(page_pool_defrag_page(page_to_netmem(page), drain_count))) return NULL; =20 if (page_ref_count(page) =3D=3D 1 && !page_is_pfmemalloc(page)) { @@ -777,15 +779,14 @@ static void page_pool_free_frag(struct page_pool *poo= l) 
=20 pool->frag_page =3D NULL; =20 - if (!page || page_pool_defrag_page(page, drain_count)) + if (!page || page_pool_defrag_page(page_to_netmem(page), drain_count)) return; =20 page_pool_return_page(pool, page); } =20 -struct page *page_pool_alloc_frag(struct page_pool *pool, - unsigned int *offset, - unsigned int size, gfp_t gfp) +struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *= offset, + unsigned int size, gfp_t gfp) { unsigned int max_size =3D PAGE_SIZE << pool->p.order; struct page *page =3D pool->frag_page; @@ -805,7 +806,7 @@ struct page *page_pool_alloc_frag(struct page_pool *poo= l, } =20 if (!page) { - page =3D page_pool_alloc_pages(pool, gfp); + page =3D netmem_to_page(page_pool_alloc_pages(pool, gfp)); if (unlikely(!page)) { pool->frag_page =3D NULL; return NULL; @@ -817,14 +818,14 @@ struct page *page_pool_alloc_frag(struct page_pool *p= ool, pool->frag_users =3D 1; *offset =3D 0; pool->frag_offset =3D size; - page_pool_fragment_page(page, BIAS_MAX); - return page; + page_pool_fragment_page(page_to_netmem(page), BIAS_MAX); + return page_to_netmem(page); } =20 pool->frag_users++; pool->frag_offset =3D *offset + size; alloc_stat_inc(pool, fast); - return page; + return page_to_netmem(page); } EXPORT_SYMBOL(page_pool_alloc_frag); =20 diff --git a/net/core/skbuff.c b/net/core/skbuff.c index b157efea5dea..01509728a753 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -928,7 +928,7 @@ bool napi_pp_put_page(struct page *page, bool napi_safe) * The page will be returned to the pool here regardless of the * 'flipped' fragment being in use or not. */ - page_pool_put_full_page(pp, page, allow_direct); + page_pool_put_full_page(pp, page_to_netmem(page), allow_direct); =20 return true; } diff --git a/net/core/xdp.c b/net/core/xdp.c index b6f1d6dab3f2..681294eee763 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -387,7 +387,8 @@ void __xdp_return(void *data, struct xdp_mem_info *mem,= bool napi_direct, /* No need to check ((page->pp_magic & ~0x3UL) =3D=3D PP_SIGNATURE) * as mem->type knows this a page_pool page */ - page_pool_put_full_page(page->pp, page, napi_direct); + page_pool_put_full_page(page->pp, page_to_netmem(page), + napi_direct); break; case MEM_TYPE_PAGE_SHARED: page_frag_free(data); --=20 2.43.0.472.g3155946c3a-goog
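
A minimal sketch of a converted driver call site under the helpers changed above. This is not part of the diff; the foo_* names and struct are hypothetical, and only page_pool/netmem calls that appear in the series are used.

/* Hypothetical driver RX buffer helpers under the netmem-based page_pool
 * API from this patch: allocation and DMA-address lookup operate on
 * struct netmem *, and the driver converts to/from struct page * only
 * where it still needs the page itself.
 */
#include <net/page_pool/helpers.h>

struct foo_rx_buf {
	struct page *page;
	dma_addr_t dma;
};

static int foo_rx_alloc_buf(struct page_pool *pool, struct foo_rx_buf *buf)
{
	/* page_pool_dev_alloc_pages() now hands back an opaque netmem handle */
	struct netmem *netmem = page_pool_dev_alloc_pages(pool);

	if (unlikely(!netmem))
		return -ENOMEM;

	/* the stored DMA address is fetched from the netmem, not the page */
	buf->dma = page_pool_get_dma_addr(netmem);

	/* convert back to a page only where the driver really needs one,
	 * e.g. for page_address() when building an skb
	 */
	buf->page = netmem_to_page(netmem);
	return 0;
}

static void foo_rx_free_buf(struct page_pool *pool, struct foo_rx_buf *buf)
{
	if (!buf->page)
		return;

	/* release paths also take netmem; convert at the call site */
	page_pool_put_full_page(pool, page_to_netmem(buf->page), false);
	buf->page = NULL;
}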