From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Lobakin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Guillaume Tucker, Matthew Wilcox, Linux-MM, Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: [PATCH net-next v11 1/6] page_pool: fragment API support for 32-bit arch with 64-bit DMA
Date: Fri, 13 Oct 2023 14:48:21 +0800
Message-ID: <20231013064827.61135-2-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>

Currently page_pool_alloc_frag() is not supported on 32-bit arches with 64-bit DMA because pp_frag_count and dma_addr_upper overlap in 'struct page' on those arches. Such configurations seem to be quite common, see [1], which means a driver may need to work around the limitation itself when using the fragment API.

It is assumed that the combination of such an arch with an address space larger than 16TB does not exist: all of those arches have a 64-bit equivalent, and it seems logical to use the 64-bit version on a system with a large address space. It is also assumed that the DMA address is page aligned when we are DMA-mapping a page-aligned buffer, see [2]. That means the lower 12 bits of a DMA address are always zero, so those bits can be reused on the above arches to support 32b+12b, i.e. 16TB of memory. If either assumption turns out to be wrong, a warning is emitted so that the user can report it to us.

1. https://lore.kernel.org/all/20211117075652.58299-1-linyunsheng@huawei.com/
2.
https://lore.kernel.org/all/20230818145145.4b357c89@kernel.org/ Tested-by: Alexander Lobakin Signed-off-by: Jakub Kicinski Signed-off-by: Yunsheng Lin CC: Lorenzo Bianconi CC: Alexander Duyck CC: Liang Chen CC: Alexander Lobakin CC: Guillaume Tucker CC: Matthew Wilcox CC: Linux-MM Acked-by: Ilias Apalodimas --- include/linux/mm_types.h | 13 +------------ include/net/page_pool/helpers.h | 20 ++++++++++++++------ net/core/page_pool.c | 14 +++++++++----- 3 files changed, 24 insertions(+), 23 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 36c5b43999e6..74b49c4c7a52 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -125,18 +125,7 @@ struct page { struct page_pool *pp; unsigned long _pp_mapping_pad; unsigned long dma_addr; - union { - /** - * dma_addr_upper: might require a 64-bit - * value on 32-bit architectures. - */ - unsigned long dma_addr_upper; - /** - * For frag page support, not supported in - * 32-bit architectures with 64-bit DMA. - */ - atomic_long_t pp_frag_count; - }; + atomic_long_t pp_frag_count; }; struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helper= s.h index 8e7751464ff5..8f64adf86f5b 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -197,7 +197,7 @@ static inline void page_pool_recycle_direct(struct page= _pool *pool, page_pool_put_full_page(pool, page, true); } =20 -#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \ +#define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \ (sizeof(dma_addr_t) > sizeof(unsigned long)) =20 /** @@ -211,17 +211,25 @@ static inline dma_addr_t page_pool_get_dma_addr(struc= t page *page) { dma_addr_t ret =3D page->dma_addr; =20 - if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) - ret |=3D (dma_addr_t)page->dma_addr_upper << 16 << 16; + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) + ret <<=3D PAGE_SHIFT; =20 return ret; } =20 -static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t ad= dr) +static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t ad= dr) { + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { + page->dma_addr =3D addr >> PAGE_SHIFT; + + /* We assume page alignment to shave off bottom bits, + * if this "compression" doesn't work we need to drop. 
+ */
+ return addr != (dma_addr_t)page->dma_addr << PAGE_SHIFT;
+ }
+
 page->dma_addr = addr;
- if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
- page->dma_addr_upper = upper_32_bits(addr);
+ return false;
 }
 
 static inline bool page_pool_put(struct page_pool *pool)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 77cb75e63aca..8a9868ea5067 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -211,10 +211,6 @@ static int page_pool_init(struct page_pool *pool,
 */
 }
 
- if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
- pool->p.flags & PP_FLAG_PAGE_FRAG)
- return -EINVAL;
-
 #ifdef CONFIG_PAGE_POOL_STATS
 pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
 if (!pool->recycle_stats)
@@ -359,12 +355,20 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 if (dma_mapping_error(pool->p.dev, dma))
 return false;
 
- page_pool_set_dma_addr(page, dma);
+ if (page_pool_set_dma_addr(page, dma))
+ goto unmap_failed;
 
 if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
 return true;
+
+unmap_failed:
+ WARN_ON_ONCE("unexpected DMA address, please report to netdev@");
+ dma_unmap_page_attrs(pool->p.dev, dma,
+ PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ return false;
 }
 
 static void page_pool_set_pp_info(struct page_pool *pool,
-- 
2.33.0

From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Alexander Lobakin, Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: [PATCH net-next v11 2/6] page_pool: unify frag_count handling in page_pool_is_last_frag()
Date: Fri, 13 Oct 2023 14:48:22 +0800
Message-ID: <20231013064827.61135-3-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>
Currently, when page_pool_create() is called with the PP_FLAG_PAGE_FRAG flag, page_pool_alloc_pages() may only be called under the following constraints:
1. page_pool_fragment_page() needs to be called to set up page->pp_frag_count immediately.
2. page_pool_defrag_page() often needs to be called to drain page->pp_frag_count when no more users will be holding on to the page.

Those constraints exist in order to support splitting a page into multiple fragments, and they come with some overhead due to cache line dirtying/bouncing and atomic updates.

The constraints are unavoidable when a page needs to be split into more than one fragment, but we would like to avoid them and their overhead when a page cannot be split because it only holds the single fragment requested by the user. The relevant use cases are:

use case 1: allocate a page without page splitting.
use case 2: allocate a page with page splitting.
use case 3: allocate a page with or without page splitting depending on the fragment size.

Currently the page pool only provides the page_pool_alloc_pages() and page_pool_alloc_frag() APIs to enable use cases 1 and 2 separately; a combination of the two cannot be used to enable use case 3 because of the per-page_pool flag PP_FLAG_PAGE_FRAG.

In order to allow allocating unsplit pages without the overhead of split pages, while still allowing split pages, the per-page_pool flag check in page_pool_is_last_frag() needs to go away. As best as I can tell, there are two ways to do that:
1. Add a per-page flag/bit to indicate whether a page is split or not, which means that flag/bit may need to be updated every time the page is recycled, dirtying the cache line of 'struct page' for use case 1.
2. Unify the page->pp_frag_count handling for both split and unsplit pages by assuming that every page in the page pool is initially split into one big fragment.

The page pool already supports use case 1 without dirtying the cache line of 'struct page' whenever a page is recycled, so use case 3 has to be supported with minimal overhead and, in particular, without adding any noticeable overhead to use case 1. Since we already avoid updating pp_frag_count in page_pool_defrag_page() for the last fragment user, this patch chooses option 2 and unifies the pp_frag_count handling to support use case 3.

A micro-benchmark in [1] shows no noticeable performance degradation with this patch applied and provides some justification for unifying the frag_count handling.

1. https://lore.kernel.org/all/bf2591f8-7b3c-4480-bb2c-31dc9da1d6ac@huawei.com/

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
CC: Liang Chen
CC: Alexander Lobakin
---
 include/net/page_pool/helpers.h | 47 ++++++++++++++++++++++++---------
 net/core/page_pool.c            | 10 ++++++-
 2 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 8f64adf86f5b..759489c037c7 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -115,28 +115,49 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 long ret;
 
 /* If nr == pp_frag_count then we have cleared all remaining
- * references to the page. No need to actually overwrite it, instead
- * we can leave this to be overwritten by the calling function.
+ * references to the page:
+ * 1. 'n == 1': no need to actually overwrite it.
+ * 2.
'n !=3D 1': overwrite it with one, which is the rare case + * for pp_frag_count draining. * - * The main advantage to doing this is that an atomic_read is - * generally a much cheaper operation than an atomic update, - * especially when dealing with a page that may be partitioned - * into only 2 or 3 pieces. + * The main advantage to doing this is that not only we avoid a atomic + * update, as an atomic_read is generally a much cheaper operation than + * an atomic update, especially when dealing with a page that may be + * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count + * handling by ensuring all pages have partitioned into only 1 piece + * initially, and only overwrite it when the page is partitioned into + * more than one piece. */ - if (atomic_long_read(&page->pp_frag_count) =3D=3D nr) + if (atomic_long_read(&page->pp_frag_count) =3D=3D nr) { + /* As we have ensured nr is always one for constant case using + * the BUILD_BUG_ON(), only need to handle the non-constant case + * here for pp_frag_count draining, which is a rare case. + */ + BUILD_BUG_ON(__builtin_constant_p(nr) && nr !=3D 1); + if (!__builtin_constant_p(nr)) + atomic_long_set(&page->pp_frag_count, 1); + return 0; + } =20 ret =3D atomic_long_sub_return(nr, &page->pp_frag_count); WARN_ON(ret < 0); + + /* We are the last user here too, reset pp_frag_count back to 1 to + * ensure all pages have been partitioned into 1 piece initially, + * this should be the rare case when the last two fragment users call + * page_pool_defrag_page() currently. + */ + if (unlikely(!ret)) + atomic_long_set(&page->pp_frag_count, 1); + return ret; } =20 -static inline bool page_pool_is_last_frag(struct page_pool *pool, - struct page *page) +static inline bool page_pool_is_last_frag(struct page *page) { - /* If fragments aren't enabled or count is 0 we were the last user */ - return !(pool->p.flags & PP_FLAG_PAGE_FRAG) || - (page_pool_defrag_page(page, 1) =3D=3D 0); + /* If page_pool_defrag_page() returns 0, we were the last user */ + return page_pool_defrag_page(page, 1) =3D=3D 0; } =20 /** @@ -161,7 +182,7 @@ static inline void page_pool_put_page(struct page_pool = *pool, * allow registering MEM_TYPE_PAGE_POOL, but shield linker. */ #ifdef CONFIG_PAGE_POOL - if (!page_pool_is_last_frag(pool, page)) + if (!page_pool_is_last_frag(page)) return; =20 page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct); diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 8a9868ea5067..953535cab081 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -376,6 +376,14 @@ static void page_pool_set_pp_info(struct page_pool *po= ol, { page->pp =3D pool; page->pp_magic |=3D PP_SIGNATURE; + + /* Ensuring all pages have been split into one fragment initially: + * page_pool_set_pp_info() is only called once for every page when it + * is allocated from the page allocator and page_pool_fragment_page() + * is dirtying the same cache line as the page->pp_magic above, so + * the overhead is negligible. 
+ */
+ page_pool_fragment_page(page, 1);
 if (pool->p.init_callback)
 pool->p.init_callback(page, pool->p.init_arg);
 }
@@ -672,7 +680,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 struct page *page = virt_to_head_page(data[i]);
 
 /* It is not the last user for the page frag case */
- if (!page_pool_is_last_frag(pool, page))
+ if (!page_pool_is_last_frag(page))
 continue;
 
 page = __page_pool_put_page(pool, page, -1, false);
-- 
2.33.0

From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Alexander Lobakin, Michael Chan, Eric Dumazet, Yisen Zhuang, Salil Mehta, Jesse Brandeburg, Tony Nguyen, Sunil Goutham, Geetha sowjanya, Subbaraya Sundeep, hariprasad, Saeed Mahameed, Leon Romanovsky, Felix Fietkau, Ryder Lee, Shayne Chen, Sean Wang, Kalle Valo, Matthias Brugger, AngeloGioacchino Del Regno, Jesper Dangaard Brouer, Ilias Apalodimas, , , , ,
Subject: [PATCH net-next v11 3/6] page_pool: remove PP_FLAG_PAGE_FRAG
Date: Fri, 13 Oct 2023 14:48:23 +0800
Message-ID: <20231013064827.61135-4-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>

PP_FLAG_PAGE_FRAG is no longer needed now that pp_frag_count handling is unified and page_pool_alloc_frag() is supported on 32-bit arches with 64-bit DMA, so remove it.
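The reason the flag can go away is the invariant established by the previous patch: every page handed out by the pool starts life as one big fragment, so the "last fragment" check no longer needs any per-pool state. Below is a small user-space sketch of that accounting, for illustration only; the mock_* names are invented here and this is not the kernel implementation, just the idea behind it.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct mock_page {
	atomic_long frag_count;		/* stands in for page->pp_frag_count */
};

/* Called when the page enters the pool or is split by the allocator
 * (cf. page_pool_fragment_page()).
 */
static void mock_fragment_page(struct mock_page *p, long nr)
{
	atomic_store(&p->frag_count, nr);
}

/* Cf. page_pool_defrag_page(): returns the number of remaining users. */
static long mock_defrag_page(struct mock_page *p, long nr)
{
	long cur = atomic_load(&p->frag_count);

	if (cur == nr) {
		/* Last user: reset to one fragment for the next trip through
		 * the pool; the common nr == 1 case needs no write at all.
		 */
		if (nr != 1)
			atomic_store(&p->frag_count, 1);
		return 0;
	}

	cur = atomic_fetch_sub(&p->frag_count, nr) - nr;
	if (cur == 0)
		atomic_store(&p->frag_count, 1);
	return cur;
}

/* Cf. page_pool_is_last_frag(): works for split and unsplit pages alike. */
static bool mock_is_last_frag(struct mock_page *p)
{
	return mock_defrag_page(p, 1) == 0;
}

int main(void)
{
	struct mock_page page;
	bool a, b, c;

	mock_fragment_page(&page, 1);		/* unsplit page */
	printf("unsplit: last=%d\n", mock_is_last_frag(&page));

	mock_fragment_page(&page, 3);		/* page split into 3 fragments */
	a = mock_is_last_frag(&page);
	b = mock_is_last_frag(&page);
	c = mock_is_last_frag(&page);
	printf("split: %d %d %d\n", a, b, c);	/* only the third free is last */
	return 0;
}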
Signed-off-by: Yunsheng Lin CC: Lorenzo Bianconi CC: Alexander Duyck CC: Liang Chen CC: Alexander Lobakin --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 -- drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 3 +-- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 3 --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +- drivers/net/wireless/mediatek/mt76/mac80211.c | 2 +- include/net/page_pool/types.h | 6 ++---- net/core/page_pool.c | 3 +-- net/core/skbuff.c | 2 +- 9 files changed, 8 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethern= et/broadcom/bnxt/bnxt.c index b0ca3b319e4f..96d11f41dd38 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -3250,8 +3250,6 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp, pp.dma_dir =3D bp->rx_dir; pp.max_len =3D PAGE_SIZE; pp.flags =3D PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; - if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) - pp.flags |=3D PP_FLAG_PAGE_FRAG; =20 rxr->page_pool =3D page_pool_create(&pp); if (IS_ERR(rxr->page_pool)) { diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/= ethernet/hisilicon/hns3/hns3_enet.c index cf50368441b7..06117502001f 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -4940,8 +4940,7 @@ static void hns3_put_ring_config(struct hns3_nic_priv= *priv) static void hns3_alloc_page_pool(struct hns3_enet_ring *ring) { struct page_pool_params pp_params =3D { - .flags =3D PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG | - PP_FLAG_DMA_SYNC_DEV, + .flags =3D PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, .order =3D hns3_page_order(ring), .pool_size =3D ring->desc_num * hns3_buf_size(ring) / (PAGE_SIZE << hns3_page_order(ring)), diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethe= rnet/intel/idpf/idpf_txrx.c index 6fa79898c42c..55a099986b55 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -595,9 +595,6 @@ static struct page_pool *idpf_rx_create_page_pool(struc= t idpf_queue *rxbufq) .offset =3D 0, }; =20 - if (rxbufq->rx_buf_size =3D=3D IDPF_RX_BUF_2048) - pp.flags |=3D PP_FLAG_PAGE_FRAG; - return page_pool_create(&pp); } =20 diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/dri= vers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 818ce76185b2..1a42bfded872 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -1404,7 +1404,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, } =20 pp_params.order =3D get_order(buf_size); - pp_params.flags =3D PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP; + pp_params.flags =3D PP_FLAG_DMA_MAP; pp_params.pool_size =3D min(OTX2_PAGE_POOL_SZ, numptrs); pp_params.nid =3D NUMA_NO_NODE; pp_params.dev =3D pfvf->dev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/ne= t/ethernet/mellanox/mlx5/core/en_main.c index acb40770cf0c..a5441dea3463 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -834,7 +834,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params, struct page_pool_params pp_params =3D { 0 }; =20 pp_params.order =3D 0; - pp_params.flags =3D PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | PP_FLAG= _PAGE_FRAG; + pp_params.flags =3D PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; 
pp_params.pool_size =3D pool_size; pp_params.nid =3D node; pp_params.dev =3D rq->pdev; diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wi= reless/mediatek/mt76/mac80211.c index d158320bc15d..fe7cc67b7ee2 100644 --- a/drivers/net/wireless/mediatek/mt76/mac80211.c +++ b/drivers/net/wireless/mediatek/mt76/mac80211.c @@ -566,7 +566,7 @@ int mt76_create_page_pool(struct mt76_dev *dev, struct = mt76_queue *q) { struct page_pool_params pp_params =3D { .order =3D 0, - .flags =3D PP_FLAG_PAGE_FRAG, + .flags =3D 0, .nid =3D NUMA_NO_NODE, .dev =3D dev->dma_dev, }; diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 887e7946a597..6fc5134095ed 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -17,10 +17,8 @@ * Please note DMA-sync-for-CPU is still * device driver responsibility */ -#define PP_FLAG_PAGE_FRAG BIT(2) /* for page frag feature */ #define PP_FLAG_ALL (PP_FLAG_DMA_MAP |\ - PP_FLAG_DMA_SYNC_DEV |\ - PP_FLAG_PAGE_FRAG) + PP_FLAG_DMA_SYNC_DEV) =20 /* * Fast allocation side cache array/stack @@ -45,7 +43,7 @@ struct pp_alloc_cache { =20 /** * struct page_pool_params - page pool parameters - * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_PAGE_FRAG + * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV * @order: 2^order pages on allocation * @pool_size: size of the ptr_ring * @nid: NUMA node id to allocate from pages from diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 953535cab081..2a3671c97ca7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -756,8 +756,7 @@ struct page *page_pool_alloc_frag(struct page_pool *poo= l, unsigned int max_size =3D PAGE_SIZE << pool->p.order; struct page *page =3D pool->frag_page; =20 - if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) || - size > max_size)) + if (WARN_ON(size > max_size)) return NULL; =20 size =3D ALIGN(size, dma_get_cache_alignment()); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 0401f40973a5..ced4549b06c5 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -5764,7 +5764,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_b= uff *from, /* In general, avoid mixing page_pool and non-page_pool allocated * pages within the same SKB. Additionally avoid dealing with clones * with page_pool pages, in case the SKB is using page_pool fragment - * references (PP_FLAG_PAGE_FRAG). Since we only take full page + * references (page_pool_alloc_frag()). Since we only take full page * references for cloned SKBs at the moment that would result in * inconsistent reference counts. 
 * In theory we could take full references if @from is cloned and
-- 
2.33.0

From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Alexander Lobakin, Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: [PATCH net-next v11 4/6] page_pool: introduce page_pool[_cache]_alloc() API
Date: Fri, 13 Oct 2023 14:48:24 +0800
Message-ID: <20231013064827.61135-5-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>

Currently the page pool supports the following use cases:

use case 1: allocate a page without page splitting using the page_pool_alloc_pages() API, if the driver knows that the memory it needs is always bigger than half of the page allocated from the page pool.

use case 2: allocate a page fragment with page splitting using the page_pool_alloc_frag() API, if the driver knows that the memory it needs is always smaller than or equal to half of the page allocated from the page pool.

There are emerging use cases [1] & [2] that are a mix of the two: the driver does not know the size of the memory it needs beforehand, so it may use something like the below to allocate memory with the least memory utilization and performance penalty:

if (size << 1 > max_size)
	page = page_pool_alloc_pages();
else
	page = page_pool_alloc_frag();

To avoid drivers open-coding the above, add the page_pool[_cache]_alloc() API to support this use case, and report the size of the memory that is actually allocated back to the driver through '*size' in order to avoid exacerbating the truesize underestimate problem.

Also rename page_pool_free(), which is used in the destroy process, to __page_pool_destroy() to avoid confusion with the newly added API.

1. https://lore.kernel.org/all/d3ae6bd3537fbce379382ac6a42f67e22f27ece2.1683896626.git.lorenzo@kernel.org/
2.
https://lore.kernel.org/all/20230526054621.18371-3-liangchen.linux@gmail= .com/ Signed-off-by: Yunsheng Lin CC: Lorenzo Bianconi CC: Alexander Duyck CC: Liang Chen CC: Alexander Lobakin --- include/net/page_pool/helpers.h | 65 +++++++++++++++++++++++++++++++++ net/core/page_pool.c | 4 +- 2 files changed, 67 insertions(+), 2 deletions(-) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helper= s.h index 759489c037c7..674f480d9f2e 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -82,6 +82,65 @@ static inline struct page *page_pool_dev_alloc_frag(stru= ct page_pool *pool, return page_pool_alloc_frag(pool, offset, size, gfp); } =20 +static inline struct page *page_pool_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size, gfp_t gfp) +{ + unsigned int max_size =3D PAGE_SIZE << pool->p.order; + struct page *page; + + if ((*size << 1) > max_size) { + *size =3D max_size; + *offset =3D 0; + return page_pool_alloc_pages(pool, gfp); + } + + page =3D page_pool_alloc_frag(pool, offset, *size, gfp); + if (unlikely(!page)) + return NULL; + + /* There is very likely not enough space for another fragment, so append + * the remaining size to the current fragment to avoid truesize + * underestimate problem. + */ + if (pool->frag_offset + *size > max_size) { + *size =3D max_size - *offset; + pool->frag_offset =3D max_size; + } + + return page; +} + +static inline struct page *page_pool_dev_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size) +{ + gfp_t gfp =3D (GFP_ATOMIC | __GFP_NOWARN); + + return page_pool_alloc(pool, offset, size, gfp); +} + +static inline void *page_pool_cache_alloc(struct page_pool *pool, + unsigned int *size, gfp_t gfp) +{ + unsigned int offset; + struct page *page; + + page =3D page_pool_alloc(pool, &offset, size, gfp); + if (unlikely(!page)) + return NULL; + + return page_address(page) + offset; +} + +static inline void *page_pool_dev_cache_alloc(struct page_pool *pool, + unsigned int *size) +{ + gfp_t gfp =3D (GFP_ATOMIC | __GFP_NOWARN); + + return page_pool_cache_alloc(pool, size, gfp); +} + /** * page_pool_get_dma_dir() - Retrieve the stored DMA direction. * @pool: pool from which page was allocated @@ -221,6 +280,12 @@ static inline void page_pool_recycle_direct(struct pag= e_pool *pool, #define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \ (sizeof(dma_addr_t) > sizeof(unsigned long)) =20 +static inline void page_pool_cache_free(struct page_pool *pool, void *data, + bool allow_direct) +{ + page_pool_put_page(pool, virt_to_head_page(data), -1, allow_direct); +} + /** * page_pool_get_dma_addr() - Retrieve the stored DMA address. 
 * @page: page allocated from a page pool
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 2a3671c97ca7..5e409b98aba0 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -809,7 +809,7 @@ static void page_pool_empty_ring(struct page_pool *pool)
 }
 }
 
-static void page_pool_free(struct page_pool *pool)
+static void __page_pool_destroy(struct page_pool *pool)
 {
 if (pool->disconnect)
 pool->disconnect(pool);
@@ -860,7 +860,7 @@ static int page_pool_release(struct page_pool *pool)
 page_pool_scrub(pool);
 inflight = page_pool_inflight(pool);
 if (!inflight)
- page_pool_free(pool);
+ __page_pool_destroy(pool);
 
 return inflight;
 }
-- 
2.33.0

From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Alexander Lobakin, Dima Tisnek, Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet, Jonathan Corbet, Alexei Starovoitov, Daniel Borkmann, John Fastabend, ,
Subject: [PATCH net-next v11 5/6] page_pool: update document about fragment API
Date: Fri, 13 Oct 2023 14:48:25 +0800
Message-ID: <20231013064827.61135-6-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>

As more drivers begin to use the fragment API, update the documentation to help driver authors decide which API to use.
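The choice the updated documentation describes boils down to what the driver knows about the buffer size at allocation time. A hedged sketch follows; my_rx_buf and my_refill_rx_buf() are invented for illustration, and only the page_pool_dev_*() calls come from this series.

struct my_rx_buf {
	struct page	*page;
	unsigned int	offset;
	unsigned int	truesize;	/* size actually reserved for this buffer */
};

static int my_refill_rx_buf(struct page_pool *pool, struct my_rx_buf *buf,
			    unsigned int size)
{
	/* If @size were always larger than half a page,
	 * page_pool_dev_alloc_pages() would be the right call; if it were
	 * always at most half a page, page_pool_dev_alloc_frag() would be.
	 * When the size is only known at runtime, page_pool_dev_alloc()
	 * decides whether to split the page and reports the size actually
	 * reserved back through its last argument.
	 */
	buf->truesize = size;
	buf->page = page_pool_dev_alloc(pool, &buf->offset, &buf->truesize);
	if (!buf->page)
		return -ENOMEM;

	return 0;
}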
Signed-off-by: Yunsheng Lin CC: Lorenzo Bianconi CC: Alexander Duyck CC: Liang Chen CC: Alexander Lobakin CC: Dima Tisnek --- Documentation/networking/page_pool.rst | 4 +- include/net/page_pool/helpers.h | 93 ++++++++++++++++++++++---- 2 files changed, 82 insertions(+), 15 deletions(-) diff --git a/Documentation/networking/page_pool.rst b/Documentation/network= ing/page_pool.rst index 215ebc92752c..0c0705994f51 100644 --- a/Documentation/networking/page_pool.rst +++ b/Documentation/networking/page_pool.rst @@ -58,7 +58,9 @@ a page will cause no race conditions is enough. =20 .. kernel-doc:: include/net/page_pool/helpers.h :identifiers: page_pool_put_page page_pool_put_full_page - page_pool_recycle_direct page_pool_dev_alloc_pages + page_pool_recycle_direct page_pool_cache_free + page_pool_dev_alloc_pages page_pool_dev_alloc_frag + page_pool_dev_alloc page_pool_dev_cache_alloc page_pool_get_dma_addr page_pool_get_dma_dir =20 .. kernel-doc:: net/core/page_pool.c diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helper= s.h index 674f480d9f2e..7550beeacf3d 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -8,23 +8,46 @@ /** * DOC: page_pool allocator * - * The page_pool allocator is optimized for the XDP mode that - * uses one frame per-page, but it can fallback on the - * regular page allocator APIs. + * The page_pool allocator is optimized for recycling page or page fragmen= t used + * by skb packet and xdp frame. * - * Basic use involves replacing alloc_pages() calls with the - * page_pool_alloc_pages() call. Drivers should use - * page_pool_dev_alloc_pages() replacing dev_alloc_pages(). + * Basic use involves replacing napi_alloc_frag() and alloc_pages() calls = with + * page_pool_cache_alloc() and page_pool_alloc(), which allocate memory wi= th or + * without page splitting depending on the requested memory size. * - * The API keeps track of in-flight pages, in order to let API users know - * when it is safe to free a page_pool object. Thus, API users - * must call page_pool_put_page() to free the page, or attach - * the page to a page_pool-aware object like skbs marked with - * skb_mark_for_recycle(). + * If the driver knows that it always requires full pages or its allocatio= ns are + * always smaller than half a page, it can use one of the more specific API + * calls: * - * API users must call page_pool_put_page() once on a page, as it - * will either recycle the page, or in case of refcnt > 1, it will - * release the DMA mapping and in-flight state accounting. + * 1. page_pool_alloc_pages(): allocate memory without page splitting when + * driver knows that the memory it need is always bigger than half of the = page + * allocated from page pool. There is no cache line dirtying for 'struct p= age' + * when a page is recycled back to the page pool. + * + * 2. page_pool_alloc_frag(): allocate memory with page splitting when dri= ver + * knows that the memory it need is always smaller than or equal to half o= f the + * page allocated from page pool. Page splitting enables memory saving and= thus + * avoids TLB/cache miss for data access, but there also is some cost to + * implement page splitting, mainly some cache line dirtying/bouncing for + * 'struct page' and atomic operation for page->pp_frag_count. 
+ * + * The API keeps track of in-flight pages, in order to let API users know = when + * it is safe to free a page_pool object, the API users must call + * page_pool_put_page() or page_pool_cache_free() to free the pp page or t= he pp + * buffer, or attach the pp page or the pp buffer to a page_pool-aware obj= ect + * like skbs marked with skb_mark_for_recycle(). + * + * page_pool_put_page() may be called multi times on the same page if a pa= ge is + * split into multi fragments. For the last fragment, it will either recyc= le the + * page, or in case of page->_refcount > 1, it will release the DMA mappin= g and + * in-flight state accounting. + * + * dma_sync_single_range_for_device() is only called for the last fragment= when + * page_pool is created with PP_FLAG_DMA_SYNC_DEV flag, so it depends on t= he + * last freed fragment to do the sync_for_device operation for all fragmen= ts in + * the same page when a page is split, the API user must setup pool->p.max= _len + * and pool->p.offset correctly and ensure that page_pool_put_page() is ca= lled + * with dma_sync_size being -1 for fragment API. */ #ifndef _NET_PAGE_POOL_HELPERS_H #define _NET_PAGE_POOL_HELPERS_H @@ -73,6 +96,17 @@ static inline struct page *page_pool_dev_alloc_pages(str= uct page_pool *pool) return page_pool_alloc_pages(pool, gfp); } =20 +/** + * page_pool_dev_alloc_frag() - allocate a page fragment. + * @pool: pool from which to allocate + * @offset: offset to the allocated page + * @size: requested size + * + * Get a page fragment from the page allocator or page_pool caches. + * + * Return: + * Return allocated page fragment, otherwise return NULL. + */ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, unsigned int *offset, unsigned int size) @@ -111,6 +145,19 @@ static inline struct page *page_pool_alloc(struct page= _pool *pool, return page; } =20 +/** + * page_pool_dev_alloc() - allocate a page or a page fragment. + * @pool: pool from which to allocate + * @offset: offset to the allocated page + * @size: in as the requested size, out as the allocated size + * + * Get a page or a page fragment from the page allocator or page_pool cach= es + * depending on the requested size in order to allocate memory with least = memory + * utilization and performance penalty. + * + * Return: + * Return allocated page or page fragment, otherwise return NULL. + */ static inline struct page *page_pool_dev_alloc(struct page_pool *pool, unsigned int *offset, unsigned int *size) @@ -133,6 +180,16 @@ static inline void *page_pool_cache_alloc(struct page_= pool *pool, return page_address(page) + offset; } =20 +/** + * page_pool_dev_cache_alloc() - allocate a cache. + * @pool: pool from which to allocate + * @size: in as the requested size, out as the allocated size + * + * Get a cache from the page allocator or page_pool caches. + * + * Return: + * Return the addr for the allocated cache, otherwise return NULL. + */ static inline void *page_pool_dev_cache_alloc(struct page_pool *pool, unsigned int *size) { @@ -280,6 +337,14 @@ static inline void page_pool_recycle_direct(struct pag= e_pool *pool, #define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \ (sizeof(dma_addr_t) > sizeof(unsigned long)) =20 +/** + * page_pool_cache_free() - free a cache into the page_pool + * @pool: pool from which cache was allocated + * @data: addr of cache to be free + * @allow_direct: freed by the consumer, allow lockless caching + * + * Free a cache allocated from page_pool_dev_cache_alloc(). 
+ */
 static inline void page_pool_cache_free(struct page_pool *pool, void *data,
 bool allow_direct)
 {
-- 
2.33.0

From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen, Alexander Lobakin, Eric Dumazet, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, ,
Subject: [PATCH net-next v11 6/6] net: veth: use newly added page pool API for veth with xdp
Date: Fri, 13 Oct 2023 14:48:26 +0800
Message-ID: <20231013064827.61135-7-linyunsheng@huawei.com>
In-Reply-To: <20231013064827.61135-1-linyunsheng@huawei.com>
References: <20231013064827.61135-1-linyunsheng@huawei.com>

Use the page_pool[_cache]_alloc() API to allocate memory with the least memory utilization and performance penalty.

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
CC: Liang Chen
CC: Alexander Lobakin
---
 drivers/net/veth.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0deefd1573cf..470791b0b533 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 if (skb_shared(skb) || skb_head_is_locked(skb) ||
 skb_shinfo(skb)->nr_frags ||
 skb_headroom(skb) < XDP_PACKET_HEADROOM) {
- u32 size, len, max_head_size, off;
+ u32 size, len, max_head_size, off, truesize, page_offset;
 struct sk_buff *nskb;
 struct page *page;
 int i, head_off;
+ void *data;
 
 /* We need a private copy of the skb and data buffers since
 * the ebpf program can modify it.
We segment the original skb @@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_r= q *rq, if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size) goto drop; =20 + size =3D min_t(u32, skb->len, max_head_size); + truesize =3D SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM; + /* Allocate skb head */ - page =3D page_pool_dev_alloc_pages(rq->page_pool); - if (!page) + data =3D page_pool_dev_cache_alloc(rq->page_pool, &truesize); + if (!data) goto drop; =20 - nskb =3D napi_build_skb(page_address(page), PAGE_SIZE); + nskb =3D napi_build_skb(data, truesize); if (!nskb) { - page_pool_put_full_page(rq->page_pool, page, true); + page_pool_cache_free(rq->page_pool, data, true); goto drop; } =20 @@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq = *rq, skb_copy_header(nskb, skb); skb_mark_for_recycle(nskb); =20 - size =3D min_t(u32, skb->len, max_head_size); if (skb_copy_bits(skb, 0, nskb->data, size)) { consume_skb(nskb); goto drop; @@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_r= q *rq, len =3D skb->len - off; =20 for (i =3D 0; i < MAX_SKB_FRAGS && off < skb->len; i++) { - page =3D page_pool_dev_alloc_pages(rq->page_pool); + size =3D min_t(u32, len, PAGE_SIZE); + truesize =3D size; + + page =3D page_pool_dev_alloc(rq->page_pool, &page_offset, + &truesize); if (!page) { consume_skb(nskb); goto drop; } =20 - size =3D min_t(u32, len, PAGE_SIZE); - skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE); + skb_add_rx_frag(nskb, i, page, page_offset, size, + truesize); if (skb_copy_bits(skb, off, page_address(page), size)) { consume_skb(nskb); --=20 2.33.0
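To close the series: the address packing that patch 1/6 relies on can be modelled in a few lines of user-space C. This is an illustration only, under the assumptions stated in that patch (page-aligned mappings, addresses that fit in 32 + PAGE_SHIFT bits); the mock_* names are invented here and the real code is page_pool_set_dma_addr()/page_pool_get_dma_addr() above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MOCK_PAGE_SHIFT 12		/* 4K pages */

struct mock_page32 {
	uint32_t dma_addr;		/* plays the role of unsigned long on 32-bit */
};

/* Returns true when the address cannot be represented, mirroring the
 * warning + unmap fallback added to page_pool_dma_map().
 */
static bool mock_set_dma_addr(struct mock_page32 *p, uint64_t addr)
{
	p->dma_addr = (uint32_t)(addr >> MOCK_PAGE_SHIFT);
	return addr != ((uint64_t)p->dma_addr << MOCK_PAGE_SHIFT);
}

static uint64_t mock_get_dma_addr(const struct mock_page32 *p)
{
	return (uint64_t)p->dma_addr << MOCK_PAGE_SHIFT;
}

int main(void)
{
	struct mock_page32 page;

	/* Page-aligned address below 16TB: survives the round trip. */
	if (!mock_set_dma_addr(&page, 0x0000000123456000ULL))
		printf("ok: %#llx\n",
		       (unsigned long long)mock_get_dma_addr(&page));

	/* Address beyond 32 + 12 bits: detected, the real code would warn
	 * and unmap instead of storing a truncated address.
	 */
	if (mock_set_dma_addr(&page, 0x0001000000000000ULL))
		printf("out of range, cannot be stored\n");
	return 0;
}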