From: Yunsheng Lin
Cc: Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen,
 Alexander Lobakin, Eric Dumazet, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend
Subject: [PATCH net-next v8 6/6] net: veth: use newly added page pool API for veth with xdp
Date: Tue, 12 Sep 2023 16:31:25 +0800
Message-ID: <20230912083126.65484-7-linyunsheng@huawei.com>
In-Reply-To: <20230912083126.65484-1-linyunsheng@huawei.com>
References: <20230912083126.65484-1-linyunsheng@huawei.com>

Use the page_pool[_cache]_alloc() API to allocate memory with the least
memory utilization and performance penalty.

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
CC: Liang Chen
CC: Alexander Lobakin
---
 drivers/net/veth.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 9c6f4f83f22b..a31a792aa00d 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
 	    skb_shinfo(skb)->nr_frags ||
 	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		u32 size, len, max_head_size, off;
+		u32 size, len, max_head_size, off, truesize, page_offset;
 		struct sk_buff *nskb;
 		struct page *page;
 		int i, head_off;
+		void *data;
 
 		/* We need a private copy of the skb and data buffers since
 		 * the ebpf program can modify it. We segment the original skb
@@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
 			goto drop;
 
+		size = min_t(u32, skb->len, max_head_size);
+		truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
+
 		/* Allocate skb head */
-		page = page_pool_dev_alloc_pages(rq->page_pool);
-		if (!page)
+		data = page_pool_dev_cache_alloc(rq->page_pool, &truesize);
+		if (!data)
 			goto drop;
 
-		nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+		nskb = napi_build_skb(data, truesize);
 		if (!nskb) {
-			page_pool_put_full_page(rq->page_pool, page, true);
+			page_pool_cache_free(rq->page_pool, data, true);
 			goto drop;
 		}
 
@@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		skb_copy_header(nskb, skb);
 		skb_mark_for_recycle(nskb);
 
-		size = min_t(u32, skb->len, max_head_size);
 		if (skb_copy_bits(skb, 0, nskb->data, size)) {
 			consume_skb(nskb);
 			goto drop;
@@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 			len = skb->len - off;
 
 			for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
-				page = page_pool_dev_alloc_pages(rq->page_pool);
+				size = min_t(u32, len, PAGE_SIZE);
+				truesize = size;
+
+				page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+							   &truesize);
 				if (!page) {
 					consume_skb(nskb);
 					goto drop;
 				}
 
-				size = min_t(u32, len, PAGE_SIZE);
-				skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+				skb_add_rx_frag(nskb, i, page, page_offset, size,
+						truesize);
 				if (skb_copy_bits(skb, off, page_address(page),
 						  size)) {
 					consume_skb(nskb);
-- 
2.33.0
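
For readers following the series, below is a rough sketch of how the two
allocators used in this patch fit together. page_pool_dev_cache_alloc()
(introduced earlier in this series) returns a virtual address for a buffer of
at least the requested size and updates the size argument to the truesize
actually carved out of the underlying page; page_pool_dev_alloc() additionally
reports the offset of the fragment within the returned page, so fragments from
several callers can share one page. This is not the veth code itself:
my_copy_skb(), its headroom parameter, and the error-handling layout are
hypothetical, and only the page_pool_*() signatures are taken from the diff
above.

/* Illustrative sketch only: copy an skb into page_pool-backed memory
 * using the frag-aware allocators from this series. my_copy_skb() and
 * its parameters are hypothetical.
 */
#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>	/* <net/page_pool.h> on older trees */

static struct sk_buff *my_copy_skb(struct page_pool *pool,
				   const struct sk_buff *skb,
				   u32 max_head_size, u32 headroom)
{
	u32 size, truesize, page_offset, off, len;
	struct sk_buff *nskb;
	void *data;
	int i;

	/* Head: request only what the linear part needs; the pool may
	 * return a fragment of a page and updates truesize to the
	 * amount actually consumed.
	 */
	size = min_t(u32, skb->len, max_head_size);
	truesize = SKB_HEAD_ALIGN(size) + headroom;
	data = page_pool_dev_cache_alloc(pool, &truesize);
	if (!data)
		return NULL;

	nskb = napi_build_skb(data, truesize);
	if (!nskb) {
		/* hand the fragment back to the pool on failure */
		page_pool_cache_free(pool, data, true);
		return NULL;
	}

	skb_reserve(nskb, headroom);
	skb_mark_for_recycle(nskb);	/* recycle via page_pool on free */
	if (skb_copy_bits(skb, 0, nskb->data, size))
		goto err_free_skb;
	skb_put(nskb, size);

	/* Frags: each call may carve a fragment out of a shared page;
	 * page_offset says where the fragment starts, truesize how much
	 * of the page it accounts for.
	 */
	off = size;
	len = skb->len - off;
	for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
		struct page *page;

		size = min_t(u32, len, PAGE_SIZE);
		truesize = size;
		page = page_pool_dev_alloc(pool, &page_offset, &truesize);
		if (!page)
			goto err_free_skb;

		skb_add_rx_frag(nskb, i, page, page_offset, size, truesize);
		if (skb_copy_bits(skb, off, page_address(page) + page_offset,
				  size))
			goto err_free_skb;

		off += size;
		len -= size;
	}

	return nskb;

err_free_skb:
	/* nskb owns the head and any frags added so far; freeing it
	 * returns them to the pool thanks to skb_mark_for_recycle().
	 */
	consume_skb(nskb);
	return NULL;
}

The design point is that callers no longer over-commit a full page per
allocation: the pool hands out fragments sized to the request and reports the
true cost back through the size argument, which is then fed to
napi_build_skb()/skb_add_rx_frag() so skb truesize accounting stays accurate.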