From: Yunsheng Lin
Subject: [PATCH net-next v12 5/5] net: veth: use newly added page pool API for veth with xdp
Date: Fri, 20 Oct 2023 17:59:52 +0800
Message-ID: <20231020095952.11055-6-linyunsheng@huawei.com>
In-Reply-To: <20231020095952.11055-1-linyunsheng@huawei.com>
References: <20231020095952.11055-1-linyunsheng@huawei.com>
Cc: Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Liang Chen,
 Alexander Lobakin, Eric Dumazet, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend

Use the page_pool_alloc() API to allocate memory with the least memory
utilization and performance penalty.

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
CC: Liang Chen
CC: Alexander Lobakin
---
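Note below the cut line, for reviewers rather than for git history: the
head allocation now follows the pattern condensed below. This is a minimal
sketch under assumptions, not the exact veth code: build_pp_head_skb() is
a hypothetical helper, error handling is trimmed, and the generic
XDP_PACKET_HEADROOM stands in for veth's own VETH_XDP_HEADROOM.

/* Minimal sketch of the head-allocation pattern this patch adopts; it is
 * not the exact veth code. build_pp_head_skb() is a hypothetical helper,
 * and XDP_PACKET_HEADROOM stands in for veth's VETH_XDP_HEADROOM.
 */
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>
#include <net/xdp.h>

static struct sk_buff *build_pp_head_skb(struct page_pool *pool, u32 size)
{
	u32 truesize = SKB_HEAD_ALIGN(size) + XDP_PACKET_HEADROOM;
	struct sk_buff *skb;
	void *va;

	/* truesize is an in/out parameter: the pool writes back how much
	 * it actually reserved, which may be a sub-page fragment.
	 */
	va = page_pool_dev_alloc_va(pool, &truesize);
	if (!va)
		return NULL;

	skb = napi_build_skb(va, truesize);
	if (!skb) {
		/* a va-based alloc is undone with the va-based free */
		page_pool_free_va(pool, va, true);
		return NULL;
	}

	skb_reserve(skb, XDP_PACKET_HEADROOM);
	skb_mark_for_recycle(skb);
	return skb;
}

Passing the truesize written back by the pool to napi_build_skb() is what
lets the pool serve less than a full page without breaking skb memory
accounting.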
 drivers/net/veth.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0deefd1573cf..9980517ed8b0 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
 	    skb_shinfo(skb)->nr_frags ||
 	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		u32 size, len, max_head_size, off;
+		u32 size, len, max_head_size, off, truesize, page_offset;
 		struct sk_buff *nskb;
 		struct page *page;
 		int i, head_off;
+		void *va;
 
 		/* We need a private copy of the skb and data buffers since
 		 * the ebpf program can modify it. We segment the original skb
@@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
 			goto drop;
 
+		size = min_t(u32, skb->len, max_head_size);
+		truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
+
 		/* Allocate skb head */
-		page = page_pool_dev_alloc_pages(rq->page_pool);
-		if (!page)
+		va = page_pool_dev_alloc_va(rq->page_pool, &truesize);
+		if (!va)
 			goto drop;
 
-		nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+		nskb = napi_build_skb(va, truesize);
 		if (!nskb) {
-			page_pool_put_full_page(rq->page_pool, page, true);
+			page_pool_free_va(rq->page_pool, va, true);
 			goto drop;
 		}
 
@@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 		skb_copy_header(nskb, skb);
 		skb_mark_for_recycle(nskb);
 
-		size = min_t(u32, skb->len, max_head_size);
 		if (skb_copy_bits(skb, 0, nskb->data, size)) {
 			consume_skb(nskb);
 			goto drop;
@@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 			len = skb->len - off;
 
 		for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
-			page = page_pool_dev_alloc_pages(rq->page_pool);
+			size = min_t(u32, len, PAGE_SIZE);
+			truesize = size;
+
+			page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+						   &truesize);
 			if (!page) {
 				consume_skb(nskb);
 				goto drop;
 			}
 
-			size = min_t(u32, len, PAGE_SIZE);
-			skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+			skb_add_rx_frag(nskb, i, page, page_offset, size,
+					truesize);
 			if (skb_copy_bits(skb, off, page_address(page),
 					  size)) {
 				consume_skb(nskb);
-- 
2.33.0
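The frag-filling loop changed in the last hunk can be condensed the same
way. Again a hedged sketch, not kernel code: fill_pp_frags() is a
hypothetical helper, and unwinding of frags already attached to the skb
on failure is omitted (the real code relies on consume_skb() for that).

/* Hedged sketch of the frag-filling pattern above; fill_pp_frags() is a
 * hypothetical helper, and failure unwinding is left to the caller.
 */
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

static int fill_pp_frags(struct page_pool *pool, struct sk_buff *skb, u32 len)
{
	u32 size, truesize, page_offset;
	struct page *page;
	int i;

	for (i = 0; i < MAX_SKB_FRAGS && len; i++) {
		size = min_t(u32, len, PAGE_SIZE);
		truesize = size;

		/* may return a fragment of a larger page; page_offset and
		 * truesize describe exactly what was handed out
		 */
		page = page_pool_dev_alloc(pool, &page_offset, &truesize);
		if (!page)
			return -ENOMEM;

		skb_add_rx_frag(skb, i, page, page_offset, size, truesize);
		len -= size;
	}

	return 0;
}

Handing page_offset and truesize through to skb_add_rx_frag() is the key
difference from the old page_pool_dev_alloc_pages() call, which consumed
and accounted a full page for every fragment.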