From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Menglong Dong
Subject: [PATCH net-next v3 02/18] net: Display info about MSG_SPLICE_PAGES memory handling in proc
Date: Tue, 20 Jun 2023 15:53:21 +0100
Message-ID: <20230620145338.1300897-3-dhowells@redhat.com>
In-Reply-To: <20230620145338.1300897-1-dhowells@redhat.com>
References: <20230620145338.1300897-1-dhowells@redhat.com>

Display information about the memory handling that MSG_SPLICE_PAGES does
when copying slabbed data into page fragments.  For each CPU that has a
cached folio, display the folio's pfn, the current offset within the folio
and the folio's size.  Also display the number of times a cached folio was
refurbished (its page count reset so it could be reused in place) and the
number of times it had to be replaced with a new folio.
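
As an illustration only (the pfn, offset and size values below are made
up; the layout is simply what the seq_printf() calls in this patch
produce), the new file might read something like:

	# cat /proc/pagefrags
	refurb=3 repl=1
	[0] 14d80 12288/16384
	[1] 89ac0 16384/16384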
Miller" cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: Menglong Dong cc: netdev@vger.kernel.org --- net/core/skbuff.c | 42 +++++++++++++++++++++++++++++++++++++++--- 1 file changed, 39 insertions(+), 3 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index d962c93a429d..36605510a76d 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -83,6 +83,7 @@ #include #include #include +#include =20 #include "dev.h" #include "sock_destructor.h" @@ -6758,6 +6759,7 @@ nodefer: __kfree_skb(skb); struct skb_splice_frag_cache { struct folio *folio; void *virt; + unsigned int fsize; unsigned int offset; /* we maintain a pagecount bias, so that we dont dirty cache line * containing page->_refcount every time we allocate a fragment. @@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache { }; =20 static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache); +static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished; + +static int skb_splice_show(struct seq_file *m, void *data) +{ + int cpu; + + seq_printf(m, "refurb=3D%u repl=3D%u\n", + atomic_read(&skb_splice_frag_refurbished), + atomic_read(&skb_splice_frag_replaced)); + + for_each_possible_cpu(cpu) { + const struct skb_splice_frag_cache *cache =3D + per_cpu_ptr(&skb_splice_frag_cache, cpu); + + seq_printf(m, "[%u] %lx %u/%u\n", + cpu, folio_pfn(cache->folio), + cache->offset, cache->fsize); + } + return 0; +} =20 /** * alloc_skb_frag - Allocate a page fragment for using in a socket @@ -6803,17 +6825,21 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp) =20 insufficient_space: /* See if we can refurbish the current folio. */ - if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias)) + if (!folio) goto get_new_folio; + if (!folio_ref_sub_and_test(folio, cache->pagecnt_bias)) + goto replace_folio; if (unlikely(cache->pfmemalloc)) { __folio_put(folio); - goto get_new_folio; + goto replace_folio; } =20 fsize =3D folio_size(folio); if (unlikely(fragsz > fsize)) goto frag_too_big; =20 + atomic_inc(&skb_splice_frag_refurbished); + /* OK, page count is 0, we can safely set it */ folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1); =20 @@ -6822,6 +6848,8 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp) offset =3D fsize; goto try_again; =20 +replace_folio: + atomic_inc(&skb_splice_frag_replaced); get_new_folio: if (!spare) { cache->folio =3D NULL; @@ -6848,6 +6876,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp) =20 cache->folio =3D spare; cache->virt =3D folio_address(spare); + cache->fsize =3D folio_size(spare); folio =3D spare; spare =3D NULL; =20 @@ -6858,7 +6887,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp) =20 /* Reset page count bias and offset to start of new frag */ cache->pagecnt_bias =3D PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset =3D folio_size(folio); + offset =3D cache->fsize; goto try_again; =20 frag_too_big: @@ -7007,3 +7036,10 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, st= ruct iov_iter *iter, return spliced ?: ret; } EXPORT_SYMBOL(skb_splice_from_iter); + +static int skb_splice_init(void) +{ + proc_create_single("pagefrags", S_IFREG | 0444, NULL, &skb_splice_show); + return 0; +} +late_initcall(skb_splice_init);