From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Menglong Dong
Subject: [PATCH net-next 01/17] net: Copy slab data for sendmsg(MSG_SPLICE_PAGES)
Date: Fri, 16 Jun 2023 17:12:44 +0100
Message-ID: <20230616161301.622169-2-dhowells@redhat.com>
In-Reply-To: <20230616161301.622169-1-dhowells@redhat.com>
References: <20230616161301.622169-1-dhowells@redhat.com>

If sendmsg() is passed MSG_SPLICE_PAGES and is given a buffer that contains
some data that's resident in the slab, copy that data rather than returning
EIO.  This can be used by a number of drivers in the kernel, including:
iwarp, ceph/rds, dlm, nvme, ocfs2, drbd.  It could also be used by iscsi,
rxrpc, sunrpc, cifs and probably others.

skb_splice_from_iter() is given its own fragment allocator because
page_frag_alloc_align() can't be used: it does no locking to prevent
parallel callers from racing.
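For illustration only (this helper is hypothetical and not part of the
patch), an in-kernel caller might build such a mixed message as a single
bvec-backed sendmsg() call, with a small kmalloc()'d header followed by a
page of payload; with this change the slab-backed element is copied into a
fragment instead of the whole call failing with -EIO:

        /* Hypothetical caller sketch, not part of this patch. */
        static int example_splice_send(struct socket *sock,
                                       void *hdr, size_t hdr_len,
                                       struct page *data, size_t data_len)
        {
                struct bio_vec bv[2];
                struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

                bvec_set_virt(&bv[0], hdr, hdr_len);      /* may be slab memory */
                bvec_set_page(&bv[1], data, data_len, 0); /* page-backed payload */

                iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bv, 2,
                              hdr_len + data_len);
                return sock_sendmsg(sock, &msg);
        }

Note that page-backed elements handed over this way must remain stable
until the skb has been transmitted; only the slab-backed pieces are copied.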
alloc_skb_frag() uses a separate folio for each CPU and pins itself to the
CPU whilst carving a fragment out of it, re-enabling CPU migration around
the folio allocation itself, which may need to sleep (a simplified sketch
of this pattern is included after the patch).  A whole page could instead
be allocated for each fragment to be copied, as alloc_skb_with_frags()
would do, but that would waste a lot of space (most of the fragments are
likely to be small).

This allows an entire message that consists of, say, a protocol header or
two, a number of pages of data and a protocol footer to be sent using a
single call to sock_sendmsg().  The callers could be made to copy the data
into fragments before calling sendmsg(), but that then penalises them if
MSG_SPLICE_PAGES gets ignored.

Signed-off-by: David Howells
cc: Alexander Duyck
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: Menglong Dong
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h |   5 ++
 net/core/skbuff.c      | 172 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 174 insertions(+), 3 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 91ed66952580..0ba776cd9be8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -5037,6 +5037,11 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
 #endif
 }
 
+void *alloc_skb_frag(size_t fragsz, gfp_t gfp);
+void *copy_skb_frag(const void *s, size_t len, gfp_t gfp);
+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+                             ssize_t maxsize, gfp_t gfp);
+
 ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
                              ssize_t maxsize, gfp_t gfp);
 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index fee2b1c105fe..9bd8d6bf6c21 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6755,6 +6755,146 @@ nodefer:        __kfree_skb(skb);
         smp_call_function_single_async(cpu, &sd->defer_csd);
 }
 
+struct skb_splice_frag_cache {
+        struct folio *folio;
+        void *virt;
+        unsigned int offset;
+        /* We maintain a pagecount bias, so that we don't dirty the cache
+         * line containing page->_refcount every time we allocate a
+         * fragment.
+         */
+        unsigned int pagecnt_bias;
+        bool pfmemalloc;
+};
+
+static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
+
+/**
+ * alloc_skb_frag - Allocate a page fragment for use in a socket
+ * @fragsz: The size of fragment required
+ * @gfp: Allocation flags
+ */
+void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
+{
+        struct skb_splice_frag_cache *cache;
+        struct folio *folio, *spare = NULL;
+        size_t offset, fsize;
+        void *p;
+
+        if (WARN_ON_ONCE(fragsz == 0))
+                fragsz = 1;
+
+        cache = get_cpu_ptr(&skb_splice_frag_cache);
+reload:
+        folio = cache->folio;
+        offset = cache->offset;
+try_again:
+        if (fragsz > offset)
+                goto insufficient_space;
+
+        /* Make the allocation. */
+        cache->pagecnt_bias--;
+        offset = ALIGN_DOWN(offset - fragsz, SMP_CACHE_BYTES);
+        cache->offset = offset;
+        p = cache->virt + offset;
+        put_cpu_ptr(&skb_splice_frag_cache);
+        if (spare)
+                folio_put(spare);
+        return p;
+
+insufficient_space:
+        /* See if we can refurbish the current folio. */
+        if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+                goto get_new_folio;
+        if (unlikely(cache->pfmemalloc)) {
+                __folio_put(folio);
+                goto get_new_folio;
+        }
+
+        fsize = folio_size(folio);
+        if (unlikely(fragsz > fsize))
+                goto frag_too_big;
+
+        /* OK, page count is 0, we can safely set it */
+        folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+        /* Reset page count bias and offset to start of new frag */
+        cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+        offset = fsize;
+        goto try_again;
+
+get_new_folio:
+        if (!spare) {
+                cache->folio = NULL;
+                put_cpu_ptr(&skb_splice_frag_cache);
+
+#if PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE
+                spare = folio_alloc(gfp | __GFP_NOWARN | __GFP_NORETRY |
+                                    __GFP_NOMEMALLOC,
+                                    PAGE_FRAG_CACHE_MAX_ORDER);
+                if (!spare)
+#endif
+                        spare = folio_alloc(gfp, 0);
+                if (!spare)
+                        return NULL;
+
+                cache = get_cpu_ptr(&skb_splice_frag_cache);
+                /* We may now be on a different cpu and/or someone else may
+                 * have refilled it.
+                 */
+                cache->pfmemalloc = folio_is_pfmemalloc(spare);
+                if (cache->folio)
+                        goto reload;
+        }
+
+        cache->folio = spare;
+        cache->virt = folio_address(spare);
+        folio = spare;
+        spare = NULL;
+
+        /* Even if we own the page, we do not use atomic_set().  This would
+         * break get_page_unless_zero() users.
+         */
+        folio_ref_add(folio, PAGE_FRAG_CACHE_MAX_SIZE);
+
+        /* Reset page count bias and offset to start of new frag */
+        cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+        offset = folio_size(folio);
+        goto try_again;
+
+frag_too_big:
+        /*
+         * The caller is trying to allocate a fragment with fragsz > PAGE_SIZE
+         * but the cache isn't big enough to satisfy the request, this may
+         * happen in low memory conditions.  We don't release the cache page
+         * because it could make memory pressure worse so we simply return NULL
+         * here.
+         */
+        cache->offset = offset;
+        put_cpu_ptr(&skb_splice_frag_cache);
+        if (spare)
+                folio_put(spare);
+        return NULL;
+}
+EXPORT_SYMBOL(alloc_skb_frag);
+
+/**
+ * copy_skb_frag - Copy data into a page fragment.
+ * @s: The data to copy
+ * @len: The size of the data
+ * @gfp: Allocation flags
+ */
+void *copy_skb_frag(const void *s, size_t len, gfp_t gfp)
+{
+        void *p;
+
+        p = alloc_skb_frag(len, gfp);
+        if (!p)
+                return NULL;
+
+        return memcpy(p, s, len);
+}
+EXPORT_SYMBOL(copy_skb_frag);
+
 static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
                                  size_t offset, size_t len)
 {
@@ -6808,17 +6948,43 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
                         break;
                 }
 
+                if (space == 0 &&
+                    !skb_can_coalesce(skb, skb_shinfo(skb)->nr_frags,
+                                      pages[0], off)) {
+                        iov_iter_revert(iter, len);
+                        break;
+                }
+
                 i = 0;
                 do {
                         struct page *page = pages[i++];
                         size_t part = min_t(size_t, PAGE_SIZE - off, len);
-
-                        ret = -EIO;
-                        if (WARN_ON_ONCE(!sendpage_ok(page)))
+                        bool put = false;
+
+                        if (PageSlab(page)) {
+                                const void *p;
+                                void *q;
+
+                                p = kmap_local_page(page);
+                                q = copy_skb_frag(p + off, part, gfp);
+                                kunmap_local(p);
+                                if (!q) {
+                                        iov_iter_revert(iter, len);
+                                        ret = -ENOMEM;
+                                        goto out;
+                                }
+                                page = virt_to_page(q);
+                                off = offset_in_page(q);
+                                put = true;
+                        } else if (WARN_ON_ONCE(!sendpage_ok(page))) {
+                                ret = -EIO;
                                 goto out;
+                        }
 
                         ret = skb_append_pagefrags(skb, page, off, part,
                                                    frag_limit);
+                        if (put)
+                                put_page(page);
                         if (ret < 0) {
                                 iov_iter_revert(iter, len);
                                 goto out;
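
As referenced in the commit message, here is a much-simplified sketch of
the per-CPU fragment cache discipline.  It is an illustration only, not the
patch code: the names are hypothetical and the pagecnt_bias trick the real
alloc_skb_frag() uses to avoid a per-fragment atomic is replaced by a plain
per-fragment folio reference.

        /* Stay pinned to a CPU only while carving a fragment out of the
         * cached folio; re-enable migration around the folio allocation,
         * which may sleep.
         */
        struct frag_cache_sketch {
                struct folio *folio;
                unsigned int offset;
        };
        static DEFINE_PER_CPU(struct frag_cache_sketch, frag_cache_sketch);

        static void *frag_alloc_sketch(size_t fragsz, gfp_t gfp)
        {
                struct frag_cache_sketch *cache;
                struct folio *spare;
                void *p;

                if (WARN_ON_ONCE(fragsz > PAGE_SIZE))
                        return NULL;

                cache = get_cpu_ptr(&frag_cache_sketch);
                while (!cache->folio || cache->offset < fragsz) {
                        /* Drop the CPU pin before a potentially sleeping
                         * allocation.
                         */
                        put_cpu_ptr(&frag_cache_sketch);
                        spare = folio_alloc(gfp, 0);
                        if (!spare)
                                return NULL;
                        /* We may be on a different CPU now; install the new
                         * folio, dropping the cache's reference on the old one.
                         */
                        cache = get_cpu_ptr(&frag_cache_sketch);
                        if (cache->folio)
                                folio_put(cache->folio);
                        cache->folio = spare;
                        cache->offset = folio_size(spare);
                }

                cache->offset = ALIGN_DOWN(cache->offset - fragsz,
                                           SMP_CACHE_BYTES);
                /* Take a reference for the fragment itself; the caller puts
                 * it when the fragment is consumed.  The real patch batches
                 * these references with a pagecnt_bias instead.
                 */
                folio_get(cache->folio);
                p = folio_address(cache->folio) + cache->offset;
                put_cpu_ptr(&frag_cache_sketch);
                return p;
        }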