From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Herbert Xu,
    linux-crypto@vger.kernel.org
Subject: [RFC PATCH v2 26/48] crypto: af_alg/hash: Support MSG_SPLICE_PAGES
Date: Wed, 29 Mar 2023 15:13:32 +0100
Message-Id: <20230329141354.516864-27-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>

Make AF_ALG sendmsg() support MSG_SPLICE_PAGES in the hashing code.  This
causes pages to be spliced from the source iterator if possible.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

[!] Note that this makes use of netfs_extract_iter_to_sg() from netfslib.
    This probably needs moving to core code somewhere.

Signed-off-by: David Howells
cc: Herbert Xu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
---
 crypto/af_alg.c     | 11 +++--
 crypto/algif_hash.c | 99 ++++++++++++++++++++++++++++-----------------
 2 files changed, 70 insertions(+), 40 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 7fe8c8db6bb5..686610a4986f 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -543,9 +543,14 @@ void af_alg_free_sg(struct af_alg_sgl *sgl)
 {
 	int i;
 
-	if (sgl->need_unpin)
-		for (i = 0; i < sgl->sgt.nents; i++)
-			unpin_user_page(sg_page(&sgl->sgt.sgl[i]));
+	if (sgl->sgt.sgl) {
+		if (sgl->need_unpin)
+			for (i = 0; i < sgl->sgt.nents; i++)
+				unpin_user_page(sg_page(&sgl->sgt.sgl[i]));
+		if (sgl->sgt.sgl != sgl->sgl)
+			kvfree(sgl->sgt.sgl);
+		sgl->sgt.sgl = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(af_alg_free_sg);
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index f051fa624bd7..b89c2c50cecc 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -64,77 +64,102 @@ static void hash_free_result(struct sock *sk, struct hash_ctx *ctx)
 static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
 			size_t ignored)
 {
-	int limit = ALG_MAX_PAGES * PAGE_SIZE;
 	struct sock *sk = sock->sk;
 	struct alg_sock *ask = alg_sk(sk);
 	struct hash_ctx *ctx = ask->private;
-	long copied = 0;
+	ssize_t copied = 0;
+	size_t len, max_pages = ALG_MAX_PAGES, npages;
+	bool continuing = ctx->more, need_init = false;
 	int err;
 
-	if (limit > sk->sk_sndbuf)
-		limit = sk->sk_sndbuf;
+	/* Don't limit to ALG_MAX_PAGES if the pages are all already pinned. */
+	if (!user_backed_iter(&msg->msg_iter))
+		max_pages = INT_MAX;
+	else
+		max_pages = min_t(size_t, max_pages,
+				  DIV_ROUND_UP(sk->sk_sndbuf, PAGE_SIZE));
 
 	lock_sock(sk);
-	if (!ctx->more) {
+	if (!continuing) {
 		if ((msg->msg_flags & MSG_MORE))
 			hash_free_result(sk, ctx);
-
-		err = crypto_wait_req(crypto_ahash_init(&ctx->req), &ctx->wait);
-		if (err)
-			goto unlock;
+		need_init = true;
 	}
 
 	ctx->more = false;
 
 	while (msg_data_left(msg)) {
-		int len = msg_data_left(msg);
-
-		if (len > limit)
-			len = limit;
-
 		ctx->sgl.sgt.sgl = ctx->sgl.sgl;
 		ctx->sgl.sgt.nents = 0;
 		ctx->sgl.sgt.orig_nents = 0;
 
-		len = netfs_extract_iter_to_sg(&msg->msg_iter, len,
-					       &ctx->sgl.sgt, ALG_MAX_PAGES, 0);
-		if (len < 0) {
-			err = copied ? 0 : len;
-			goto unlock;
+		err = -EIO;
+		npages = iov_iter_npages(&msg->msg_iter, max_pages);
+		if (npages == 0)
+			goto unlock_free;
+
+		if (npages > ARRAY_SIZE(ctx->sgl.sgl)) {
+			err = -ENOMEM;
+			ctx->sgl.sgt.sgl =
+				kvmalloc(array_size(npages, sizeof(*ctx->sgl.sgt.sgl)),
+					 GFP_KERNEL);
+			if (!ctx->sgl.sgt.sgl)
+				goto unlock_free;
 		}
+		sg_init_table(ctx->sgl.sgl, npages);
 
 		ctx->sgl.need_unpin = iov_iter_extract_will_pin(&msg->msg_iter);
 
-		ahash_request_set_crypt(&ctx->req, ctx->sgl.sgt.sgl, NULL, len);
+		err = netfs_extract_iter_to_sg(&msg->msg_iter, LONG_MAX,
+					       &ctx->sgl.sgt, npages, 0);
+		if (err < 0)
+			goto unlock_free;
+		len = err;
+		sg_mark_end(ctx->sgl.sgt.sgl + ctx->sgl.sgt.nents - 1);
 
-		err = crypto_wait_req(crypto_ahash_update(&ctx->req),
-				      &ctx->wait);
-		af_alg_free_sg(&ctx->sgl);
-		if (err) {
-			iov_iter_revert(&msg->msg_iter, len);
-			goto unlock;
+		if (!msg_data_left(msg)) {
+			err = hash_alloc_result(sk, ctx);
+			if (err)
+				goto unlock_free;
 		}
 
-		copied += len;
-	}
+		ahash_request_set_crypt(&ctx->req, ctx->sgl.sgt.sgl, ctx->result, len);
 
-	err = 0;
+		if (!msg_data_left(msg) && !continuing && !(msg->msg_flags & MSG_MORE)) {
+			err = crypto_ahash_digest(&ctx->req);
+		} else {
+			if (need_init) {
+				err = crypto_wait_req(crypto_ahash_init(&ctx->req),
+						      &ctx->wait);
+				if (err)
+					goto unlock_free;
+				need_init = false;
+			}
+
+			if (msg_data_left(msg) || (msg->msg_flags & MSG_MORE))
+				err = crypto_ahash_update(&ctx->req);
+			else
+				err = crypto_ahash_finup(&ctx->req);
+			continuing = true;
+		}
 
-	ctx->more = msg->msg_flags & MSG_MORE;
-	if (!ctx->more) {
-		err = hash_alloc_result(sk, ctx);
+		err = crypto_wait_req(err, &ctx->wait);
 		if (err)
-			goto unlock;
+			goto unlock_free;
 
-		ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0);
-		err = crypto_wait_req(crypto_ahash_final(&ctx->req),
-				      &ctx->wait);
+		copied += len;
+		af_alg_free_sg(&ctx->sgl);
 	}
 
+	ctx->more = msg->msg_flags & MSG_MORE;
+	err = 0;
 unlock:
 	release_sock(sk);
+	return copied ?: err;
 
-	return err ?: copied;
+unlock_free:
+	af_alg_free_sg(&ctx->sgl);
+	goto unlock;
 }
 
 static ssize_t hash_sendpage(struct socket *sock, struct page *page,