From nobody Sat Feb 7 21:08:25 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org, Herbert Xu
Cc: Colin Ian King, linux-kernel@vger.kernel.org, Eric Biggers, stable@vger.kernel.org
Subject: [PATCH v2 1/2] crypto: scatterwalk - Fix memcpy_sglist() to always succeed
Date: Sat, 15 Nov 2025 15:08:16 -0800
Message-ID: <20251115230817.26070-2-ebiggers@kernel.org>
In-Reply-To: <20251115230817.26070-1-ebiggers@kernel.org>
References: <20251115230817.26070-1-ebiggers@kernel.org>

The original implementation of memcpy_sglist() was broken because it
didn't handle scatterlists that describe exactly the same memory, a
case that many callers rely on.  The current implementation is broken
too: it calls the skcipher_walk functions, which can fail, and it
ignores any errors from those functions.

Fix it by replacing it with a new implementation written from scratch
that always succeeds.  It's also a bit faster, since it avoids the
overhead of skcipher_walk; skcipher_walk includes a lot of
functionality (such as alignmask handling) that's irrelevant here.
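
To illustrate the guarantee being established, here is a minimal,
hypothetical caller sketch (not part of this patch); the helper name is
made up, and it assumes kmalloc()'ed buffers so that sg_init_one() can
be used:

	#include <crypto/scatterwalk.h>
	#include <linux/scatterlist.h>

	/* Hypothetical helper, for illustration only. */
	static void example_copy(void *dst_buf, void *src_buf,
				 unsigned int len)
	{
		struct scatterlist src_sg, dst_sg;

		sg_init_one(&src_sg, src_buf, len);
		sg_init_one(&dst_sg, dst_buf, len);

		/*
		 * void return: with this fix, memcpy_sglist() cannot
		 * fail, and dst_buf == src_buf degenerates to a no-op
		 * instead of being mishandled.
		 */
		memcpy_sglist(&dst_sg, &src_sg, len);
	}
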
Reported-by: Colin Ian King
Closes: https://lore.kernel.org/r/20251114122620.111623-1-coking@nvidia.com
Fixes: 131bdceca1f0 ("crypto: scatterwalk - Add memcpy_sglist")
Fixes: 0f8d42bf128d ("crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers
---
 crypto/scatterwalk.c         | 97 +++++++++++++++++++++++++++++++-----
 include/crypto/scatterwalk.h | 52 +++++++++++--------
 2 files changed, 115 insertions(+), 34 deletions(-)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 1d010e2a1b1a..b95e5974e327 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -99,30 +99,101 @@ void memcpy_to_sglist(struct scatterlist *sg, unsigned int start,
 	scatterwalk_start_at_pos(&walk, sg, start);
 	memcpy_to_scatterwalk(&walk, buf, nbytes);
 }
 EXPORT_SYMBOL_GPL(memcpy_to_sglist);
 
+/**
+ * memcpy_sglist() - Copy data from one scatterlist to another
+ * @dst: The destination scatterlist.  Can be NULL if @nbytes == 0.
+ * @src: The source scatterlist.  Can be NULL if @nbytes == 0.
+ * @nbytes: Number of bytes to copy
+ *
+ * The scatterlists can describe exactly the same memory, in which case this
+ * function is a no-op.  No other overlaps are supported.
+ *
+ * Context: Any context
+ */
 void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
 		   unsigned int nbytes)
 {
-	struct skcipher_walk walk = {};
+	unsigned int src_offset, dst_offset;
 
-	if (unlikely(nbytes == 0)) /* in case sg == NULL */
+	if (unlikely(nbytes == 0)) /* in case src and/or dst is NULL */
 		return;
 
-	walk.total = nbytes;
-
-	scatterwalk_start(&walk.in, src);
-	scatterwalk_start(&walk.out, dst);
+	src_offset = src->offset;
+	dst_offset = dst->offset;
+	for (;;) {
+		/* Compute the length to copy this step. */
+		unsigned int len = min3(src->offset + src->length - src_offset,
+					dst->offset + dst->length - dst_offset,
+					nbytes);
+		struct page *src_page = sg_page(src);
+		struct page *dst_page = sg_page(dst);
+		const void *src_virt;
+		void *dst_virt;
+
+		if (IS_ENABLED(CONFIG_HIGHMEM)) {
+			/* HIGHMEM: we may have to actually map the pages. */
+			const unsigned int src_oip = offset_in_page(src_offset);
+			const unsigned int dst_oip = offset_in_page(dst_offset);
+			const unsigned int limit = PAGE_SIZE;
+
+			/* Further limit len to not cross a page boundary. */
+			len = min3(len, limit - src_oip, limit - dst_oip);
+
+			/* Compute the source and destination pages. */
+			src_page += src_offset / PAGE_SIZE;
+			dst_page += dst_offset / PAGE_SIZE;
+
+			if (src_page != dst_page) {
+				/* Copy between different pages. */
+				memcpy_page(dst_page, dst_oip,
+					    src_page, src_oip, len);
+				flush_dcache_page(dst_page);
+			} else if (src_oip != dst_oip) {
+				/* Copy between different parts of same page. */
+				dst_virt = kmap_local_page(dst_page);
+				memcpy(dst_virt + dst_oip, dst_virt + src_oip,
+				       len);
+				kunmap_local(dst_virt);
+				flush_dcache_page(dst_page);
+			} /* Else, it's the same memory.  No action needed. */
+		} else {
+			/*
+			 * !HIGHMEM: no mapping needed.  Just work in the linear
+			 * buffer of each sg entry.  Note that we can cross page
+			 * boundaries, as they are not significant in this case.
+			 */
+			src_virt = page_address(src_page) + src_offset;
+			dst_virt = page_address(dst_page) + dst_offset;
+			if (src_virt != dst_virt) {
+				memcpy(dst_virt, src_virt, len);
+				if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+					__scatterwalk_flush_dcache_pages(
+						dst_page, dst_offset, len);
+			} /* Else, it's the same memory.  No action needed. */
+		}
+		nbytes -= len;
+		if (nbytes == 0) /* No more to copy? */
+			break;
 
-	skcipher_walk_first(&walk, true);
-	do {
-		if (walk.src.virt.addr != walk.dst.virt.addr)
-			memcpy(walk.dst.virt.addr, walk.src.virt.addr,
-			       walk.nbytes);
-		skcipher_walk_done(&walk, 0);
-	} while (walk.nbytes);
+		/*
+		 * There's more to copy.  Advance the offsets by the length
+		 * copied this step, and advance the sg entries as needed.
+		 */
+		src_offset += len;
+		if (src_offset >= src->offset + src->length) {
+			src = sg_next(src);
+			src_offset = src->offset;
+		}
+		dst_offset += len;
+		if (dst_offset >= dst->offset + dst->length) {
+			dst = sg_next(dst);
+			dst_offset = dst->offset;
+		}
+	}
 }
 EXPORT_SYMBOL_GPL(memcpy_sglist);
 
 struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 				     struct scatterlist *src,
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 83d14376ff2b..f485454e3955 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -225,10 +225,38 @@ static inline void scatterwalk_done_src(struct scatter_walk *walk,
 {
 	scatterwalk_unmap(walk);
 	scatterwalk_advance(walk, nbytes);
 }
 
+/*
+ * Flush the dcache of any pages that overlap the region
+ * [offset, offset + nbytes) relative to base_page.
+ *
+ * This should be called only when ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, to ensure
+ * that all relevant code (including the call to sg_page() in the caller, if
+ * applicable) gets fully optimized out when !ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE.
+ */
+static inline void __scatterwalk_flush_dcache_pages(struct page *base_page,
+						    unsigned int offset,
+						    unsigned int nbytes)
+{
+	unsigned int num_pages;
+
+	base_page += offset / PAGE_SIZE;
+	offset %= PAGE_SIZE;
+
+	/*
+	 * This is an overflow-safe version of
+	 * num_pages = DIV_ROUND_UP(offset + nbytes, PAGE_SIZE).
+	 */
+	num_pages = nbytes / PAGE_SIZE;
+	num_pages += DIV_ROUND_UP(offset + (nbytes % PAGE_SIZE), PAGE_SIZE);
+
+	for (unsigned int i = 0; i < num_pages; i++)
+		flush_dcache_page(base_page + i);
+}
+
 /**
  * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
  * @walk: the scatter_walk
  * @nbytes: the number of bytes processed this step, less than or equal to the
  *	    number of bytes that scatterwalk_next() returned.
@@ -238,31 +266,13 @@ static inline void scatterwalk_done_src(struct scatter_walk *walk,
  */
 static inline void scatterwalk_done_dst(struct scatter_walk *walk,
 					unsigned int nbytes)
 {
 	scatterwalk_unmap(walk);
-	/*
-	 * Explicitly check ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE instead of just
-	 * relying on flush_dcache_page() being a no-op when not implemented,
-	 * since otherwise the BUG_ON in sg_page() does not get optimized out.
-	 * This also avoids having to consider whether the loop would get
-	 * reliably optimized out or not.
-	 */
-	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
-		struct page *base_page;
-		unsigned int offset;
-		int start, end, i;
-
-		base_page = sg_page(walk->sg);
-		offset = walk->offset;
-		start = offset >> PAGE_SHIFT;
-		end = start + (nbytes >> PAGE_SHIFT);
-		end += (offset_in_page(offset) + offset_in_page(nbytes) +
-			PAGE_SIZE - 1) >> PAGE_SHIFT;
-		for (i = start; i < end; i++)
-			flush_dcache_page(base_page + i);
-	}
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+		__scatterwalk_flush_dcache_pages(sg_page(walk->sg),
+						 walk->offset, nbytes);
 	scatterwalk_advance(walk, nbytes);
 }
 
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
 
-- 
2.51.2

From nobody Sat Feb 7 21:08:25 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org, Herbert Xu
Cc: Colin Ian King, linux-kernel@vger.kernel.org, Eric Biggers
Subject: [PATCH v2 2/2] Revert "crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist"
Date: Sat, 15 Nov 2025 15:08:17 -0800
Message-ID: <20251115230817.26070-3-ebiggers@kernel.org>
In-Reply-To: <20251115230817.26070-1-ebiggers@kernel.org>
References: <20251115230817.26070-1-ebiggers@kernel.org>

This reverts commit 0f8d42bf128d349ad490e87d5574d211245e40f1, with the
memcpy_sglist() part dropped.

Now that memcpy_sglist() no longer uses the skcipher_walk code, the
skcipher_walk code can be moved back to where it belongs.

Signed-off-by: Eric Biggers
---
 crypto/scatterwalk.c               | 248 ---------------------------
 crypto/skcipher.c                  | 261 ++++++++++++++++++++++++++++-
 include/crypto/algapi.h            |  12 ++
 include/crypto/internal/skcipher.h |  48 +++++-
 include/crypto/scatterwalk.h       |  65 +------
 5 files changed, 316 insertions(+), 318 deletions(-)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index b95e5974e327..be0e24843806 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -8,29 +8,14 @@
  * 2002 Adam J. Richter
  * 2004 Jean-Luc Cooke
  */
 
 #include
-#include
-#include
 #include
 #include
 #include
 #include
-#include
-
-enum {
-	SKCIPHER_WALK_SLOW = 1 << 0,
-	SKCIPHER_WALK_COPY = 1 << 1,
-	SKCIPHER_WALK_DIFF = 1 << 2,
-	SKCIPHER_WALK_SLEEP = 1 << 3,
-};
-
-static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
-{
-	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
-}
 
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes)
 {
 	struct scatterlist *sg = walk->sg;
 
@@ -215,238 +200,5 @@ struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 	scatterwalk_crypto_chain(dst, sg_next(src), 2);
 
 	return dst;
 }
 EXPORT_SYMBOL_GPL(scatterwalk_ffwd);
-
-static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
-{
-	unsigned alignmask = walk->alignmask;
-	unsigned n;
-	void *buffer;
-
-	if (!walk->buffer)
-		walk->buffer = walk->page;
-	buffer = walk->buffer;
-	if (!buffer) {
-		/* Min size for a buffer of bsize bytes aligned to alignmask */
-		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
-
-		buffer = kzalloc(n, skcipher_walk_gfp(walk));
-		if (!buffer)
-			return skcipher_walk_done(walk, -ENOMEM);
-		walk->buffer = buffer;
-	}
-
-	buffer = PTR_ALIGN(buffer, alignmask + 1);
-	memcpy_from_scatterwalk(buffer, &walk->in, bsize);
-	walk->out.__addr = buffer;
-	walk->in.__addr = walk->out.addr;
-
-	walk->nbytes = bsize;
-	walk->flags |= SKCIPHER_WALK_SLOW;
-
-	return 0;
-}
-
-static int skcipher_next_copy(struct skcipher_walk *walk)
-{
-	void *tmp = walk->page;
-
-	scatterwalk_map(&walk->in);
-	memcpy(tmp, walk->in.addr, walk->nbytes);
-	scatterwalk_unmap(&walk->in);
-	/*
-	 * walk->in is advanced later when the number of bytes actually
-	 * processed (which might be less than walk->nbytes) is known.
-	 */
-
-	walk->in.__addr = tmp;
-	walk->out.__addr = tmp;
-	return 0;
-}
-
-static int skcipher_next_fast(struct skcipher_walk *walk)
-{
-	unsigned long diff;
-
-	diff = offset_in_page(walk->in.offset) -
-	       offset_in_page(walk->out.offset);
-	diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) -
-		(u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT));
-
-	scatterwalk_map(&walk->out);
-	walk->in.__addr = walk->out.__addr;
-
-	if (diff) {
-		walk->flags |= SKCIPHER_WALK_DIFF;
-		scatterwalk_map(&walk->in);
-	}
-
-	return 0;
-}
-
-static int skcipher_walk_next(struct skcipher_walk *walk)
-{
-	unsigned int bsize;
-	unsigned int n;
-
-	n = walk->total;
-	bsize = min(walk->stride, max(n, walk->blocksize));
-	n = scatterwalk_clamp(&walk->in, n);
-	n = scatterwalk_clamp(&walk->out, n);
-
-	if (unlikely(n < bsize)) {
-		if (unlikely(walk->total < walk->blocksize))
-			return skcipher_walk_done(walk, -EINVAL);
-
-slow_path:
-		return skcipher_next_slow(walk, bsize);
-	}
-	walk->nbytes = n;
-
-	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
-		if (!walk->page) {
-			gfp_t gfp = skcipher_walk_gfp(walk);
-
-			walk->page = (void *)__get_free_page(gfp);
-			if (!walk->page)
-				goto slow_path;
-		}
-		walk->flags |= SKCIPHER_WALK_COPY;
-		return skcipher_next_copy(walk);
-	}
-
-	return skcipher_next_fast(walk);
-}
-
-static int skcipher_copy_iv(struct skcipher_walk *walk)
-{
-	unsigned alignmask = walk->alignmask;
-	unsigned ivsize = walk->ivsize;
-	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
-	unsigned size;
-	u8 *iv;
-
-	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
-	size = aligned_stride + ivsize +
-	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
-
-	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
-	if (!walk->buffer)
-		return -ENOMEM;
-
-	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
-
-	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
-	return 0;
-}
-
-int skcipher_walk_first(struct skcipher_walk *walk, bool atomic)
-{
-	if (WARN_ON_ONCE(in_hardirq()))
-		return -EDEADLK;
-
-	walk->flags = atomic ? 0 : SKCIPHER_WALK_SLEEP;
-
-	walk->buffer = NULL;
-	if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
-		int err = skcipher_copy_iv(walk);
-		if (err)
-			return err;
-	}
-
-	walk->page = NULL;
-
-	return skcipher_walk_next(walk);
-}
-EXPORT_SYMBOL_GPL(skcipher_walk_first);
-
-/**
- * skcipher_walk_done() - finish one step of a skcipher_walk
- * @walk: the skcipher_walk
- * @res: number of bytes *not* processed (>= 0) from walk->nbytes,
- *	 or a -errno value to terminate the walk due to an error
- *
- * This function cleans up after one step of walking through the source and
- * destination scatterlists, and advances to the next step if applicable.
- * walk->nbytes is set to the number of bytes available in the next step,
- * walk->total is set to the new total number of bytes remaining, and
- * walk->{src,dst}.virt.addr is set to the next pair of data pointers.  If there
- * is no more data, or if an error occurred (i.e. -errno return), then
- * walk->nbytes and walk->total are set to 0 and all resources owned by the
- * skcipher_walk are freed.
- *
- * Return: 0 or a -errno value.  If @res was a -errno value then it will be
- *	   returned, but other errors may occur too.
- */
-int skcipher_walk_done(struct skcipher_walk *walk, int res)
-{
-	unsigned int n = walk->nbytes; /* num bytes processed this step */
-	unsigned int total = 0; /* new total remaining */
-
-	if (!n)
-		goto finish;
-
-	if (likely(res >= 0)) {
-		n -= res; /* subtract num bytes *not* processed */
-		total = walk->total - n;
-	}
-
-	if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
-				    SKCIPHER_WALK_COPY |
-				    SKCIPHER_WALK_DIFF)))) {
-		scatterwalk_advance(&walk->in, n);
-	} else if (walk->flags & SKCIPHER_WALK_DIFF) {
-		scatterwalk_done_src(&walk->in, n);
-	} else if (walk->flags & SKCIPHER_WALK_COPY) {
-		scatterwalk_advance(&walk->in, n);
-		scatterwalk_map(&walk->out);
-		memcpy(walk->out.addr, walk->page, n);
-	} else { /* SKCIPHER_WALK_SLOW */
-		if (res > 0) {
-			/*
-			 * Didn't process all bytes.  Either the algorithm is
-			 * broken, or this was the last step and it turned out
-			 * the message wasn't evenly divisible into blocks but
-			 * the algorithm requires it.
-			 */
-			res = -EINVAL;
-			total = 0;
-		} else
-			memcpy_to_scatterwalk(&walk->out, walk->out.addr, n);
-		goto dst_done;
-	}
-
-	scatterwalk_done_dst(&walk->out, n);
-dst_done:
-
-	if (res > 0)
-		res = 0;
-
-	walk->total = total;
-	walk->nbytes = 0;
-
-	if (total) {
-		if (walk->flags & SKCIPHER_WALK_SLEEP)
-			cond_resched();
-		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
-				 SKCIPHER_WALK_DIFF);
-		return skcipher_walk_next(walk);
-	}
-
-finish:
-	/* Short-circuit for the common/fast path. */
-	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
-		goto out;
-
-	if (walk->iv != walk->oiv)
-		memcpy(walk->oiv, walk->iv, walk->ivsize);
-	if (walk->buffer != walk->page)
-		kfree(walk->buffer);
-	if (walk->page)
-		free_page((unsigned long)walk->page);
-
-out:
-	return res;
-}
-EXPORT_SYMBOL_GPL(skcipher_walk_done);
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8fa5d9686d08..14a820cb06c7 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -15,28 +15,273 @@
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 #include
 #include "skcipher.h"
 
 #define CRYPTO_ALG_TYPE_SKCIPHER_MASK 0x0000000e
 
+enum {
+	SKCIPHER_WALK_SLOW = 1 << 0,
+	SKCIPHER_WALK_COPY = 1 << 1,
+	SKCIPHER_WALK_DIFF = 1 << 2,
+	SKCIPHER_WALK_SLEEP = 1 << 3,
+};
+
 static const struct crypto_type crypto_skcipher_type;
 
+static int skcipher_walk_next(struct skcipher_walk *walk);
+
+static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
+{
+	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+}
+
 static inline struct skcipher_alg *__crypto_skcipher_alg(
 	struct crypto_alg *alg)
 {
 	return container_of(alg, struct skcipher_alg, base);
 }
 
+/**
+ * skcipher_walk_done() - finish one step of a skcipher_walk
+ * @walk: the skcipher_walk
+ * @res: number of bytes *not* processed (>= 0) from walk->nbytes,
+ *	 or a -errno value to terminate the walk due to an error
+ *
+ * This function cleans up after one step of walking through the source and
+ * destination scatterlists, and advances to the next step if applicable.
+ * walk->nbytes is set to the number of bytes available in the next step,
+ * walk->total is set to the new total number of bytes remaining, and
+ * walk->{src,dst}.virt.addr is set to the next pair of data pointers.  If there
+ * is no more data, or if an error occurred (i.e. -errno return), then
+ * walk->nbytes and walk->total are set to 0 and all resources owned by the
+ * skcipher_walk are freed.
+ *
+ * Return: 0 or a -errno value.  If @res was a -errno value then it will be
+ *	   returned, but other errors may occur too.
+ */
+int skcipher_walk_done(struct skcipher_walk *walk, int res)
+{
+	unsigned int n = walk->nbytes; /* num bytes processed this step */
+	unsigned int total = 0; /* new total remaining */
+
+	if (!n)
+		goto finish;
+
+	if (likely(res >= 0)) {
+		n -= res; /* subtract num bytes *not* processed */
+		total = walk->total - n;
+	}
+
+	if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
+				    SKCIPHER_WALK_COPY |
+				    SKCIPHER_WALK_DIFF)))) {
+		scatterwalk_advance(&walk->in, n);
+	} else if (walk->flags & SKCIPHER_WALK_DIFF) {
+		scatterwalk_done_src(&walk->in, n);
+	} else if (walk->flags & SKCIPHER_WALK_COPY) {
+		scatterwalk_advance(&walk->in, n);
+		scatterwalk_map(&walk->out);
+		memcpy(walk->out.addr, walk->page, n);
+	} else { /* SKCIPHER_WALK_SLOW */
+		if (res > 0) {
+			/*
+			 * Didn't process all bytes.  Either the algorithm is
+			 * broken, or this was the last step and it turned out
+			 * the message wasn't evenly divisible into blocks but
+			 * the algorithm requires it.
+			 */
+			res = -EINVAL;
+			total = 0;
+		} else
+			memcpy_to_scatterwalk(&walk->out, walk->out.addr, n);
+		goto dst_done;
+	}
+
+	scatterwalk_done_dst(&walk->out, n);
+dst_done:
+
+	if (res > 0)
+		res = 0;
+
+	walk->total = total;
+	walk->nbytes = 0;
+
+	if (total) {
+		if (walk->flags & SKCIPHER_WALK_SLEEP)
+			cond_resched();
+		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
+				 SKCIPHER_WALK_DIFF);
+		return skcipher_walk_next(walk);
+	}
+
+finish:
+	/* Short-circuit for the common/fast path. */
+	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+		goto out;
+
+	if (walk->iv != walk->oiv)
+		memcpy(walk->oiv, walk->iv, walk->ivsize);
+	if (walk->buffer != walk->page)
+		kfree(walk->buffer);
+	if (walk->page)
+		free_page((unsigned long)walk->page);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(skcipher_walk_done);
+
+static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
+{
+	unsigned alignmask = walk->alignmask;
+	unsigned n;
+	void *buffer;
+
+	if (!walk->buffer)
+		walk->buffer = walk->page;
+	buffer = walk->buffer;
+	if (!buffer) {
+		/* Min size for a buffer of bsize bytes aligned to alignmask */
+		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+		buffer = kzalloc(n, skcipher_walk_gfp(walk));
+		if (!buffer)
+			return skcipher_walk_done(walk, -ENOMEM);
+		walk->buffer = buffer;
+	}
+
+	buffer = PTR_ALIGN(buffer, alignmask + 1);
+	memcpy_from_scatterwalk(buffer, &walk->in, bsize);
+	walk->out.__addr = buffer;
+	walk->in.__addr = walk->out.addr;
+
+	walk->nbytes = bsize;
+	walk->flags |= SKCIPHER_WALK_SLOW;
+
+	return 0;
+}
+
+static int skcipher_next_copy(struct skcipher_walk *walk)
+{
+	void *tmp = walk->page;
+
+	scatterwalk_map(&walk->in);
+	memcpy(tmp, walk->in.addr, walk->nbytes);
+	scatterwalk_unmap(&walk->in);
+	/*
+	 * walk->in is advanced later when the number of bytes actually
+	 * processed (which might be less than walk->nbytes) is known.
+	 */
+
+	walk->in.__addr = tmp;
+	walk->out.__addr = tmp;
+	return 0;
+}
+
+static int skcipher_next_fast(struct skcipher_walk *walk)
+{
+	unsigned long diff;
+
+	diff = offset_in_page(walk->in.offset) -
+	       offset_in_page(walk->out.offset);
+	diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) -
+		(u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT));
+
+	scatterwalk_map(&walk->out);
+	walk->in.__addr = walk->out.__addr;
+
+	if (diff) {
+		walk->flags |= SKCIPHER_WALK_DIFF;
+		scatterwalk_map(&walk->in);
+	}
+
+	return 0;
+}
+
+static int skcipher_walk_next(struct skcipher_walk *walk)
+{
+	unsigned int bsize;
+	unsigned int n;
+
+	n = walk->total;
+	bsize = min(walk->stride, max(n, walk->blocksize));
+	n = scatterwalk_clamp(&walk->in, n);
+	n = scatterwalk_clamp(&walk->out, n);
+
+	if (unlikely(n < bsize)) {
+		if (unlikely(walk->total < walk->blocksize))
+			return skcipher_walk_done(walk, -EINVAL);
+
+slow_path:
+		return skcipher_next_slow(walk, bsize);
+	}
+	walk->nbytes = n;
+
+	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
+		if (!walk->page) {
+			gfp_t gfp = skcipher_walk_gfp(walk);
+
+			walk->page = (void *)__get_free_page(gfp);
+			if (!walk->page)
+				goto slow_path;
+		}
+		walk->flags |= SKCIPHER_WALK_COPY;
+		return skcipher_next_copy(walk);
+	}
+
+	return skcipher_next_fast(walk);
+}
+
+static int skcipher_copy_iv(struct skcipher_walk *walk)
+{
+	unsigned alignmask = walk->alignmask;
+	unsigned ivsize = walk->ivsize;
+	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
+	unsigned size;
+	u8 *iv;
+
+	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
+	size = aligned_stride + ivsize +
+	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
+	if (!walk->buffer)
+		return -ENOMEM;
+
+	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
+
+	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
+	return 0;
+}
+
+static int skcipher_walk_first(struct skcipher_walk *walk)
+{
+	if (WARN_ON_ONCE(in_hardirq()))
+		return -EDEADLK;
+
+	walk->buffer = NULL;
+	if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
+		int err = skcipher_copy_iv(walk);
+		if (err)
+			return err;
+	}
+
+	walk->page = NULL;
+
+	return skcipher_walk_next(walk);
+}
+
 int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
 		       struct skcipher_request *__restrict req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg;
@@ -47,12 +292,14 @@ int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
-	if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP))
-		atomic = true;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
@@ -65,11 +312,11 @@ int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	return skcipher_walk_first(walk, atomic);
+	return skcipher_walk_first(walk);
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *__restrict walk,
 				     struct aead_request *__restrict req,
@@ -78,12 +325,14 @@ static int skcipher_walk_aead_common(struct skcipher_walk *__restrict walk,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
-	if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP))
-		atomic = true;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
 	scatterwalk_start_at_pos(&walk->in, req->src, req->assoclen);
@@ -92,11 +341,11 @@ static int skcipher_walk_aead_common(struct skcipher_walk *__restrict walk,
 	walk->blocksize = crypto_aead_blocksize(tfm);
 	walk->stride = crypto_aead_chunksize(tfm);
 	walk->ivsize = crypto_aead_ivsize(tfm);
 	walk->alignmask = crypto_aead_alignmask(tfm);
 
-	return skcipher_walk_first(walk, atomic);
+	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *__restrict walk,
 			       struct aead_request *__restrict req,
 			       bool atomic)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index fc4574940636..05deea9dac5e 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -105,10 +105,22 @@ struct crypto_queue {
 
 	unsigned int qlen;
 	unsigned int max_qlen;
 };
 
+struct scatter_walk {
+	/* Must be the first member, see struct skcipher_walk. */
+	union {
+		void *const addr;
+
+		/* Private API field, do not touch. */
+		union crypto_no_such_thing *__addr;
+	};
+	struct scatterlist *sg;
+	unsigned int offset;
+};
+
 struct crypto_attr_alg {
 	char name[CRYPTO_MAX_ALG_NAME];
 };
 
 struct crypto_attr_type {
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index d5aa535263f6..0cad8e7364c8 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -8,11 +8,10 @@
 #ifndef _CRYPTO_INTERNAL_SKCIPHER_H
 #define _CRYPTO_INTERNAL_SKCIPHER_H
 
 #include
 #include
-#include
 #include
 #include
 
 /*
  * Set this if your algorithm is sync but needs a reqsize larger
@@ -53,10 +52,51 @@ struct crypto_skcipher_spawn {
 
 struct crypto_lskcipher_spawn {
 	struct crypto_spawn base;
 };
 
+struct skcipher_walk {
+	union {
+		/* Virtual address of the source. */
+		struct {
+			struct {
+				const void *const addr;
+			} virt;
+		} src;
+
+		/* Private field for the API, do not use. */
+		struct scatter_walk in;
+	};
+
+	union {
+		/* Virtual address of the destination. */
+		struct {
+			struct {
+				void *const addr;
+			} virt;
+		} dst;
+
+		/* Private field for the API, do not use. */
+		struct scatter_walk out;
+	};
+
+	unsigned int nbytes;
+	unsigned int total;
+
+	u8 *page;
+	u8 *buffer;
+	u8 *oiv;
+	void *iv;
+
+	unsigned int ivsize;
+
+	int flags;
+	unsigned int blocksize;
+	unsigned int stride;
+	unsigned int alignmask;
+};
+
 static inline struct crypto_instance *skcipher_crypto_instance(
 	struct skcipher_instance *inst)
 {
 	return &inst->s.base;
 }
@@ -169,20 +209,26 @@ void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
 int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
 void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
 int lskcipher_register_instance(struct crypto_template *tmpl,
 				struct lskcipher_instance *inst);
 
+int skcipher_walk_done(struct skcipher_walk *walk, int res);
 int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
 		       struct skcipher_request *__restrict req, bool atomic);
 int skcipher_walk_aead_encrypt(struct skcipher_walk *__restrict walk,
 			       struct aead_request *__restrict req,
 			       bool atomic);
 int skcipher_walk_aead_decrypt(struct skcipher_walk *__restrict walk,
 			       struct aead_request *__restrict req,
 			       bool atomic);
 
+static inline void skcipher_walk_abort(struct skcipher_walk *walk)
+{
+	skcipher_walk_done(walk, -ECANCELED);
+}
+
 static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm)
 {
 	return crypto_tfm_ctx(&tfm->base);
 }
 
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index f485454e3955..624fab589c2c 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -9,68 +9,15 @@
  */
 
 #ifndef _CRYPTO_SCATTERWALK_H
 #define _CRYPTO_SCATTERWALK_H
 
-#include
+#include
+
 #include
 #include
 #include
-#include
-
-struct scatter_walk {
-	/* Must be the first member, see struct skcipher_walk. */
-	union {
-		void *const addr;
-
-		/* Private API field, do not touch. */
-		union crypto_no_such_thing *__addr;
-	};
-	struct scatterlist *sg;
-	unsigned int offset;
-};
-
-struct skcipher_walk {
-	union {
-		/* Virtual address of the source. */
-		struct {
-			struct {
-				const void *const addr;
-			} virt;
-		} src;
-
-		/* Private field for the API, do not use. */
-		struct scatter_walk in;
-	};
-
-	union {
-		/* Virtual address of the destination. */
-		struct {
-			struct {
-				void *const addr;
-			} virt;
-		} dst;
-
-		/* Private field for the API, do not use. */
-		struct scatter_walk out;
-	};
-
-	unsigned int nbytes;
-	unsigned int total;
-
-	u8 *page;
-	u8 *buffer;
-	u8 *oiv;
-	void *iv;
-
-	unsigned int ivsize;
-
-	int flags;
-	unsigned int blocksize;
-	unsigned int stride;
-	unsigned int alignmask;
-};
 
 static inline void scatterwalk_crypto_chain(struct scatterlist *head,
 					    struct scatterlist *sg, int num)
 {
 	if (sg)
@@ -304,14 +251,6 @@ static inline void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 
 struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 				     struct scatterlist *src,
 				     unsigned int len);
 
-int skcipher_walk_first(struct skcipher_walk *walk, bool atomic);
-int skcipher_walk_done(struct skcipher_walk *walk, int res);
-
-static inline void skcipher_walk_abort(struct skcipher_walk *walk)
-{
-	skcipher_walk_done(walk, -ECANCELED);
-}
-
 #endif /* _CRYPTO_SCATTERWALK_H */
-- 
2.51.2