From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29794CE79CE for ; Wed, 20 Sep 2023 13:05:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236463AbjITNFT (ORCPT ); Wed, 20 Sep 2023 09:05:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236435AbjITNFR (ORCPT ); Wed, 20 Sep 2023 09:05:17 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6C511CA for ; Wed, 20 Sep 2023 06:04:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215060; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=71Btu3XYwWZK+FmqHCgDmjFCfU/69a0+IvtQoWaiN/g=; b=Lpd/WtB5M6F9AuiwDbMXORBM0P9a6lwwlkQg7zeMGDVOjXkwRHdo9Kw6IUPTQ8s9X0A14w 7+XIl7MA9JXpglHQ4spKpMXepYZK2L7jY5+RkwUixs46Mas1969yjaGqPU48qWYlAqEPss 7emQ75lTS9yj8629c6BW3Se778QDpSI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-499-gqjx5typNpO8hKhRJZWB-g-1; Wed, 20 Sep 2023 09:04:15 -0400 X-MC-Unique: gqjx5typNpO8hKhRJZWB-g-1 Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com [10.11.54.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 540B189C6A3; Wed, 20 Sep 2023 13:04:14 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 09F2E492B05; Wed, 20 Sep 2023 13:04:08 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Johannes Thumshirn , Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 1/9] iov_iter: Fix some checkpatch complaints in kunit tests Date: Wed, 20 Sep 2023 14:03:52 +0100 Message-ID: <20230920130400.203330-2-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Fix some checkpatch complaints in the new iov_iter kunit tests: (1) Some lines had eight spaces instead of a tab at the start. (2) Checkpatch doesn't like (void*)(unsigned long)0xnnnnnULL, so switch to using POISON_POINTER_DELTA plus an offset instead. 
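To illustrate the second change, here is a minimal standalone sketch of the poisoning idiom the tests switch to. It is not the kernel test itself: the fallback POISON_POINTER_DELTA definition is only a stand-in so the sketch builds outside the kernel tree, and the slot count is arbitrary.

#include <stdio.h>

#ifndef POISON_POINTER_DELTA
#define POISON_POINTER_DELTA 0xdead000000000000UL	/* stand-in value for this sketch only */
#endif

int main(void)
{
	void *pagelist[8];
	size_t i;

	/* Pre-poison every slot with a recognisable, non-NULL value so an
	 * entry the code under test never wrote is easy to spot, without
	 * the bare (void *)(unsigned long)0x... cast that checkpatch
	 * complains about.
	 */
	for (i = 0; i < sizeof(pagelist) / sizeof(pagelist[0]); i++)
		pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;

	for (i = 0; i < sizeof(pagelist) / sizeof(pagelist[0]); i++)
		printf("slot %zu: %p\n", i, pagelist[i]);
	return 0;
}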
Reported-by: Johannes Thumshirn Signed-off-by: David Howells cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org --- lib/kunit_iov_iter.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 859b67c4d697..4a6c0efd33f5 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -53,7 +53,7 @@ static void *__init iov_kunit_create_buffer(struct kunit = *test, void *buffer; =20 pages =3D kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages); *ppages =3D pages; =20 got =3D alloc_pages_bulk_array(GFP_KERNEL, npages, pages); @@ -63,7 +63,7 @@ static void *__init iov_kunit_create_buffer(struct kunit = *test, } =20 buffer =3D vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); =20 kunit_add_action_or_reset(test, iov_kunit_unmap, buffer); return buffer; @@ -548,7 +548,7 @@ static void __init iov_kunit_extract_pages_kvec(struct = kunit *test) size_t offset0 =3D LONG_MAX; =20 for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) - pagelist[i] =3D (void *)(unsigned long)0xaa55aa55aa55aa55ULL; + pagelist[i] =3D (void *)POISON_POINTER_DELTA + 0x5a; =20 len =3D iov_iter_extract_pages(&iter, &pages, 100 * 1024, ARRAY_SIZE(pagelist), 0, &offset0); @@ -626,7 +626,7 @@ static void __init iov_kunit_extract_pages_bvec(struct = kunit *test) size_t offset0 =3D LONG_MAX; =20 for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) - pagelist[i] =3D (void *)(unsigned long)0xaa55aa55aa55aa55ULL; + pagelist[i] =3D (void *)POISON_POINTER_DELTA + 0x5a; =20 len =3D iov_iter_extract_pages(&iter, &pages, 100 * 1024, ARRAY_SIZE(pagelist), 0, &offset0); @@ -709,7 +709,7 @@ static void __init iov_kunit_extract_pages_xarray(struc= t kunit *test) size_t offset0 =3D LONG_MAX; =20 for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) - pagelist[i] =3D (void *)(unsigned long)0xaa55aa55aa55aa55ULL; + pagelist[i] =3D (void *)POISON_POINTER_DELTA + 0x5a; =20 len =3D iov_iter_extract_pages(&iter, &pages, 100 * 1024, ARRAY_SIZE(pagelist), 0, &offset0); From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3235FCE79CE for ; Wed, 20 Sep 2023 13:05:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236496AbjITNFa (ORCPT ); Wed, 20 Sep 2023 09:05:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236474AbjITNF1 (ORCPT ); Wed, 20 Sep 2023 09:05:27 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22D13CF for ; Wed, 20 Sep 2023 06:04:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215073; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references; bh=4ut+h9nLVtpMfUa14begCRaPNr+Lp71R8/RKDVqE44g=; b=ZpLNpSN/NaIVHyL0VLdLvhNRMexvds+mGhKj97RwntuYHYCTEEgKqDVsD+i8RP+M2tXOuV y4cLGUJrtlKJ4bPIn15N7cHliVqPX8Bd3x/zx8vcuT5Ivcff8TkrV25HPYq+6dOGYBHrSf WMEF8uWXvaRd2XkfAHURRssrlBNrqIY= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-682-AR2qMwyVMnipdkVKppdkYw-1; Wed, 20 Sep 2023 09:04:30 -0400 X-MC-Unique: AR2qMwyVMnipdkVKppdkYw-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id F2277101A529; Wed, 20 Sep 2023 13:04:28 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id CF1E3492B16; Wed, 20 Sep 2023 13:04:15 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 2/9] iov_iter: Consolidate some of the repeated code into helpers Date: Wed, 20 Sep 2023 14:03:53 +0100 Message-ID: <20230920130400.203330-3-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Consolidate some of the repeated code snippets into helper functions to reduce the line count. Signed-off-by: David Howells cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org --- lib/kunit_iov_iter.c | 189 +++++++++++++++++++------------------------ 1 file changed, 84 insertions(+), 105 deletions(-) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 4a6c0efd33f5..ee586eb652b4 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -19,18 +19,18 @@ MODULE_AUTHOR("David Howells "); MODULE_LICENSE("GPL"); =20 struct kvec_test_range { - int from, to; + int page, from, to; }; =20 static const struct kvec_test_range kvec_test_ranges[] =3D { - { 0x00002, 0x00002 }, - { 0x00027, 0x03000 }, - { 0x05193, 0x18794 }, - { 0x20000, 0x20000 }, - { 0x20000, 0x24000 }, - { 0x24000, 0x27001 }, - { 0x29000, 0xffffb }, - { 0xffffd, 0xffffe }, + { 0, 0x00002, 0x00002 }, + { 0, 0x00027, 0x03000 }, + { 0, 0x05193, 0x18794 }, + { 0, 0x20000, 0x20000 }, + { 0, 0x20000, 0x24000 }, + { 0, 0x24000, 0x27001 }, + { 0, 0x29000, 0xffffb }, + { 0, 0xffffd, 0xffffe }, { -1 } }; =20 @@ -69,6 +69,57 @@ static void *__init iov_kunit_create_buffer(struct kunit= *test, return buffer; } =20 +/* + * Build the reference pattern in the scratch buffer that we expect to see= in + * the iterator buffer (ie. the result of copy *to*). 
+ */ +static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *s= cratch, + size_t bufsize, + const struct kvec_test_range *pr) +{ + int i, patt =3D 0; + + memset(scratch, 0, bufsize); + for (; pr->page >=3D 0; pr++) + for (i =3D pr->from; i < pr->to; i++) + scratch[i] =3D pattern(patt++); +} + +/* + * Build the reference pattern in the iterator buffer that we expect to se= e in + * the scratch buffer (ie. the result of copy *from*). + */ +static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 = *buffer, + size_t bufsize, + const struct kvec_test_range *pr) +{ + size_t i =3D 0, j; + + memset(buffer, 0, bufsize); + for (; pr->page >=3D 0; pr++) { + for (j =3D pr->from; j < pr->to; j++) { + buffer[i++] =3D pattern(j); + if (i >=3D bufsize) + return; + } + } +} + +/* + * Compare two kernel buffers to see that they're the same. + */ +static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer, + const u8 *scratch, size_t bufsize) +{ + size_t i; + + for (i =3D 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=3D%x", i); + if (buffer[i] !=3D scratch[i]) + return; + } +} + static void __init iov_kunit_load_kvec(struct kunit *test, struct iov_iter *iter, int dir, struct kvec *kvec, unsigned int kvmax, @@ -79,7 +130,7 @@ static void __init iov_kunit_load_kvec(struct kunit *tes= t, int i; =20 for (i =3D 0; i < kvmax; i++, pr++) { - if (pr->from < 0) + if (pr->page < 0) break; KUNIT_ASSERT_GE(test, pr->to, pr->from); KUNIT_ASSERT_LE(test, pr->to, bufsize); @@ -97,13 +148,12 @@ static void __init iov_kunit_load_kvec(struct kunit *t= est, */ static void __init iov_kunit_copy_to_kvec(struct kunit *test) { - const struct kvec_test_range *pr; struct iov_iter iter; struct page **spages, **bpages; struct kvec kvec[8]; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, patt; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -125,20 +175,8 @@ static void __init iov_kunit_copy_to_kvec(struct kunit= *test) KUNIT_EXPECT_EQ(test, iter.count, 0); KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); =20 - /* Build the expected image in the scratch buffer. */ - patt =3D 0; - memset(scratch, 0, bufsize); - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) - for (i =3D pr->from; i < pr->to; i++) - scratch[i] =3D pattern(patt++); - - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=3D%x", i); - if (buffer[i] !=3D scratch[i]) - return; - } - + iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ra= nges); + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -147,13 +185,12 @@ static void __init iov_kunit_copy_to_kvec(struct kuni= t *test) */ static void __init iov_kunit_copy_from_kvec(struct kunit *test) { - const struct kvec_test_range *pr; struct iov_iter iter; struct page **spages, **bpages; struct kvec kvec[8]; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, j; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -175,25 +212,8 @@ static void __init iov_kunit_copy_from_kvec(struct kun= it *test) KUNIT_EXPECT_EQ(test, iter.count, 0); KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); =20 - /* Build the expected image in the main buffer. 
*/ - i =3D 0; - memset(buffer, 0, bufsize); - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { - for (j =3D pr->from; j < pr->to; j++) { - buffer[i++] =3D pattern(j); - if (i >=3D bufsize) - goto stop; - } - } -stop: - - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=3D%x", i); - if (scratch[i] !=3D buffer[i]) - return; - } - + iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_r= anges); + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -210,7 +230,7 @@ static const struct bvec_test_range bvec_test_ranges[] = =3D { { 5, 0x0000, 0x1000 }, { 6, 0x0000, 0x0ffb }, { 6, 0x0ffd, 0x0ffe }, - { -1, -1, -1 } + { -1 } }; =20 static void __init iov_kunit_load_bvec(struct kunit *test, @@ -225,7 +245,7 @@ static void __init iov_kunit_load_bvec(struct kunit *te= st, int i; =20 for (i =3D 0; i < bvmax; i++, pr++) { - if (pr->from < 0) + if (pr->page < 0) break; KUNIT_ASSERT_LT(test, pr->page, npages); KUNIT_ASSERT_LT(test, pr->page * PAGE_SIZE, bufsize); @@ -288,20 +308,14 @@ static void __init iov_kunit_copy_to_bvec(struct kuni= t *test) b =3D 0; patt =3D 0; memset(scratch, 0, bufsize); - for (pr =3D bvec_test_ranges; pr->from >=3D 0; pr++, b++) { + for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++, b++) { u8 *p =3D scratch + pr->page * PAGE_SIZE; =20 for (i =3D pr->from; i < pr->to; i++) p[i] =3D pattern(patt++); } =20 - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=3D%x", i); - if (buffer[i] !=3D scratch[i]) - return; - } - + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -341,7 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kuni= t *test) /* Build the expected image in the main buffer. */ i =3D 0; memset(buffer, 0, bufsize); - for (pr =3D bvec_test_ranges; pr->from >=3D 0; pr++) { + for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++) { size_t patt =3D pr->page * PAGE_SIZE; =20 for (j =3D pr->from; j < pr->to; j++) { @@ -352,13 +366,7 @@ static void __init iov_kunit_copy_from_bvec(struct kun= it *test) } stop: =20 - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=3D%x", i); - if (scratch[i] !=3D buffer[i]) - return; - } - + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -409,7 +417,7 @@ static void __init iov_kunit_copy_to_xarray(struct kuni= t *test) struct page **spages, **bpages; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, patt; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -426,7 +434,7 @@ static void __init iov_kunit_copy_to_xarray(struct kuni= t *test) iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages); =20 i =3D 0; - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { size =3D pr->to - pr->from; KUNIT_ASSERT_LE(test, pr->to, bufsize); =20 @@ -439,20 +447,8 @@ static void __init iov_kunit_copy_to_xarray(struct kun= it *test) i +=3D size; } =20 - /* Build the expected image in the scratch buffer. 
*/ - patt =3D 0; - memset(scratch, 0, bufsize); - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) - for (i =3D pr->from; i < pr->to; i++) - scratch[i] =3D pattern(patt++); - - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=3D%x", i); - if (buffer[i] !=3D scratch[i]) - return; - } - + iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ra= nges); + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -467,7 +463,7 @@ static void __init iov_kunit_copy_from_xarray(struct ku= nit *test) struct page **spages, **bpages; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, j; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -484,7 +480,7 @@ static void __init iov_kunit_copy_from_xarray(struct ku= nit *test) iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages); =20 i =3D 0; - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { size =3D pr->to - pr->from; KUNIT_ASSERT_LE(test, pr->to, bufsize); =20 @@ -497,25 +493,8 @@ static void __init iov_kunit_copy_from_xarray(struct k= unit *test) i +=3D size; } =20 - /* Build the expected image in the main buffer. */ - i =3D 0; - memset(buffer, 0, bufsize); - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { - for (j =3D pr->from; j < pr->to; j++) { - buffer[i++] =3D pattern(j); - if (i >=3D bufsize) - goto stop; - } - } -stop: - - /* Compare the images */ - for (i =3D 0; i < bufsize; i++) { - KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=3D%x", i); - if (scratch[i] !=3D buffer[i]) - return; - } - + iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_r= anges); + iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } =20 @@ -573,7 +552,7 @@ static void __init iov_kunit_extract_pages_kvec(struct = kunit *test) while (from =3D=3D pr->to) { pr++; from =3D pr->from; - if (from < 0) + if (pr->page < 0) goto stop; } ix =3D from / PAGE_SIZE; @@ -651,7 +630,7 @@ static void __init iov_kunit_extract_pages_bvec(struct = kunit *test) while (from =3D=3D pr->to) { pr++; from =3D pr->from; - if (from < 0) + if (pr->page < 0) goto stop; } ix =3D pr->page + from / PAGE_SIZE; @@ -698,7 +677,7 @@ static void __init iov_kunit_extract_pages_xarray(struc= t kunit *test) iov_kunit_create_buffer(test, &bpages, npages); iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages); =20 - for (pr =3D kvec_test_ranges; pr->from >=3D 0; pr++) { + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { from =3D pr->from; size =3D pr->to - from; KUNIT_ASSERT_LE(test, pr->to, bufsize); From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C43ECCE79CE for ; Wed, 20 Sep 2023 13:05:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236541AbjITNFx (ORCPT ); Wed, 20 Sep 2023 09:05:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236530AbjITNFn (ORCPT ); Wed, 20 Sep 2023 09:05:43 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7ABBC92 
for ; Wed, 20 Sep 2023 06:04:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215086; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CMlwmXq8j0rs4LEdDpXfVHzrC0kRRfOaExs7WZepjuM=; b=LpgsTTkXCUIHgGYHnIllFU1UXKZe3gwpWudkuNN0qZ27xBeoMZmshKWiGHoQSATKmagKQH wQBnsNh30kFOzohEvV4hFpls7pMD5jIindyMGWZ7nMOL590XkoPQnCG0N9OYUQUAWSjB+t e/BmVry+14fel+A5kTGrizixzB3b6F4= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-100-Jfs8nTrRN4a0ci59La6Q3A-1; Wed, 20 Sep 2023 09:04:44 -0400 X-MC-Unique: Jfs8nTrRN4a0ci59La6Q3A-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 714F02999B33; Wed, 20 Sep 2023 13:04:43 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id A65F0492B16; Wed, 20 Sep 2023 13:04:32 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 3/9] iov_iter: Consolidate the test vector struct in the kunit tests Date: Wed, 20 Sep 2023 14:03:54 +0100 Message-ID: <20230920130400.203330-4-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Consolidate the test vector struct in the kunit tests so that the bvec pattern check helpers can share with the kvec check helpers. 
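Roughly, the consolidation amounts to the following standalone sketch: a single { page, from, to } range type, terminated by a negative page value, that both the kvec table (where page is always 0) and the bvec table (where offsets are per-page) can share. walk_ranges() and the sample values are only illustrative, not the kernel helpers themselves.

#include <stdio.h>

struct iov_kunit_range {
	int page, from, to;
};

/* kvec-style entries use page 0; bvec-style entries pick a page. */
static const struct iov_kunit_range sample_ranges[] = {
	{ 0, 0x0002, 0x0002 },	/* empty range */
	{ 1, 0x0027, 0x0893 },	/* part of page 1 */
	{ -1 }			/* terminator: page < 0 */
};

/* Walk a table until the terminator, as the shared pattern helpers do. */
static void walk_ranges(const struct iov_kunit_range *pr)
{
	for (; pr->page >= 0; pr++)
		printf("page %d: [%#x, %#x)\n", pr->page, pr->from, pr->to);
}

int main(void)
{
	walk_ranges(sample_ranges);
	return 0;
}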
Signed-off-by: David Howells cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org --- lib/kunit_iov_iter.c | 90 ++++++++++++++++++++++++-------------------- 1 file changed, 50 insertions(+), 40 deletions(-) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index ee586eb652b4..4925ca37cde6 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -18,22 +18,46 @@ MODULE_DESCRIPTION("iov_iter testing"); MODULE_AUTHOR("David Howells "); MODULE_LICENSE("GPL"); =20 -struct kvec_test_range { +struct iov_kunit_range { int page, from, to; }; =20 -static const struct kvec_test_range kvec_test_ranges[] =3D { - { 0, 0x00002, 0x00002 }, - { 0, 0x00027, 0x03000 }, - { 0, 0x05193, 0x18794 }, - { 0, 0x20000, 0x20000 }, - { 0, 0x20000, 0x24000 }, - { 0, 0x24000, 0x27001 }, - { 0, 0x29000, 0xffffb }, - { 0, 0xffffd, 0xffffe }, +/* + * Ranges that to use in tests where we have address/offset ranges to play + * with (ie. KVEC) or where we have a single blob that we can copy + * arbitrary chunks of (ie. XARRAY). + */ +static const struct iov_kunit_range kvec_test_ranges[] =3D { + { 0, 0x00002, 0x00002 }, /* Start with an empty range */ + { 0, 0x00027, 0x03000 }, /* Midpage to page end */ + { 0, 0x05193, 0x18794 }, /* Midpage to midpage */ + { 0, 0x20000, 0x20000 }, /* Empty range in the middle */ + { 0, 0x20000, 0x24000 }, /* Page start to page end */ + { 0, 0x24000, 0x27001 }, /* Page end to midpage */ + { 0, 0x29000, 0xffffb }, /* Page start to midpage */ + { 0, 0xffffd, 0xffffe }, /* Almost contig to last, ending in same page */ { -1 } }; =20 +/* + * Ranges that to use in tests where we have a list of partial pages to + * play with (ie. BVEC). + */ +static const struct iov_kunit_range bvec_test_ranges[] =3D { + { 0, 0x0002, 0x0002 }, /* Start with an empty range */ + { 1, 0x0027, 0x0893 }, /* Random part of page */ + { 2, 0x0193, 0x0794 }, /* Random part of page */ + { 3, 0x0000, 0x1000 }, /* Full page */ + { 4, 0x0000, 0x1000 }, /* Full page logically contig to last */ + { 5, 0x0000, 0x1000 }, /* Full page logically contig to last */ + { 6, 0x0000, 0x0ffb }, /* Part page logically contig to last */ + { 6, 0x0ffd, 0x0ffe }, /* Part of prev page, but not quite contig */ + { -1 } +}; + +/* + * The pattern to fill with. + */ static inline u8 pattern(unsigned long x) { return x & 0xff; @@ -44,6 +68,9 @@ static void iov_kunit_unmap(void *data) vunmap(data); } =20 +/* + * Create a buffer out of some pages and return a vmap'd pointer to it. 
+ */ static void *__init iov_kunit_create_buffer(struct kunit *test, struct page ***ppages, size_t npages) @@ -75,7 +102,7 @@ static void *__init iov_kunit_create_buffer(struct kunit= *test, */ static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *s= cratch, size_t bufsize, - const struct kvec_test_range *pr) + const struct iov_kunit_range *pr) { int i, patt =3D 0; =20 @@ -91,7 +118,7 @@ static void iov_kunit_build_to_reference_pattern(struct = kunit *test, u8 *scratch */ static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 = *buffer, size_t bufsize, - const struct kvec_test_range *pr) + const struct iov_kunit_range *pr) { size_t i =3D 0, j; =20 @@ -124,7 +151,7 @@ static void __init iov_kunit_load_kvec(struct kunit *te= st, struct iov_iter *iter, int dir, struct kvec *kvec, unsigned int kvmax, void *buffer, size_t bufsize, - const struct kvec_test_range *pr) + const struct iov_kunit_range *pr) { size_t size =3D 0; int i; @@ -217,28 +244,12 @@ static void __init iov_kunit_copy_from_kvec(struct ku= nit *test) KUNIT_SUCCEED(); } =20 -struct bvec_test_range { - int page, from, to; -}; - -static const struct bvec_test_range bvec_test_ranges[] =3D { - { 0, 0x0002, 0x0002 }, - { 1, 0x0027, 0x0893 }, - { 2, 0x0193, 0x0794 }, - { 3, 0x0000, 0x1000 }, - { 4, 0x0000, 0x1000 }, - { 5, 0x0000, 0x1000 }, - { 6, 0x0000, 0x0ffb }, - { 6, 0x0ffd, 0x0ffe }, - { -1 } -}; - static void __init iov_kunit_load_bvec(struct kunit *test, struct iov_iter *iter, int dir, struct bio_vec *bvec, unsigned int bvmax, struct page **pages, size_t npages, size_t bufsize, - const struct bvec_test_range *pr) + const struct iov_kunit_range *pr) { struct page *can_merge =3D NULL, *page; size_t size =3D 0; @@ -276,13 +287,13 @@ static void __init iov_kunit_load_bvec(struct kunit *= test, */ static void __init iov_kunit_copy_to_bvec(struct kunit *test) { - const struct bvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct bio_vec bvec[8]; struct page **spages, **bpages; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, b, patt; + int i, patt; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -305,10 +316,9 @@ static void __init iov_kunit_copy_to_bvec(struct kunit= *test) KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); =20 /* Build the expected image in the scratch buffer. 
*/ - b =3D 0; patt =3D 0; memset(scratch, 0, bufsize); - for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++, b++) { + for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++) { u8 *p =3D scratch + pr->page * PAGE_SIZE; =20 for (i =3D pr->from; i < pr->to; i++) @@ -324,7 +334,7 @@ static void __init iov_kunit_copy_to_bvec(struct kunit = *test) */ static void __init iov_kunit_copy_from_bvec(struct kunit *test) { - const struct bvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct bio_vec bvec[8]; struct page **spages, **bpages; @@ -411,7 +421,7 @@ static struct xarray *iov_kunit_create_xarray(struct ku= nit *test) */ static void __init iov_kunit_copy_to_xarray(struct kunit *test) { - const struct kvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct xarray *xarray; struct page **spages, **bpages; @@ -457,7 +467,7 @@ static void __init iov_kunit_copy_to_xarray(struct kuni= t *test) */ static void __init iov_kunit_copy_from_xarray(struct kunit *test) { - const struct kvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct xarray *xarray; struct page **spages, **bpages; @@ -503,7 +513,7 @@ static void __init iov_kunit_copy_from_xarray(struct ku= nit *test) */ static void __init iov_kunit_extract_pages_kvec(struct kunit *test) { - const struct kvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct page **bpages, *pagelist[8], **pages =3D pagelist; struct kvec kvec[8]; @@ -583,7 +593,7 @@ static void __init iov_kunit_extract_pages_kvec(struct = kunit *test) */ static void __init iov_kunit_extract_pages_bvec(struct kunit *test) { - const struct bvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct page **bpages, *pagelist[8], **pages =3D pagelist; struct bio_vec bvec[8]; @@ -661,7 +671,7 @@ static void __init iov_kunit_extract_pages_bvec(struct = kunit *test) */ static void __init iov_kunit_extract_pages_xarray(struct kunit *test) { - const struct kvec_test_range *pr; + const struct iov_kunit_range *pr; struct iov_iter iter; struct xarray *xarray; struct page **bpages, *pagelist[8], **pages =3D pagelist; From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE492CE79CF for ; Wed, 20 Sep 2023 13:06:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236613AbjITNGR (ORCPT ); Wed, 20 Sep 2023 09:06:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236567AbjITNF5 (ORCPT ); Wed, 20 Sep 2023 09:05:57 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED20DDC for ; Wed, 20 Sep 2023 06:05:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215101; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0wrDnIJNvyLeCaZ76kAPm12sjpZAz+xrWcuHtRTLq38=; b=PLqkzncjcLl61d3C8gyRdfwdlUb1KzxAjL0ereDpGD9xFfQuWLglLy/e73leF6+FTyYQRk 
7t5JXabZv2YivwLm9X2DfzoYTjGgBL7N+Y6NTvuz4vXO5geZy2g4zqo45SnlDa4meftcWh OtHAcZ4uaFzE/Lh0ORfICzAjWl+jIAs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-231-GSu9rdLnNoOiK0fFC_BW6g-1; Wed, 20 Sep 2023 09:04:54 -0400 X-MC-Unique: GSu9rdLnNoOiK0fFC_BW6g-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CA268101A585; Wed, 20 Sep 2023 13:04:51 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id C3ABF40C6EBF; Wed, 20 Sep 2023 13:04:44 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 4/9] iov_iter: Consolidate bvec pattern checking Date: Wed, 20 Sep 2023 14:03:55 +0100 Message-ID: <20230920130400.203330-5-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Make the BVEC-testing functions use the consolidated pattern checking functions to reduce the amount of duplicated code. 
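As a rough userspace sketch of the effect (assuming 4KiB pages and simplified types; this is not the kernel helper itself), the shared "copy to" reference builder only needs to honour the page field for the bvec table to reuse it:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

struct iov_kunit_range {
	int page, from, to;
};

static unsigned char pattern(unsigned long x)
{
	return x & 0xff;
}

/* Write each range at page * PAGE_SIZE + offset: a no-op for kvec-style
 * ranges (page == 0) and the right placement for bvec-style ranges.
 */
static void build_to_reference_pattern(unsigned char *scratch, size_t bufsize,
				       const struct iov_kunit_range *pr)
{
	int i, patt = 0;

	memset(scratch, 0, bufsize);
	for (; pr->page >= 0; pr++) {
		unsigned char *p = scratch + pr->page * PAGE_SIZE;

		for (i = pr->from; i < pr->to; i++)
			p[i] = pattern(patt++);
	}
}

int main(void)
{
	static unsigned char scratch[8 * PAGE_SIZE];
	static const struct iov_kunit_range ranges[] = {
		{ 1, 0x0027, 0x0030 },
		{ -1 }
	};

	build_to_reference_pattern(scratch, sizeof(scratch), ranges);
	printf("page 1, offsets 0x27..0x2a: %d %d %d %d\n",
	       scratch[PAGE_SIZE + 0x27], scratch[PAGE_SIZE + 0x28],
	       scratch[PAGE_SIZE + 0x29], scratch[PAGE_SIZE + 0x2a]);
	return 0;
}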
Signed-off-by: David Howells cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org --- lib/kunit_iov_iter.c | 42 +++++++++++------------------------------- 1 file changed, 11 insertions(+), 31 deletions(-) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 4925ca37cde6..eb86371b67d0 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -107,9 +107,11 @@ static void iov_kunit_build_to_reference_pattern(struc= t kunit *test, u8 *scratch int i, patt =3D 0; =20 memset(scratch, 0, bufsize); - for (; pr->page >=3D 0; pr++) + for (; pr->page >=3D 0; pr++) { + u8 *p =3D scratch + pr->page * PAGE_SIZE; for (i =3D pr->from; i < pr->to; i++) - scratch[i] =3D pattern(patt++); + p[i] =3D pattern(patt++); + } } =20 /* @@ -124,8 +126,10 @@ static void iov_kunit_build_from_reference_pattern(str= uct kunit *test, u8 *buffe =20 memset(buffer, 0, bufsize); for (; pr->page >=3D 0; pr++) { + size_t patt =3D pr->page * PAGE_SIZE; + for (j =3D pr->from; j < pr->to; j++) { - buffer[i++] =3D pattern(j); + buffer[i++] =3D pattern(patt + j); if (i >=3D bufsize) return; } @@ -287,13 +291,12 @@ static void __init iov_kunit_load_bvec(struct kunit *= test, */ static void __init iov_kunit_copy_to_bvec(struct kunit *test) { - const struct iov_kunit_range *pr; struct iov_iter iter; struct bio_vec bvec[8]; struct page **spages, **bpages; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, patt; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -315,16 +318,7 @@ static void __init iov_kunit_copy_to_bvec(struct kunit= *test) KUNIT_EXPECT_EQ(test, iter.count, 0); KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); =20 - /* Build the expected image in the scratch buffer. */ - patt =3D 0; - memset(scratch, 0, bufsize); - for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++) { - u8 *p =3D scratch + pr->page * PAGE_SIZE; - - for (i =3D pr->from; i < pr->to; i++) - p[i] =3D pattern(patt++); - } - + iov_kunit_build_to_reference_pattern(test, scratch, bufsize, bvec_test_ra= nges); iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } @@ -334,13 +328,12 @@ static void __init iov_kunit_copy_to_bvec(struct kuni= t *test) */ static void __init iov_kunit_copy_from_bvec(struct kunit *test) { - const struct iov_kunit_range *pr; struct iov_iter iter; struct bio_vec bvec[8]; struct page **spages, **bpages; u8 *scratch, *buffer; size_t bufsize, npages, size, copied; - int i, j; + int i; =20 bufsize =3D 0x100000; npages =3D bufsize / PAGE_SIZE; @@ -362,20 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kun= it *test) KUNIT_EXPECT_EQ(test, iter.count, 0); KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); =20 - /* Build the expected image in the main buffer. 
*/ - i =3D 0; - memset(buffer, 0, bufsize); - for (pr =3D bvec_test_ranges; pr->page >=3D 0; pr++) { - size_t patt =3D pr->page * PAGE_SIZE; - - for (j =3D pr->from; j < pr->to; j++) { - buffer[i++] =3D pattern(patt + j); - if (i >=3D bufsize) - goto stop; - } - } -stop: - + iov_kunit_build_from_reference_pattern(test, buffer, bufsize, bvec_test_r= anges); iov_kunit_check_pattern(test, buffer, scratch, bufsize); KUNIT_SUCCEED(); } From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61B34CE79D0 for ; Wed, 20 Sep 2023 13:06:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236575AbjITNGI (ORCPT ); Wed, 20 Sep 2023 09:06:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56292 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236565AbjITNF4 (ORCPT ); Wed, 20 Sep 2023 09:05:56 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E647E5 for ; Wed, 20 Sep 2023 06:05:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215102; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uub17mzu8dgUj1tkqvUjJMJqsHG/bWf8Fylywc+muQE=; b=EY1PgM+JBGGWJsL1itf5wf0GRtKbPiaGlicCKoGdqDeQkwVK8co4Dag8cu3WQQMSyGoSlf mubsEOhoS8UYGRRAUHFk80Ishp1B7xx/dkefVbBDhaO482xCSWcdGnMlyc+STHACylwJir lPOUFAFqrGq/GmLvNEwWY2FPHAb8BRQ= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-106-K9l0k3J9MKGNVn_uOBNDsQ-1; Wed, 20 Sep 2023 09:05:00 -0400 X-MC-Unique: K9l0k3J9MKGNVn_uOBNDsQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8B6913C18C27; Wed, 20 Sep 2023 13:04:58 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 2FF68202696C; Wed, 20 Sep 2023 13:04:54 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Andrew Morton , Christian Brauner , David Hildenbrand , John Hubbard , Huacai Chen , WANG Xuerui , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , loongarch@lists.linux.dev, linux-s390@vger.kernel.org Subject: [RFC PATCH v2 5/9] iov_iter: Create a function to prepare userspace VM for UBUF/IOVEC tests Date: Wed, 20 Sep 2023 14:03:56 +0100 Message-ID: <20230920130400.203330-6-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> 
MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Create a function to set up a userspace VM for the kunit testing thread and set up a buffer within it such that ITER_UBUF and ITER_IOVEC tests can be performed. Note that this requires current->mm to point to a sufficiently set up mm_struct. This is done by partially mirroring what execve does. The following steps are performed: (1) Allocate an mm_struct and pick an arch layout (required to set mm->get_unmapped_area). (2) Create an empty "stack" VMA so that the VMA maple tree is set up and won't cause a crash in the maple tree code later. We don't actually care about the stack as we're not going to actually execute userspace. (3) Create an anon file and attach a bunch of folios to it so that the requested number of pages are accessible. (4) Make the kthread use the mm. This must be done before mmap is called. (5) Shared-mmap the anon file into the allocated mm_struct. This requires access to otherwise unexported core symbols: mm_alloc(), vm_area_alloc(), insert_vm_struct() arch_pick_mmap_layout() and anon_inode_getfile_secure(), which I've exported _GPL. [?] Would it be better if this were done in core and not in a module? Signed-off-by: David Howells cc: Andrew Morton cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: Matthew Wilcox cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: Huacai Chen cc: WANG Xuerui cc: Heiko Carstens cc: Vasily Gorbik cc: Alexander Gordeev cc: Christian Borntraeger cc: Sven Schnelle cc: linux-mm@kvack.org cc: loongarch@lists.linux.dev cc: linux-s390@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com --- arch/loongarch/include/asm/page.h | 1 + arch/s390/kernel/vdso.c | 1 + fs/anon_inodes.c | 1 + kernel/fork.c | 2 + lib/kunit_iov_iter.c | 142 ++++++++++++++++++++++++++++++ mm/mmap.c | 1 + mm/util.c | 3 + 7 files changed, 151 insertions(+) diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm= /page.h index 63f137ce82a4..c7c5f5b4c0d3 100644 --- a/arch/loongarch/include/asm/page.h +++ b/arch/loongarch/include/asm/page.h @@ -32,6 +32,7 @@ =20 #include #include +#include =20 /* * It's normally defined only for FLATMEM config but it's diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c index bbaefd84f15e..6849eac59129 100644 --- a/arch/s390/kernel/vdso.c +++ b/arch/s390/kernel/vdso.c @@ -223,6 +223,7 @@ unsigned long vdso_size(void) size +=3D vdso64_end - vdso64_start; return PAGE_ALIGN(size); } +EXPORT_SYMBOL_GPL(vdso_size); =20 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) { diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c index 24192a7667ed..4190336180ee 100644 --- a/fs/anon_inodes.c +++ b/fs/anon_inodes.c @@ -176,6 +176,7 @@ struct file *anon_inode_getfile_secure(const char *name, return __anon_inode_getfile(name, fops, priv, flags, context_inode, true); } +EXPORT_SYMBOL_GPL(anon_inode_getfile_secure); =20 static int __anon_inode_getfd(const char *name, const struct file_operations *fops, diff --git a/kernel/fork.c b/kernel/fork.c index 3b6d20dfb9a8..9ab604574400 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -494,6 +494,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *= mm) =20 return vma; } +EXPORT_SYMBOL_GPL(vm_area_alloc); =20 struct vm_area_struct 
*vm_area_dup(struct vm_area_struct *orig) { @@ -1337,6 +1338,7 @@ struct mm_struct *mm_alloc(void) memset(mm, 0, sizeof(*mm)); return mm_init(mm, current, current_user_ns()); } +EXPORT_SYMBOL_GPL(mm_alloc); =20 static inline void __mmput(struct mm_struct *mm) { diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index eb86371b67d0..85387a25484e 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -10,6 +10,12 @@ #include #include #include +#include +#include +#include +#include +#include +#include #include #include #include @@ -68,6 +74,20 @@ static void iov_kunit_unmap(void *data) vunmap(data); } =20 +static void iov_kunit_mmdrop(void *data) +{ + struct mm_struct *mm =3D data; + + if (current->mm =3D=3D mm) + kthread_unuse_mm(mm); + mmdrop(mm); +} + +static void iov_kunit_fput(void *data) +{ + fput(data); +} + /* * Create a buffer out of some pages and return a vmap'd pointer to it. */ @@ -151,6 +171,128 @@ static void iov_kunit_check_pattern(struct kunit *tes= t, const u8 *buffer, } } =20 +static const struct file_operations iov_kunit_user_file_fops =3D { + .mmap =3D generic_file_mmap, +}; + +static int iov_kunit_user_file_read_folio(struct file *file, struct folio = *folio) +{ + folio_mark_uptodate(folio); + folio_unlock(folio); + return 0; +} + +static const struct address_space_operations iov_kunit_user_file_aops =3D { + .read_folio =3D iov_kunit_user_file_read_folio, + .dirty_folio =3D filemap_dirty_folio, +}; + +/* + * Create an anonymous file and attach a bunch of pages to it. We can the= n use + * this in mmap() and check the pages against it when doing extraction tes= ts. + */ +static struct file *iov_kunit_create_file(struct kunit *test, size_t npage= s, + struct page ***ppages) +{ + struct folio *folio; + struct file *file; + struct page **pages =3D NULL; + size_t i; + + if (ppages) { + pages =3D kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages); + *ppages =3D pages; + } + + file =3D anon_inode_getfile_secure("kunit-iov-test", + &iov_kunit_user_file_fops, + NULL, O_RDWR, NULL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, file); + kunit_add_action_or_reset(test, iov_kunit_fput, file); + file->f_mapping->a_ops =3D &iov_kunit_user_file_aops; + + i_size_write(file_inode(file), npages * PAGE_SIZE); + for (i =3D 0; i < npages; i++) { + folio =3D filemap_grab_folio(file->f_mapping, i); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folio); + if (pages) + *pages++ =3D folio_page(folio, 0); + folio_unlock(folio); + folio_put(folio); + } + + return file; +} + +/* + * Attach a userspace buffer to a kernel thread by adding an mm_struct to = it + * and mmapping the buffer. If the caller requires a list of pages for + * checking, then an anon_inode file is created, populated with pages and + * mmapped otherwise an anonymous mapping is used. + */ +static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test, + size_t npages, + struct page ***ppages) +{ + struct rlimit rlim_stack =3D { + .rlim_cur =3D LONG_MAX, + .rlim_max =3D LONG_MAX, + }; + struct vm_area_struct *vma; + struct mm_struct *mm; + struct file *file; + u8 __user *buffer; + int ret; + + KUNIT_ASSERT_NULL(test, current->mm); + + mm =3D mm_alloc(); + KUNIT_ASSERT_NOT_NULL(test, mm); + kunit_add_action_or_reset(test, iov_kunit_mmdrop, mm); + arch_pick_mmap_layout(mm, &rlim_stack); + + vma =3D vm_area_alloc(mm); + KUNIT_ASSERT_NOT_NULL(test, vma); + vma_set_anonymous(vma); + + /* + * Place the stack at the largest stack address the architecture + * supports. 
Later, we'll move this to an appropriate place. We don't + * use STACK_TOP because that can depend on attributes which aren't + * configured yet. + */ + vma->vm_end =3D STACK_TOP_MAX; + vma->vm_start =3D vma->vm_end - PAGE_SIZE; + vm_flags_init(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SE= TUP); + vma->vm_page_prot =3D vm_get_page_prot(vma->vm_flags); + + ret =3D insert_vm_struct(mm, vma); + KUNIT_ASSERT_EQ(test, ret, 0); + + mm->stack_vm =3D mm->total_vm =3D 1; + + /* + * If we want the pages, attach the pages to a file to prevent swap + * interfering, otherwise use an anonymous mapping. + */ + if (ppages) { + file =3D iov_kunit_create_file(test, npages, ppages); + + kthread_use_mm(mm); + buffer =3D (u8 __user *)vm_mmap(file, 0, PAGE_SIZE * npages, + PROT_READ | PROT_WRITE, + MAP_SHARED, 0); + } else { + kthread_use_mm(mm); + buffer =3D (u8 __user *)vm_mmap(NULL, 0, PAGE_SIZE * npages, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, 0); + } + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, (void __force *)buffer); + return buffer; +} + static void __init iov_kunit_load_kvec(struct kunit *test, struct iov_iter *iter, int dir, struct kvec *kvec, unsigned int kvmax, diff --git a/mm/mmap.c b/mm/mmap.c index b56a7f0c9f85..2ea4a98a2cab 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -3284,6 +3284,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) =20 return 0; } +EXPORT_SYMBOL_GPL(insert_vm_struct); =20 /* * Copy the vma structure to a new location in the same mm, diff --git a/mm/util.c b/mm/util.c index 8cbbfd3a3d59..09895358f067 100644 --- a/mm/util.c +++ b/mm/util.c @@ -455,6 +455,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct= rlimit *rlim_stack) mm->get_unmapped_area =3D arch_get_unmapped_area; } #endif +#ifdef CONFIG_MMU +EXPORT_SYMBOL_GPL(arch_pick_mmap_layout); +#endif =20 /** * __account_locked_vm - account locked pages to an mm's locked_vm From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24276CE79CF for ; Wed, 20 Sep 2023 13:06:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236443AbjITNGe (ORCPT ); Wed, 20 Sep 2023 09:06:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45988 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236586AbjITNGJ (ORCPT ); Wed, 20 Sep 2023 09:06:09 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 51FCEC6 for ; Wed, 20 Sep 2023 06:05:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215112; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JsA8rUgjW6/Y4e1LExitHFafQGFdmwIk63HoELAb6c4=; b=BzDiU6mA8X+aIrUUjBV6E5eKbKkaLtJw1kLYjX69YsWg+r3irxSVyDA9TWPHNBKp4ViEQT fx8EXD+erCRDQBrCSE05J73NYEHrp0Ngxi88uBBGa6BIFsCMpalKyapq6gtwpB3EvtGVo7 FPAo+yajaaWkLHRJ1HeAbBF/yoaPD2w= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 
us-mta-459-kxk2XfuDPpiN2GuX7iYdJg-1; Wed, 20 Sep 2023 09:05:10 -0400 X-MC-Unique: kxk2XfuDPpiN2GuX7iYdJg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 407DE1C0CCAE; Wed, 20 Sep 2023 13:05:07 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9C4CCC15BB8; Wed, 20 Sep 2023 13:05:04 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Andrew Morton , Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 6/9] iov_iter: Add copy kunit tests for ITER_UBUF and ITER_IOVEC Date: Wed, 20 Sep 2023 14:03:57 +0100 Message-ID: <20230920130400.203330-7-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add copy kunit tests for ITER_UBUF- and ITER_IOVEC-type iterators. This attaches a userspace VM with a mapped file in it temporarily to the test thread. Signed-off-by: David Howells cc: Andrew Morton cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: Matthew Wilcox cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com --- lib/kunit_iov_iter.c | 236 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 236 insertions(+) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 85387a25484e..d1817ab4ffee 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -116,6 +116,23 @@ static void *__init iov_kunit_create_buffer(struct kun= it *test, return buffer; } =20 +/* + * Fill a user buffer with a recognisable pattern. + */ +static void iov_kunit_fill_user_buf(struct kunit *test, + u8 __user *buffer, size_t bufsize) +{ + size_t i; + int err; + + for (i =3D 0; i < bufsize; i++) { + err =3D put_user(pattern(i), &buffer[i]); + KUNIT_EXPECT_EQ(test, err, 0); + if (test->status =3D=3D KUNIT_FAILURE) + return; + } +} + /* * Build the reference pattern in the scratch buffer that we expect to see= in * the iterator buffer (ie. the result of copy *to*). @@ -171,6 +188,25 @@ static void iov_kunit_check_pattern(struct kunit *test= , const u8 *buffer, } } =20 +/* + * Compare a user and a scratch buffer to see that they're the same. 
+ */ +static void iov_kunit_check_user_pattern(struct kunit *test, const u8 __us= er *buffer, + const u8 *scratch, size_t bufsize) +{ + size_t i; + int err; + u8 c; + + for (i =3D 0; i < bufsize; i++) { + err =3D get_user(c, &buffer[i]); + KUNIT_EXPECT_EQ(test, err, 0); + KUNIT_EXPECT_EQ_MSG(test, c, scratch[i], "at i=3D%x", i); + if (c !=3D scratch[i]) + return; + } +} + static const struct file_operations iov_kunit_user_file_fops =3D { .mmap =3D generic_file_mmap, }; @@ -293,6 +329,202 @@ static u8 __user *__init iov_kunit_create_user_buf(st= ruct kunit *test, return buffer; } =20 +/* + * Test copying to an ITER_UBUF-type iterator. + */ +static void __init iov_kunit_copy_to_ubuf(struct kunit *test) +{ + const struct iov_kunit_range *pr; + struct iov_iter iter; + struct page **spages; + u8 __user *buffer; + u8 *scratch; + ssize_t uncleared; + size_t bufsize, npages, size, copied; + int i; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + for (i =3D 0; i < bufsize; i++) + scratch[i] =3D pattern(i); + + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + uncleared =3D clear_user(buffer, bufsize); + KUNIT_EXPECT_EQ(test, uncleared, 0); + if (uncleared) + return; + + i =3D 0; + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { + size =3D pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_ubuf(&iter, ITER_DEST, buffer + pr->from, size); + copied =3D copy_to_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, size); + if (test->status =3D=3D KUNIT_FAILURE) + break; + i +=3D size; + } + + iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ra= nges); + iov_kunit_check_user_pattern(test, buffer, scratch, bufsize); + KUNIT_SUCCEED(); +} + +/* + * Test copying from an ITER_UBUF-type iterator. 
+ */ +static void __init iov_kunit_copy_from_ubuf(struct kunit *test) +{ + const struct iov_kunit_range *pr; + struct iov_iter iter; + struct page **spages; + u8 __user *buffer; + u8 *scratch, *reference; + size_t bufsize, npages, size, copied; + int i; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + iov_kunit_fill_user_buf(test, buffer, bufsize); + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + memset(scratch, 0, bufsize); + + reference =3D iov_kunit_create_buffer(test, &spages, npages); + + i =3D 0; + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { + size =3D pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_ubuf(&iter, ITER_SOURCE, buffer + pr->from, size); + copied =3D copy_from_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, size); + if (test->status =3D=3D KUNIT_FAILURE) + break; + i +=3D size; + } + + iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_tes= t_ranges); + iov_kunit_check_pattern(test, scratch, reference, bufsize); + KUNIT_SUCCEED(); +} + +static void __init iov_kunit_load_iovec(struct kunit *test, + struct iov_iter *iter, int dir, + struct iovec *iov, unsigned int iovmax, + u8 __user *buffer, size_t bufsize, + const struct iov_kunit_range *pr) +{ + size_t size =3D 0; + int i; + + for (i =3D 0; i < iovmax; i++, pr++) { + if (pr->page < 0) + break; + KUNIT_ASSERT_GE(test, pr->to, pr->from); + KUNIT_ASSERT_LE(test, pr->to, bufsize); + iov[i].iov_base =3D buffer + pr->from; + iov[i].iov_len =3D pr->to - pr->from; + size +=3D pr->to - pr->from; + } + KUNIT_ASSERT_LE(test, size, bufsize); + + iov_iter_init(iter, dir, iov, i, size); +} + +/* + * Test copying to an ITER_IOVEC-type iterator. + */ +static void __init iov_kunit_copy_to_iovec(struct kunit *test) +{ + struct iov_iter iter; + struct page **spages; + struct iovec iov[8]; + u8 __user *buffer; + u8 *scratch; + ssize_t uncleared; + size_t bufsize, npages, size, copied; + int i; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + for (i =3D 0; i < bufsize; i++) + scratch[i] =3D pattern(i); + + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + uncleared =3D clear_user(buffer, bufsize); + KUNIT_EXPECT_EQ(test, uncleared, 0); + if (uncleared) + return; + + iov_kunit_load_iovec(test, &iter, ITER_DEST, iov, ARRAY_SIZE(iov), + buffer, bufsize, kvec_test_ranges); + size =3D iter.count; + + copied =3D copy_to_iter(scratch, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); + + iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ra= nges); + iov_kunit_check_user_pattern(test, buffer, scratch, bufsize); + KUNIT_SUCCEED(); +} + +/* + * Test copying from an ITER_IOVEC-type iterator. 
+ */ +static void __init iov_kunit_copy_from_iovec(struct kunit *test) +{ + struct iov_iter iter; + struct page **spages; + struct iovec iov[8]; + u8 __user *buffer; + u8 *scratch, *reference; + size_t bufsize, npages, size, copied; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + iov_kunit_fill_user_buf(test, buffer, bufsize); + + scratch =3D iov_kunit_create_buffer(test, &spages, npages); + memset(scratch, 0, bufsize); + + reference =3D iov_kunit_create_buffer(test, &spages, npages); + + iov_kunit_load_iovec(test, &iter, ITER_SOURCE, iov, ARRAY_SIZE(iov), + buffer, bufsize, kvec_test_ranges); + size =3D iter.count; + + copied =3D copy_from_iter(scratch, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.nr_segs, 0); + + iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_tes= t_ranges); + iov_kunit_check_pattern(test, reference, scratch, bufsize); + KUNIT_SUCCEED(); +} + static void __init iov_kunit_load_kvec(struct kunit *test, struct iov_iter *iter, int dir, struct kvec *kvec, unsigned int kvmax, @@ -868,6 +1100,10 @@ static void __init iov_kunit_extract_pages_xarray(str= uct kunit *test) } =20 static struct kunit_case __refdata iov_kunit_cases[] =3D { + KUNIT_CASE(iov_kunit_copy_to_ubuf), + KUNIT_CASE(iov_kunit_copy_from_ubuf), + KUNIT_CASE(iov_kunit_copy_to_iovec), + KUNIT_CASE(iov_kunit_copy_from_iovec), KUNIT_CASE(iov_kunit_copy_to_kvec), KUNIT_CASE(iov_kunit_copy_from_kvec), KUNIT_CASE(iov_kunit_copy_to_bvec), From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6187CE79CE for ; Wed, 20 Sep 2023 13:06:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236449AbjITNGh (ORCPT ); Wed, 20 Sep 2023 09:06:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45954 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236540AbjITNGN (ORCPT ); Wed, 20 Sep 2023 09:06:13 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CCF61F1 for ; Wed, 20 Sep 2023 06:05:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215115; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=fIh9Fj3H3rRnydHywvA7K8Amir9a3UPzNcMRqZcgBSE=; b=JPBaHDhOXeF1t5Ic3kZokzSmD/riF0l4G/WmqPKKNx12h5vEzhAfT2Qdh00qJKJJH+fwjk WcYZwSFMtACpD37gUJv9djcolfMZT5B3g6pmJGcRlCpNK0Gq6JvDTwgc97pvOL7dXqkqOT 68sjQhkgomwgLJjfZtiE/6XqkWlm20Q= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-668-tGfIu03hPlqM41kxV_oZLg-1; Wed, 20 Sep 2023 09:05:11 -0400 X-MC-Unique: tGfIu03hPlqM41kxV_oZLg-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) 
with ESMTPS id C4BB43812592; Wed, 20 Sep 2023 13:05:10 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 05B5E492B16; Wed, 20 Sep 2023 13:05:07 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Andrew Morton , Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 7/9] iov_iter: Add extract kunit tests for ITER_UBUF and ITER_IOVEC Date: Wed, 20 Sep 2023 14:03:58 +0100 Message-ID: <20230920130400.203330-8-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add extraction kunit tests for ITER_UBUF- and ITER_IOVEC-type iterators. This attaches a userspace VM with a mapped file in it temporarily to the test thread. [!] Note that this requires the kernel thread running the test to obtain and deploy an mm_struct so that a user-side buffer can be created with mmap - basically it has to emulate part of execve(). Doing so requires access to additional core symbols: mm_alloc(), vm_area_alloc(), insert_vm_struct() and arch_pick_mmap_layout(). See the iov_kunit_create_user_buf() function added in the patch. Signed-off-by: David Howells cc: Andrew Morton cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: Matthew Wilcox cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com --- lib/kunit_iov_iter.c | 164 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 164 insertions(+) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index d1817ab4ffee..2994c3f348ab 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -862,6 +862,168 @@ static void __init iov_kunit_copy_from_xarray(struct = kunit *test) KUNIT_SUCCEED(); } =20 +/* + * Test the extraction of ITER_UBUF-type iterators.
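+ * For each test range an ITER_SOURCE ubuf iterator is built over part of
+ * the mapped user buffer and the operation under test, in outline
+ *
+ *	len = iov_iter_extract_pages(&iter, &pages, max_bytes, max_pages,
+ *				     0, &offset0);
+ *
+ * is called until the range is exhausted (the test caps max_bytes at 100KiB
+ * and max_pages at 8); the pinned pages and first-page offset returned are
+ * checked against the pages backing the mapping.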
+ */ +static void __init iov_kunit_extract_pages_ubuf(struct kunit *test) +{ + const struct iov_kunit_range *pr; + struct iov_iter iter; + struct page **bpages, *pagelist[8], **pages =3D pagelist; + ssize_t len; + size_t bufsize, size =3D 0, npages; + int i, from; + u8 __user *buffer; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + buffer =3D iov_kunit_create_user_buf(test, npages, &bpages); + + for (pr =3D kvec_test_ranges; pr->page >=3D 0; pr++) { + from =3D pr->from; + size =3D pr->to - from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_ubuf(&iter, ITER_SOURCE, buffer + pr->from, size); + + do { + size_t offset0 =3D LONG_MAX; + + for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) + pagelist[i] =3D (void *)POISON_POINTER_DELTA + 0x5a; + + len =3D iov_iter_extract_pages(&iter, &pages, 100 * 1024, + ARRAY_SIZE(pagelist), 0, &offset0); + KUNIT_EXPECT_GE(test, len, 0); + if (len < 0) + break; + KUNIT_EXPECT_LE(test, len, size); + KUNIT_EXPECT_EQ(test, iter.count, size - len); + if (len =3D=3D 0) + break; + size -=3D len; + KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0); + KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE); + + /* We're only checking the page pointers */ + unpin_user_pages(pages, (offset0 + len) / PAGE_SIZE); + + for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) { + struct page *p; + ssize_t part =3D min_t(ssize_t, len, PAGE_SIZE - offset0); + int ix; + + KUNIT_ASSERT_GE(test, part, 0); + ix =3D from / PAGE_SIZE; + KUNIT_ASSERT_LT(test, ix, npages); + p =3D bpages[ix]; + KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p); + KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE); + from +=3D part; + len -=3D part; + KUNIT_ASSERT_GE(test, len, 0); + if (len =3D=3D 0) + break; + offset0 =3D 0; + } + + if (test->status =3D=3D KUNIT_FAILURE) + goto stop; + } while (iov_iter_count(&iter) > 0); + + KUNIT_EXPECT_EQ(test, size, 0); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to - pr->from); + } + +stop: + KUNIT_SUCCEED(); +} + +/* + * Test the extraction of ITER_IOVEC-type iterators. 
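+ * The same extraction checks are applied to an iovec-backed iterator: the
+ * test ranges are loaded into an iovec array over the mapped user buffer,
+ * iov_iter_extract_pages() is called until the iterator is exhausted, and
+ * each returned page pointer and offset is compared with the backing pages.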
+ */ +static void __init iov_kunit_extract_pages_iovec(struct kunit *test) +{ + const struct iov_kunit_range *pr; + struct iov_iter iter; + struct iovec iov[8]; + struct page **bpages, *pagelist[8], **pages =3D pagelist; + ssize_t len; + size_t bufsize, size =3D 0, npages; + int i, from; + u8 __user *buffer; + + bufsize =3D 0x100000; + npages =3D bufsize / PAGE_SIZE; + + buffer =3D iov_kunit_create_user_buf(test, npages, &bpages); + + iov_kunit_load_iovec(test, &iter, ITER_SOURCE, iov, ARRAY_SIZE(iov), + buffer, bufsize, kvec_test_ranges); + size =3D iter.count; + + pr =3D kvec_test_ranges; + from =3D pr->from; + do { + size_t offset0 =3D LONG_MAX; + + for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) + pagelist[i] =3D (void *)POISON_POINTER_DELTA + 0x5a; + + len =3D iov_iter_extract_pages(&iter, &pages, 100 * 1024, + ARRAY_SIZE(pagelist), 0, &offset0); + KUNIT_EXPECT_GE(test, len, 0); + if (len < 0) + break; + KUNIT_EXPECT_LE(test, len, size); + KUNIT_EXPECT_EQ(test, iter.count, size - len); + if (len =3D=3D 0) + break; + size -=3D len; + KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0); + KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE); + + /* We're only checking the page pointers */ + unpin_user_pages(pages, (offset0 + len) / PAGE_SIZE); + + for (i =3D 0; i < ARRAY_SIZE(pagelist); i++) { + struct page *p; + ssize_t part =3D min_t(ssize_t, len, PAGE_SIZE - offset0); + int ix; + + KUNIT_ASSERT_GE(test, part, 0); + while (from =3D=3D pr->to) { + pr++; + from =3D pr->from; + if (pr->page < 0) + goto stop; + } + + ix =3D from / PAGE_SIZE; + KUNIT_ASSERT_LT(test, ix, npages); + p =3D bpages[ix]; + KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p); + KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE); + from +=3D part; + len -=3D part; + KUNIT_ASSERT_GE(test, len, 0); + if (len =3D=3D 0) + break; + offset0 =3D 0; + } + + if (test->status =3D=3D KUNIT_FAILURE) + break; + } while (iov_iter_count(&iter) > 0); + +stop: + KUNIT_EXPECT_EQ(test, size, 0); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_SUCCEED(); +} + /* * Test the extraction of ITER_KVEC-type iterators. 
*/ @@ -1110,6 +1272,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = =3D { KUNIT_CASE(iov_kunit_copy_from_bvec), KUNIT_CASE(iov_kunit_copy_to_xarray), KUNIT_CASE(iov_kunit_copy_from_xarray), + KUNIT_CASE(iov_kunit_extract_pages_ubuf), + KUNIT_CASE(iov_kunit_extract_pages_iovec), KUNIT_CASE(iov_kunit_extract_pages_kvec), KUNIT_CASE(iov_kunit_extract_pages_bvec), KUNIT_CASE(iov_kunit_extract_pages_xarray), From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61CCACE79D1 for ; Wed, 20 Sep 2023 13:06:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236493AbjITNGq (ORCPT ); Wed, 20 Sep 2023 09:06:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58740 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236608AbjITNGQ (ORCPT ); Wed, 20 Sep 2023 09:06:16 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA852F3 for ; Wed, 20 Sep 2023 06:05:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215119; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=YJCqPShPuxX0OSXQWhEKRturENKmTo+NpvJVS0wCU18=; b=crim0Bd7q8XcIih6T6ycIY4Fz2F3YwluOwN0NWyjxSfJLOZjKCCczc/yN+X7IDxaXetyAk /TZIuVtQkyLHr4UJNgI16CRYjG70bg2gT/f314SyQxvNrya08PxvW8gCRyf/RQWLNHftal OhCNKs9Ee4maBcpEIgptc/BQpAMpVR0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-688-aNGlfMQdOTe2VK1zYWIPzQ-1; Wed, 20 Sep 2023 09:05:14 -0400 X-MC-Unique: aNGlfMQdOTe2VK1zYWIPzQ-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com [10.11.54.6]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C3D67858F19; Wed, 20 Sep 2023 13:05:13 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7BAA02156701; Wed, 20 Sep 2023 13:05:11 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 8/9] iov_iter: Add benchmarking kunit tests Date: Wed, 20 Sep 2023 14:03:59 +0100 Message-ID: <20230920130400.203330-9-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add kunit tests to 
benchmark 256MiB copies to a KVEC iterator, a BVEC iterator, an XARRAY iterator and to a loop that allocates 256-page BVECs and fills them in (similar to a maximal bio struct being set up). Signed-off-by: David Howells cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- lib/kunit_iov_iter.c | 251 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 251 insertions(+) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 2994c3f348ab..17f85f24b239 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -1261,6 +1261,253 @@ static void __init iov_kunit_extract_pages_xarray(s= truct kunit *test) KUNIT_SUCCEED(); } =20 +static void iov_kunit_free_page(void *data) +{ + __free_page(data); +} + +#define IOV_KUNIT_NR_SAMPLES 16 +static void __init iov_kunit_benchmark_print_stats(struct kunit *test, + unsigned int *samples) +{ + unsigned long long sumsq =3D 0; + unsigned long total =3D 0, mean, stddev; + unsigned int n =3D IOV_KUNIT_NR_SAMPLES; + int i; + + //for (i =3D 0; i < n; i++) + // kunit_info(test, "run %x: %u uS\n", i, samples[i]); + + /* Ignore the 0th sample as that may include extra overhead such as + * setting up PTEs. + */ + samples++; + n--; + for (i =3D 0; i < n; i++) + total +=3D samples[i]; + mean =3D total / n; + + for (i =3D 0; i < n; i++) { + long s =3D samples[i] - mean; + + sumsq +=3D s * s; + } + stddev =3D int_sqrt64(sumsq); + + kunit_info(test, "avg %lu uS, stddev %lu uS\n", mean, stddev); +} + +/* + * Create a source buffer for benchmarking. + */ +static void *__init iov_kunit_create_source(struct kunit *test, size_t npa= ges) +{ + struct page *page, **pages; + void *scratch; + size_t i; + + /* Allocate a page and tile it repeatedly in the buffer. */ + page =3D alloc_page(GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, page); + kunit_add_action_or_reset(test, iov_kunit_free_page, page); + + pages =3D kunit_kmalloc_array(test, npages, sizeof(pages[0]), GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, pages); + for (i =3D 0; i < npages; i++) { + pages[i] =3D page; + get_page(page); + } + + scratch =3D vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, scratch); + kunit_add_action_or_reset(test, iov_kunit_unmap, scratch); + return scratch; +} + +/* + * Time copying 256MiB through an ITER_KVEC. + */ +static void __init iov_kunit_benchmark_kvec(struct kunit *test) +{ + struct iov_iter iter; + struct kvec kvec[8]; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size =3D 256 * 1024 * 1024, npages =3D size / PAGE_SIZE, part; + void *scratch, *buffer; + int i; + + /* Allocate a huge buffer and populate it with pages. */ + buffer =3D iov_kunit_create_source(test, npages); + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Split the target over a number of kvecs */ + copied =3D 0; + for (i =3D 0; i < ARRAY_SIZE(kvec); i++) { + part =3D size / ARRAY_SIZE(kvec); + kvec[i].iov_base =3D buffer + copied; + kvec[i].iov_len =3D part; + copied +=3D part; + } + kvec[i - 1].iov_len +=3D size - part; + + /* Perform and time a bunch of copies. 
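Each sample records, in microseconds, one full 256MiB copy through the iterator; the stats helper above drops the first sample to exclude one-off setup overhead.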
*/ + kunit_info(test, "Benchmarking copy_to_iter() over KVEC:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + iov_iter_kvec(&iter, ITER_SOURCE, kvec, ARRAY_SIZE(kvec), size); + + a =3D ktime_get_real(); + copied =3D copy_from_iter(scratch, size, &iter); + b =3D ktime_get_real(); + KUNIT_EXPECT_EQ(test, copied, size); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + +/* + * Time copying 256MiB through an ITER_BVEC. + */ +static void __init iov_kunit_benchmark_bvec(struct kunit *test) +{ + struct iov_iter iter; + struct bio_vec *bvec; + struct page *page; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size =3D 256 * 1024 * 1024, npages =3D size / PAGE_SIZE; + void *scratch; + int i; + + /* Allocate a page and tile it repeatedly in the buffer. */ + page =3D alloc_page(GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, page); + kunit_add_action_or_reset(test, iov_kunit_free_page, page); + + bvec =3D kunit_kmalloc_array(test, npages, sizeof(bvec[0]), GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, bvec); + for (i =3D 0; i < npages; i++) + bvec_set_page(&bvec[i], page, PAGE_SIZE, 0); + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Perform and time a bunch of copies. */ + kunit_info(test, "Benchmarking copy_to_iter() over BVEC:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + iov_iter_bvec(&iter, ITER_SOURCE, bvec, npages, size); + a =3D ktime_get_real(); + copied =3D copy_from_iter(scratch, size, &iter); + b =3D ktime_get_real(); + KUNIT_EXPECT_EQ(test, copied, size); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + +/* + * Time copying 256MiB through an ITER_BVEC in 256 page chunks. + */ +static void __init iov_kunit_benchmark_bvec_split(struct kunit *test) +{ + struct iov_iter iter; + struct bio_vec *bvec; + struct page *page; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size, npages =3D 64; + void *scratch; + int i, j; + + /* Allocate a page and tile it repeatedly in the buffer. */ + page =3D alloc_page(GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, page); + kunit_add_action_or_reset(test, iov_kunit_free_page, page); + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Perform and time a bunch of copies. */ + kunit_info(test, "Benchmarking copy_to_iter() over BVEC:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + size =3D 256 * 1024 * 1024; + a =3D ktime_get_real(); + do { + size_t part =3D min_t(size_t, size, npages * PAGE_SIZE); + + bvec =3D kunit_kmalloc_array(test, npages, sizeof(bvec[0]), GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, bvec); + for (j =3D 0; j < npages; j++) + bvec_set_page(&bvec[j], page, PAGE_SIZE, 0); + + iov_iter_bvec(&iter, ITER_SOURCE, bvec, npages, part); + copied =3D copy_from_iter(scratch, part, &iter); + KUNIT_EXPECT_EQ(test, copied, part); + size -=3D part; + } while (size > 0); + b =3D ktime_get_real(); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + +/* + * Time copying 256MiB through an ITER_XARRAY. 
+ */ +static void __init iov_kunit_benchmark_xarray(struct kunit *test) +{ + struct iov_iter iter; + struct xarray *xarray; + struct page *page; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size =3D 256 * 1024 * 1024, npages =3D size / PAGE_SIZE; + void *scratch; + int i; + + /* Allocate a page and tile it repeatedly in the buffer. */ + page =3D alloc_page(GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, page); + kunit_add_action_or_reset(test, iov_kunit_free_page, page); + + xarray =3D iov_kunit_create_xarray(test); + + for (i =3D 0; i < npages; i++) { + void *x =3D xa_store(xarray, i, page, GFP_KERNEL); + + KUNIT_ASSERT_FALSE(test, xa_is_err(x)); + } + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Perform and time a bunch of copies. */ + kunit_info(test, "Benchmarking copy_to_iter() over XARRAY:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + iov_iter_xarray(&iter, ITER_SOURCE, xarray, 0, size); + a =3D ktime_get_real(); + copied =3D copy_from_iter(scratch, size, &iter); + b =3D ktime_get_real(); + KUNIT_EXPECT_EQ(test, copied, size); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + static struct kunit_case __refdata iov_kunit_cases[] =3D { KUNIT_CASE(iov_kunit_copy_to_ubuf), KUNIT_CASE(iov_kunit_copy_from_ubuf), @@ -1277,6 +1524,10 @@ static struct kunit_case __refdata iov_kunit_cases[]= =3D { KUNIT_CASE(iov_kunit_extract_pages_kvec), KUNIT_CASE(iov_kunit_extract_pages_bvec), KUNIT_CASE(iov_kunit_extract_pages_xarray), + KUNIT_CASE(iov_kunit_benchmark_kvec), + KUNIT_CASE(iov_kunit_benchmark_bvec), + KUNIT_CASE(iov_kunit_benchmark_bvec_split), + KUNIT_CASE(iov_kunit_benchmark_xarray), {} }; From nobody Fri Feb 13 19:27:04 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 466CACE79CE for ; Wed, 20 Sep 2023 13:06:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236520AbjITNG5 (ORCPT ); Wed, 20 Sep 2023 09:06:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236623AbjITNGT (ORCPT ); Wed, 20 Sep 2023 09:06:19 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 678CE122 for ; Wed, 20 Sep 2023 06:05:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1695215123; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=WKL59oxitSx6DOUyNJ+1H0DeRSBlY0vVZI1oz1R1zFg=; b=LFy6dDPC2tomJN/FwHCsi7jJ6+BDP5Iv8j50RF5G45BBRr+IZ7ui4bFYa2MnTBcXQkLxW9 liu5EaAEnvdTOskAU82xE8D99sjB2jLyImXzHsX4KTa/2SQ0RXCO4RvOJ8Rq6lRmt8ryhZ RbIbku5qh52ttdXH7sIRjwLEUw1ojV8= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-653-5VCNNxRlNwuawrr7coWaQA-1; Wed, 20 Sep 2023 09:05:18 -0400 X-MC-Unique: 5VCNNxRlNwuawrr7coWaQA-1 Received: from 
smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D5BEF2999B36; Wed, 20 Sep 2023 13:05:16 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 698FE51E3; Wed, 20 Sep 2023 13:05:14 +0000 (UTC) From: David Howells To: Jens Axboe Cc: David Howells , Al Viro , Linus Torvalds , Christoph Hellwig , Christian Brauner , David Laight , Matthew Wilcox , Brendan Higgins , David Gow , linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, Andrew Morton , Christian Brauner , David Hildenbrand , John Hubbard Subject: [RFC PATCH v2 9/9] iov_iter: Add benchmarking kunit tests for UBUF/IOVEC Date: Wed, 20 Sep 2023 14:04:00 +0100 Message-ID: <20230920130400.203330-10-dhowells@redhat.com> In-Reply-To: <20230920130400.203330-1-dhowells@redhat.com> References: <20230920130400.203330-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add kunit tests to benchmark 256MiB copies to a UBUF iterator and an IOVEC iterator. This attaches a userspace VM with a mapped file in it temporarily to the test thread. Signed-off-by: David Howells cc: Andrew Morton cc: Christoph Hellwig cc: Christian Brauner cc: Jens Axboe cc: Al Viro cc: Matthew Wilcox cc: David Hildenbrand cc: John Hubbard cc: Brendan Higgins cc: David Gow cc: linux-kselftest@vger.kernel.org cc: kunit-dev@googlegroups.com cc: linux-mm@kvack.org cc: linux-fsdevel@vger.kernel.org --- lib/kunit_iov_iter.c | 95 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 95 insertions(+) diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 17f85f24b239..4ee939a1c5ec 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -1324,6 +1324,99 @@ static void *__init iov_kunit_create_source(struct k= unit *test, size_t npages) return scratch; } =20 +/* + * Time copying 256MiB through an ITER_UBUF. + */ +static void __init iov_kunit_benchmark_ubuf(struct kunit *test) +{ + struct iov_iter iter; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size =3D 256 * 1024 * 1024, npages =3D size / PAGE_SIZE; + void *scratch; + int i; + u8 __user *buffer; + + /* Allocate a huge buffer and populate it with pages. */ + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Perform and time a bunch of copies. */ + kunit_info(test, "Benchmarking copy_to_iter() over UBUF:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + size_t remain =3D size; + + a =3D ktime_get_real(); + do { + size_t part =3D min(remain, PAGE_SIZE); + + iov_iter_ubuf(&iter, ITER_SOURCE, buffer, part); + copied =3D copy_from_iter(scratch, part, &iter); + KUNIT_EXPECT_EQ(test, copied, part); + remain -=3D part; + } while (remain > 0); + b =3D ktime_get_real(); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + +/* + * Time copying 256MiB through an ITER_IOVEC. 
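+ * The mapped user buffer is split into one PAGE_SIZE iovec per page, so as
+ * well as the raw copy this also times the per-segment iteration overhead
+ * of a large iovec array.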
+ */ +static void __init iov_kunit_benchmark_iovec(struct kunit *test) +{ + struct iov_iter iter; + struct iovec *iov; + unsigned int samples[IOV_KUNIT_NR_SAMPLES]; + ktime_t a, b; + ssize_t copied; + size_t size =3D 256 * 1024 * 1024, npages =3D size / PAGE_SIZE, part; + size_t ioc =3D size / PAGE_SIZE; + void *scratch; + int i; + u8 __user *buffer; + + iov =3D kunit_kmalloc_array(test, ioc, sizeof(*iov), GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test, iov); + + /* Allocate a huge buffer and populate it with pages. */ + buffer =3D iov_kunit_create_user_buf(test, npages, NULL); + + /* Create a single large buffer to copy to/from. */ + scratch =3D iov_kunit_create_source(test, npages); + + /* Split the target over a number of iovecs */ + copied =3D 0; + for (i =3D 0; i < ioc; i++) { + part =3D size / ioc; + iov[i].iov_base =3D buffer + copied; + iov[i].iov_len =3D part; + copied +=3D part; + } + iov[i - 1].iov_len +=3D size - part; + + /* Perform and time a bunch of copies. */ + kunit_info(test, "Benchmarking copy_to_iter() over IOVEC:\n"); + for (i =3D 0; i < IOV_KUNIT_NR_SAMPLES; i++) { + iov_iter_init(&iter, ITER_SOURCE, iov, npages, size); + + a =3D ktime_get_real(); + copied =3D copy_from_iter(scratch, size, &iter); + b =3D ktime_get_real(); + KUNIT_EXPECT_EQ(test, copied, size); + samples[i] =3D ktime_to_us(ktime_sub(b, a)); + } + + iov_kunit_benchmark_print_stats(test, samples); + KUNIT_SUCCEED(); +} + /* * Time copying 256MiB through an ITER_KVEC. */ @@ -1524,6 +1617,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = =3D { KUNIT_CASE(iov_kunit_extract_pages_kvec), KUNIT_CASE(iov_kunit_extract_pages_bvec), KUNIT_CASE(iov_kunit_extract_pages_xarray), + KUNIT_CASE(iov_kunit_benchmark_ubuf), + KUNIT_CASE(iov_kunit_benchmark_iovec), KUNIT_CASE(iov_kunit_benchmark_kvec), KUNIT_CASE(iov_kunit_benchmark_bvec), KUNIT_CASE(iov_kunit_benchmark_bvec_split),