From nobody Fri Dec 19 12:16:31 2025
Date: Thu, 9 Oct 2025 18:19:48 -0700
In-Reply-To: <20251010011951.2136980-1-surenb@google.com>
References: <20251010011951.2136980-1-surenb@google.com>
X-Mailer: git-send-email 2.51.0.740.g6adb054d12-goog
Message-ID: <20251010011951.2136980-6-surenb@google.com>
Subject: [PATCH 5/8] mm/tests: add cleancache kunit test
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: david@redhat.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	vbabka@suse.cz, alexandru.elisei@arm.com, peterx@redhat.com,
	sj@kernel.org, rppt@kernel.org, mhocko@suse.com, corbet@lwn.net,
	axboe@kernel.dk, viro@zeniv.linux.org.uk, brauner@kernel.org,
	hch@infradead.org, jack@suse.cz, willy@infradead.org,
	m.szyprowski@samsung.com, robin.murphy@arm.com, hannes@cmpxchg.org,
	zhengqi.arch@bytedance.com, shakeel.butt@linux.dev,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	minchan@kernel.org, surenb@google.com, linux-mm@kvack.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	iommu@lists.linux.dev

Introduce a KUnit test that creates fake inodes, fills them with folios
containing predefined content, registers a cleancache pool, and
allocates and donates folios to the new pool. After this initialization
it runs several scenarios:

1. cleancache_restore_test - stores fake inode pages into cleancache,
then restores them into an auxiliary folio and checks the restored
content;
2. cleancache_walk_and_restore_test - stores fake inode pages into
cleancache, then restores the folios of one inode using the inode-walk
API and checks the restored content;
3. cleancache_invalidate_test - stores a folio, successfully restores
it, invalidates it, and then tries to restore it again, expecting a
failure;
4. cleancache_reclaim_test - fills up the cleancache, stores one more
folio and verifies that the oldest folio got reclaimed;
5. cleancache_backend_api_test - takes all donated folios back from
cleancache and then returns them, verifying the results.
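
The suite can be run with the KUnit wrapper, for example (assuming
CONFIG_CLEANCACHE and CONFIG_CLEANCACHE_KUNIT are enabled in the
.kunitconfig used for the run):

  $ ./tools/testing/kunit/kunit.py run 'cleancache'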
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 MAINTAINERS                 |   1 +
 mm/Kconfig.debug            |  13 ++
 mm/Makefile                 |   1 +
 mm/cleancache.c             |  35 ++-
 mm/tests/Makefile           |   6 +
 mm/tests/cleancache_kunit.c | 425 ++++++++++++++++++++++++++++++++++++
 6 files changed, 480 insertions(+), 1 deletion(-)
 create mode 100644 mm/tests/Makefile
 create mode 100644 mm/tests/cleancache_kunit.c
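
For context, the backend-facing flow that the test's suite_init() below
models looks roughly like this (a sketch only: error handling is elided
and "my_backend" is a placeholder pool name):

static int my_backend_donate_folios(struct list_head *folios)
{
	int pool_id;

	/* Create a dedicated pool for the folios this backend donates. */
	pool_id = cleancache_backend_register_pool("my_backend");
	if (pool_id < 0)
		return pool_id;

	/*
	 * Folios must have their refcount frozen to zero before they are
	 * handed over; cleancache consumes every folio left on the list.
	 */
	cleancache_backend_put_folios(pool_id, folios);

	return pool_id;
}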

diff --git a/MAINTAINERS b/MAINTAINERS
index f66307cd9c4b..1c97227e7ffa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6057,6 +6057,7 @@ F:	include/linux/cleancache.h
 F:	mm/cleancache.c
 F:	mm/cleancache_sysfs.c
 F:	mm/cleancache_sysfs.h
+F:	mm/tests/cleancache_kunit.c
 
 CLK API
 M:	Russell King <linux@armlinux.org.uk>
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 32b65073d0cc..c3482f7bc977 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -309,3 +309,16 @@ config PER_VMA_LOCK_STATS
 	  overhead in the page fault path.
 
 	  If in doubt, say N.
+
+config CLEANCACHE_KUNIT
+	tristate "KUnit test for cleancache" if !KUNIT_ALL_TESTS
+	depends on KUNIT
+	depends on CLEANCACHE
+	default KUNIT_ALL_TESTS
+	help
+	  This builds the cleancache unit test.
+	  Tests the cleancache functionality.
+	  For more information on KUnit and unit tests in general, please refer
+	  to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+	  If unsure, say N.
diff --git a/mm/Makefile b/mm/Makefile
index a7a635f762ee..845841a140e3 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -70,6 +70,7 @@ obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
 obj-y += slub.o
+obj-y += tests/
 
 ifdef CONFIG_MMU
 obj-$(CONFIG_ADVISE_SYSCALLS) += madvise.o
diff --git a/mm/cleancache.c b/mm/cleancache.c
index 56dce7e03709..fd18486b0407 100644
--- a/mm/cleancache.c
+++ b/mm/cleancache.c
@@ -11,6 +11,8 @@
 #include
 #include
 #include
+#include <kunit/test-bug.h>
+#include <kunit/test.h>
 
 #include "cleancache_sysfs.h"
 
@@ -74,6 +76,28 @@ static DEFINE_SPINLOCK(pools_lock); /* protects pools */
 static LIST_HEAD(cleancache_lru);
 static DEFINE_SPINLOCK(lru_lock); /* protects cleancache_lru */
 
+#if IS_ENABLED(CONFIG_CLEANCACHE_KUNIT)
+
+static bool is_pool_allowed(int pool_id)
+{
+	struct kunit *test = kunit_get_current_test();
+
+	/* Restrict kunit tests to using only the test pool */
+	return test && *((int *)test->priv) == pool_id;
+}
+
+#else /* CONFIG_CLEANCACHE_KUNIT */
+
+static bool is_pool_allowed(int pool_id) { return true; }
+
+#endif /* CONFIG_CLEANCACHE_KUNIT */
+
+#if IS_MODULE(CONFIG_CLEANCACHE_KUNIT)
+#define EXPORT_SYMBOL_FOR_KUNIT(x) EXPORT_SYMBOL(x)
+#else
+#define EXPORT_SYMBOL_FOR_KUNIT(x)
+#endif
+
 /*
  * Folio attributes:
  * folio->_mapcount - pool_id
@@ -184,7 +208,7 @@ static struct folio *pick_folio_from_any_pool(void)
 	for (int i = 0; i < count; i++) {
 		pool = &pools[i];
 		spin_lock(&pool->lock);
-		if (!list_empty(&pool->folio_list)) {
+		if (!list_empty(&pool->folio_list) && is_pool_allowed(i)) {
 			folio = list_last_entry(&pool->folio_list,
 						struct folio, lru);
 			WARN_ON(!remove_folio_from_pool(folio, pool));
@@ -747,6 +771,7 @@ void cleancache_add_fs(struct super_block *sb)
 err:
 	sb->cleancache_id = CLEANCACHE_ID_INVALID;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_add_fs);
 
 void cleancache_remove_fs(struct super_block *sb)
 {
@@ -766,6 +791,7 @@ void cleancache_remove_fs(struct super_block *sb)
 	/* free the object */
 	put_fs(fs);
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_remove_fs);
 
 bool cleancache_store_folio(struct inode *inode, struct folio *folio)
 {
@@ -795,6 +821,7 @@ bool cleancache_store_folio(struct inode *inode, struct folio *folio)
 
 	return ret;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_store_folio);
 
 bool cleancache_restore_folio(struct inode *inode, struct folio *folio)
 {
@@ -822,6 +849,7 @@ bool cleancache_restore_folio(struct inode *inode, struct folio *folio)
 
 	return ret;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_restore_folio);
 
 bool cleancache_invalidate_folio(struct address_space *mapping,
 				 struct inode *inode, struct folio *folio)
@@ -853,6 +881,7 @@ bool cleancache_invalidate_folio(struct address_space *mapping,
 
 	return ret;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_invalidate_folio);
 
 bool cleancache_invalidate_inode(struct address_space *mapping,
 				 struct inode *inode)
@@ -877,6 +906,7 @@ bool cleancache_invalidate_inode(struct address_space *mapping,
 
 	return count > 0;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_invalidate_inode);
 
 struct cleancache_inode *
 cleancache_start_inode_walk(struct address_space *mapping, struct inode *inode,
@@ -906,6 +936,7 @@ cleancache_start_inode_walk(struct address_space *mapping, struct inode *inode,
 
 	return ccinode;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_start_inode_walk);
 
 void cleancache_end_inode_walk(struct cleancache_inode *ccinode)
 {
@@ -914,6 +945,7 @@ void cleancache_end_inode_walk(struct cleancache_inode *ccinode)
 	put_inode(ccinode);
 	put_fs(fs);
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_end_inode_walk);
 
 bool cleancache_restore_from_inode(struct cleancache_inode *ccinode,
 				   struct folio *folio)
@@ -940,6 +972,7 @@ bool cleancache_restore_from_inode(struct cleancache_inode *ccinode,
 
 	return ret;
 }
+EXPORT_SYMBOL_FOR_KUNIT(cleancache_restore_from_inode);
 
 /* Backend API */
 /*
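
Note: when CONFIG_CLEANCACHE_KUNIT=m, each EXPORT_SYMBOL_FOR_KUNIT()
line above expands to a regular EXPORT_SYMBOL() entry so that the test
module can link against the cleancache internals, e.g.:

	EXPORT_SYMBOL_FOR_KUNIT(cleancache_store_folio);
	/* expands to */
	EXPORT_SYMBOL(cleancache_store_folio);

With the test built in (=y) or disabled, the macro expands to nothing,
so no extra symbols are exported.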
diff --git a/mm/tests/Makefile b/mm/tests/Makefile
new file mode 100644
index 000000000000..fac2e964b4d5
--- /dev/null
+++ b/mm/tests/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for tests of kernel mm subsystem.
+
+# KUnit tests
+obj-$(CONFIG_CLEANCACHE_KUNIT) += cleancache_kunit.o
diff --git a/mm/tests/cleancache_kunit.c b/mm/tests/cleancache_kunit.c
new file mode 100644
index 000000000000..18b4386d6322
--- /dev/null
+++ b/mm/tests/cleancache_kunit.c
@@ -0,0 +1,425 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit test for the Cleancache.
+ *
+ * Copyright (C) 2025, Google LLC.
+ * Author: Suren Baghdasaryan <surenb@google.com>
+ */
+#include <kunit/test.h>
+
+#include <linux/cleancache.h>
+#include <linux/fs.h>
+#include <linux/highmem.h>
+
+#include "../internal.h"
+
+#define INODE_COUNT 5
+#define FOLIOS_PER_INODE 4
+#define FOLIO_COUNT (INODE_COUNT * FOLIOS_PER_INODE)
+
+static const u32 TEST_CONTENT = 0xBADCAB32;
+
+struct inode_data {
+	struct address_space mapping;
+	struct inode inode;
+	struct folio *folios[FOLIOS_PER_INODE];
+};
+
+static struct test_data {
+	/* Mock a fs */
+	struct super_block sb;
+	struct inode_data inodes[INODE_COUNT];
+	/* Folios donated to the cleancache pools */
+	struct folio *pool_folios[FOLIO_COUNT];
+	/* Auxiliary folio */
+	struct folio *aux_folio;
+	int pool_id;
+} test_data;
+
+static void set_folio_content(struct folio *folio, u32 value)
+{
+	u32 *data;
+
+	data = kmap_local_folio(folio, 0);
+	*data = value;
+	kunmap_local(data);
+}
+
+static u32 get_folio_content(struct folio *folio)
+{
+	u32 value;
+	u32 *data;
+
+	data = kmap_local_folio(folio, 0);
+	value = *data;
+	kunmap_local(data);
+
+	return value;
+}
+
+static void fill_cleancache(struct kunit *test)
+{
+	struct inode_data *inode_data;
+	struct folio *folio;
+
+	/* Store inode folios into cleancache */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		inode_data = &test_data.inodes[inode];
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			folio = inode_data->folios[fidx];
+			KUNIT_EXPECT_NOT_NULL(test, folio);
+			folio_lock(folio); /* Folio has to be locked */
+			folio_set_workingset(folio);
+			KUNIT_EXPECT_TRUE(test, cleancache_store_folio(&inode_data->inode, folio));
+			folio_unlock(folio);
+		}
+	}
+}
+
+static int cleancache_suite_init(struct kunit_suite *suite)
+{
+	LIST_HEAD(pool_folios);
+
+	/* Add a fake fs superblock */
+	cleancache_add_fs(&test_data.sb);
+
+	/* Initialize fake inodes */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		struct inode_data *inode_data = &test_data.inodes[inode];
+
+		inode_data->inode.i_sb = &test_data.sb;
+		inode_data->inode.i_ino = inode;
+		inode_data->mapping.host = &inode_data->inode;
+
+		/* Allocate folios for the inode */
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			struct folio *folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+			if (!folio)
+				return -ENOMEM;
+
+			set_folio_content(folio, (u32)fidx);
+			folio->mapping = &inode_data->mapping;
+			folio->index = PAGE_SIZE * fidx;
+			inode_data->folios[fidx] = folio;
+		}
+	}
+
+	/* Register new cleancache pool and donate test folios */
+	test_data.pool_id = cleancache_backend_register_pool("kunit_pool");
+	if (test_data.pool_id < 0)
+		return -EINVAL;
+
+	/* Allocate folios and put them to cleancache */
+	for (int fidx = 0; fidx < FOLIO_COUNT; fidx++) {
+		struct folio *folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+		if (!folio)
+			return -ENOMEM;
+
+		folio_ref_freeze(folio, 1);
+		test_data.pool_folios[fidx] = folio;
+		list_add(&folio->lru, &pool_folios);
+	}
+
+	cleancache_backend_put_folios(test_data.pool_id, &pool_folios);
+
+	/* Allocate auxiliary folio for testing */
+	test_data.aux_folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+	if (!test_data.aux_folio)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void cleancache_suite_exit(struct kunit_suite *suite)
+{
+	/* Take back donated folios and free them */
+	for (int fidx = 0; fidx < FOLIO_COUNT; fidx++) {
+		struct folio *folio = test_data.pool_folios[fidx];
+
+		if (folio) {
+			if (!cleancache_backend_get_folio(test_data.pool_id,
+							  folio))
+				set_page_refcounted(&folio->page);
+			folio_put(folio);
+		}
+	}
+
+	/* Free the auxiliary folio */
+	if (test_data.aux_folio) {
+		test_data.aux_folio->mapping = NULL;
+		folio_put(test_data.aux_folio);
+	}
+
+	/* Free inode folios */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			struct folio *folio = test_data.inodes[inode].folios[fidx];
+
+			if (folio) {
+				folio->mapping = NULL;
+				folio_put(folio);
+			}
+		}
+	}
+
+	cleancache_remove_fs(&test_data.sb);
+}
+
+static int cleancache_test_init(struct kunit *test)
+{
+	/* Pass pool_id to cleancache to restrict pools that can be used for tests */
+	test->priv = &test_data.pool_id;
+
+	return 0;
+}
+
+static void cleancache_restore_test(struct kunit *test)
+{
+	struct inode_data *inode_data;
+	struct folio *folio;
+
+	/* Store inode folios into cleancache */
+	fill_cleancache(test);
+
+	/* Restore and validate folios stored in cleancache */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		inode_data = &test_data.inodes[inode];
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			folio = inode_data->folios[fidx];
+			test_data.aux_folio->mapping = folio->mapping;
+			test_data.aux_folio->index = folio->index;
+			KUNIT_EXPECT_TRUE(test, cleancache_restore_folio(&inode_data->inode,
+									 test_data.aux_folio));
+			KUNIT_EXPECT_EQ(test, get_folio_content(test_data.aux_folio),
+					get_folio_content(folio));
+		}
+	}
+}
+
+static void cleancache_walk_and_restore_test(struct kunit *test)
+{
+	struct cleancache_inode *ccinode;
+	struct inode_data *inode_data;
+	struct folio *folio;
+
+	/* Store inode folios into cleancache */
+	fill_cleancache(test);
+
+	/* Restore and validate folios stored in the first inode */
+	inode_data = &test_data.inodes[0];
+	ccinode = cleancache_start_inode_walk(&inode_data->mapping, &inode_data->inode,
+					      FOLIOS_PER_INODE);
+	KUNIT_EXPECT_NOT_NULL(test, ccinode);
+	for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+		folio = inode_data->folios[fidx];
+		test_data.aux_folio->mapping = folio->mapping;
+		test_data.aux_folio->index = folio->index;
+		KUNIT_EXPECT_TRUE(test, cleancache_restore_from_inode(ccinode,
+								      test_data.aux_folio));
+		KUNIT_EXPECT_EQ(test, get_folio_content(test_data.aux_folio),
+				get_folio_content(folio));
+	}
+	cleancache_end_inode_walk(ccinode);
+}
+
+static void cleancache_invalidate_test(struct kunit *test)
+{
+	struct inode_data *inode_data;
+	struct folio *folio;
+
+	/* Store inode folios into cleancache */
+	fill_cleancache(test);
+
+	/* Invalidate one folio */
+	inode_data = &test_data.inodes[0];
+	folio = inode_data->folios[0];
+	test_data.aux_folio->mapping = folio->mapping;
+	test_data.aux_folio->index = folio->index;
+	KUNIT_EXPECT_TRUE(test, cleancache_restore_folio(&inode_data->inode,
+							 test_data.aux_folio));
+	folio_lock(folio); /* Folio has to be locked */
+	KUNIT_EXPECT_TRUE(test, cleancache_invalidate_folio(&inode_data->mapping,
+							    &inode_data->inode,
+							    inode_data->folios[0]));
+	folio_unlock(folio);
+	KUNIT_EXPECT_FALSE(test, cleancache_restore_folio(&inode_data->inode,
+							  test_data.aux_folio));
+
+	/* Invalidate one inode */
+	inode_data = &test_data.inodes[1];
+	KUNIT_EXPECT_TRUE(test, cleancache_invalidate_inode(&inode_data->mapping,
+							    &inode_data->inode));
+
+	/* Verify results */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		inode_data = &test_data.inodes[inode];
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			folio = inode_data->folios[fidx];
+			test_data.aux_folio->mapping = folio->mapping;
+			test_data.aux_folio->index = folio->index;
+			if (inode == 0 && fidx == 0) {
+				/* Folio should be missing */
+				KUNIT_EXPECT_FALSE(test,
+						   cleancache_restore_folio(&inode_data->inode,
+									    test_data.aux_folio));
+				continue;
+			}
+			if (inode == 1) {
+				/* Folios in the inode should be missing */
+				KUNIT_EXPECT_FALSE(test,
+						   cleancache_restore_folio(&inode_data->inode,
+									    test_data.aux_folio));
+				continue;
+			}
+			KUNIT_EXPECT_TRUE(test,
+					  cleancache_restore_folio(&inode_data->inode,
+								   test_data.aux_folio));
+			KUNIT_EXPECT_EQ(test, get_folio_content(test_data.aux_folio),
+					get_folio_content(folio));
+		}
+	}
+}
+
+static void cleancache_reclaim_test(struct kunit *test)
+{
+	struct inode_data *inode_data;
+	struct inode_data *inode_new;
+	unsigned long new_index;
+	struct folio *folio;
+
+	/* Store inode folios into cleancache */
+	fill_cleancache(test);
+
+	/*
+	 * Store one extra new folio. There should be no free folios, so the
+	 * oldest folio will be reclaimed to store the new folio. Add it into
+	 * the last inode at the next unoccupied offset.
+	 */
+	inode_new = &test_data.inodes[INODE_COUNT - 1];
+	new_index = inode_new->folios[FOLIOS_PER_INODE - 1]->index + PAGE_SIZE;
+
+	test_data.aux_folio->mapping = &inode_new->mapping;
+	test_data.aux_folio->index = new_index;
+	set_folio_content(test_data.aux_folio, TEST_CONTENT);
+	folio_lock(test_data.aux_folio); /* Folio has to be locked */
+	folio_set_workingset(test_data.aux_folio);
+	KUNIT_EXPECT_TRUE(test, cleancache_store_folio(&inode_new->inode, test_data.aux_folio));
+	folio_unlock(test_data.aux_folio);
+
+	/* Verify results */
+	for (int inode = 0; inode < INODE_COUNT; inode++) {
+		inode_data = &test_data.inodes[inode];
+		for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+			folio = inode_data->folios[fidx];
+			test_data.aux_folio->mapping = folio->mapping;
+			test_data.aux_folio->index = folio->index;
+			/*
+			 * The first folio of the first inode was added first,
+			 * so it's the oldest and must have been reclaimed.
+			 */
+			if (inode == 0 && fidx == 0) {
+				/* Reclaimed folio should be missing */
+				KUNIT_EXPECT_FALSE_MSG(test,
+						       cleancache_restore_folio(&inode_data->inode,
+										test_data.aux_folio),
+						       "inode %d, folio %d is invalid\n", inode, fidx);
+				continue;
+			}
+			KUNIT_EXPECT_TRUE_MSG(test,
+					      cleancache_restore_folio(&inode_data->inode,
+								       test_data.aux_folio),
+					      "inode %d, folio %d is invalid\n",
+					      inode, fidx);
+			KUNIT_EXPECT_EQ_MSG(test, get_folio_content(test_data.aux_folio),
+					    get_folio_content(folio),
+					    "inode %d, folio %d content is invalid\n",
+					    inode, fidx);
+		}
+	}
+
+	/* Auxiliary folio should be stored */
+	test_data.aux_folio->mapping = &inode_new->mapping;
+	test_data.aux_folio->index = new_index;
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      cleancache_restore_folio(&inode_new->inode, test_data.aux_folio),
+			      "inode %lu, folio %ld is invalid\n",
+			      inode_new->inode.i_ino, new_index);
+	KUNIT_EXPECT_EQ_MSG(test, get_folio_content(test_data.aux_folio), TEST_CONTENT,
+			    "inode %lu, folio %ld content is invalid\n",
+			    inode_new->inode.i_ino, new_index);
+}
+
+static void cleancache_backend_api_test(struct kunit *test)
+{
+	struct folio *folio;
+	LIST_HEAD(folios);
+	int unused = 0;
+	int used = 0;
+
+	/* Store inode folios into cleancache */
+	fill_cleancache(test);
+
+	/* Get all donated folios back */
+	for (int fidx = 0; fidx < FOLIO_COUNT; fidx++) {
+		KUNIT_EXPECT_EQ(test, cleancache_backend_get_folio(test_data.pool_id,
+				test_data.pool_folios[fidx]), 0);
+		set_page_refcounted(&test_data.pool_folios[fidx]->page);
+	}
+
+	/* Try putting a refcounted folio */
+	KUNIT_EXPECT_NE(test, cleancache_backend_put_folio(test_data.pool_id,
+			test_data.pool_folios[0]), 0);
+
+	/* Put some of the folios back into cleancache */
+	for (int fidx = 0; fidx < FOLIOS_PER_INODE; fidx++) {
+		folio_ref_freeze(test_data.pool_folios[fidx], 1);
+		KUNIT_EXPECT_EQ(test, cleancache_backend_put_folio(test_data.pool_id,
+				test_data.pool_folios[fidx]), 0);
+	}
+
+	/* Put the rest back into cleancache but keep half of the folios still refcounted */
+	for (int fidx = FOLIOS_PER_INODE; fidx < FOLIO_COUNT; fidx++) {
+		if (fidx % 2) {
+			folio_ref_freeze(test_data.pool_folios[fidx], 1);
+			unused++;
+		} else {
+			used++;
+		}
+		list_add(&test_data.pool_folios[fidx]->lru, &folios);
+	}
+	KUNIT_EXPECT_NE(test, cleancache_backend_put_folios(test_data.pool_id,
+			&folios), 0);
+	/* Used folios should still be in the list */
+	KUNIT_EXPECT_EQ(test, list_count_nodes(&folios), used);
+
+	/* Release refcounts and put the remaining folios into cleancache */
+	list_for_each_entry(folio, &folios, lru)
+		folio_ref_freeze(folio, 1);
+	KUNIT_EXPECT_EQ(test, cleancache_backend_put_folios(test_data.pool_id,
+			&folios), 0);
+	KUNIT_EXPECT_TRUE(test, list_empty(&folios));
+}
+
+static struct kunit_case cleancache_test_cases[] = {
+	KUNIT_CASE(cleancache_restore_test),
+	KUNIT_CASE(cleancache_walk_and_restore_test),
+	KUNIT_CASE(cleancache_invalidate_test),
+	KUNIT_CASE(cleancache_reclaim_test),
+	KUNIT_CASE(cleancache_backend_api_test),
+	{},
+};
+
+static struct kunit_suite cleancache_test_suite = {
+	.name = "cleancache",
+	.init = cleancache_test_init,
+	.suite_init = cleancache_suite_init,
+	.suite_exit = cleancache_suite_exit,
+	.test_cases = cleancache_test_cases,
+};
+
+kunit_test_suites(&cleancache_test_suite);
+
+MODULE_DESCRIPTION("KUnit test for the Kernel Cleancache");
+MODULE_LICENSE("GPL");
-- 
2.51.0.740.g6adb054d12-goog