From nobody Fri Apr 17 09:30:28 2026
From: SeongJae Park
To:
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, damon@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v2 1/3] mm/damon/core: split regions for min_nr_regions
Date: Sat, 21 Feb 2026 10:03:38 -0800
Message-ID: <20260221180341.10313-2-sj@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260221180341.10313-1-sj@kernel.org>
References: <20260221180341.10313-1-sj@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The DAMON core layer respects the min_nr_regions parameter by setting the
maximum size of each region to the total monitoring region size divided by
the parameter value.  The limit is applied by preventing merges of regions
that would result in a region larger than the maximum size.  The limit is
updated per ops update interval, because vaddr updates the monitoring
regions in the ops update callback.

The core does nothing for the initial state, because users can set the
initial monitoring regions as they want.  That is, users who really care
about min_nr_regions are supposed to set initial monitoring regions that
number more than min_nr_regions.  The virtual address space operation set,
vaddr, is an exception: users can ask the ops set to configure the initial
regions on its own, and for that case vaddr sets up the initial regions to
meet min_nr_regions.  So vaddr has exceptional support, but users are
otherwise required to set the regions on their own if they want
min_nr_regions to be respected.  When min_nr_regions is high, such initial
setup is difficult.  If the DAMON sysfs interface is used for it, the
memory for saving the initial setup is also wasted.
Even if the user forgoes the setup, DAMON will eventually create more than
min_nr_regions regions via its splitting operations, but that takes time.
If the aggregation interval is long, the delay can be problematic.  There
was actually a report [1] of such a case: the reporter wanted to do
page-granular monitoring with a large aggregation interval.

Also, DAMON does nothing for online changes to the monitoring regions and
min_nr_regions.  For example, the user can remove a monitoring region or
increase min_nr_regions while DAMON is running.

Split regions larger than the size limit at the beginning of the kdamond
main loop, to fix the initial setup issue.  Also do the split every
aggregation interval, to handle online changes.  This slightly changes the
behavior, but it is difficult to imagine a use case that actually depends
on the old behavior, so the change is arguably fine.

Note that the size limit is aligned to damon_ctx->min_region_sz and cannot
be zero.  That is, if min_nr_regions is larger than the total size of the
monitoring regions divided by ->min_region_sz, it cannot be respected.

[1] https://lore.kernel.org/CAC5umyjmJE9SBqjbetZZecpY54bHpn2AvCGNv3aF6J=1cfoPXQ@mail.gmail.com

Signed-off-by: SeongJae Park
---
 mm/damon/core.c | 45 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 6 deletions(-)

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 8e4cf71e2a3ed..602b85ef23597 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1316,6 +1316,40 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 	return sz;
 }
 
+static void damon_split_region_at(struct damon_target *t,
+		struct damon_region *r, unsigned long sz_r);
+
+/*
+ * damon_apply_min_nr_regions() - Make effect of min_nr_regions parameter.
+ * @ctx: monitoring context.
+ *
+ * This function implements min_nr_regions (minimum number of damon_region
+ * objects in the given monitoring context) behavior.  It first calculates
+ * the maximum size of each region for enforcing min_nr_regions, as the
+ * total size of the regions divided by min_nr_regions.  After that, this
+ * function splits regions to ensure all regions are equal to or smaller
+ * than the size limit.  Finally, this function returns the maximum size
+ * limit.
+ *
+ * Returns: maximum size of each region for respecting min_nr_regions.
+ */
+static unsigned long damon_apply_min_nr_regions(struct damon_ctx *ctx)
+{
+	unsigned long max_region_sz = damon_region_sz_limit(ctx);
+	struct damon_target *t;
+	struct damon_region *r, *next;
+
+	max_region_sz = ALIGN(max_region_sz, ctx->min_region_sz);
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region_safe(r, next, t) {
+			while (damon_sz_region(r) > max_region_sz) {
+				damon_split_region_at(t, r, max_region_sz);
+				r = damon_next_region(r);
+			}
+		}
+	}
+	return max_region_sz;
+}
+
 static int kdamond_fn(void *data);
 
 /*
@@ -1672,9 +1706,6 @@ static void kdamond_tune_intervals(struct damon_ctx *c)
 	damon_set_attrs(c, &new_attrs);
 }
 
-static void damon_split_region_at(struct damon_target *t,
-		struct damon_region *r, unsigned long sz_r);
-
 static bool __damos_valid_target(struct damon_region *r, struct damos *s)
 {
 	unsigned long sz;
@@ -2778,7 +2809,7 @@ static int kdamond_fn(void *data)
 	if (!ctx->regions_score_histogram)
 		goto done;
 
-	sz_limit = damon_region_sz_limit(ctx);
+	sz_limit = damon_apply_min_nr_regions(ctx);
 
 	while (!kdamond_need_stop(ctx)) {
 		/*
@@ -2803,10 +2834,13 @@ static int kdamond_fn(void *data)
 		if (ctx->ops.check_accesses)
 			max_nr_accesses = ctx->ops.check_accesses(ctx);
 
-		if (ctx->passed_sample_intervals >= next_aggregation_sis)
+		if (ctx->passed_sample_intervals >= next_aggregation_sis) {
 			kdamond_merge_regions(ctx,
 					max_nr_accesses / 10, sz_limit);
+			/* online updates might be made */
+			sz_limit = damon_apply_min_nr_regions(ctx);
+		}
 
 		/*
 		 * do kdamond_call() and kdamond_apply_schemes() after
@@ -2863,7 +2897,6 @@ static int kdamond_fn(void *data)
 				sample_interval;
 			if (ctx->ops.update)
 				ctx->ops.update(ctx);
-			sz_limit = damon_region_sz_limit(ctx);
 		}
 	}
done:
-- 
2.47.3

From nobody Fri Apr 17 09:30:28 2026
From: SeongJae Park
To:
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, damon@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v2 2/3] mm/damon/vaddr: do not split regions for
 min_nr_regions
Date: Sat, 21 Feb 2026 10:03:39 -0800
Message-ID: <20260221180341.10313-3-sj@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260221180341.10313-1-sj@kernel.org>
References: <20260221180341.10313-1-sj@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The previous commit made the DAMON core split regions for min_nr_regions at
the beginning of the kdamond main loop.  The virtual address space
operation set (vaddr) does similar work on its own, for the case where the
user delegates the entire initial monitoring regions setup to vaddr.  That
is unnecessary now, since the core does the work in every case.  Remove the
duplicated work in vaddr.  Also remove a helper function that was used only
for that work, together with its test code.
Signed-off-by: SeongJae Park
---
 mm/damon/tests/vaddr-kunit.h | 76 ------------------------------------
 mm/damon/vaddr.c             | 70 +--------------------------------
 2 files changed, 2 insertions(+), 144 deletions(-)

diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index cfae870178bfd..98e734d77d517 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -252,88 +252,12 @@ static void damon_test_apply_three_regions4(struct kunit *test)
 			new_three_regions, expected, ARRAY_SIZE(expected));
 }
 
-static void damon_test_split_evenly_fail(struct kunit *test,
-		unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
-	struct damon_target *t = damon_new_target();
-	struct damon_region *r;
-
-	if (!t)
-		kunit_skip(test, "target alloc fail");
-
-	r = damon_new_region(start, end);
-	if (!r) {
-		damon_free_target(t);
-		kunit_skip(test, "region alloc fail");
-	}
-
-	damon_add_region(r, t);
-	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL);
-	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u);
-
-	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->ar.start, start);
-		KUNIT_EXPECT_EQ(test, r->ar.end, end);
-	}
-
-	damon_free_target(t);
-}
-
-static void damon_test_split_evenly_succ(struct kunit *test,
-		unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
-	struct damon_target *t = damon_new_target();
-	struct damon_region *r;
-	unsigned long expected_width = (end - start) / nr_pieces;
-	unsigned long i = 0;
-
-	if (!t)
-		kunit_skip(test, "target alloc fail");
-	r = damon_new_region(start, end);
-	if (!r) {
-		damon_free_target(t);
-		kunit_skip(test, "region alloc fail");
-	}
-	damon_add_region(r, t);
-	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), 0);
-	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces);
-
-	damon_for_each_region(r, t) {
-		if (i == nr_pieces - 1) {
-			KUNIT_EXPECT_EQ(test,
-				r->ar.start, start + i * expected_width);
-			KUNIT_EXPECT_EQ(test, r->ar.end, end);
-			break;
-		}
-		KUNIT_EXPECT_EQ(test,
-				r->ar.start, start + i++ * expected_width);
-		KUNIT_EXPECT_EQ(test, r->ar.end, start + i * expected_width);
-	}
-	damon_free_target(t);
-}
-
-static void damon_test_split_evenly(struct kunit *test)
-{
-	KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5),
-			-EINVAL);
-
-	damon_test_split_evenly_fail(test, 0, 100, 0);
-	damon_test_split_evenly_succ(test, 0, 100, 10);
-	damon_test_split_evenly_succ(test, 5, 59, 5);
-	damon_test_split_evenly_succ(test, 4, 6, 1);
-	damon_test_split_evenly_succ(test, 0, 3, 2);
-	damon_test_split_evenly_fail(test, 5, 6, 2);
-}
-
 static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_three_regions_in_vmas),
 	KUNIT_CASE(damon_test_apply_three_regions1),
 	KUNIT_CASE(damon_test_apply_three_regions2),
 	KUNIT_CASE(damon_test_apply_three_regions3),
 	KUNIT_CASE(damon_test_apply_three_regions4),
-	KUNIT_CASE(damon_test_split_evenly),
 	{},
 };
 
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 4e3430d4191d1..400247d96eecc 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -53,52 +53,6 @@ static struct mm_struct *damon_get_mm(struct damon_target *t)
 	return mm;
 }
 
-/*
- * Functions for the initial monitoring target regions construction
- */
-
-/*
- * Size-evenly split a region into 'nr_pieces' small regions
- *
- * Returns 0 on success, or negative error code otherwise.
- */
-static int damon_va_evenly_split_region(struct damon_target *t,
-		struct damon_region *r, unsigned int nr_pieces)
-{
-	unsigned long sz_orig, sz_piece, orig_end;
-	struct damon_region *n = NULL, *next;
-	unsigned long start;
-	unsigned int i;
-
-	if (!r || !nr_pieces)
-		return -EINVAL;
-
-	if (nr_pieces == 1)
-		return 0;
-
-	orig_end = r->ar.end;
-	sz_orig = damon_sz_region(r);
-	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION_SZ);
-
-	if (!sz_piece)
-		return -EINVAL;
-
-	r->ar.end = r->ar.start + sz_piece;
-	next = damon_next_region(r);
-	for (start = r->ar.end, i = 1; i < nr_pieces; start += sz_piece, i++) {
-		n = damon_new_region(start, start + sz_piece);
-		if (!n)
-			return -ENOMEM;
-		damon_insert_region(n, r, next, t);
-		r = n;
-	}
-	/* complement last region for possible rounding error */
-	if (n)
-		n->ar.end = orig_end;
-
-	return 0;
-}
-
 static unsigned long sz_range(struct damon_addr_range *r)
 {
 	return r->end - r->start;
@@ -240,10 +194,8 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
 		struct damon_target *t)
 {
 	struct damon_target *ti;
-	struct damon_region *r;
 	struct damon_addr_range regions[3];
-	unsigned long sz = 0, nr_pieces;
-	int i, tidx = 0;
+	int tidx = 0;
 
 	if (damon_va_three_regions(t, regions)) {
 		damon_for_each_target(ti, ctx) {
@@ -255,25 +207,7 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
 		return;
 	}
 
-	for (i = 0; i < 3; i++)
-		sz += regions[i].end - regions[i].start;
-	if (ctx->attrs.min_nr_regions)
-		sz /= ctx->attrs.min_nr_regions;
-	if (sz < DAMON_MIN_REGION_SZ)
-		sz = DAMON_MIN_REGION_SZ;
-
-	/* Set the initial three regions of the target */
-	for (i = 0; i < 3; i++) {
-		r = damon_new_region(regions[i].start, regions[i].end);
-		if (!r) {
-			pr_err("%d'th init region creation failed\n", i);
-			return;
-		}
-		damon_add_region(r, t);
-
-		nr_pieces = (regions[i].end - regions[i].start) / sz;
-		damon_va_evenly_split_region(t, r, nr_pieces);
-	}
+	damon_set_regions(t, regions, 3, DAMON_MIN_REGION_SZ);
 }
 
 /* Initialize '->regions_list' of every target (task) */
-- 
2.47.3

From nobody Fri Apr 17 09:30:28 2026
From: SeongJae Park
To:
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, Brendan Higgins, David Gow,
 damon@lists.linux.dev, kunit-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org
Subject: [RFC PATCH v2 3/3] mm/damon/test/core-kunit: add
 damon_apply_min_nr_regions() test
Date: Sat, 21 Feb 2026 10:03:40 -0800
Message-ID: <20260221180341.10313-4-sj@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260221180341.10313-1-sj@kernel.org>
References: <20260221180341.10313-1-sj@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add a kunit test for the functionality of damon_apply_min_nr_regions().
Signed-off-by: SeongJae Park
---
 mm/damon/tests/core-kunit.h | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 92ea25e2dc9e3..4c19ccac5a2ee 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -1241,6 +1241,57 @@ static void damon_test_set_filters_default_reject(struct kunit *test)
 	damos_free_filter(target_filter);
 }
 
+static void damon_test_apply_min_nr_regions_for(struct kunit *test,
+		unsigned long sz_regions, unsigned long min_region_sz,
+		unsigned long min_nr_regions,
+		unsigned long max_region_sz_expect,
+		unsigned long nr_regions_expect)
+{
+	struct damon_ctx *ctx;
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned long max_region_size;
+
+	ctx = damon_new_ctx();
+	if (!ctx)
+		kunit_skip(test, "ctx alloc fail\n");
+	t = damon_new_target();
+	if (!t) {
+		damon_destroy_ctx(ctx);
+		kunit_skip(test, "target alloc fail\n");
+	}
+	damon_add_target(ctx, t);
+	r = damon_new_region(0, sz_regions);
+	if (!r) {
+		damon_destroy_ctx(ctx);
+		kunit_skip(test, "region alloc fail\n");
+	}
+	damon_add_region(r, t);
+
+	ctx->min_region_sz = min_region_sz;
+	ctx->attrs.min_nr_regions = min_nr_regions;
+	max_region_size = damon_apply_min_nr_regions(ctx);
+
+	KUNIT_EXPECT_EQ(test, max_region_size, max_region_sz_expect);
+	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_regions_expect);
+
+	damon_destroy_ctx(ctx);
+}
+
+static void damon_test_apply_min_nr_regions(struct kunit *test)
+{
+	/* common, expected setup */
+	damon_test_apply_min_nr_regions_for(test, 10, 1, 10, 1, 10);
+	/* no zero size limit */
+	damon_test_apply_min_nr_regions_for(test, 10, 1, 15, 1, 10);
+	/* max size should be aligned to min_region_sz */
+	damon_test_apply_min_nr_regions_for(test, 10, 2, 2, 6, 2);
+	/* when min_nr_regions and min_region_sz conflict, min_region_sz wins */
+	damon_test_apply_min_nr_regions_for(test, 10, 2, 10, 2, 5);
+}
+
 static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_target),
 	KUNIT_CASE(damon_test_regions),
@@ -1267,6 +1318,7 @@ static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damos_test_filter_out),
 	KUNIT_CASE(damon_test_feed_loop_next_input),
 	KUNIT_CASE(damon_test_set_filters_default_reject),
+	KUNIT_CASE(damon_test_apply_min_nr_regions),
 	{},
 };
 
-- 
2.47.3