From: SeongJae Park
To: Andrew Morton
Cc: SeongJae Park, damon@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 8/8] mm/damon: rename min_sz_region of damon_ctx to min_region_sz
Date: Sat, 17 Jan 2026 09:52:55 -0800
Message-ID: <20260117175256.82826-9-sj@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260117175256.82826-1-sj@kernel.org>
References: <20260117175256.82826-1-sj@kernel.org>

The 'min_sz_region' field of 'struct damon_ctx' represents the minimum
size of each DAMON region for the context.  'struct damos_access_pattern'
has a field of the same name.  The duplicated name confuses readers and
makes 'grep' results ambiguous.  Rename the damon_ctx field to
'min_region_sz'.

Signed-off-by: SeongJae Park
---
 include/linux/damon.h |  8 ++---
 mm/damon/core.c       | 69 ++++++++++++++++++++++---------------------
 mm/damon/lru_sort.c   |  4 +--
 mm/damon/reclaim.c    |  4 +--
 mm/damon/stat.c       |  2 +-
 mm/damon/sysfs.c      |  9 +++---
 6 files changed, 49 insertions(+), 47 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 5bf8db1d78fe..a4fea23da857 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -773,7 +773,7 @@ struct damon_attrs {
  *
  * @ops:	Set of monitoring operations for given use cases.
  * @addr_unit:	Scale factor for core to ops address conversion.
- * @min_sz_region:	Minimum region size.
+ * @min_region_sz:	Minimum region size.
  * @adaptive_targets:	Head of monitoring targets (&damon_target) list.
  * @schemes:	Head of schemes (&damos) list.
  */
@@ -818,7 +818,7 @@ struct damon_ctx {
 	/* public: */
 	struct damon_operations ops;
 	unsigned long addr_unit;
-	unsigned long min_sz_region;
+	unsigned long min_region_sz;
 
 	struct list_head adaptive_targets;
 	struct list_head schemes;
@@ -907,7 +907,7 @@ static inline void damon_insert_region(struct damon_region *r,
 void damon_add_region(struct damon_region *r, struct damon_target *t);
 void damon_destroy_region(struct damon_region *r, struct damon_target *t);
 int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
-		unsigned int nr_ranges, unsigned long min_sz_region);
+		unsigned int nr_ranges, unsigned long min_region_sz);
 void damon_update_region_access_rate(struct damon_region *r, bool accessed,
 		struct damon_attrs *attrs);
 
@@ -975,7 +975,7 @@ int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control);
 
 int damon_set_region_biggest_system_ram_default(struct damon_target *t,
 		unsigned long *start, unsigned long *end,
-		unsigned long min_sz_region);
+		unsigned long min_region_sz);
 
 #endif	/* CONFIG_DAMON */
 
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 5508bc794172..70efbf22a2b4 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -203,7 +203,7 @@ static int damon_fill_regions_holes(struct damon_region *first,
  * @t:		the given target.
  * @ranges:	array of new monitoring target ranges.
  * @nr_ranges:	length of @ranges.
- * @min_sz_region:	minimum region size.
+ * @min_region_sz:	minimum region size.
  *
  * This function adds new regions to, or modify existing regions of a
  * monitoring target to fit in specific ranges.
@@ -211,7 +211,7 @@ static int damon_fill_regions_holes(struct damon_region *first,
  * Return: 0 if success, or negative error code otherwise.
  */
 int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
-		unsigned int nr_ranges, unsigned long min_sz_region)
+		unsigned int nr_ranges, unsigned long min_region_sz)
 {
 	struct damon_region *r, *next;
 	unsigned int i;
@@ -248,16 +248,16 @@ int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
 			/* no region intersects with this range */
 			newr = damon_new_region(
 					ALIGN_DOWN(range->start,
-						min_sz_region),
-					ALIGN(range->end, min_sz_region));
+						min_region_sz),
+					ALIGN(range->end, min_region_sz));
 			if (!newr)
 				return -ENOMEM;
 			damon_insert_region(newr, damon_prev_region(r), r, t);
 		} else {
 			/* resize intersecting regions to fit in this range */
 			first->ar.start = ALIGN_DOWN(range->start,
-					min_sz_region);
-			last->ar.end = ALIGN(range->end, min_sz_region);
+					min_region_sz);
+			last->ar.end = ALIGN(range->end, min_region_sz);
 
 			/* fill possible holes in the range */
 			err = damon_fill_regions_holes(first, last, t);
@@ -553,7 +553,7 @@ struct damon_ctx *damon_new_ctx(void)
 	ctx->attrs.max_nr_regions = 1000;
 
 	ctx->addr_unit = 1;
-	ctx->min_sz_region = DAMON_MIN_REGION_SZ;
+	ctx->min_region_sz = DAMON_MIN_REGION_SZ;
 
 	INIT_LIST_HEAD(&ctx->adaptive_targets);
 	INIT_LIST_HEAD(&ctx->schemes);
@@ -1142,7 +1142,7 @@ static struct damon_target *damon_nth_target(int n, struct damon_ctx *ctx)
 * If @src has no region, @dst keeps current regions.
 */
 static int damon_commit_target_regions(struct damon_target *dst,
-		struct damon_target *src, unsigned long src_min_sz_region)
+		struct damon_target *src, unsigned long src_min_region_sz)
 {
 	struct damon_region *src_region;
 	struct damon_addr_range *ranges;
@@ -1159,7 +1159,7 @@ static int damon_commit_target_regions(struct damon_target *dst,
 	i = 0;
 	damon_for_each_region(src_region, src)
 		ranges[i++] = src_region->ar;
-	err = damon_set_regions(dst, ranges, i, src_min_sz_region);
+	err = damon_set_regions(dst, ranges, i, src_min_region_sz);
 	kfree(ranges);
 	return err;
 }
@@ -1167,11 +1167,11 @@ static int damon_commit_target_regions(struct damon_target *dst,
 static int damon_commit_target(
 		struct damon_target *dst, bool dst_has_pid,
 		struct damon_target *src, bool src_has_pid,
-		unsigned long src_min_sz_region)
+		unsigned long src_min_region_sz)
 {
 	int err;
 
-	err = damon_commit_target_regions(dst, src, src_min_sz_region);
+	err = damon_commit_target_regions(dst, src, src_min_region_sz);
 	if (err)
 		return err;
 	if (dst_has_pid)
@@ -1198,7 +1198,7 @@ static int damon_commit_targets(
 			err = damon_commit_target(
 					dst_target, damon_target_has_pid(dst),
 					src_target, damon_target_has_pid(src),
-					src->min_sz_region);
+					src->min_region_sz);
 			if (err)
 				return err;
 		} else {
@@ -1225,7 +1225,7 @@ static int damon_commit_targets(
 				return -ENOMEM;
 			err = damon_commit_target(new_target, false,
 					src_target, damon_target_has_pid(src),
-					src->min_sz_region);
+					src->min_region_sz);
 			if (err) {
 				damon_destroy_target(new_target, NULL);
 				return err;
@@ -1272,7 +1272,7 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
 	}
 	dst->ops = src->ops;
 	dst->addr_unit = src->addr_unit;
-	dst->min_sz_region = src->min_sz_region;
+	dst->min_region_sz = src->min_region_sz;
 
 	return 0;
 }
@@ -1305,8 +1305,8 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 
 	if (ctx->attrs.min_nr_regions)
 		sz /= ctx->attrs.min_nr_regions;
-	if (sz < ctx->min_sz_region)
-		sz = ctx->min_sz_region;
+	if (sz < ctx->min_region_sz)
+		sz = ctx->min_region_sz;
 
 	return sz;
 }
@@ -1696,7 +1696,7 @@ static bool damos_valid_target(struct damon_ctx *c, struct damon_target *t,
  * @t:		The target of the region.
  * @rp:	The pointer to the region.
  * @s:		The scheme to be applied.
- * @min_sz_region:	minimum region size.
+ * @min_region_sz:	minimum region size.
  *
  * If a quota of a scheme has exceeded in a quota charge window, the scheme's
  * action would applied to only a part of the target access pattern fulfilling
@@ -1714,7 +1714,8 @@ static bool damos_valid_target(struct damon_ctx *c, struct damon_target *t,
  * Return: true if the region should be entirely skipped, false otherwise.
  */
 static bool damos_skip_charged_region(struct damon_target *t,
-		struct damon_region **rp, struct damos *s, unsigned long min_sz_region)
+		struct damon_region **rp, struct damos *s,
+		unsigned long min_region_sz)
 {
 	struct damon_region *r = *rp;
 	struct damos_quota *quota = &s->quota;
@@ -1736,11 +1737,11 @@ static bool damos_skip_charged_region(struct damon_target *t,
 	if (quota->charge_addr_from && r->ar.start <
 			quota->charge_addr_from) {
 		sz_to_skip = ALIGN_DOWN(quota->charge_addr_from -
-				r->ar.start, min_sz_region);
+				r->ar.start, min_region_sz);
 		if (!sz_to_skip) {
-			if (damon_sz_region(r) <= min_sz_region)
+			if (damon_sz_region(r) <= min_region_sz)
 				return true;
-			sz_to_skip = min_sz_region;
+			sz_to_skip = min_region_sz;
 		}
 		damon_split_region_at(t, r, sz_to_skip);
 		r = damon_next_region(r);
@@ -1766,7 +1767,7 @@ static void damos_update_stat(struct damos *s,
 
 static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
 		struct damon_region *r, struct damos_filter *filter,
-		unsigned long min_sz_region)
+		unsigned long min_region_sz)
 {
 	bool matched = false;
 	struct damon_target *ti;
@@ -1783,8 +1784,8 @@ static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
 		matched = target_idx == filter->target_idx;
 		break;
 	case DAMOS_FILTER_TYPE_ADDR:
-		start = ALIGN_DOWN(filter->addr_range.start, min_sz_region);
-		end = ALIGN_DOWN(filter->addr_range.end, min_sz_region);
+		start = ALIGN_DOWN(filter->addr_range.start, min_region_sz);
+		end = ALIGN_DOWN(filter->addr_range.end, min_region_sz);
 
 		/* inside the range */
 		if (start <= r->ar.start && r->ar.end <= end) {
@@ -1820,7 +1821,7 @@ static bool damos_core_filter_out(struct damon_ctx *ctx, struct damon_target *t,
 
 	s->core_filters_allowed = false;
 	damos_for_each_core_filter(filter, s) {
-		if (damos_filter_match(ctx, t, r, filter, ctx->min_sz_region)) {
+		if (damos_filter_match(ctx, t, r, filter, ctx->min_region_sz)) {
 			if (filter->allow)
 				s->core_filters_allowed = true;
 			return !filter->allow;
@@ -1955,7 +1956,7 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 	if (c->ops.apply_scheme) {
 		if (quota->esz && quota->charged_sz + sz > quota->esz) {
 			sz = ALIGN_DOWN(quota->esz - quota->charged_sz,
-					c->min_sz_region);
+					c->min_region_sz);
 			if (!sz)
 				goto update_stat;
 			damon_split_region_at(t, r, sz);
@@ -2003,7 +2004,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 		if (quota->esz && quota->charged_sz >= quota->esz)
 			continue;
 
-		if (damos_skip_charged_region(t, &r, s, c->min_sz_region))
+		if (damos_skip_charged_region(t, &r, s, c->min_region_sz))
 			continue;
 
 		if (s->max_nr_snapshots &&
@@ -2496,7 +2497,7 @@ static void damon_split_region_at(struct damon_target *t,
 
 /* Split every region in the given target into 'nr_subs' regions */
 static void damon_split_regions_of(struct damon_target *t, int nr_subs,
-		unsigned long min_sz_region)
+		unsigned long min_region_sz)
 {
 	struct damon_region *r, *next;
 	unsigned long sz_region, sz_sub = 0;
@@ -2506,13 +2507,13 @@ static void damon_split_regions_of(struct damon_target *t, int nr_subs,
 		sz_region = damon_sz_region(r);
 
 		for (i = 0; i < nr_subs - 1 &&
-				sz_region > 2 * min_sz_region; i++) {
+				sz_region > 2 * min_region_sz; i++) {
 			/*
 			 * Randomly select size of left sub-region to be at
 			 * least 10 percent and at most 90% of original region
 			 */
 			sz_sub = ALIGN_DOWN(damon_rand(1, 10) *
-					sz_region / 10, min_sz_region);
+					sz_region / 10, min_region_sz);
 			/* Do not allow blank region */
 			if (sz_sub == 0 || sz_sub >= sz_region)
 				continue;
@@ -2552,7 +2553,7 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
 		nr_subregions = 3;
 
 	damon_for_each_target(t, ctx)
-		damon_split_regions_of(t, nr_subregions, ctx->min_sz_region);
+		damon_split_regions_of(t, nr_subregions, ctx->min_region_sz);
 
 	last_nr_regions = nr_regions;
 }
@@ -2902,7 +2903,7 @@ static bool damon_find_biggest_system_ram(unsigned long *start,
  * @t:		The monitoring target to set the region.
  * @start:	The pointer to the start address of the region.
  * @end:	The pointer to the end address of the region.
- * @min_sz_region:	Minimum region size.
+ * @min_region_sz:	Minimum region size.
  *
  * This function sets the region of @t as requested by @start and @end.  If the
  * values of @start and @end are zero, however, this function finds the biggest
@@ -2914,7 +2915,7 @@ static bool damon_find_biggest_system_ram(unsigned long *start,
  */
 int damon_set_region_biggest_system_ram_default(struct damon_target *t,
 			unsigned long *start, unsigned long *end,
-			unsigned long min_sz_region)
+			unsigned long min_region_sz)
 {
 	struct damon_addr_range addr_range;
 
@@ -2927,7 +2928,7 @@ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
 
 	addr_range.start = *start;
 	addr_range.end = *end;
-	return damon_set_regions(t, &addr_range, 1, min_sz_region);
+	return damon_set_regions(t, &addr_range, 1, min_region_sz);
 }
 
 /*
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 9dde096a9064..7bc5c0b2aea3 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -298,7 +298,7 @@ static int damon_lru_sort_apply_parameters(void)
 	if (!monitor_region_start && !monitor_region_end)
 		addr_unit = 1;
 	param_ctx->addr_unit = addr_unit;
-	param_ctx->min_sz_region = max(DAMON_MIN_REGION_SZ / addr_unit, 1);
+	param_ctx->min_region_sz = max(DAMON_MIN_REGION_SZ / addr_unit, 1);
 
 	if (!damon_lru_sort_mon_attrs.sample_interval) {
 		err = -EINVAL;
@@ -345,7 +345,7 @@ static int damon_lru_sort_apply_parameters(void)
 	err = damon_set_region_biggest_system_ram_default(param_target,
 					&monitor_region_start,
 					&monitor_region_end,
-					param_ctx->min_sz_region);
+					param_ctx->min_region_sz);
 	if (err)
 		goto out;
 	err = damon_commit_ctx(ctx, param_ctx);
diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
index c343622a2f52..43d76f5bed44 100644
--- a/mm/damon/reclaim.c
+++ b/mm/damon/reclaim.c
@@ -208,7 +208,7 @@ static int damon_reclaim_apply_parameters(void)
 	if (!monitor_region_start && !monitor_region_end)
 		addr_unit = 1;
 	param_ctx->addr_unit = addr_unit;
-	param_ctx->min_sz_region = max(DAMON_MIN_REGION_SZ / addr_unit, 1);
+	param_ctx->min_region_sz = max(DAMON_MIN_REGION_SZ / addr_unit, 1);
 
 	if (!damon_reclaim_mon_attrs.aggr_interval) {
 		err = -EINVAL;
@@ -251,7 +251,7 @@ static int damon_reclaim_apply_parameters(void)
 	err = damon_set_region_biggest_system_ram_default(param_target,
 					&monitor_region_start,
 					&monitor_region_end,
-					param_ctx->min_sz_region);
+					param_ctx->min_region_sz);
 	if (err)
 		goto out;
 	err = damon_commit_ctx(ctx, param_ctx);
diff --git a/mm/damon/stat.c b/mm/damon/stat.c
index 5e18b164f6d8..536f02bd173e 100644
--- a/mm/damon/stat.c
+++ b/mm/damon/stat.c
@@ -181,7 +181,7 @@ static struct damon_ctx *damon_stat_build_ctx(void)
 		goto free_out;
 	damon_add_target(ctx, target);
 	if (damon_set_region_biggest_system_ram_default(target, &start, &end,
-				ctx->min_sz_region))
+				ctx->min_region_sz))
 		goto free_out;
 	return ctx;
 free_out:
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index 57d36d60f329..b7f66196bec4 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1365,7 +1365,7 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
 
 static int damon_sysfs_set_regions(struct damon_target *t,
 		struct damon_sysfs_regions *sysfs_regions,
-		unsigned long min_sz_region)
+		unsigned long min_region_sz)
 {
 	struct damon_addr_range *ranges = kmalloc_array(sysfs_regions->nr,
 			sizeof(*ranges), GFP_KERNEL | __GFP_NOWARN);
@@ -1387,7 +1387,7 @@ static int damon_sysfs_set_regions(struct damon_target *t,
 		if (ranges[i - 1].end > ranges[i].start)
 			goto out;
 	}
-	err = damon_set_regions(t, ranges, sysfs_regions->nr, min_sz_region);
+	err = damon_set_regions(t, ranges, sysfs_regions->nr, min_region_sz);
 out:
 	kfree(ranges);
 	return err;
@@ -1409,7 +1409,8 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
 		return -EINVAL;
 	}
 	t->obsolete = sys_target->obsolete;
-	return damon_sysfs_set_regions(t, sys_target->regions, ctx->min_sz_region);
+	return damon_sysfs_set_regions(t, sys_target->regions,
+			ctx->min_region_sz);
 }
 
 static int damon_sysfs_add_targets(struct damon_ctx *ctx,
@@ -1469,7 +1470,7 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
 	ctx->addr_unit = sys_ctx->addr_unit;
 	/* addr_unit is respected by only DAMON_OPS_PADDR */
 	if (sys_ctx->ops_id == DAMON_OPS_PADDR)
-		ctx->min_sz_region = max(
+		ctx->min_region_sz = max(
 				DAMON_MIN_REGION_SZ / sys_ctx->addr_unit, 1);
 	err = damon_sysfs_set_attrs(ctx, sys_ctx->attrs);
 	if (err)
-- 
2.47.3