[PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers

Posted by Kairui Song via B4 Relay 2 weeks, 6 days ago
From: Kairui Song <kasong@tencent.com>

As with the active/inactive LRU, MGLRU isolates and scans folios in
batches.  The batch split is hidden deep inside the helper, which
makes the code harder to follow.  The helper's arguments are also
confusing: callers usually request more folios than the batch size,
so the helper almost never processes the full requested amount.

Move the batch splitting into the top-level loop to make the code
cleaner; there should be no behavior change.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/vmscan.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d7fc7f1fe06d..d48074f9bd87 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4689,10 +4689,10 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 	int scanned = 0;
 	int isolated = 0;
 	int skipped = 0;
-	int scan_batch = min(nr_to_scan, MAX_LRU_BATCH);
-	int remaining = scan_batch;
+	unsigned long remaining = nr_to_scan;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
+	VM_WARN_ON_ONCE(nr_to_scan > MAX_LRU_BATCH);
 	VM_WARN_ON_ONCE(!list_empty(list));
 
 	if (get_nr_gens(lruvec, type) == MIN_NR_GENS)
@@ -4745,7 +4745,7 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 	mod_lruvec_state(lruvec, item, isolated);
 	mod_lruvec_state(lruvec, PGREFILL, sorted);
 	mod_lruvec_state(lruvec, PGSCAN_ANON + type, isolated);
-	trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, scan_batch,
+	trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan,
 				scanned, skipped, isolated,
 				type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
 	if (type == LRU_GEN_FILE)
@@ -4827,7 +4827,8 @@ static int isolate_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 		*type_scanned = type;
 
-		scanned = scan_folios(nr_to_scan, lruvec, sc, type, tier, list);
+		scanned = scan_folios(nr_to_scan, lruvec, sc,
+				      type, tier, list);
 		if (scanned)
 			return scanned;
 
@@ -4999,7 +5000,7 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
 
 static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
-	long nr_to_scan;
+	long nr_batch, nr_to_scan;
 	unsigned long scanned = 0;
 	int swappiness = get_swappiness(lruvec, sc);
 
@@ -5010,7 +5011,8 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		if (nr_to_scan <= 0)
 			break;
 
-		delta = evict_folios(nr_to_scan, lruvec, sc, swappiness);
+		nr_batch = min(nr_to_scan, MAX_LRU_BATCH);
+		delta = evict_folios(nr_batch, lruvec, sc, swappiness);
 		if (!delta)
 			break;
 
@@ -5615,6 +5617,7 @@ static int run_aging(struct lruvec *lruvec, unsigned long seq,
 static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_control *sc,
 			int swappiness, unsigned long nr_to_reclaim)
 {
+	int nr_batch;
 	DEFINE_MAX_SEQ(lruvec);
 
 	if (seq + MIN_NR_GENS > max_seq)
@@ -5631,8 +5634,8 @@ static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_co
 		if (sc->nr_reclaimed >= nr_to_reclaim)
 			return 0;
 
-		if (!evict_folios(nr_to_reclaim - sc->nr_reclaimed, lruvec, sc,
-				  swappiness))
+		nr_batch = min(nr_to_reclaim - sc->nr_reclaimed, MAX_LRU_BATCH);
+		if (!evict_folios(nr_batch, lruvec, sc, swappiness))
 			return 0;
 
 		cond_resched();

-- 
2.53.0
Re: [PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers
Posted by Barry Song 2 weeks, 1 day ago
On Wed, Mar 18, 2026 at 3:11 AM Kairui Song via B4 Relay
<devnull+kasong.tencent.com@kernel.org> wrote:
> [...]
> @@ -4689,10 +4689,10 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
>         int scanned = 0;
>         int isolated = 0;
>         int skipped = 0;
> -       int scan_batch = min(nr_to_scan, MAX_LRU_BATCH);
> -       int remaining = scan_batch;
> +       unsigned long remaining = nr_to_scan;
>         struct lru_gen_folio *lrugen = &lruvec->lrugen;
>
> +       VM_WARN_ON_ONCE(nr_to_scan > MAX_LRU_BATCH);
>         VM_WARN_ON_ONCE(!list_empty(list));
>
>         if (get_nr_gens(lruvec, type) == MIN_NR_GENS)
> @@ -4745,7 +4745,7 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
>         mod_lruvec_state(lruvec, item, isolated);
>         mod_lruvec_state(lruvec, PGREFILL, sorted);
>         mod_lruvec_state(lruvec, PGSCAN_ANON + type, isolated);
> -       trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, scan_batch,
> +       trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan,
>                                 scanned, skipped, isolated,
>                                 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>         if (type == LRU_GEN_FILE)
> @@ -4827,7 +4827,8 @@ static int isolate_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
>
>                 *type_scanned = type;
>
> -               scanned = scan_folios(nr_to_scan, lruvec, sc, type, tier, list);
> +               scanned = scan_folios(nr_to_scan, lruvec, sc,
> +                                     type, tier, list);

Do we need to change this?

>                 if (scanned)
>                         return scanned;
>
> @@ -4999,7 +5000,7 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
>
>  static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  {
> -       long nr_to_scan;
> +       long nr_batch, nr_to_scan;
>         unsigned long scanned = 0;
>         int swappiness = get_swappiness(lruvec, sc);
>
> @@ -5010,7 +5011,8 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>                 if (nr_to_scan <= 0)
>                         break;
>
> -               delta = evict_folios(nr_to_scan, lruvec, sc, swappiness);
> +               nr_batch = min(nr_to_scan, MAX_LRU_BATCH);

I wonder if we should modify get_nr_to_scan() to return
a maximum of MAX_LRU_BATCH?

> +               delta = evict_folios(nr_batch, lruvec, sc, swappiness);
>                 if (!delta)
>                         break;
>
> @@ -5615,6 +5617,7 @@ static int run_aging(struct lruvec *lruvec, unsigned long seq,
>  static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_control *sc,
>                         int swappiness, unsigned long nr_to_reclaim)
>  {
> +       int nr_batch;
>         DEFINE_MAX_SEQ(lruvec);
>
>         if (seq + MIN_NR_GENS > max_seq)
> @@ -5631,8 +5634,8 @@ static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_co
>                 if (sc->nr_reclaimed >= nr_to_reclaim)
>                         return 0;
>
> -               if (!evict_folios(nr_to_reclaim - sc->nr_reclaimed, lruvec, sc,
> -                                 swappiness))
> +               nr_batch = min(nr_to_reclaim - sc->nr_reclaimed, MAX_LRU_BATCH);

Looks good to me.

> +               if (!evict_folios(nr_batch, lruvec, sc, swappiness))
>                         return 0;

Thanks
Barry
Re: [PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers
Posted by Kairui Song 1 week, 6 days ago
On Sun, Mar 22, 2026 at 04:14:31PM +0800, Barry Song wrote:
> On Wed, Mar 18, 2026 at 3:11 AM Kairui Song via B4 Relay
> <devnull+kasong.tencent.com@kernel.org> wrote:
> > [...]
> > @@ -4827,7 +4827,8 @@ static int isolate_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
> >
> >                 *type_scanned = type;
> >
> > -               scanned = scan_folios(nr_to_scan, lruvec, sc, type, tier, list);
> > +               scanned = scan_folios(nr_to_scan, lruvec, sc,
> > +                                     type, tier, list);
> 
> Do we need to change this?

That's an irrelevant blank-line change, I'll drop it, thanks!
> 
> > [...]
> > @@ -5010,7 +5011,8 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> >                 if (nr_to_scan <= 0)
> >                         break;
> >
> > -               delta = evict_folios(nr_to_scan, lruvec, sc, swappiness);
> > +               nr_batch = min(nr_to_scan, MAX_LRU_BATCH);
> 
> I wonder if we should modify get_nr_to_scan() to return
> a maximum of MAX_LRU_BATCH?

We'll change that in a later commit to let each iteration use a smaller batch.
Re: [PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers
Posted by Axel Rasmussen 2 weeks, 3 days ago
On Tue, Mar 17, 2026 at 12:11 PM Kairui Song via B4 Relay
<devnull+kasong.tencent.com@kernel.org> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> Same as active / inactive LRU, MGLRU isolates and scans folios in
> batches.  The batch split is done hidden deep in the helper, which
> makes the code harder to follow.  The helper's arguments are also
> confusing since callers usually request more folios than the batch
> size, so the helper almost never processes the full requested amount.
>
> Move the batch splitting into the top loop to make it cleaner, there
> should be no behavior change.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>

Reviewed-by: Axel Rasmussen <axelrasmussen@google.com>

Regarding Chen's concern: I see that patch 5, for example, makes use of
this refactor.

I don't have a super strong opinion on keeping this separate vs.
squashing it into patch 5. I slightly prefer keeping this
no-functional-change part separate; that way patch 5 becomes very easy
to review.

Re: [PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers
Posted by Chen Ridong 2 weeks, 4 days ago

On 2026/3/18 3:08, Kairui Song via B4 Relay wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> Same as active / inactive LRU, MGLRU isolates and scans folios in
> batches.  The batch split is done hidden deep in the helper, which
> makes the code harder to follow.  The helper's arguments are also
> confusing since callers usually request more folios than the batch
> size, so the helper almost never processes the full requested amount.
> 
> Move the batch splitting into the top loop to make it cleaner, there
> should be no behavior change.
> 
> Signed-off-by: Kairui Song <kasong@tencent.com>

I prefer to keep it as is.

If we move min(nr_to_scan, MAX_LRU_BATCH) out of scan_folios, callers
(potentially many functions in the future) would need to handle this logic
themselves, which seems unnecessary. The scan_folios helper should remain cohesive.

Thanks.


-- 
Best regards,
Ridong
Re: [PATCH 2/8] mm/mglru: relocate the LRU scan batch limit to callers
Posted by Kairui Song 2 weeks, 4 days ago
On Thu, Mar 19, 2026 at 10:00 AM Chen Ridong <chenridong@huaweicloud.com> wrote:
> On 2026/3/18 3:08, Kairui Song via B4 Relay wrote:
> > [...]
>
> I prefer to keep it as is.
>
> If we move min(nr_to_scan, MAX_LRU_BATCH) out of scan_folios, callers
> (potentially many functions in the future) would need to handle this logic
> themselves, which seems unnecessary. The scan_folios helper should remain cohesive.
>

Hi Ridong,

This patch is mostly preparation for later use, and there are
currently only two callers: one in the default reclaim loop, and one
in the manual reclaim interface (not memory.reclaim; I mean MGLRU's
command interface).

In the default reclaim loop, we later want to control the exact number
of folios scanned in each iteration, or at least use a smaller batch
value. For the manual reclaim interface, using a large batch seems
more reasonable.

So I think this patch is needed. I could merge it into a later patch,
but keeping it separate seems cleaner.