[PATCH] mm: compaction: include compound page count for scanning in pageblock isolation

William Lam posted 1 patch 3 years, 9 months ago
The number of scanned pages can be lower than the number of isolated
pages when isolating migratable or free pageblocks. The metric is
reported in trace events and is also used in vmstat.

This behaviour is confusing because the count of isolated pages
currently accounts for compound pages, while the count of scanned pages
does not. Given that the number of isolated pages (nr_taken) reported in
the mm_compaction_isolate_template trace event is on a single-page
basis, the ambiguity in the reported number of scanned pages can be
removed by including the compound page count there as well.

Signed-off-by: William Lam <william.lam@bytedance.com>
---
 mm/compaction.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index 1f89b969c12b..1b51cf2d32b6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -616,6 +616,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 			break;
 		set_page_private(page, order);
 
+		nr_scanned += isolated - 1;
 		total_isolated += isolated;
 		cc->nr_freepages += isolated;
 		list_add_tail(&page->lru, freelist);
@@ -1101,6 +1102,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_success_no_list:
 		cc->nr_migratepages += compound_nr(page);
 		nr_isolated += compound_nr(page);
+		nr_scanned += compound_nr(page) - 1;
 
 		/*
 		 * Avoid isolating too much unless this block is being
@@ -1504,6 +1506,7 @@ fast_isolate_freepages(struct compact_control *cc)
 			if (__isolate_free_page(page, order)) {
 				set_page_private(page, order);
 				nr_isolated = 1 << order;
+				nr_scanned += nr_isolated - 1;
 				cc->nr_freepages += nr_isolated;
 				list_add_tail(&page->lru, &cc->freepages);
 				count_compact_events(COMPACTISOLATED, nr_isolated);
-- 
2.30.1 (Apple Git-130)
Re: [PATCH] mm: compaction: include compound page count for scanning in pageblock isolation
Posted by Punit Agrawal 3 years, 9 months ago
Hi William,

William Lam <william.lam@bytedance.com> writes:

> The number of scanned pages can be lower than the number of isolated
> pages when isolating migratable or free pageblocks. The metric is
> reported in trace events and is also used in vmstat.
>
> This behaviour is confusing because the count of isolated pages
> currently accounts for compound pages, while the count of scanned pages
> does not. Given that the number of isolated pages (nr_taken) reported in
> the mm_compaction_isolate_template trace event is on a single-page
> basis, the ambiguity in the reported number of scanned pages can be
> removed by including the compound page count there as well.

A minor suggestion - it may be useful to include an example trace output
to highlight the issue.

>
> Signed-off-by: William Lam <william.lam@bytedance.com>
> ---
>  mm/compaction.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 1f89b969c12b..1b51cf2d32b6 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -616,6 +616,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  			break;
>  		set_page_private(page, order);
>  
> +		nr_scanned += isolated - 1;
>  		total_isolated += isolated;
>  		cc->nr_freepages += isolated;
>  		list_add_tail(&page->lru, freelist);
> @@ -1101,6 +1102,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  isolate_success_no_list:
>  		cc->nr_migratepages += compound_nr(page);
>  		nr_isolated += compound_nr(page);
> +		nr_scanned += compound_nr(page) - 1;
>  
>  		/*
>  		 * Avoid isolating too much unless this block is being
> @@ -1504,6 +1506,7 @@ fast_isolate_freepages(struct compact_control *cc)
>  			if (__isolate_free_page(page, order)) {
>  				set_page_private(page, order);
>  				nr_isolated = 1 << order;
> +				nr_scanned += nr_isolated - 1;
>  				cc->nr_freepages += nr_isolated;
>  				list_add_tail(&page->lru, &cc->freepages);
>  				count_compact_events(COMPACTISOLATED, nr_isolated);

Regardless of the comment above -

Reviewed-by: Punit Agrawal <punit.agrawal@bytedance.com>
Re: [PATCH] mm: compaction: include compound page count for scanning in pageblock isolation
Posted by William Lam 3 years, 9 months ago
On Fri, Jul 15, 2022 at 10:23:07AM +0100, Punit Agrawal wrote:
> Hi William,
> 
> William Lam <william.lam@bytedance.com> writes:
> 
> > The number of scanned pages can be lower than the number of isolated
> > pages when isolating migratable or free pageblocks. The metric is
> > reported in trace events and is also used in vmstat.
> >
> > This behaviour is confusing because the count of isolated pages
> > currently accounts for compound pages, while the count of scanned pages
> > does not. Given that the number of isolated pages (nr_taken) reported in
> > the mm_compaction_isolate_template trace event is on a single-page
> > basis, the ambiguity in the reported number of scanned pages can be
> > removed by including the compound page count there as well.
> 
> A minor suggestion - it may be useful to include an example trace output
> to highlight the issue.
> 

Some example trace output showing that nr_taken can be greater than
nr_scanned:

Produced by kernel v5.19-rc6
kcompactd0-42      [001] .....  1210.268022: mm_compaction_isolate_migratepages: range=(0x107ae4 ~ 0x107c00) nr_scanned=265 nr_taken=255
[...]
kcompactd0-42      [001] .....  1210.268382: mm_compaction_isolate_freepages: range=(0x215800 ~ 0x215a00) nr_scanned=13 nr_taken=128
kcompactd0-42      [001] .....  1210.268383: mm_compaction_isolate_freepages: range=(0x215600 ~ 0x215680) nr_scanned=1 nr_taken=128

mm_compaction_isolate_migratepages does not seem to exhibit this
behaviour, but for consistency nr_scanned should be handled on that
side as well.

> >
> > Signed-off-by: William Lam <william.lam@bytedance.com>
> > ---
> >  mm/compaction.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 1f89b969c12b..1b51cf2d32b6 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -616,6 +616,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >  			break;
> >  		set_page_private(page, order);
> >  
> > +		nr_scanned += isolated - 1;
> >  		total_isolated += isolated;
> >  		cc->nr_freepages += isolated;
> >  		list_add_tail(&page->lru, freelist);
> > @@ -1101,6 +1102,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> >  isolate_success_no_list:
> >  		cc->nr_migratepages += compound_nr(page);
> >  		nr_isolated += compound_nr(page);
> > +		nr_scanned += compound_nr(page) - 1;
> >  
> >  		/*
> >  		 * Avoid isolating too much unless this block is being
> > @@ -1504,6 +1506,7 @@ fast_isolate_freepages(struct compact_control *cc)
> >  			if (__isolate_free_page(page, order)) {
> >  				set_page_private(page, order);
> >  				nr_isolated = 1 << order;
> > +				nr_scanned += nr_isolated - 1;
> >  				cc->nr_freepages += nr_isolated;
> >  				list_add_tail(&page->lru, &cc->freepages);
> >  				count_compact_events(COMPACTISOLATED, nr_isolated);
> 
> Regardless of the comment above -
> 
> Reviewed-by: Punit Agrawal <punit.agrawal@bytedance.com>