When scanning free pages for memory compaction, if the compaction target
order is explicitly specified, do not split pages in buddy whose order
is larger than the compaction target order.
Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
---
mm/compaction.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index 3925cb61dbb8..b0ed0831c400 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -656,6 +656,17 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 		/* Found a free page, will break it into order-0 pages */
 		order = buddy_order(page);
+
+		/*
+		 * Do not break free pages whose order is larger than
+		 * compact's desired order
+		 */
+		if (cc->order != -1 && order >= cc->order) {
+			blockpfn += (1 << order) - 1;
+			page += (1 << order) - 1;
+			goto isolate_fail;
+		}
+
 		isolated = __isolate_free_page(page, order);
 		if (!isolated)
 			break;
--
2.34.1
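A side note on the skip arithmetic in the hunk above: the "(1 << order) - 1"
advance leaves one step for the scan loop's own increment, mirroring the
existing "blockpfn += isolated - 1" pattern in isolate_freepages_block().
A minimal standalone model of that arithmetic (userspace C, not kernel code;
the stride-1 scan loop is an assumption matching the common case):

#include <stdio.h>

int main(void)
{
	unsigned long blockpfn = 0, end_pfn = 32;
	unsigned int order = 3;	/* assume a free order-3 buddy at pfn 0 */

	for (; blockpfn < end_pfn; blockpfn++) {
		if (blockpfn == 0) {
			/* skip the buddy; the loop supplies the final +1 */
			blockpfn += (1UL << order) - 1;
			continue;
		}
		/* first pfn scanned after the skip: 8, just past the buddy */
		printf("scanning pfn %lu\n", blockpfn);
		break;
	}
	return 0;
}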
On 2025/4/24 23:38, Wenchao Hao wrote:
> When scanning free pages for memory compaction, if the compaction target
> order is explicitly specified, do not split pages in buddy whose order
> is larger than the compaction target order.
We've already checked this in suitable_migration_target(), so how did
you observe that there are still attempts to isolate such non-suitable
free large folios? Please explain your usecase in detail.
> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
> ---
> mm/compaction.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 3925cb61dbb8..b0ed0831c400 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -656,6 +656,17 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>
> /* Found a free page, will break it into order-0 pages */
> order = buddy_order(page);
> +
> + /*
> + * Do not break free pages whose order is larger than
> + * compact's desired order
> + */
> + if (cc->order != -1 && order >= cc->order) {
> + blockpfn += (1 << order) - 1;
> + page += (1 << order) - 1;
> + goto isolate_fail;
> + }
> +
> isolated = __isolate_free_page(page, order);
> if (!isolated)
> break;
On Fri, Apr 25, 2025 at 2:53 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/24 23:38, Wenchao Hao wrote:
> > When scanning free pages for memory compaction, if the compaction target
> > order is explicitly specified, do not split pages in buddy whose order
> > is larger than the compaction target order.
>
> We've already checked this in suitable_migration_target(), so how did
> you observe that there are still attempts to isolate such non-suitable
> free large folios? Please explain your usecase in detail.
>
I found such non-suitable free large folios while testing proactive
compaction on an Android device (6.6 kernel); I want to use proactive
compaction to produce order-4 pages for mTHP.
The test device did not include your patch "mm: compaction: limit the suitable
target page order to be less than cc->order".
However, from analyzing the code flow, even with that patch included,
similar non-suitable free large folios may still appear:
suitable_migration_target() only checks once, before isolate_freepages_block(),
and that check is based on the starting page of the pageblock.
If the target order is relatively small (order-4, for example), a single check
in suitable_migration_target() is not enough, because the pageblock may contain
other free folios larger than order-4, as the sketch after the quoted hunk
below illustrates.
> > Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
> > ---
> > mm/compaction.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 3925cb61dbb8..b0ed0831c400 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -656,6 +656,17 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >
> > /* Found a free page, will break it into order-0 pages */
> > order = buddy_order(page);
> > +
> > + /*
> > + * Do not break free pages whose order is larger than
> > + * compact's desired order
> > + */
> > + if (cc->order != -1 && order >= cc->order) {
> > + blockpfn += (1 << order) - 1;
> > + page += (1 << order) - 1;
> > + goto isolate_fail;
> > + }
> > +
> > isolated = __isolate_free_page(page, order);
> > if (!isolated)
> > break;
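To make the pageblock argument concrete, here is a toy userspace model of the
scan (the buddy layout is made up, and the real checks work on struct page and
pageblock flags, not arrays):

#include <stdio.h>
#include <stdbool.h>

#define TARGET_ORDER	4

/* hypothetical buddy orders encountered while scanning one pageblock:
 * the starting page is a small buddy, but an order-5 buddy sits inside */
static const unsigned int buddy_orders[] = { 0, 2, 5, 0 };
#define NR_BUDDIES (sizeof(buddy_orders) / sizeof(buddy_orders[0]))

int main(void)
{
	/* a one-time check at the pageblock start sees only the first buddy */
	bool start_check_ok = buddy_orders[0] < TARGET_ORDER;

	printf("start-of-pageblock check passes: %s\n",
	       start_check_ok ? "yes" : "no");

	/* so a per-buddy check during the scan is still needed to avoid
	 * splitting the interior order-5 buddy */
	for (unsigned int i = 0; i < NR_BUDDIES; i++)
		if (buddy_orders[i] >= TARGET_ORDER)
			printf("buddy %u (order %u) would be split without "
			       "a per-buddy check\n", i, buddy_orders[i]);
	return 0;
}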
On Thu, Apr 24, 2025 at 11:38:15PM +0800, Wenchao Hao wrote:
> When scanning free pages for memory compaction, if the compaction target
> order is explicitly specified, do not split pages in buddy whose order
> is larger than the compaction target order.

Have you observed this to be an issue in practice?

compact_finished() would have bailed if such a page had existed.

compaction_capture() would steal such a page upon production.

It could help with blocks freed by chance from somewhere else, where
you'd preserve it to grab it later from the allocation retry. But if
that's the target, it might be better to indeed isolate the page, and
then capture it inside compaction_alloc()?
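For illustration, a rough userspace sketch of the isolate-then-capture idea
floated above; struct isolated_page and try_capture() are made-up names, not
the real compaction_capture()/compaction_alloc() machinery:

#include <stdio.h>

/* toy stand-in for a free buddy sitting on the isolated freepages list */
struct isolated_page {
	unsigned int order;
	int captured;
};

/* hypothetical: hand the page back whole once it satisfies the target
 * order, instead of never isolating it in the scanner */
static struct isolated_page *try_capture(struct isolated_page *p,
					 unsigned int target_order)
{
	if (p->order >= target_order) {
		p->captured = 1;
		return p;	/* no split needed */
	}
	return NULL;	/* too small: fall through to normal use */
}

int main(void)
{
	struct isolated_page p = { .order = 5, .captured = 0 };

	if (try_capture(&p, 4))
		printf("captured order-%u page intact\n", p.order);
	return 0;
}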
On Thu Apr 24, 2025 at 3:42 PM EDT, Johannes Weiner wrote:
> On Thu, Apr 24, 2025 at 11:38:15PM +0800, Wenchao Hao wrote:
>> When scanning free pages for memory compaction, if the compaction target
>> order is explicitly specified, do not split pages in buddy whose order
>> is larger than the compaction target order.

If such a free page exists, why is the compaction triggered? I assume the
buddy allocator would use the free page for the allocation.

I see your email to Baolin mentioning it is about proactive compaction.
In that scenario, the order is -1 and the added code does not apply, right?

> Have you observed this to be an issue in practice?

I wonder the same. The free pages are kept in the cc.freepages list at
their order. compaction_alloc() tries to use a page with the smallest
order from the list to avoid splitting a large-order free page. But if
no small free pages are available, compaction_alloc() will split a large
free page to keep compaction going.

But what I do not understand is: if there is a free page with order
greater than cc->order, why didn't the buddy allocator use it for the
allocation?

> compact_finished() would have bailed if such a page had existed.
>
> compaction_capture() would steal such a page upon production.
>
> It could help with blocks freed by chance from somewhere else, where
> you'd preserve it to grab it later from the allocation retry. But if
> that's the target, it might be better to indeed isolate the page, and
> then capture it inside compaction_alloc()?

--
Best Regards,
Yan, Zi
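A simplified sketch of the smallest-order-first selection Zi Yan describes;
the per-order array and pick_order() are stand-ins, not the actual
cc->freepages/compaction_alloc() code:

#include <stdio.h>

#define NR_ORDERS 11

struct toy_free_list { unsigned long nr_free; };

/* return the smallest non-empty order at or above want_order, else -1;
 * returning an order above want_order implies splitting that buddy */
static int pick_order(const struct toy_free_list lists[NR_ORDERS],
		      int want_order)
{
	for (int order = want_order; order < NR_ORDERS; order++)
		if (lists[order].nr_free > 0)
			return order;
	return -1;
}

int main(void)
{
	struct toy_free_list lists[NR_ORDERS] = { { 0 } };

	lists[5].nr_free = 1;	/* only an order-5 buddy is available */
	/* an order-0 request falls through to order 5 and must split it */
	printf("picked order %d for an order-0 target\n",
	       pick_order(lists, 0));
	return 0;
}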
On Fri, Apr 25, 2025 at 3:42 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Apr 24, 2025 at 11:38:15PM +0800, Wenchao Hao wrote:
> > When scanning free pages for memory compaction, if the compaction target
> > order is explicitly specified, do not split pages in buddy whose order
> > is larger than the compaction target order.
>
> Have you observed this to be an issue in practice?
>
> compact_finished() would have bailed if such a page had existed.
>
Yes, when proactive memory compaction is enabled, there may be situations
where the order of isolated free pages is greater than the compaction
requested order, and compact_finished() will return continue.

> compaction_capture() would steal such a page upon production.
>
> It could help with blocks freed by chance from somewhere else, where
> you'd preserve it to grab it later from the allocation retry. But if
> that's the target, it might be better to indeed isolate the page, and
> then capture it inside compaction_alloc()?
On Fri, Apr 25, 2025 at 10:28:42PM +0800, Wenchao Hao wrote:
> On Fri, Apr 25, 2025 at 3:42 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Thu, Apr 24, 2025 at 11:38:15PM +0800, Wenchao Hao wrote:
> > > When scanning free pages for memory compaction, if the compaction target
> > > order is explicitly specified, do not split pages in buddy whose order
> > > is larger than the compaction target order.
> >
> > Have you observed this to be an issue in practice?
> >
> > compact_finished() would have bailed if such a page had existed.
> >
>
> Yes, when proactive memory compaction is enabled, there may be situations
> where the order of isolated free pages is greater than the compaction
> requested order, and compact_finished() will return continue.

proactive compaction has an order of -1?
On Fri, Apr 25, 2025 at 11:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Fri, Apr 25, 2025 at 10:28:42PM +0800, Wenchao Hao wrote:
> > On Fri, Apr 25, 2025 at 3:42 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > >
> > > On Thu, Apr 24, 2025 at 11:38:15PM +0800, Wenchao Hao wrote:
> > > > When scanning free pages for memory compaction, if the compaction target
> > > > order is explicitly specified, do not split pages in buddy whose order
> > > > is larger than the compaction target order.
> > >
> > > Have you observed this to be an issue in practice?
> > >
> > > compact_finished() would have bailed if such a page had existed.
> > >
> >
> > Yes, when proactive memory compaction is enabled, there may be situations
> > where the order of isolated free pages is greater than the compaction
> > requested order, and compact_finished() will return continue.
>
> proactive compaction has an order of -1?
The order in struct compact_control is not directly related to
proactive compaction.
I just added a check here: if the compaction is woken up by
wakeup_kcompactd() and the target order of compaction is set, free folios
larger than the target order will not be split when isolating free pages.
The following scenarios can leave free pages larger than the target order
of compaction while compact_finished() still returns continue:
1. proactive compaction is enabled, and kcompactd is woken up for compaction
2. the defrag_mode you added, where __compact_finished() will return continue
if the number of pageblock-sized folios is smaller than the watermark
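A toy decision model of those two continue paths, with made-up inputs (the
real __compact_finished() in mm/compaction.c weighs far more state; this only
mirrors the shape of the argument above):

#include <stdio.h>

enum toy_result { TOY_CONTINUE, TOY_SUCCESS };

static enum toy_result toy_compact_finished(int order, int frag_score_met,
					    int pageblock_wmark_met)
{
	if (order == -1)	/* proactive compaction: driven by the
				 * fragmentation score, not a single order */
		return frag_score_met ? TOY_SUCCESS : TOY_CONTINUE;

	if (!pageblock_wmark_met)	/* defrag_mode-style watermark check */
		return TOY_CONTINUE;

	return TOY_SUCCESS;
}

int main(void)
{
	/* a large free page can exist while either path still continues */
	printf("proactive, score unmet, continues: %d\n",
	       toy_compact_finished(-1, 0, 1) == TOY_CONTINUE);
	printf("defrag_mode, watermark unmet, continues: %d\n",
	       toy_compact_finished(4, 1, 0) == TOY_CONTINUE);
	return 0;
}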