[PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.

Zi Yan posted 3 patches 2 months ago
Page cache folios from a file system that supports large block sizes (LBS)
can have a minimal folio order greater than 0, so a high-order folio might
not be splittable down to order-0. Commit e220917fa507 ("mm: split a
folio in minimum folio order chunks") bumps the target order of
split_huge_page*() to the minimum allowed order when splitting an LBS folio.
This causes confusion for some split_huge_page*() callers, like the memory
failure handling code, since they expect all after-split folios to have
order-0 when the split succeeds, but in reality get folios of
min_order_for_split() order.

Fix it by failing a split if the folio cannot be split to the target order.
Rename try_folio_split() to try_folio_split_to_order() to reflect the added
new_order parameter. Remove its unused list parameter.

Fixes: e220917fa507 ("mm: split a folio in minimum folio order chunks")
[The test poisons LBS folios, which cannot be split to order-0 folios, and
also tries to poison all memory. The non-split LBS folios take more memory
than the test anticipated, leading to OOM. The patch fixes the kernel
warning; the test needs some changes to avoid the OOM.]
Reported-by: syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68d2c943.a70a0220.1b52b.02b3.GAE@google.com/
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
---
 include/linux/huge_mm.h | 55 +++++++++++++++++------------------------
 mm/huge_memory.c        |  9 +------
 mm/truncate.c           |  6 +++--
 3 files changed, 28 insertions(+), 42 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c4a811958cda..3d9587f40c0b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -383,45 +383,30 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
 }
 
 /*
- * try_folio_split - try to split a @folio at @page using non uniform split.
+ * try_folio_split_to_order - try to split a @folio at @page to @new_order using
+ * non uniform split.
  * @folio: folio to be split
- * @page: split to order-0 at the given page
- * @list: store the after-split folios
+ * @page: split to @order at the given page
+ * @new_order: the target split order
  *
- * Try to split a @folio at @page using non uniform split to order-0, if
- * non uniform split is not supported, fall back to uniform split.
+ * Try to split a @folio at @page using non uniform split to @new_order, if
+ * non uniform split is not supported, fall back to uniform split. After-split
+ * folios are put back to LRU list. Use min_order_for_split() to get the lower
+ * bound of @new_order.
  *
  * Return: 0: split is successful, otherwise split failed.
  */
-static inline int try_folio_split(struct folio *folio, struct page *page,
-		struct list_head *list)
+static inline int try_folio_split_to_order(struct folio *folio,
+		struct page *page, unsigned int new_order)
 {
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	if (!non_uniform_split_supported(folio, 0, false))
-		return split_huge_page_to_list_to_order(&folio->page, list,
-				ret);
-	return folio_split(folio, ret, page, list);
+	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
+		return split_huge_page_to_list_to_order(&folio->page, NULL,
+				new_order);
+	return folio_split(folio, new_order, page, NULL);
 }
 static inline int split_huge_page(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * split_huge_page() locks the page before splitting and
-	 * expects the same page that has been split to be locked when
-	 * returned. split_folio(page_folio(page)) cannot be used here
-	 * because it converts the page to folio and passes the head
-	 * page to be split.
-	 */
-	return split_huge_page_to_list_to_order(page, NULL, ret);
+	return split_huge_page_to_list_to_order(page, NULL, 0);
 }
 void deferred_split_folio(struct folio *folio, bool partially_mapped);
 #ifdef CONFIG_MEMCG
@@ -611,14 +596,20 @@ static inline int split_huge_page(struct page *page)
 	return -EINVAL;
 }
 
+static inline int min_order_for_split(struct folio *folio)
+{
+	VM_WARN_ON_ONCE_FOLIO(1, folio);
+	return -EINVAL;
+}
+
 static inline int split_folio_to_list(struct folio *folio, struct list_head *list)
 {
 	VM_WARN_ON_ONCE_FOLIO(1, folio);
 	return -EINVAL;
 }
 
-static inline int try_folio_split(struct folio *folio, struct page *page,
-		struct list_head *list)
+static inline int try_folio_split_to_order(struct folio *folio,
+		struct page *page, unsigned int new_order)
 {
 	VM_WARN_ON_ONCE_FOLIO(1, folio);
 	return -EINVAL;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8c82a0ac6e69..f308f11dc72f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3805,8 +3805,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
-			VM_WARN_ONCE(1, "Cannot split mapped folio below min-order: %u",
-				     min_order);
 			ret = -EINVAL;
 			goto out;
 		}
@@ -4158,12 +4156,7 @@ int min_order_for_split(struct folio *folio)
 
 int split_folio_to_list(struct folio *folio, struct list_head *list)
 {
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	return split_huge_page_to_list_to_order(&folio->page, list, ret);
+	return split_huge_page_to_list_to_order(&folio->page, list, 0);
 }
 
 /*
diff --git a/mm/truncate.c b/mm/truncate.c
index 91eb92a5ce4f..9210cf808f5c 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -194,6 +194,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	size_t size = folio_size(folio);
 	unsigned int offset, length;
 	struct page *split_at, *split_at2;
+	unsigned int min_order;
 
 	if (pos < start)
 		offset = start - pos;
@@ -223,8 +224,9 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	if (!folio_test_large(folio))
 		return true;
 
+	min_order = mapping_min_folio_order(folio->mapping);
 	split_at = folio_page(folio, PAGE_ALIGN_DOWN(offset) / PAGE_SIZE);
-	if (!try_folio_split(folio, split_at, NULL)) {
+	if (!try_folio_split_to_order(folio, split_at, min_order)) {
 		/*
 		 * try to split at offset + length to make sure folios within
 		 * the range can be dropped, especially to avoid memory waste
@@ -254,7 +256,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		 */
 		if (folio_test_large(folio2) &&
 		    folio2->mapping == folio->mapping)
-			try_folio_split(folio2, split_at2, NULL);
+			try_folio_split_to_order(folio2, split_at2, min_order);
 
 		folio_unlock(folio2);
 out:
-- 
2.51.0
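To make the behavior change concrete, here is a small userspace model. It is illustrative only: `struct folio_model`, `split_old()`, and `split_new()` are made up for this sketch and are not kernel code; they just mimic the before/after semantics described in the commit message.

```c
#include <assert.h>
#include <errno.h>

/*
 * Userspace model of the semantics change, for illustration only --
 * NOT the kernel implementation. A folio backed by a mapping with
 * minimum folio order `min_order` can only be split down to
 * `new_order` when new_order >= min_order.
 */
struct folio_model {
	unsigned int order;     /* current folio order */
	unsigned int min_order; /* mapping_min_folio_order() analogue */
};

/* Old behavior: silently bump the target order up to the minimum. */
static inline int split_old(struct folio_model *f, unsigned int new_order)
{
	if (new_order < f->min_order)
		new_order = f->min_order; /* the caller is never told */
	f->order = new_order;
	return 0;
}

/* New behavior: fail loudly instead of changing the target order. */
static inline int split_new(struct folio_model *f, unsigned int new_order)
{
	if (new_order < f->min_order)
		return -EINVAL;
	f->order = new_order;
	return 0;
}
```

With an order-9 folio on a mapping whose minimum folio order is 4 (e.g. 64KB blocks on 4KB base pages), the old code "succeeds" but leaves order-4 folios behind, while the new code returns -EINVAL for a target of order-0 and leaves the folio untouched.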
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Wei Yang 2 months ago
On Wed, Oct 15, 2025 at 11:34:50PM -0400, Zi Yan wrote:
>Page cache folios from a file system that support large block size (LBS)
>can have minimal folio order greater than 0, thus a high order folio might
>not be able to be split down to order-0. Commit e220917fa507 ("mm: split a
>folio in minimum folio order chunks") bumps the target order of
>split_huge_page*() to the minimum allowed order when splitting a LBS folio.
>This causes confusion for some split_huge_page*() callers like memory
>failure handling code, since they expect after-split folios all have
>order-0 when split succeeds but in really get min_order_for_split() order
>folios.
>
>Fix it by failing a split if the folio cannot be split to the target order.
>Rename try_folio_split() to try_folio_split_to_order() to reflect the added
>new_order parameter. Remove its unused list parameter.
>
>Fixes: e220917fa507 ("mm: split a folio in minimum folio order chunks")
>[The test poisons LBS folios, which cannot be split to order-0 folios, and
>also tries to poison all memory. The non split LBS folios take more memory
>than the test anticipated, leading to OOM. The patch fixed the kernel
>warning and the test needs some change to avoid OOM.]
>Reported-by: syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com
>Closes: https://lore.kernel.org/all/68d2c943.a70a0220.1b52b.02b3.GAE@google.com/
>Signed-off-by: Zi Yan <ziy@nvidia.com>
>Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
>Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>

Do we want to cc stable?

>---
> include/linux/huge_mm.h | 55 +++++++++++++++++------------------------
> mm/huge_memory.c        |  9 +------
> mm/truncate.c           |  6 +++--
> 3 files changed, 28 insertions(+), 42 deletions(-)
>
>diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>index c4a811958cda..3d9587f40c0b 100644
>--- a/include/linux/huge_mm.h
>+++ b/include/linux/huge_mm.h
>@@ -383,45 +383,30 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
> }
> 
> /*
>- * try_folio_split - try to split a @folio at @page using non uniform split.
>+ * try_folio_split_to_order - try to split a @folio at @page to @new_order using
>+ * non uniform split.
>  * @folio: folio to be split
>- * @page: split to order-0 at the given page
>- * @list: store the after-split folios
>+ * @page: split to @order at the given page

split to @new_order?

>+ * @new_order: the target split order
>  *
>- * Try to split a @folio at @page using non uniform split to order-0, if
>- * non uniform split is not supported, fall back to uniform split.
>+ * Try to split a @folio at @page using non uniform split to @new_order, if
>+ * non uniform split is not supported, fall back to uniform split. After-split
>+ * folios are put back to LRU list. Use min_order_for_split() to get the lower
>+ * bound of @new_order.

We removed min_order_for_split() here right?

>  *
>  * Return: 0: split is successful, otherwise split failed.
>  */
>-static inline int try_folio_split(struct folio *folio, struct page *page,
>-		struct list_head *list)
>+static inline int try_folio_split_to_order(struct folio *folio,
>+		struct page *page, unsigned int new_order)
> {
>-	int ret = min_order_for_split(folio);
>-
>-	if (ret < 0)
>-		return ret;
>-
>-	if (!non_uniform_split_supported(folio, 0, false))
>-		return split_huge_page_to_list_to_order(&folio->page, list,
>-				ret);
>-	return folio_split(folio, ret, page, list);
>+	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>+		return split_huge_page_to_list_to_order(&folio->page, NULL,
>+				new_order);
>+	return folio_split(folio, new_order, page, NULL);
> }

-- 
Wei Yang
Help you, Help me
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Zi Yan 2 months ago
On 16 Oct 2025, at 3:31, Wei Yang wrote:

> On Wed, Oct 15, 2025 at 11:34:50PM -0400, Zi Yan wrote:
>> Page cache folios from a file system that support large block size (LBS)
>> can have minimal folio order greater than 0, thus a high order folio might
>> not be able to be split down to order-0. Commit e220917fa507 ("mm: split a
>> folio in minimum folio order chunks") bumps the target order of
>> split_huge_page*() to the minimum allowed order when splitting a LBS folio.
>> This causes confusion for some split_huge_page*() callers like memory
>> failure handling code, since they expect after-split folios all have
>> order-0 when split succeeds but in really get min_order_for_split() order
>> folios.
>>
>> Fix it by failing a split if the folio cannot be split to the target order.
>> Rename try_folio_split() to try_folio_split_to_order() to reflect the added
>> new_order parameter. Remove its unused list parameter.
>>
>> Fixes: e220917fa507 ("mm: split a folio in minimum folio order chunks")
>> [The test poisons LBS folios, which cannot be split to order-0 folios, and
>> also tries to poison all memory. The non split LBS folios take more memory
>> than the test anticipated, leading to OOM. The patch fixed the kernel
>> warning and the test needs some change to avoid OOM.]
>> Reported-by: syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com
>> Closes: https://lore.kernel.org/all/68d2c943.a70a0220.1b52b.02b3.GAE@google.com/
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
>> Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
>
> Do we want to cc stable?

This only triggers a warning, so I am inclined not to.
But some configs are set to crash on kernel warnings. If anyone thinks
it is worth ccing stable, please let me know.

>
>> ---
>> include/linux/huge_mm.h | 55 +++++++++++++++++------------------------
>> mm/huge_memory.c        |  9 +------
>> mm/truncate.c           |  6 +++--
>> 3 files changed, 28 insertions(+), 42 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index c4a811958cda..3d9587f40c0b 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -383,45 +383,30 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
>> }
>>
>> /*
>> - * try_folio_split - try to split a @folio at @page using non uniform split.
>> + * try_folio_split_to_order - try to split a @folio at @page to @new_order using
>> + * non uniform split.
>>  * @folio: folio to be split
>> - * @page: split to order-0 at the given page
>> - * @list: store the after-split folios
>> + * @page: split to @order at the given page
>
> split to @new_order?

Will fix it.

>
>> + * @new_order: the target split order
>>  *
>> - * Try to split a @folio at @page using non uniform split to order-0, if
>> - * non uniform split is not supported, fall back to uniform split.
>> + * Try to split a @folio at @page using non uniform split to @new_order, if
>> + * non uniform split is not supported, fall back to uniform split. After-split
>> + * folios are put back to LRU list. Use min_order_for_split() to get the lower
>> + * bound of @new_order.
>
> We removed min_order_for_split() here right?

We removed it from the code, but callers should use min_order_for_split()
to get the lower bound of new_order if they do not want the split to fail
unexpectedly.
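In other words, a caller that can tolerate a non-zero result order would look roughly like this (a sketch of the intended caller pattern, not code from the kernel tree; the wrapper name is made up):

```c
/*
 * Hypothetical caller pattern after this patch: first ask for the
 * lower bound, then request exactly that order, so the split never
 * silently targets a different order than the one asked for.
 */
static int split_as_far_as_possible(struct folio *folio, struct page *page)
{
	int min_order = min_order_for_split(folio);

	if (min_order < 0)
		return min_order;

	return try_folio_split_to_order(folio, page, min_order);
}
```

Callers that genuinely need order-0 folios, like memory failure handling, instead pass 0 and handle the failure.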

Thank you for the review.

>
>>  *
>>  * Return: 0: split is successful, otherwise split failed.
>>  */
>> -static inline int try_folio_split(struct folio *folio, struct page *page,
>> -		struct list_head *list)
>> +static inline int try_folio_split_to_order(struct folio *folio,
>> +		struct page *page, unsigned int new_order)
>> {
>> -	int ret = min_order_for_split(folio);
>> -
>> -	if (ret < 0)
>> -		return ret;
>> -
>> -	if (!non_uniform_split_supported(folio, 0, false))
>> -		return split_huge_page_to_list_to_order(&folio->page, list,
>> -				ret);
>> -	return folio_split(folio, ret, page, list);
>> +	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> +		return split_huge_page_to_list_to_order(&folio->page, NULL,
>> +				new_order);
>> +	return folio_split(folio, new_order, page, NULL);
>> }
>
> -- 
> Wei Yang
> Help you, Help me


--
Best Regards,
Yan, Zi
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Wei Yang 2 months ago
On Thu, Oct 16, 2025 at 10:32:17AM -0400, Zi Yan wrote:
>On 16 Oct 2025, at 3:31, Wei Yang wrote:
>
[...]
>
>>
>>> + * @new_order: the target split order
>>>  *
>>> - * Try to split a @folio at @page using non uniform split to order-0, if
>>> - * non uniform split is not supported, fall back to uniform split.
>>> + * Try to split a @folio at @page using non uniform split to @new_order, if
>>> + * non uniform split is not supported, fall back to uniform split. After-split
>>> + * folios are put back to LRU list. Use min_order_for_split() to get the lower
>>> + * bound of @new_order.
>>
>> We removed min_order_for_split() here right?
>
>We removed it from the code, but caller should use min_order_for_split()
>to get the lower bound of new_order if they do not want to split to fail
>unexpectedly.
>
>Thank you for the review.

Thanks. My English is poor; I got what you mean now.

No other comments.

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Andrew Morton 2 months ago
On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:

> > Do we want to cc stable?
> 
> This only triggers a warning, so I am inclined not to.
> But some config decides to crash on kernel warnings. If anyone thinks
> it is worth ccing stable, please let me know.

Yes please.  Kernel warnings are pretty serious and I do like to fix
them in -stable when possible.

That means this patch will have a different routing and priority than
the other two so please split the warning fix out from the series.
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Zi Yan 2 months ago
On 16 Oct 2025, at 16:59, Andrew Morton wrote:

> On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:
>
>>> Do we want to cc stable?
>>
>> This only triggers a warning, so I am inclined not to.
>> But some config decides to crash on kernel warnings. If anyone thinks
>> it is worth ccing stable, please let me know.
>
> Yes please.  Kernel warnings are pretty serious and I do like to fix
> them in -stable when possible.
>
> That means this patch will have a different routing and priority than
> the other two so please split the warning fix out from the series.

OK. Let me send this one and cc stable.

Best Regards,
Yan, Zi
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Lorenzo Stoakes 2 months ago
On Thu, Oct 16, 2025 at 09:03:27PM -0400, Zi Yan wrote:
> On 16 Oct 2025, at 16:59, Andrew Morton wrote:
>
> > On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:
> >
> >>> Do we want to cc stable?
> >>
> >> This only triggers a warning, so I am inclined not to.
> >> But some config decides to crash on kernel warnings. If anyone thinks
> >> it is worth ccing stable, please let me know.
> >
> > Yes please.  Kernel warnings are pretty serious and I do like to fix
> > them in -stable when possible.
> >
> > That means this patch will have a different routing and priority than
> > the other two so please split the warning fix out from the series.
>
> OK. Let me send this one and cc stable.

You've added a bunch of confusion here, now if I review the rest of this series
it looks like I'm reviewing it with this stale patch included.

Can you please resend the remainder of the series as a v3 so it's clear? Thanks!
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Lorenzo Stoakes 2 months ago
On Fri, Oct 17, 2025 at 10:06:41AM +0100, Lorenzo Stoakes wrote:
> On Thu, Oct 16, 2025 at 09:03:27PM -0400, Zi Yan wrote:
> > On 16 Oct 2025, at 16:59, Andrew Morton wrote:
> >
> > > On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:
> > >
> > >>> Do we want to cc stable?
> > >>
> > >> This only triggers a warning, so I am inclined not to.
> > >> But some config decides to crash on kernel warnings. If anyone thinks
> > >> it is worth ccing stable, please let me know.
> > >
> > > Yes please.  Kernel warnings are pretty serious and I do like to fix
> > > them in -stable when possible.
> > >
> > > That means this patch will have a different routing and priority than
> > > the other two so please split the warning fix out from the series.
> >
> > OK. Let me send this one and cc stable.
>
> You've added a bunch of confusion here, now if I review the rest of this series
> it looks like I'm reviewing it with this stale patch included.
>
> Can you please resend the remainder of the series as a v3 so it's clear? Thanks!

Oh and now this entire series relies on that one landing to work :/

What a mess - Can't we just live with one patch from a series being stable and
the rest not? Seems crazy otherwise.

I guess when you resend you'll need to put explicitly in the cover letter
'relies on patch xxxx'
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Zi Yan 2 months ago
On 17 Oct 2025, at 5:10, Lorenzo Stoakes wrote:

> On Fri, Oct 17, 2025 at 10:06:41AM +0100, Lorenzo Stoakes wrote:
>> On Thu, Oct 16, 2025 at 09:03:27PM -0400, Zi Yan wrote:
>>> On 16 Oct 2025, at 16:59, Andrew Morton wrote:
>>>
>>>> On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:
>>>>
>>>>>> Do we want to cc stable?
>>>>>
>>>>> This only triggers a warning, so I am inclined not to.
>>>>> But some config decides to crash on kernel warnings. If anyone thinks
>>>>> it is worth ccing stable, please let me know.
>>>>
>>>> Yes please.  Kernel warnings are pretty serious and I do like to fix
>>>> them in -stable when possible.
>>>>
>>>> That means this patch will have a different routing and priority than
>>>> the other two so please split the warning fix out from the series.
>>>
>>> OK. Let me send this one and cc stable.
>>
>> You've added a bunch of confusion here, now if I review the rest of this series

What confusion have I added here? Do you mind elaborating?

>> it looks like I'm reviewing it with this stale patch included.
>>
>> Can you please resend the remainder of the series as a v3 so it's clear? Thanks!
>
> Oh and now this entire series relies on that one landing to work :/
>
> What a mess - Can't we just live with one patch from a series being stable and
> the rest not? Seems crazy otherwise.

This is what Andrew told me. Please settle this with Andrew if you do not like
it. I will hold off on sending a new version of this patchset until either you
or Andrew gives me clear guidance on how to send it.

>
> I guess when you resend you'll need to put explicitly in the cover letter
> 'relies on patch xxxx'

Why? I will simply wait until this patch is merged, then I can send the
remaining two. Separate patchsets with dependencies are hard to review, so why
would I send them at the same time?

--
Best Regards,
Yan, Zi
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Lorenzo Stoakes 2 months ago
On Fri, Oct 17, 2025 at 10:16:10AM -0400, Zi Yan wrote:
> On 17 Oct 2025, at 5:10, Lorenzo Stoakes wrote:
>
> > On Fri, Oct 17, 2025 at 10:06:41AM +0100, Lorenzo Stoakes wrote:
> >> On Thu, Oct 16, 2025 at 09:03:27PM -0400, Zi Yan wrote:
> >>> On 16 Oct 2025, at 16:59, Andrew Morton wrote:
> >>>
> >>>> On Thu, 16 Oct 2025 10:32:17 -0400 Zi Yan <ziy@nvidia.com> wrote:
> >>>>
> >>>>>> Do we want to cc stable?
> >>>>>
> >>>>> This only triggers a warning, so I am inclined not to.
> >>>>> But some config decides to crash on kernel warnings. If anyone thinks
> >>>>> it is worth ccing stable, please let me know.
> >>>>
> >>>> Yes please.  Kernel warnings are pretty serious and I do like to fix
> >>>> them in -stable when possible.
> >>>>
> >>>> That means this patch will have a different routing and priority than
> >>>> the other two so please split the warning fix out from the series.
> >>>
> >>> OK. Let me send this one and cc stable.
> >>
> >> You've added a bunch of confusion here, now if I review the rest of this series
>
> What confusion I have added here? Do you mind elaborating?

There's 2 series in the tree now:

v2 -> with a stale patch 1/3 + 2/3, 3/3

v3 -> 1/3 separate

If I use any tooling (b4 shazam etc.) to pull this series to review, it'll pull
the stale patch.

if 2/3 or 3/3 depend on 1/3 then it's super confusing.

All I'm asking is for you to resend/respin the 2 patches without the stale one.

>
> >> it looks like I'm reviewing it with this stale patch included.
> >>
> >> Can you please resend the remainder of the series as a v3 so it's clear? Thanks!
> >
> > Oh and now this entire series relies on that one landing to work :/
> >
> > What a mess - Can't we just live with one patch from a series being stable and
> > the rest not? Seems crazy otherwise.
>
> This is what Andrew told me. Please settle this with Andrew if you do not like

Didn't he just ask you to send 1/3 separately? I don't think he said send 1/3
separately and do not resend 2/3, 3/3...

> it. I will hold on sending new version of this patchset until either you or
> Andrew give me a clear guidance on how to send this patchset.

I mean if you want to delay resending this until the hotfix is sorted out then
just reply to 0/3 saying 'please drop this until that patch is merged'.

Otherwise it looks live.

>
> >
> > I guess when you resend you'll need to put explicitly in the cover letter
> > 'relies on patch xxxx'
>
> Why? I will simply wait until this patch is merged, then I can send the rest
> of two. Separate patchsets with dependency is hard for review, why would I
> send them at the same time?

So you're planning to only resend once the hotfix is upstreamed completely?

Sometimes this can be delayed a couple weeks. But fine.

As long as there's clarity.

>
> --
> Best Regards,
> Yan, Zi

Thanks, Lorenzo
Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently.
Posted by Andrew Morton 2 months ago
On Fri, 17 Oct 2025 15:32:13 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> > it. I will hold on sending new version of this patchset until either you or
> > Andrew give me a clear guidance on how to send this patchset.
> 
> I mean if you want to delay resending this until the hotfix is sorted out then
> just reply to 0/3 saying 'please drop this until that patch is merged'.
> 
> Otherwise it looks live.

Yeah, hotfixes come first and separately please.  A hotfix will hit
mainline in a week or so.  Whether or not they are cc:stable.  The
not-hotfix material won't hit mainline for as long as two months!

So mixing hotfixes with next-merge-window patches is to be avoided.

Note that a "hotfix" may or may not be cc:stable - it depends on
whether the Fixes: commit was present in earlier kernel releases.


Actually, if a developer has a hotfix as well as a bunch of
next-merge-window material then it's really best to send the hotfix
only.  Hold off on the next-merge-window material so the hotfix gets
standalone testing.  Because it's possible that the next-merge-window
material accidentally fixes an issue in the hotfix.

(otoh the hotfixes *will* get that standalone testing from people who
test Linus-latest, but it's bad of us to depend on that!)

I regularly get patchsets which mix hotfixes (sometimes cc:stable) with
next-merge-window material.  Pretty often the hotfix isn't very urgent
so I'll say screwit and merge it all as-is, after adding a cc:stable. 
The hotfix will get merged and backported eventually.

I hope that nobody really needs to worry much about all this stuff.
Juggling patch priority and timing is what akpms are for.