When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
reducing redundant operations.
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1ca034a5f653..d4ed0f397335 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
*/
- if (hpage_collapse_test_exit(mm) || !vma) {
+ if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
/*
* Make sure that if mm_users is reaching zero while
* khugepaged runs here, khugepaged_exit will find
--
2.51.0
On 2026/1/4 13:41, Vernon Yang wrote:
> When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
> scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
> reducing redundant operations.
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> ---
> mm/khugepaged.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1ca034a5f653..d4ed0f397335 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> * Release the current mm_slot if this mm is about to die, or
> * if we scanned all vmas of this mm.
> */
> - if (hpage_collapse_test_exit(mm) || !vma) {
> + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
> /*
> * Make sure that if mm_users is reaching zero while
> * khugepaged runs here, khugepaged_exit will find
Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well,
otherwise the mm_slot would not be freed and would be scanned again ...
static void collect_mm_slot(struct mm_slot *slot)
{
        struct mm_struct *mm = slot->mm;

        lockdep_assert_held(&khugepaged_mm_lock);

        if (hpage_collapse_test_exit(mm)) {   <-
                hash_del(&slot->hash);
                list_del(&slot->mm_node);

                mm_slot_free(mm_slot_cache, slot);
                mmdrop(mm);
        }
}
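
Something like this (untested sketch); as far as I can tell,
hpage_collapse_test_exit_or_disable() just ORs the
MMF_DISABLE_THP_COMPLETELY test onto the mm_users == 0 check:

@@ static void collect_mm_slot(struct mm_slot *slot)
         lockdep_assert_held(&khugepaged_mm_lock);

-        if (hpage_collapse_test_exit(mm)) {
+        if (hpage_collapse_test_exit_or_disable(mm)) {
                 hash_del(&slot->hash);
                 list_del(&slot->mm_node);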
On Sun, Jan 4, 2026 at 8:20 PM Lance Yang <lance.yang@linux.dev> wrote:
>
> On 2026/1/4 13:41, Vernon Yang wrote:
> > When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
> > scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
> > reducing redundant operations.
> >
> > Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> > ---
> > mm/khugepaged.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 1ca034a5f653..d4ed0f397335 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> > * Release the current mm_slot if this mm is about to die, or
> > * if we scanned all vmas of this mm.
> > */
> > - if (hpage_collapse_test_exit(mm) || !vma) {
> > + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
> > /*
> > * Make sure that if mm_users is reaching zero while
> > * khugepaged runs here, khugepaged_exit will find
>
>
> Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well,
> otherwise the mm_slot would not be freed and would be scanned again ...
>
> static void collect_mm_slot(struct mm_slot *slot)
> {
>         struct mm_struct *mm = slot->mm;
>
>         lockdep_assert_held(&khugepaged_mm_lock);
>
>         if (hpage_collapse_test_exit(mm)) {   <-
>
>                 hash_del(&slot->hash);
>                 list_del(&slot->mm_node);
>
>                 mm_slot_free(mm_slot_cache, slot);
>                 mmdrop(mm);
>         }
> }
This patch just reduces redundant operations; see [1] for a detailed
discussion.

You already committed 5dad604809c5 ("mm/khugepaged: keep mm in mm_slot
without MMF_DISABLE_THP check"), so I assume there is some problem here,
e.g. the mm cannot be re-added? A data race? etc. Can you explain the
root cause? Thanks!
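
For context, the two helpers differ only in the flag test, roughly like
this (a sketch from memory; the exact mm-flags accessor depends on the
tree):

static bool hpage_collapse_test_exit(struct mm_struct *mm)
{
        return atomic_read(&mm->mm_users) == 0;
}

static bool hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
{
        return hpage_collapse_test_exit(mm) ||
               mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
}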
[1] https://lore.kernel.org/linux-mm/CACZaFFOvDad09MUopairAoAjZG6X5gffMaQbnfy0sCHGz8xSfg@mail.gmail.com
--
Thanks,
Vernon
On 2026/1/5 10:06, Vernon Yang wrote:
> On Sun, Jan 4, 2026 at 8:20 PM Lance Yang <lance.yang@linux.dev> wrote:
>>
>> On 2026/1/4 13:41, Vernon Yang wrote:
>>> When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
>>> scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
>>> reducing redundant operations.
>>>
>>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>>> ---
>>> mm/khugepaged.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1ca034a5f653..d4ed0f397335 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>>> * Release the current mm_slot if this mm is about to die, or
>>> * if we scanned all vmas of this mm.
>>> */
>>> - if (hpage_collapse_test_exit(mm) || !vma) {
>>> + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
>>> /*
>>> * Make sure that if mm_users is reaching zero while
>>> * khugepaged runs here, khugepaged_exit will find
>>
>>
>> Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well,
>> otherwise the mm_slot would not be freed and would be scanned again ...
>>
>> static void collect_mm_slot(struct mm_slot *slot)
>> {
>>         struct mm_struct *mm = slot->mm;
>>
>>         lockdep_assert_held(&khugepaged_mm_lock);
>>
>>         if (hpage_collapse_test_exit(mm)) {   <-
>>
>>                 hash_del(&slot->hash);
>>                 list_del(&slot->mm_node);
>>
>>                 mm_slot_free(mm_slot_cache, slot);
>>                 mmdrop(mm);
>>         }
>> }
>
> This patch just reduces redundant operations; see [1] for a detailed
> discussion.
>
> You already committed 5dad604809c5 ("mm/khugepaged: keep mm in mm_slot
> without MMF_DISABLE_THP check"), so I assume there is some problem here,
> e.g. the mm cannot be re-added? A data race? etc. Can you explain the
> root cause? Thanks!
Ah, I didn't fully recall that ...
Maybe I kept the slot because it's hard for khugepaged to re-add the mm
later.
But looking at the code again, I'm not sure if that was the right call :(
>
> [1] https://lore.kernel.org/linux-mm/CACZaFFOvDad09MUopairAoAjZG6X5gffMaQbnfy0sCHGz8xSfg@mail.gmail.com
>
> --
> Thanks,
> Vernon
On Sun, Jan 04, 2026 at 08:20:29PM +0800, Lance Yang wrote:
>
>
>On 2026/1/4 13:41, Vernon Yang wrote:
>> When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
>> scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
>> reducing redundant operations.
>>
>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>> ---
>> mm/khugepaged.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1ca034a5f653..d4ed0f397335 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>> * Release the current mm_slot if this mm is about to die, or
>> * if we scanned all vmas of this mm.
>> */
>> - if (hpage_collapse_test_exit(mm) || !vma) {
>> + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
>> /*
>> * Make sure that if mm_users is reaching zero while
>> * khugepaged runs here, khugepaged_exit will find
>
>
>Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well,
>otherwise the mm_slot would not be freed and would be scanned again ...
>
>static void collect_mm_slot(struct mm_slot *slot)
>{
>        struct mm_struct *mm = slot->mm;
>
>        lockdep_assert_held(&khugepaged_mm_lock);
>
>        if (hpage_collapse_test_exit(mm)) {   <-
>
What if the user toggles the MMF_DISABLE_THP_COMPLETELY flag again?
>                hash_del(&slot->hash);
>                list_del(&slot->mm_node);
>
>                mm_slot_free(mm_slot_cache, slot);
>                mmdrop(mm);
>        }
>}
--
Wei Yang
Help you, Help me
On 2026/1/5 08:31, Wei Yang wrote:
> On Sun, Jan 04, 2026 at 08:20:29PM +0800, Lance Yang wrote:
>>
>>
>> On 2026/1/4 13:41, Vernon Yang wrote:
>>> When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
>>> scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
>>> reducing redundant operations.
>>>
>>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>>> ---
>>> mm/khugepaged.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1ca034a5f653..d4ed0f397335 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>>> * Release the current mm_slot if this mm is about to die, or
>>> * if we scanned all vmas of this mm.
>>> */
>>> - if (hpage_collapse_test_exit(mm) || !vma) {
>>> + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
>>> /*
>>> * Make sure that if mm_users is reaching zero while
>>> * khugepaged runs here, khugepaged_exit will find
>>
>>
>> Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well,
>> otherwise the mm_slot would not be freed and would be scanned again ...
>>
>> static void collect_mm_slot(struct mm_slot *slot)
>> {
>>         struct mm_struct *mm = slot->mm;
>>
>>         lockdep_assert_held(&khugepaged_mm_lock);
>>
>>         if (hpage_collapse_test_exit(mm)) {   <-
>>
>
> What if the user toggles the MMF_DISABLE_THP_COMPLETELY flag again?
Maybe it's fine :)

If a user sets MMF_DISABLE_THP_COMPLETELY, they probably would not
clear it soon, so keeping the slot just wastes memory.

If they do clear it later, page faults will trigger
do_huge_pmd_anonymous_page() -> khugepaged_enter_vma(), which
re-adds the mm (see the sketch below).

Anyway, no strong opinion on that.
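
For reference, the re-add path boils down to roughly this (a simplified
sketch of __khugepaged_enter(), not the verbatim kernel code; the hash
insertion via mm_slot_insert() and some guards are omitted):

void __khugepaged_enter(struct mm_struct *mm)
{
        struct mm_slot *slot;

        /* Each mm is tracked at most once; MMF_VM_HUGEPAGE marks membership. */
        if (test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))
                return;

        slot = mm_slot_alloc(mm_slot_cache);
        if (!slot)
                return;

        /* Pairs with the mmdrop() done by collect_mm_slot() on removal. */
        mmgrab(mm);

        spin_lock(&khugepaged_mm_lock);
        /* Queue the slot at the tail of khugepaged's scan list. */
        list_add_tail(&slot->mm_node, &khugepaged_scan.mm_head);
        spin_unlock(&khugepaged_mm_lock);
}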
>
>> hash_del(&slot->hash);
>> list_del(&slot->mm_node);
>>
>> mm_slot_free(mm_slot_cache, slot);
>> mmdrop(mm);
>> }
>> }
>