From: xu xin <xu.xin16@zte.com.cn>
Considering that commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing
a suitable addressrange") seems to have already been merged, this new patch is
proposed to address the issue raised by David at:

https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/

Initialize the rmap values (addr, pgoff_start, pgoff_end) directly and make
them const to make the code more robust. Besides, since KSM folios are always
order-0, folio_nr_pages(KSM folio) is always 1, so the line:

"pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"

simply becomes:

"pgoff_end = pgoff_start;"

The test reproducer for rmap_walk_ksm can be found at:
https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
---
mm/ksm.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 031c17e4ada6..c7ca117024a4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3171,8 +3171,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
struct anon_vma *anon_vma = rmap_item->anon_vma;
struct anon_vma_chain *vmac;
struct vm_area_struct *vma;
- unsigned long addr;
- pgoff_t pgoff_start, pgoff_end;
+ /* Ignore the stable/unstable/sqnr flags */
+ const unsigned long addr = rmap_item->address & PAGE_MASK;
+ const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
+ /* KSM folios are always order-0 normal pages */
+ const pgoff_t pgoff_end = pgoff_start;
cond_resched();
if (!anon_vma_trylock_read(anon_vma)) {
@@ -3183,12 +3186,6 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
anon_vma_lock_read(anon_vma);
}
- /* Ignore the stable/unstable/sqnr flags */
- addr = rmap_item->address & PAGE_MASK;
-
- pgoff_start = rmap_item->address >> PAGE_SHIFT;
- pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
-
anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
pgoff_start, pgoff_end) {
--
2.25.1
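
For context on what the new initializers compute: rmap_item->address holds the
page-aligned virtual address of the merged page, with the KSM status bits the
comment mentions (stable/unstable/seqnr) packed below PAGE_SHIFT, which is why
both "& PAGE_MASK" and ">> PAGE_SHIFT" discard them. A minimal sketch of how
the narrowed range then bounds the anon_vma walk (based on the hunk above;
illustrative only, not the final code):

	/* Drop the KSM flag bits kept in the low bits of ->address. */
	const unsigned long addr = rmap_item->address & PAGE_MASK;
	/* Linear page index of that address; an order-0 folio spans one page. */
	const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
	const pgoff_t pgoff_end = pgoff_start;

	/*
	 * Only VMAs whose pgoff range overlaps [pgoff_start, pgoff_end] are
	 * visited, instead of every VMA chained to the anon_vma.
	 */
	anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
				       pgoff_start, pgoff_end) {
		vma = vmac->vma;
		/* ... check addr against vma and call rwc->rmap_one() ... */
	}
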
On Fri, Feb 06, 2026 at 03:22:54PM +0800, xu.xin16@zte.com.cn wrote:
> make them const to make code more robust. Besides, since KSM folios are always
> order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
>
> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>
> becomes directly:
>
> "pgoff_end = pgoff_start;"

How do you know KSM folios will always be order 0?  I don't.  NAK this
change.
On 2/6/26 16:29, Matthew Wilcox wrote:
> On Fri, Feb 06, 2026 at 03:22:54PM +0800, xu.xin16@zte.com.cn wrote:
>> make them const to make code more robust. Besides, since KSM folios are always
>> order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
>>
>> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>>
>> becomes directly:
>>
>> "pgoff_end = pgoff_start;"
>
> How do you know KSM folios will always be order 0?  I don't.  NAK this
> change.

Once that changes we can revisit. ACK from me stands.

--
Cheers,

David
On 2/6/26 18:34, David Hildenbrand (Arm) wrote:
> On 2/6/26 16:29, Matthew Wilcox wrote:
>> On Fri, Feb 06, 2026 at 03:22:54PM +0800, xu.xin16@zte.com.cn wrote:
>>> make them const to make code more robust. Besides, since KSM folios
>>> are always
>>> order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
>>>
>>> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>>>
>>> becomes directly:
>>>
>>> "pgoff_end = pgoff_start;"
>>
>> How do you know KSM folios will always be order 0? I don't. NAK this
>> change.
>
> Once that changes we can revisit. ACK from me stands.

And just to elaborate a bit: I don't see support for > 0 happening any
time soon, and it will require significant changes that I am not even
sure we would want to maintain upstream :)

--
Cheers,

David
On Fri, Feb 06, 2026 at 07:09:56PM +0100, David Hildenbrand (Arm) wrote:
> On 2/6/26 18:34, David Hildenbrand (Arm) wrote:
> > On 2/6/26 16:29, Matthew Wilcox wrote:
> > > On Fri, Feb 06, 2026 at 03:22:54PM +0800, xu.xin16@zte.com.cn wrote:
> > > > make them const to make code more robust. Besides, since KSM
> > > > folios are always
> > > > order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
> > > >
> > > > "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
> > > >
> > > > becomes directly:
> > > >
> > > > "pgoff_end = pgoff_start;"
> > >
> > > How do you know KSM folios will always be order 0? I don't. NAK this
> > > change.
> >
> > Once that changes we can revisit. ACK from me stands.
>
> And just to elaborate a bit: I don't see support for > 0 happening any time
> soon, and it will require significant changes that I am not even sure we
> would want to maintain upstream :)

OK, you're the KSM maintainer.  There doesn't seem to be anywhere else
in the KSM code which handles large folios, so this is just removing
code that was recently added?
On 2/6/26 19:49, Matthew Wilcox wrote:
> On Fri, Feb 06, 2026 at 07:09:56PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/6/26 18:34, David Hildenbrand (Arm) wrote:
>>>
>>> Once that changes we can revisit. ACK from me stands.
>>
>> And just to elaborate a bit: I don't see support for > 0 happening any time
>> soon, and it will require significant changes that I am not even sure we
>> would want to maintain upstream :)
>
> OK, you're the KSM maintainer.  There doesn't seem to be anywhere else
> in the KSM code which handles large folios, so this is just removing
> code that was recently added?

Yes! Sorry for not being clear. :)

This is a fixup for a patch that introduced that code in the first
place. It will be squashed into the original patch that's not upstream
yet.

--
Cheers,

David
On 2/6/26 08:22, xu.xin16@zte.com.cn wrote:
> From: xu xin <xu.xin16@zte.com.cn>
>
> Considering that commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing
> a suitable addressrange") seems to have already been merged, this new patch is
> proposed to address the issue raised by David at:
>
> https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/
>
> This initialize rmap values (addr, pgoff_start, pgoff_end) directly and
> make them const to make code more robust. Besides, since KSM folios are always
> order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
>
> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>
> becomes directly:
>
> "pgoff_end = pgoff_start;"
>
> The test reproducer of rmap_walk_ksm can be found at:
> https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
Thanks!
>
> Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
> Signed-off-by: xu xin <xu.xin16@zte.com.cn>
The patch does not seem to be upstream / in mm-stable yet.
Can you resend the original patch with these changes included and the
reproducer referenced in the updated patch description?
Thanks!
> ---
> mm/ksm.c | 13 +++++--------
> 1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 031c17e4ada6..c7ca117024a4 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -3171,8 +3171,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
> struct anon_vma *anon_vma = rmap_item->anon_vma;
> struct anon_vma_chain *vmac;
> struct vm_area_struct *vma;
> - unsigned long addr;
> - pgoff_t pgoff_start, pgoff_end;
> + /* Ignore the stable/unstable/sqnr flags */
> + const unsigned long addr = rmap_item->address & PAGE_MASK;
> + const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
> + /* KSM folios are always order-0 normal pages */
> + const pgoff_t pgoff_end = pgoff_start;
I would move them all the way up, above the "struct anon_vma *anon_vma =
rmap_item->anon_vma;"
--
Cheers,
David
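
For reference, the declaration block with that suggestion applied would
presumably become (a sketch derived from the hunk above, pending the resend):

	/* Ignore the stable/unstable/sqnr flags */
	const unsigned long addr = rmap_item->address & PAGE_MASK;
	const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
	/* KSM folios are always order-0 normal pages */
	const pgoff_t pgoff_end = pgoff_start;
	struct anon_vma *anon_vma = rmap_item->anon_vma;
	struct anon_vma_chain *vmac;
	struct vm_area_struct *vma;
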
> > Considering that commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing
> > a suitable addressrange") seems to have already been merged, this new patch is
> > proposed to address the issue raised by David at:
> >
> > https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/
> >
> > This initialize rmap values (addr, pgoff_start, pgoff_end) directly and
> > make them const to make code more robust. Besides, since KSM folios are always
> > order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
> >
> > "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
> >
> > becomes directly:
> >
> > "pgoff_end = pgoff_start;"
> >
> > The test reproducer of rmap_walk_ksm can be found at:
> > https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
>
> Thanks!
>
> >
> > Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
> > Signed-off-by: xu xin <xu.xin16@zte.com.cn>
>
> The patch does not seem to be upstream / in mm-stable yet.
>
> Can you resend the original patch with these changes included and the
> reproducer referenced in the updated patch description?
Sure, I thought the original patch was merged in linux-next.
>
> Thanks!
>
> > ---
> > mm/ksm.c | 13 +++++--------
> > 1 file changed, 5 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index 031c17e4ada6..c7ca117024a4 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -3171,8 +3171,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
> > struct anon_vma *anon_vma = rmap_item->anon_vma;
> > struct anon_vma_chain *vmac;
> > struct vm_area_struct *vma;
> > - unsigned long addr;
> > - pgoff_t pgoff_start, pgoff_end;
> > + /* Ignore the stable/unstable/sqnr flags */
> > + const unsigned long addr = rmap_item->address & PAGE_MASK;
> > + const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
> > + /* KSM folios are always order-0 normal pages */
> > + const pgoff_t pgoff_end = pgoff_start;
>
> I would move them all the way up, above the "struct anon_vma *anon_vma =
> rmap_item->anon_vma;"
Yes, that looks better indeed.
>
> --
> Cheers,
>
> David
On Fri, 6 Feb 2026 16:38:50 +0800 (CST) <xu.xin16@zte.com.cn> wrote:
> > >
> > > Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
> > > Signed-off-by: xu xin <xu.xin16@zte.com.cn>
> >
> > The patch does not seem to be upstream / in mm-stable yet.
> >
> > Can you resend the original patch with these changes included and the
> > reproducer referenced in the updated patch description?
>
> Sure, I thought the original patch was merged in linux-next.
I've recently moved the original series into mm.git's mm-new branch (which
isn't included in linux-next), because it appeared that the v2 series
wouldn't be available in time for the next merge window.
It is a wonderful performance improvement, but I think it's best that
we revisit the series for the next -rc cycle. So can you please
integrate this new change into "ksm: optimize rmap_walk_ksm by passing
a suitable addressrange", rework the changelogs to reflect the
discussion thus far and resend the "KSM: Optimizations for
rmap_walk_ksm" series?
On 2/6/26 09:38, xu.xin16@zte.com.cn wrote:
>>> Considering that commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing
>>> a suitable addressrange") seems to have already been merged, this new patch is
>>> proposed to address the issue raised by David at:
>>>
>>> https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/
>>>
>>> This initialize rmap values (addr, pgoff_start, pgoff_end) directly and
>>> make them const to make code more robust. Besides, since KSM folios are always
>>> order-0, so folio_nr_pages(KSM folio) is always 1, so the line:
>>>
>>> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>>>
>>> becomes directly:
>>>
>>> "pgoff_end = pgoff_start;"
>>>
>>> The test reproducer of rmap_walk_ksm can be found at:
>>> https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
>>
>> Thanks!
>>
>>>
>>> Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
>>> Signed-off-by: xu xin <xu.xin16@zte.com.cn>
>>
>> The patch does not seem to be upstream / in mm-stable yet.
>>
>> Can you resend the original patch with these changes included and the
>> reproducer referenced in the updated patch description?
>
> Sure, I thought the original patch was merged in linux-next.
linux-next is just an integration tree.
What you want to look out for is whether the patch ended up in mm-stable
or mm-hotfixes-stable
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git
As long as it's not in there (1) the commit id is not stable; and (2) we
can still modify it.
--
Cheers,
David