[PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping
Posted by Dev Jain 4 weeks, 1 day ago
Speed up unmapping of anonymous large folios by clearing the ptes, and
setting swap ptes, in one go.

The following benchmark (stolen from Barry at [1]) is used to measure the
time taken to swapout 256M worth of memory backed by 64K large folios:

 #define _GNU_SOURCE
 #include <stdio.h>
 #include <stdlib.h>
 #include <sys/mman.h>
 #include <string.h>
 #include <time.h>
 #include <unistd.h>
 #include <errno.h>

 #define SIZE_MB 256
 #define SIZE_BYTES (SIZE_MB * 1024 * 1024)

 int main() {
     void *addr = mmap(NULL, SIZE_BYTES, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
     if (addr == MAP_FAILED) {
         perror("mmap failed");
         return 1;
     }

     memset(addr, 0, SIZE_BYTES);

     struct timespec start, end;
     clock_gettime(CLOCK_MONOTONIC, &start);

     if (madvise(addr, SIZE_BYTES, MADV_PAGEOUT) != 0) {
         perror("madvise(MADV_PAGEOUT) failed");
         munmap(addr, SIZE_BYTES);
         return 1;
     }

     clock_gettime(CLOCK_MONOTONIC, &end);

     /* Use long long and an integer constant: 1e9 is a double, and
      * long may be 32-bit, which would overflow for runs over ~2s. */
     long long duration_ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
                             (end.tv_nsec - start.tv_nsec);
     printf("madvise(MADV_PAGEOUT) took %lld ns (%.3f ms)\n",
            duration_ns, duration_ns / 1e6);

     munmap(addr, SIZE_BYTES);
     return 0;
 }

On arm64, only showing one of the middle values in the distribution:

without patch:
madvise(MADV_PAGEOUT) took 52192959 ns (52.193 ms)

with patch:
madvise(MADV_PAGEOUT) took 26676625 ns (26.677 ms)


[1] https://lore.kernel.org/all/20250513084620.58231-1-21cnbao@gmail.com/
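To reproduce the 64K-folio backing, the matching mTHP size has to be enabled
before running the program. A minimal sketch (the sysfs path is as on recent
kernels with 4K base pages; an active swap device is also assumed, since
MADV_PAGEOUT on anonymous memory needs somewhere to page out to):

```shell
# Enable 64K mTHP so the anonymous mapping is backed by 64K large folios
echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled

# MADV_PAGEOUT on anonymous memory requires an active swap target
swapon --show
```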

---
Based on mm-unstable bb420884e9e0. mm-selftests pass.

Dev Jain (9):
  mm/rmap: make nr_pages signed in try_to_unmap_one
  mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one
  mm/rmap: refactor lazyfree unmap commit path to
    commit_ttu_lazyfree_folio()
  mm/memory: Batch set uffd-wp markers during zapping
  mm/rmap: batch unmap folios belonging to uffd-wp VMAs
  mm/swapfile: Make folio_dup_swap batchable
  mm/swapfile: Make folio_put_swap batchable
  mm/rmap: introduce folio_try_share_anon_rmap_ptes
  mm/rmap: enable batch unmapping of anonymous folios

 include/linux/mm_inline.h  |  37 +++--
 include/linux/page-flags.h |  11 ++
 include/linux/rmap.h       |  38 ++++-
 mm/internal.h              |  26 ++++
 mm/memory.c                |  26 +---
 mm/mprotect.c              |  17 ---
 mm/rmap.c                  | 274 ++++++++++++++++++++++++-------------
 mm/shmem.c                 |   8 +-
 mm/swap.h                  |  10 +-
 mm/swapfile.c              |  25 ++--
 10 files changed, 298 insertions(+), 174 deletions(-)

-- 
2.34.1
Re: [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping
Posted by Lorenzo Stoakes (Oracle) 4 weeks, 1 day ago
On Tue, Mar 10, 2026 at 01:00:04PM +0530, Dev Jain wrote:
> Speed up unmapping of anonymous large folios by clearing the ptes, and
> setting swap ptes, in one go.
>
> The following benchmark (stolen from Barry at [1]) is used to measure the
> time taken to swapout 256M worth of memory backed by 64K large folios:
>
>  [ benchmark program snipped ]
>
> On arm64, only showing one of the middle values in the distribution:
>

This doesn't seem very statistically valid.

How about you give median, stddev etc.? Variance matters too.
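For instance, a shell sketch that turns repeated runs into those statistics
(the benchmark binary name ./pageout and its output format are assumptions
from the cover letter):

```shell
# stats: read one nanosecond value per line on stdin;
# print the median, mean, and population stddev.
stats() {
    sort -n | awk '{ v[NR] = $1; s += $1; ss += $1 * $1 }
        END {
            med = (NR % 2) ? v[(NR + 1) / 2] : (v[NR / 2] + v[NR / 2 + 1]) / 2
            mean = s / NR
            printf "median=%.0f ns mean=%.0f ns stddev=%.0f ns\n",
                   med, mean, sqrt(ss / NR - mean * mean)
        }'
}

# Usage sketch: run 20 times, keep the ns column of
# "madvise(MADV_PAGEOUT) took <ns> ns (...)" and summarize:
#   for i in $(seq 1 20); do ./pageout; done | awk '{ print $3 }' | stats

printf '10\n30\n20\n' | stats
# prints: median=20 ns mean=20 ns stddev=8 ns
```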

> without patch:
> madvise(MADV_PAGEOUT) took 52192959 ns (52.193 ms)
>
> with patch:
> madvise(MADV_PAGEOUT) took 26676625 ns (26.677 ms)

You have a habit of only giving data on arm64, and not mentioning whether you've
tested on any other arch/setup.

I've commented on this before so I'm a bit disappointed you've done the exact
same thing here again. Especially since you've previously introduced regressions
this way.

Please can you test this on (hardware!) x86-64 _at least_ as well and confirm
you aren't regressing anything for 4K pages?

Re: [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping
Posted by Dev Jain 4 weeks, 1 day ago

On 10/03/26 1:32 pm, Lorenzo Stoakes (Oracle) wrote:
> On Tue, Mar 10, 2026 at 01:00:04PM +0530, Dev Jain wrote:
>> Speed up unmapping of anonymous large folios by clearing the ptes, and
>> setting swap ptes, in one go.
>>
>> The following benchmark (stolen from Barry at [1]) is used to measure the
>> time taken to swapout 256M worth of memory backed by 64K large folios:
>>
>>  [ benchmark program snipped ]
>>
>> On arm64, only showing one of the middle values in the distribution:
>>
> 
> This doesn't seem very statistically valid.
> 
> How about you give median, stddev etc.? Variance matters too.

Okay.

> 
>> without patch:
>> madvise(MADV_PAGEOUT) took 52192959 ns (52.193 ms)
>>
>> with patch:
>> madvise(MADV_PAGEOUT) took 26676625 ns (26.677 ms)
> 
> You have a habit of only giving data on arm64, and not mentioning whether you've
> tested on any other arch/setup.

I did do an x86 build but forgot to mention that.
I didn't gather the numbers, thinking this patchset is quite generic and
has nothing to do with the arm64 cont bit - but arguably I should have.

> 
> I've commented on this before so I'm a bit disappointed you've done the exact
> same thing here again. Especially since you've previously introduced regressions
> this way.
> 
> Please can you test this on (hardware!) x86-64 _at least_ as well and confirm
> you aren't regressing anything for 4K pages?

Lemme go and manage that :)

Re: [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping
Posted by Lance Yang 4 weeks, 1 day ago
On Tue, Mar 10, 2026 at 01:00:04PM +0530, Dev Jain wrote:
>Speed up unmapping of anonymous large folios by clearing the ptes, and
>setting swap ptes, in one go.
>
>The following benchmark (stolen from Barry at [1]) is used to measure the
>time taken to swapout 256M worth of memory backed by 64K large folios:
>
> [ benchmark program snipped ]
>
>On arm64, only showing one of the middle values in the distribution:
>
>without patch:
>madvise(MADV_PAGEOUT) took 52192959 ns (52.193 ms)
>
>with patch:
>madvise(MADV_PAGEOUT) took 26676625 ns (26.677 ms)

Good numbers! Just tested on x86 KVM with THP=never, no performance
regression observed.

Cheers,
Lance
Re: [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping
Posted by Dev Jain 4 weeks ago

On 10/03/26 6:29 pm, Lance Yang wrote:
> 
> On Tue, Mar 10, 2026 at 01:00:04PM +0530, Dev Jain wrote:
>> Speed up unmapping of anonymous large folios by clearing the ptes, and
>> setting swap ptes, in one go.
>>
>> The following benchmark (stolen from Barry at [1]) is used to measure the
>> time taken to swapout 256M worth of memory backed by 64K large folios:
>>
>> [ benchmark program snipped ]
>>
>> On arm64, only showing one of the middle values in the distribution:
>>
>> without patch:
>> madvise(MADV_PAGEOUT) took 52192959 ns (52.193 ms)
>>
>> with patch:
>> madvise(MADV_PAGEOUT) took 26676625 ns (26.677 ms)
> 
> Good numbers! Just tested on x86 KVM with THP=never, no performance
> regression observed.

Thanks Lance!

Still, I'll try to get no-regression numbers and perf-boost numbers on x86
myself and post them in the next version.
