Currently move_ptes() iterates through ptes one by one. If the underlying
folio mapped by the ptes is large, we can process those ptes in a batch
using folio_pte_batch(), thus clearing and setting the PTEs in one go.
For arm64 specifically, this results in a 16x reduction in the number of
ptep_get() calls, since on a contig block ptep_get() on arm64 iterates
through all 16 entries to collect the access/dirty bits. We also elide
extra TLBIs by using get_and_clear_full_ptes() in place of ptep_get_and_clear().
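
For illustration, here is a minimal sketch of the batched step (simplified,
not the literal patch; the exact signature of mremap_folio_pte_batch(), this
series' wrapper around folio_pte_batch(), is illustrative here, and the loop
advancing both ptep pointers and both addresses by nr_ptes is elided):

        /*
         * Sketch only: mremap_folio_pte_batch() returns how many
         * consecutive ptes map the same large folio (1 for a small
         * folio), capped at max_nr.
         */
        nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep, pte, max_nr);

        /* Clear all nr_ptes entries in one walk, accumulating a/d bits. */
        pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);

        /* Install the whole batch at the destination in one go. */
        set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);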
Mapping 1M of memory with 64K folios, memsetting it, remapping it to
src + 1M, and munmapping it 10,000 times, the average execution time
drops from 1.9 to 1.2 seconds on Apple M3 (arm64), a 37% reduction.
No regression is observed for small folios.
The patchset is based on mm-unstable (6ebffe676fcf).
Test program for reference:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>

#define SIZE (1UL << 20) // 1M

int main(void) {
        void *new_addr, *addr;

        for (int i = 0; i < 10000; ++i) {
                addr = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (addr == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
                memset(addr, 0xAA, SIZE);

                new_addr = mremap(addr, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, addr + SIZE);
                if (new_addr != (addr + SIZE)) {
                        perror("mremap");
                        return 1;
                }
                munmap(new_addr, SIZE);
        }

}
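
For the 64K-folio configuration above, one way to get 64K folios (assuming a
4K-base-page arm64 kernel with mTHP support, via the standard mTHP sysfs
knob) is to enable the 64K size before running:

        echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled

then build and run the test as usual, e.g.:

        gcc -O2 test.c -o test && ./test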
v2->v3:
- Refactor mremap_folio_pte_batch, drop maybe_contiguous_pte_pfns, fix
indentation (Lorenzo), fix cover letter description (512K -> 1M)
v1->v2:
- Expand patch descriptions, move pte declarations to a new line,
reduce indentation in patch 2 by introducing mremap_folio_pte_batch(),
fix loop iteration (Lorenzo)
- Merge patch 2 and 3 (Anshuman, Lorenzo)
- Fix maybe_contiguous_pte_pfns (Willy)
Dev Jain (2):
mm: Call pointers to ptes as ptep
mm: Optimize mremap() by PTE batching
mm/mremap.c | 57 +++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 42 insertions(+), 15 deletions(-)
--
2.30.2
I seem to recall we agreed you'd hold off on this until the mprotect work
was done :>) I see a lot of review there and was expecting a respin, unless
I'm mistaken?
At any rate we're in the merge window now so it's maybe not quite as
important now :)
We're pretty close to this being done anyway, just need some feedback on
points raised (obviously David et al. may have further comments).
Thanks, Lorenzo
On Tue, May 27, 2025 at 01:20:47PM +0530, Dev Jain wrote:
> Currently move_ptes() iterates through ptes one by one. If the underlying
> folio mapped by the ptes is large, we can process those ptes in a batch
> using folio_pte_batch(), thus clearing and setting the PTEs in one go.
> For arm64 specifically, this results in a 16x reduction in the number of
> ptep_get() calls, since on a contig block ptep_get() on arm64 iterates
> through all 16 entries to collect the access/dirty bits. We also elide
> extra TLBIs by using get_and_clear_full_ptes() in place of ptep_get_and_clear().
OK this is more general than the stuff in 2/2, so you are doing this work
for page-table split large folios also.
I do think this _should_ be fine for that unless I've missed something. At
any rate I've commented on this in 2/2.
>
> Mapping 1M of memory with 64K folios, memsetting it, remapping it to
> src + 1M, and munmapping it 10,000 times, the average execution time
> drops from 1.9 to 1.2 seconds on Apple M3 (arm64), a 37% reduction.
> No regression is observed for small folios.
>
> The patchset is based on mm-unstable (6ebffe676fcf).
>
> Test program for reference:
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/mman.h>
> #include <string.h>
> #include <errno.h>
>
> #define SIZE (1UL << 20) // 1M
>
> int main(void) {
>         void *new_addr, *addr;
>
>         for (int i = 0; i < 10000; ++i) {
>                 addr = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
>                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>                 if (addr == MAP_FAILED) {
>                         perror("mmap");
>                         return 1;
>                 }
>                 memset(addr, 0xAA, SIZE);
>
>                 new_addr = mremap(addr, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, addr + SIZE);
>                 if (new_addr != (addr + SIZE)) {
>                         perror("mremap");
>                         return 1;
>                 }
>                 munmap(new_addr, SIZE);
>         }
>
> }
>
> v2->v3:
> - Refactor mremap_folio_pte_batch, drop maybe_contiguous_pte_pfns, fix
> indentation (Lorenzo), fix cover letter description (512K -> 1M)
>
> v1->v2:
> - Expand patch descriptions, move pte declarations to a new line,
> reduce indentation in patch 2 by introducing mremap_folio_pte_batch(),
> fix loop iteration (Lorenzo)
> - Merge patch 2 and 3 (Anshuman, Lorenzo)
> - Fix maybe_contiguous_pte_pfns (Willy)
>
> Dev Jain (2):
> mm: Call pointers to ptes as ptep
> mm: Optimize mremap() by PTE batching
>
> mm/mremap.c | 57 +++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 42 insertions(+), 15 deletions(-)
>
> --
> 2.30.2
>
On 27/05/25 4:20 pm, Lorenzo Stoakes wrote:
> I seem to recall we agreed you'd hold off on this until the mprotect work
> was done :>) I see a lot of review there and was expecting a respin, unless
Oh, my interpretation was that you requested to hold this off for a bit to get
some review on the mprotect series first; apologies if you meant otherwise! I
posted that one a week or so earlier, so I thought enough time had passed : )
> I'm mistaken?
>
> At any rate we're in the merge window now so it's maybe not quite as
> important now :)
>
> We're pretty close to this being done anyway, just need some feedback on
> points raised (obviously David et al. may have further comments).
>
> Thanks, Lorenzo
>
> On Tue, May 27, 2025 at 01:20:47PM +0530, Dev Jain wrote:
>> Currently move_ptes() iterates through ptes one by one. If the underlying
>> folio mapped by the ptes is large, we can process those ptes in a batch
>> using folio_pte_batch(), thus clearing and setting the PTEs in one go.
>> For arm64 specifically, this results in a 16x reduction in the number of
>> ptep_get() calls, since on a contig block ptep_get() on arm64 iterates
>> through all 16 entries to collect the access/dirty bits. We also elide
>> extra TLBIs by using get_and_clear_full_ptes() in place of ptep_get_and_clear().
> OK this is more general than the stuff in 2/2, so you are doing this work
> for page-table split large folios also.
>
> I do think this _should_ be fine for that unless I've missed something. At
> any rate I've commented on this in 2/2.
>
>> Mapping 1M of memory with 64K folios, memsetting it, remapping it to
>> src + 1M, and munmapping it 10,000 times, the average execution time
>> drops from 1.9 to 1.2 seconds on Apple M3 (arm64), a 37% reduction.
>> No regression is observed for small folios.
>>
>> The patchset is based on mm-unstable (6ebffe676fcf).
>>
>> Test program for reference:
>>
>> #define _GNU_SOURCE
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <unistd.h>
>> #include <sys/mman.h>
>> #include <string.h>
>> #include <errno.h>
>>
>> #define SIZE (1UL << 20) // 1M
>>
>> int main(void) {
>>         void *new_addr, *addr;
>>
>>         for (int i = 0; i < 10000; ++i) {
>>                 addr = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
>>                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>                 if (addr == MAP_FAILED) {
>>                         perror("mmap");
>>                         return 1;
>>                 }
>>                 memset(addr, 0xAA, SIZE);
>>
>>                 new_addr = mremap(addr, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, addr + SIZE);
>>                 if (new_addr != (addr + SIZE)) {
>>                         perror("mremap");
>>                         return 1;
>>                 }
>>                 munmap(new_addr, SIZE);
>>         }
>>
>> }
>>
>> v2->v3:
>> - Refactor mremap_folio_pte_batch, drop maybe_contiguous_pte_pfns, fix
>> indentation (Lorenzo), fix cover letter description (512K -> 1M)
>>
>> v1->v2:
>> - Expand patch descriptions, move pte declarations to a new line,
>> reduce indentation in patch 2 by introducing mremap_folio_pte_batch(),
>> fix loop iteration (Lorenzo)
>> - Merge patch 2 and 3 (Anshuman, Lorenzo)
>> - Fix maybe_contiguous_pte_pfns (Willy)
>>
>> Dev Jain (2):
>> mm: Call pointers to ptes as ptep
>> mm: Optimize mremap() by PTE batching
>>
>> mm/mremap.c | 57 +++++++++++++++++++++++++++++++++++++++--------------
>> 1 file changed, 42 insertions(+), 15 deletions(-)
>>
>> --
>> 2.30.2
>>
On Tue, May 27, 2025 at 09:56:37PM +0530, Dev Jain wrote:
>
> On 27/05/25 4:20 pm, Lorenzo Stoakes wrote:
> > I seem to recall we agreed you'd hold off on this until the mprotect work
> > was done :>) I see a lot of review there and was expecting a respin, unless
>
>
> Oh, my interpretation was that you requested to hold this off for a bit to get
> some review on the mprotect series first; apologies if you meant otherwise! I
> posted that one a week or so earlier, so I thought enough time had passed : )
Yeah, sorry, maybe I wasn't clear. At any rate, I don't think we're miles off
here once we resolve the questions, so it doesn't matter too much... :)
>
>
> > I'm mistaken?
> >
> > At any rate we're in the merge window now so it's maybe not quite as
> > important now :)
> >
> > We're pretty close to this being done anyway, just need some feedback on
> > points raised (obviously David et al. may have further comments).
> >
> > Thanks, Lorenzo
> >
> > On Tue, May 27, 2025 at 01:20:47PM +0530, Dev Jain wrote:
> > > Currently move_ptes() iterates through ptes one by one. If the underlying
> > > folio mapped by the ptes is large, we can process those ptes in a batch
> > > using folio_pte_batch(), thus clearing and setting the PTEs in one go.
> > > For arm64 specifically, this results in a 16x reduction in the number of
> > > ptep_get() calls, since on a contig block ptep_get() on arm64 iterates
> > > through all 16 entries to collect the access/dirty bits. We also elide
> > > extra TLBIs by using get_and_clear_full_ptes() in place of ptep_get_and_clear().
> > OK this is more general than the stuff in 2/2, so you are doing this work
> > for page-table split large folios also.
> >
> > I do think this _should_ be fine for that unless I've missed something. At
> > any rate I've commented on this in 2/2.
> >
> > > Mapping 1M of memory with 64K folios, memsetting it, remapping it to
> > > src + 1M, and munmapping it 10,000 times, the average execution time
> > > drops from 1.9 to 1.2 seconds on Apple M3 (arm64), a 37% reduction.
> > > No regression is observed for small folios.
> > >
> > > The patchset is based on mm-unstable (6ebffe676fcf).
> > >
> > > Test program for reference:
> > >
> > > #define _GNU_SOURCE
> > > #include <stdio.h>
> > > #include <stdlib.h>
> > > #include <unistd.h>
> > > #include <sys/mman.h>
> > > #include <string.h>
> > > #include <errno.h>
> > >
> > > #define SIZE (1UL << 20) // 1M
> > >
> > > int main(void) {
> > >         void *new_addr, *addr;
> > >
> > >         for (int i = 0; i < 10000; ++i) {
> > >                 addr = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
> > >                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > >                 if (addr == MAP_FAILED) {
> > >                         perror("mmap");
> > >                         return 1;
> > >                 }
> > >                 memset(addr, 0xAA, SIZE);
> > >
> > >                 new_addr = mremap(addr, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, addr + SIZE);
> > >                 if (new_addr != (addr + SIZE)) {
> > >                         perror("mremap");
> > >                         return 1;
> > >                 }
> > >                 munmap(new_addr, SIZE);
> > >         }
> > >
> > > }
> > >
> > > v2->v3:
> > > - Refactor mremap_folio_pte_batch, drop maybe_contiguous_pte_pfns, fix
> > > indentation (Lorenzo), fix cover letter description (512K -> 1M)
> > >
> > > v1->v2:
> > > - Expand patch descriptions, move pte declarations to a new line,
> > > reduce indentation in patch 2 by introducing mremap_folio_pte_batch(),
> > > fix loop iteration (Lorenzo)
> > > - Merge patch 2 and 3 (Anshuman, Lorenzo)
> > > - Fix maybe_contiguous_pte_pfns (Willy)
> > >
> > > Dev Jain (2):
> > > mm: Call pointers to ptes as ptep
> > > mm: Optimize mremap() by PTE batching
> > >
> > > mm/mremap.c | 57 +++++++++++++++++++++++++++++++++++++++--------------
> > > 1 file changed, 42 insertions(+), 15 deletions(-)
> > >
> > > --
> > > 2.30.2
> > >