[PATCH v4 00/13] x86/mm: Add multi-page clearing

Posted by Ankur Arora 3 months, 3 weeks ago
This series adds multi-page clearing for hugepages, improving on the
current page-at-a-time approach in two ways:

 - amortizes the per-page setup cost over a larger extent
 - when using string instructions, exposes the real region size to the
   processor. A processor could use that as a hint to optimize based
   on the full extent size. AMD Zen uarchs, as an example, elide
   allocation of cachelines for regions larger than L3-size.

Demand faulting a 64GB region shows good performance improvements:

 $ perf bench mem map -p $page-size -f demand -s 64GB -l 5

                 mm/folio_zero_user    x86/folio_zero_user       change
                  (GB/s  +- %stdev)     (GB/s  +- %stdev)

  pg-sz=2MB       11.82  +- 0.67%        16.48  +-  0.30%       + 39.4%
  pg-sz=1GB       17.51  +- 1.19%        40.03  +-  7.26% [#]   +129.9%

[#] Only with preempt=full|lazy because cooperatively preempted models
need regular invocations of cond_resched(). This limits the extent
sizes that can be cleared as a unit.
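
The shape of that constraint, as a hypothetical userspace sketch (the chunk
size below is made up, and sched_yield() merely stands in for the kernel's
cond_resched()): under cooperative preemption a large extent has to be
cleared in bounded chunks with a reschedule point between them, rather than
as one unit.

```c
#include <assert.h>
#include <sched.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical chunk size; the unit the series actually uses isn't
 * shown in this cover letter. */
#define RESCHED_CHUNK (8UL << 20)

/*
 * Cooperatively preempted models (preempt=voluntary|none) need regular
 * calls to cond_resched(), so the extent visible to the processor per
 * string op is capped at the chunk size. sched_yield() stands in for
 * cond_resched() in this userspace sketch.
 */
static void clear_chunked(char *addr, unsigned long len)
{
	while (len) {
		unsigned long chunk = len < RESCHED_CHUNK ? len : RESCHED_CHUNK;

		memset(addr, 0, chunk);	/* stands in for clear_pages() */
		sched_yield();		/* reschedule point between chunks */
		addr += chunk;
		len -= chunk;
	}
}
```

With preempt=full|lazy no such chunking is needed, which is why the 1GB
number above is only reachable under those models.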

Raghavendra also tested on AMD Genoa and saw similar improvements [1].

Series structure:

Patches 1-5, 8,
  "perf bench mem: Remove repetition around time measurement"
  "perf bench mem: Defer type munging of size to float"
  "perf bench mem: Move mem op parameters into a structure"
  "perf bench mem: Pull out init/fini logic"
  "perf bench mem: Switch from zalloc() to mmap()"
  "perf bench mem: Refactor mem_options"

refactor, and patches 6-7, 9
  "perf bench mem: Allow mapping of hugepages"
  "perf bench mem: Allow chunking on a memory region"
  "perf bench mem: Add mmap() workload"

add a few new perf bench mem workloads (chunking and mapping performance).

Patches 10-11,
  "x86/mm: Simplify clear_page_*"
  "x86/clear_page: Introduce clear_pages()"

inline the ERMS and REP_GOOD implementations used by clear_page()
and add clear_pages() to handle page extents.

And finally, patches 12-13 allow an arch override for folio_zero_user()
and provide the x86 implementation that does the actual multi-page
clearing.

  "mm: memory: allow arch override for folio_zero_user()"
  "x86/folio_zero_user: Add multi-page clearing"
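
One common kernel pattern for such an override (shown here as an illustrative
userspace analog; the guard macro name and the simplified signature are
assumptions, not necessarily what these patches use) is a preprocessor guard:
the arch header defines it and supplies its own implementation, and the
generic mm definition compiles only when the guard is absent.

```c
#include <assert.h>
#include <string.h>

/* "arch" side: defining the guard replaces the generic version.
 * Guard name and signature are illustrative only. */
#define __HAVE_ARCH_FOLIO_ZERO_USER

#ifdef __HAVE_ARCH_FOLIO_ZERO_USER
/* arch override: clear the whole extent in one go */
static void folio_zero_user(char *buf, unsigned long len)
{
	memset(buf, 0, len);
}
#else
/* generic fallback: page-at-a-time loop */
static void folio_zero_user(char *buf, unsigned long len)
{
	for (unsigned long off = 0; off < len; off += 4096) {
		unsigned long n = len - off < 4096 ? len - off : 4096;

		memset(buf + off, 0, n);
	}
}
#endif
```

Either branch produces the same zeroed memory; the guard just decides at
build time which implementation is linked in.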

Changelog:

v4:
 - adds perf bench workloads to exercise mmap() populate/demand-fault (Ingo)
 - inline stosb etc (PeterZ)
 - handle cooperative preemption models (Ingo)
 - interface and other cleanups all over (Ingo)

v3:
 - get rid of preemption dependency (TIF_ALLOW_RESCHED); this version
   was limited to preempt=full|lazy.
 - override folio_zero_user() (Linus)
 (https://lore.kernel.org/lkml/20250414034607.762653-1-ankur.a.arora@oracle.com/)

v2:
 - addressed review comments from peterz, tglx.
 - Removed clear_user_pages(), and CONFIG_X86_32:clear_pages()
 - General code cleanup
 (https://lore.kernel.org/lkml/20230830184958.2333078-1-ankur.a.arora@oracle.com/)

Comments appreciated!

Also at:
  github.com/terminus/linux clear-pages.v4

[1] https://lore.kernel.org/lkml/0d6ba41c-0c90-4130-896a-26eabbd5bd24@amd.com/

Ankur Arora (13):
  perf bench mem: Remove repetition around time measurement
  perf bench mem: Defer type munging of size to float
  perf bench mem: Move mem op parameters into a structure
  perf bench mem: Pull out init/fini logic
  perf bench mem: Switch from zalloc() to mmap()
  perf bench mem: Allow mapping of hugepages
  perf bench mem: Allow chunking on a memory region
  perf bench mem: Refactor mem_options
  perf bench mem: Add mmap() workload
  x86/mm: Simplify clear_page_*
  x86/clear_page: Introduce clear_pages()
  mm: memory: allow arch override for folio_zero_user()
  x86/folio_zero_user: Add multi-page clearing

 arch/x86/include/asm/page_32.h               |  18 +-
 arch/x86/include/asm/page_64.h               |  38 +-
 arch/x86/lib/clear_page_64.S                 |  39 +-
 arch/x86/mm/Makefile                         |   1 +
 arch/x86/mm/memory.c                         |  97 +++++
 mm/memory.c                                  |   5 +-
 tools/perf/bench/bench.h                     |   1 +
 tools/perf/bench/mem-functions.c             | 391 ++++++++++++++-----
 tools/perf/bench/mem-memcpy-arch.h           |   2 +-
 tools/perf/bench/mem-memcpy-x86-64-asm-def.h |   4 +
 tools/perf/bench/mem-memset-arch.h           |   2 +-
 tools/perf/bench/mem-memset-x86-64-asm-def.h |   4 +
 tools/perf/builtin-bench.c                   |   1 +
 13 files changed, 452 insertions(+), 151 deletions(-)
 create mode 100644 arch/x86/mm/memory.c

--
2.43.5
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Dave Hansen 3 months, 3 weeks ago
On 6/15/25 22:22, Ankur Arora wrote:
> This series adds multi-page clearing for hugepages, improving on the
> current page-at-a-time approach in two ways:
> 
>  - amortizes the per-page setup cost over a larger extent
>  - when using string instructions, exposes the real region size to the
>    processor. A processor could use that as a hint to optimize based
>    on the full extent size. AMD Zen uarchs, as an example, elide
>    allocation of cachelines for regions larger than L3-size.

Have you happened to do any testing outside of 'perf bench'?
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Ankur Arora 3 months, 3 weeks ago
Dave Hansen <dave.hansen@intel.com> writes:

> On 6/15/25 22:22, Ankur Arora wrote:
>> This series adds multi-page clearing for hugepages, improving on the
>> current page-at-a-time approach in two ways:
>>
>>  - amortizes the per-page setup cost over a larger extent
>>  - when using string instructions, exposes the real region size to the
>>    processor. A processor could use that as a hint to optimize based
>>    on the full extent size. AMD Zen uarchs, as an example, elide
>>    allocation of cachelines for regions larger than L3-size.
>
> Have you happened to do any testing outside of 'perf bench'?

Yeah. My original tests were with qemu creating a pinned guest (where it
would go and touch pages after allocation).

I think perf bench is a reasonably good test because a lot of demand
faulting often just boils down to the same kind of loop. And of course
MAP_POPULATE is essentially equivalent to the clearing loop in the kernel.

I'm happy to try other tests if you have some in mind.

And, thanks for the quick comments!

--
ankur
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Dave Hansen 3 months, 3 weeks ago
On 6/16/25 11:25, Ankur Arora wrote:
> I'm happy to try other tests if you have some in mind.

I'd just want to make sure that the normal 4k clear_page() users aren't
seeing anything weird.

A good old kernel compile would be fine.
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Ankur Arora 3 months, 3 weeks ago
Dave Hansen <dave.hansen@intel.com> writes:

> On 6/16/25 11:25, Ankur Arora wrote:
>> I'm happy to try other tests if you have some in mind.
>
> I'd just want to make sure that the normal 4k clear_page() users aren't
> seeing anything weird.
>
> A good old kernel compile would be fine.

Makes sense. I can do that both with and without THP.

--
ankur
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Raghavendra K T 3 months, 1 week ago
On 6/16/2025 10:52 AM, Ankur Arora wrote:
> This series adds multi-page clearing for hugepages, improving on the
> current page-at-a-time approach in two ways:
> 
>   - amortizes the per-page setup cost over a larger extent
>   - when using string instructions, exposes the real region size to the
>     processor. A processor could use that as a hint to optimize based
>     on the full extent size. AMD Zen uarchs, as an example, elide
>     allocation of cachelines for regions larger than L3-size.
> 
> Demand faulting a 64GB region shows good performance improvements:
> 
>   $ perf bench mem map -p $page-size -f demand -s 64GB -l 5
> 
>                   mm/folio_zero_user    x86/folio_zero_user       change
>                    (GB/s  +- %stdev)     (GB/s  +- %stdev)
> 
>    pg-sz=2MB       11.82  +- 0.67%        16.48  +-  0.30%       + 39.4%
>    pg-sz=1GB       17.51  +- 1.19%        40.03  +-  7.26% [#]   +129.9%
> 
> [#] Only with preempt=full|lazy because cooperatively preempted models
> need regular invocations of cond_resched(). This limits the extent
> sizes that can be cleared as a unit.
> 
> Raghavendra also tested on AMD Genoa and that shows similar
> improvements [1].
> 
[...]
Sorry for coming back late on this:
It was nice to have it integrated into perf bench mem (easy to test :)).

I see a similar (almost the same) improvement again with the rebased
kernel and patchset.
Tested only with preempt=lazy and boost=1.

base    = 6.16-rc4 + patches 1-9 of this series
patched = 6.16-rc4 + all patches

SUT: Genoa+ AMD EPYC 9B24

  $ perf bench mem map -p $page-size -f populate -s 64GB -l 10
                    base               patched              change
   pg-sz=2MB       12.731939 GB/sec    26.304263 GB/sec     106.6%
   pg-sz=1GB       26.232423 GB/sec    61.174836 GB/sec     133.2%

For 4KB page size there is a slight improvement (mostly noise).

Thanks and Regards
- Raghu
Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Posted by Ankur Arora 3 months ago
Raghavendra K T <raghavendra.kt@amd.com> writes:

> On 6/16/2025 10:52 AM, Ankur Arora wrote:
>> This series adds multi-page clearing for hugepages, improving on the
>> current page-at-a-time approach in two ways:
>>   - amortizes the per-page setup cost over a larger extent
>>   - when using string instructions, exposes the real region size to the
>>     processor. A processor could use that as a hint to optimize based
>>     on the full extent size. AMD Zen uarchs, as an example, elide
>>     allocation of cachelines for regions larger than L3-size.
>> Demand faulting a 64GB region shows good performance improvements:
>>   $ perf bench mem map -p $page-size -f demand -s 64GB -l 5
>>                   mm/folio_zero_user    x86/folio_zero_user       change
>>                    (GB/s  +- %stdev)     (GB/s  +- %stdev)
>>    pg-sz=2MB       11.82  +- 0.67%        16.48  +-  0.30%       + 39.4%
>>    pg-sz=1GB       17.51  +- 1.19%        40.03  +-  7.26% [#]   +129.9%
>> [#] Only with preempt=full|lazy because cooperatively preempted models
>> need regular invocations of cond_resched(). This limits the extent
>> sizes that can be cleared as a unit.
>> Raghavendra also tested on AMD Genoa and that shows similar
>> improvements [1].
>>
> [...]
> Sorry for coming back late on this:
> It was nice to have it integrated to perf bench mem (easy to test :)).
>
> I do see similar (almost same) improvement again with the rebased kernel
> and patchset.
> Tested only preempt=lazy and boost=1
>
> base       6.16-rc4 + 1-9 patches of this series
> patched =  6.16-rc4 + all patches
>
> SUT: Genoa+ AMD EPYC 9B24
>
>  $ perf bench mem map -p $page-size -f populate -s 64GB -l 10
>                    base               patched              change
>   pg-sz=2MB       12.731939 GB/sec    26.304263 GB/sec     106.6%
>   pg-sz=1GB       26.232423 GB/sec    61.174836 GB/sec     133.2%

Thanks for trying them out. Looks great.

--
ankur