[RFC PATCH v3 0/8] mm: Hot page tracking and promotion infrastructure

[If someone wants to be off the CC list, please drop me a note. I will
remove you from the next iteration.]

Hi,

This patchset introduces a new subsystem for hot page tracking and
promotion (pghot) with the following goals:

- Unify hot page detection from multiple sources such as NUMA hint faults,
  page table scans and hardware hints (IBS).
- Decouple detection from migration.
- Centralize promotion logic in per-lower-tier-node kernel threads.
- Move migration rate limiting and the associated logic of NUMAB=2 (the
  current NUMA Balancing based hot page promotion) from the scheduler to
  the pghot subsystem to enable broader reuse.
  
Currently, multiple kernel subsystems detect page accesses independently.
This patchset consolidates accesses from those mechanisms by providing:

- A common API for reporting page accesses.
- Shared infrastructure for tracking hotness at PFN granularity.
- Per-lower-tier-node kernel threads for promoting pages.
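
As an illustration, a hotness source could report an access with a single
call along these lines (a hypothetical sketch: the actual function name,
parameters and source identifiers in mm/pghot.c may differ):

  /*
   * Hypothetical reporting API: a source passes the PFN it saw being
   * accessed, the accessing node, a source identifier and a timestamp.
   */
  int pghot_record_access(unsigned long pfn, int nid, int src,
                          unsigned long now);

  /* e.g. from the hint fault path (PGHOT_HINT_FAULT is made up here) */
  pghot_record_access(pfn, numa_node_id(), PGHOT_HINT_FAULT, jiffies);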

Here is a brief summary of how this subsystem works:

- Tracks the frequency, last access time and accessing node for each
  recorded access.
- These hotness parameters are maintained per PFN in an unsigned long
  variable within the existing mem_section data structure.
  Bits 0-31 are used to store nid, frequency and time.
  Bits 32-62 are unused for now.
  Bit 63 is used to indicate that the page is ready for migration.
- Classifies pages as hot based on configurable thresholds.
- Pages classified as hot are marked ready for migration using the ready bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of their
  lower-tier node, checking the migration-ready bit and performing batched
  migrations.
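
To make the layout concrete, here is a minimal sketch of the per-PFN word
(the field widths within bits 0-31 are my assumption, not the series'
actual encoding):

  /*
   * Hypothetical encoding of the per-PFN hotness word. Bits 0-31 hold
   * nid, frequency and time (the exact split below is an assumption);
   * bit 63 is the migration-ready bit.
   */
  #define PGHOT_NID_MASK        GENMASK(9, 0)   /* accessing node */
  #define PGHOT_FREQ_MASK       GENMASK(15, 10) /* access frequency */
  #define PGHOT_TIME_MASK       GENMASK(31, 16) /* last access time */
  #define PGHOT_READY_BIT       63              /* ready for migration */

  static void pghot_mark_ready(unsigned long *rec)
  {
          /* a single atomic RMW on the per-PFN word, no locking */
          set_bit(PGHOT_READY_BIT, rec);
  }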

Three page hotness sources have been integrated with the pghot subsystem
on an experimental basis:

1. IBS
2. klruscand (based on MGLRU page table walks)
3. NUMA Balancing (mode 2).

Major change in v3
==================
The major design change in this version is the move away from hash- and
heap-based hot page record management to a statically allocated per-PFN
unsigned long variable that stores the hotness parameters. This is the
approach I had used in what was called the kmigrated patchset [1]. While
that version used extended page flags, here the mem_section data structure
stores the per-PFN hotness information for the PFNs spanning the section.
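
Schematically, that looks as below (the hot_map field name is taken from
the TODO list later in this mail; the lookup helper is my own illustrative
sketch, not necessarily the series' actual code):

  /* mem_section gains a pointer to the section's per-PFN hotness array */
  struct mem_section {
          /* ... existing fields ... */
          unsigned long *hot_map; /* one word per PFN in the section */
  };

  /* Hypothetical lookup helper: index hot_map by PFN offset in section */
  static inline unsigned long *pghot_record(unsigned long pfn)
  {
          struct mem_section *ms = __pfn_to_section(pfn);

          return ms->hot_map + (pfn & (PAGES_PER_SECTION - 1));
  }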

Advantages of this approach:

- Eliminates the need for dynamic allocation and deallocation of hot page
  records; there are also no more atomic-context allocations.
- Removes the need for special data structures (hash lists and a heap) to
  manage hot page records.
- Considerable space savings per hot page record (just an unsigned long now,
  instead of 40 bytes per record in the earlier approach).
- Constant-time lookup of the hot page record for a PFN.
- No locking complexity; just atomic updates to the per-PFN record.

Downsides:

- It is no longer easy to obtain a top-N list of hot pages; instead, a
  kernel thread periodically scans the hotness records of its corresponding
  lower-tier node to find pages ready for promotion (see the sketch below).
- A page may have become cold by the time kmigrated gets to act on it.
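
In outline, one kmigrated scan pass over a lower-tier node could look
roughly as below (a sketch only: the PFN iterator and the target-node
helper are made up, folio lookup/isolation and error handling are omitted,
and migrate_misplaced_folios_batch() is the batch interface added by this
series):

  static void kmigrated_scan_node(int nid)
  {
          unsigned long pfn;
          LIST_HEAD(batch);

          for_each_pfn_of_node(nid, pfn) {      /* hypothetical iterator */
                  unsigned long rec = READ_ONCE(*pghot_record(pfn));

                  if (!(rec & BIT(PGHOT_READY_BIT)))
                          continue;
                  /* ... look up and isolate the folio, add to batch ... */
          }
          /* promote the whole batch towards the top tier */
          migrate_misplaced_folios_batch(&batch, pghot_target_nid(nid));
  }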

Space overhead:

- One pointer per memory section to reach the hotness array. With a section
  size of 128MB, there are 8192 sections per TB of node memory, so the
  pointers consume 64KB per TB. Currently the pointer lives in mem_section
  rather than in a parallel data structure; if the latter were preferred,
  hotness array pointers would be needed only for the lower-tier nodes.
- With 4K pages, a section holds 32768 PFNs, so at 8 bytes (unsigned long)
  per PFN the hotness arrays consume 2GB per TB of node memory. This cost
  applies to lower-tier nodes only.
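
Spelling out the arithmetic behind these numbers:

  128MB section / 4KB page        = 32768 PFNs per section
  32768 PFNs * 8 bytes            = 256KB of hotness array per section
  1TB of node memory / 128MB      = 8192 sections per TB
  8192 sections * 8-byte pointer  = 64KB of pointers per TB
  8192 sections * 256KB           = 2GB of hotness arrays per TB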

Other changes in v3
===================
- The migration thread is renamed to kmigrated (it was earlier called
  kpromoted).
- Code cleanups as suggested by Jonathan Cameron.
- NUMAB mode 2 is now fully enabled as a hotness source for the pghot
  subsystem, with large page migration off-loaded to kmigrated.
- Sysctl knobs to enable access recording from each source independently.

Results
=======
System details
--------------
3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2)

$ numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0-95,192-287
node 0 size: 128460 MB
node 1 cpus: 96-191,288-383
node 1 size: 128893 MB
node 2 cpus:
node 2 size: 257993 MB
node distances:
node   0   1   2 
  0:  10  32  50 
  1:  32  10  60 
  2:  255  255  10

Microbenchmark details
----------------------
A multi-threaded application with 64 threads that access memory at 4K
granularity, repetitively and randomly. The number of accesses per thread
and the randomness pattern for each thread are fixed beforehand. The
accesses are split between loads and stores.

Benchmark threads run on Node 0, while memory is initially provisioned on
CXL node 2 before the accesses start. There are three modes in which the
benchmark is run:

Mode 1: Regular 4K page accesses. The memory is provisioned on CXL node using
mmap(MAP_POPULATE). 50% loads and 50% stores.

Mode 2: mmapped file 4K accesses. The memory is provisioned on CXL node using
mmap(fd, MAP_POPULATE|MAP_SHARED). 100% loads.

Mode 3: 2M THP page accesses. The memory is provisioned on CXL node using mmap,
madvise(MADV_HUGEPAGE) and move_pages(to cxl node). 50% loads and 50% stores.
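
For reference, the Mode 1 provisioning step can be approximated as below
(a sketch, not the actual benchmark source; error handling is omitted):

  #include <numaif.h>
  #include <sys/mman.h>

  /* Bind allocations to the CXL node, then fault the region in */
  static void *provision_on_cxl(size_t size, int cxl_node)
  {
          unsigned long nodemask = 1UL << cxl_node;

          set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8);
          return mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
  }

  /* benchmark threads on node 0 then do random 4K loads/stores there */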

Repetitive accesses result in lower-tier pages becoming hot, and in
kmigrated detecting and migrating them. The benchmark score is the time
taken to finish the accesses, in microseconds; the sooner it finishes, the
better. All the numbers shown below are the average of 3 runs.

Hotness sources
---------------
NUMAB0 - NUMA Balancing disabled in the base case and no source enabled
	 in the patched case. No migrations occur.
NUMAB2 - Existing hot page promotion in the base case and
	 NUMA hint faults as the source in the patched case.
pgtscan - klruscand (MGLRU-based PTE A-bit scanning) as the source
hwhints - IBS as the source

Results summary
---------------
Performance Impact:
- NUMAB2: 4.5% regression in Mode 1 and 19.8% regression in Mode 2.
- Hardware hints (IBS): performance close to the original NUMAB2.
- Page table scanning: good performance and comprehensive migration.

Migration Effectiveness:
- NUMAB2 and pgtscan achieve similar migration counts to baseline.
- THP migration significantly improved with new sources.
- Hardware hints show some sampling limitations.

Mode 1 - Time taken (microseconds, lower is better)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		115,668,771	117,775,032	+1.8%
NUMAB2		102,894,589	107,576,615	+4.5%
pgtscan		NA		111,399,698	NA
hwhints		NA		103,232,152	NA
------------------------------------------------------

Mode 1 - Pages migrated (pgpromote_success)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		0		0		0%
NUMAB2		2097144		2097152		+0.0%
pgtscan		NA		2097152		NA
hwhints		NA		1269467		NA
------------------------------------------------------

Mode 2 - Time taken (microseconds, lower is better)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		110,273,416	113,801,899	+3.2%
NUMAB2		71,859,123	86,098,560	+19.8%
pgtscan		NA		71,545,031	NA
hwhints		NA		71,857,476	NA
------------------------------------------------------

Mode 2 - Pages migrated (pgpromote_success)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		0		0		0%
NUMAB2		2097152		2080128		-0.8%
pgtscan		NA		2097152		NA
hwhints		NA		2097115		NA
------------------------------------------------------

Mode 3 - Time taken (microseconds, lower is better)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		30,944,794	30,537,137	-1.3%
NUMAB2		29,773,930	31,184,442	+4.7%
pgtscan		NA		28,580,878	NA
hwhints		NA		28,732,128	NA
------------------------------------------------------

Mode 3 - Pages migrated (thp_migration_success)
------------------------------------------------------
Source		Base		Patched		Change
------------------------------------------------------
NUMAB0		0		0		0
NUMAB2		3754		1278		-65.9%
pgtscan		NA		33032		NA
hwhints		NA		32768		NA
------------------------------------------------------

Results Analysis TODO
---------------------
- The regression in NUMAB2 needs further analysis. The overhead of the pghot
  path and the effect of batched migration need to be identified. Migrations
  are seen to kick off a bit later in the kmigrated-NUMAB2 case than in the
  base-NUMAB2 case; this also needs further investigation.

This v3 patchset applies on top of upstream commit e53642b87a4f and
can be fetched from:

https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv3

v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

TODOs
=====
- Check if the page is still within the hotness time window when
  kmigrated gets to it.
- Per-zone or per-section indicators to walk only zones or sections that
  have hot PFNs instead of kmigrated walking all the PFNs of the lower
  tier node.
- Bulk access reporting may be desirable for sources like IBS.
- Take care of memory hotplug for allocation/freeing of mem_section->hot_map.
- Currently the code defaults to node 0 if the target NID isn't specified by
  the source. The best fallback target node may have to be determined
  dynamically.
- Provide compatibility aliases for the sysctls moved from sched to pghot.
- Wider testing and benchmark coverage.
- Address Ying Huang's comment about merging migrate_misplaced_folio()
  and migrate_misplaced_folios_batch(), and about correctly handling
  memcg stats counting in the latter.

[1] kmigrated approach: https://lore.kernel.org/linux-mm/20250616133931.206626-1-bharata@amd.com/

Bharata B Rao (5):
  mm: migrate: Allow misplaced migration without VMA too
  mm: Hot page tracking and promotion
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: sched: Move hot page promotion from NUMAB=2 to pghot tracking

Gregory Price (1):
  migrate: implement migrate_misplaced_folios_batch

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 arch/x86/events/amd/ibs.c           |  11 +
 arch/x86/include/asm/entry-common.h |   3 +
 arch/x86/include/asm/hardirq.h      |   2 +
 arch/x86/include/asm/ibs.h          |   9 +
 arch/x86/include/asm/msr-index.h    |  16 +
 arch/x86/mm/Makefile                |   3 +-
 arch/x86/mm/ibs.c                   | 343 +++++++++++++++++
 include/linux/migrate.h             |   6 +
 include/linux/mmzone.h              |  19 +
 include/linux/pghot.h               |  55 +++
 include/linux/vm_event_item.h       |  21 +
 kernel/sched/debug.c                |   1 -
 kernel/sched/fair.c                 | 152 +-------
 mm/Kconfig                          |  19 +
 mm/Makefile                         |   2 +
 mm/huge_memory.c                    |  26 +-
 mm/internal.h                       |   4 +
 mm/klruscand.c                      | 110 ++++++
 mm/memory.c                         |  31 +-
 mm/migrate.c                        |  41 +-
 mm/mm_init.c                        |  10 +
 mm/page_ext.c                       |  11 +
 mm/pghot.c                          | 571 ++++++++++++++++++++++++++++
 mm/vmscan.c                         | 181 ++++++---
 mm/vmstat.c                         |  21 +
 25 files changed, 1427 insertions(+), 241 deletions(-)
 create mode 100644 arch/x86/include/asm/ibs.h
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot.c

-- 
2.34.1
Re: [RFC PATCH v3 0/8] mm: Hot page tracking and promotion infrastructure
On 10-Nov-25 10:53 AM, Bharata B Rao wrote:
<snip>
> Results
> =======

Earlier I included results from a scenario where there was enough free
memory in the top-tier node and hence demotions weren't getting triggered.
Here I am including results from a similar microbenchmark that triggers
demotion too.

System details
--------------
3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2)

$ numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0-95,192-287
node 0 size: 128460 MB
node 1 cpus: 96-191,288-383
node 1 size: 128893 MB
node 2 cpus:
node 2 size: 257993 MB
node distances:
node   0   1   2 
  0:  10  32  50 
  1:  32  10  60 
  2:  255  255  10

Microbenchmark details
----------------------
A single-threaded application that allocates memory on both the DRAM and
CXL nodes using mmap(MAP_POPULATE). Every 1GB region of the memory
allocated on the CXL node is accessed at 4K granularity, randomly and
repetitively, to build up hotness in the region under access. This should
drive promotion. For promotion to succeed, the DRAM memory that has been
provisioned (and is not being accessed) must be demoted first. There is
enough free memory in the CXL node for demotions.

In summary, this benchmark creates memory pressure on the DRAM node and
performs CXL memory accesses to drive both demotion and promotion.

The number of accesses is fixed; hence, the quicker the accessed pages get
promoted to DRAM, the sooner the benchmark is expected to finish.

DRAM-node                 = 1
CXL-node                  = 2
Initial DRAM alloc ratio  = 75%
Allocation-size           = 171798691840 (160GB)
Initial DRAM alloc-size   = 128849018880 (120GB)
Initial CXL alloc-size    = 42949672960 (40GB)
Hot-region-size           = 1073741824 (1GB)
Nr-regions                = 160
Nr-regions DRAM           = 120 (provisioned but not accessed)
Nr-hot-regions CXL        = 40
Access pattern            = random
Access granularity        = 4096
Delay b/n accesses        = 0
Load/store ratio          = 50l50s
THP used                  = no
Nr accesses               = 42949672960
Nr repetitions            = 1024

Hotness sources
---------------
NUMAB0 - NUMA Balancing disabled in the base case and no source enabled
	 in the patched case. No migrations occur.
NUMAB2 - Existing hot page promotion in the base case and
	 NUMA hint faults as the source in the patched case.
pgtscan - klruscand (MGLRU-based PTE A-bit scanning) as the source
hwhints - IBS as the source

Time taken (microseconds, lower is better)
----------------------------------------------
Source	Base		Patched		Change
----------------------------------------------
NUMAB0	63,036,030	64,441,675	+2.2%
NUMAB2	62,286,691	68,786,394	+10.4%(#)
pgtscan	NA		68,702,226	NA
hwhints	NA		67,455,607	NA
----------------------------------------------

Pages migrated (pgpromote_success)
----------------------------------------------
Source	Base		Patched
----------------------------------------------
NUMAB0	0		0
NUMAB2	82134(*)	0(#)
pgtscan	NA		6,561,136
hwhints	NA		3,293($)
----------------------------------------------
(#) Unlike base NUMAB2, pghot migrates after 2 accesses.
    Getting two successive accesses within the observation window is hard
    with NUMA hint faults; the default sysctl_numa_balancing_scan_size of
    256MB is too small to generate a significant number of hint faults.
(*) High run-to-run variation, so the average isn't really representative.
    The hint fault latency mostly comes out higher than the default 1s
    threshold, preventing migrations.
($) Sampling limitation.

Pages demoted (pgdemote_kswapd+pgdemote_direct)
(These numbers are not really a comparison point; they are provided to
show that the workload results in both promotion and demotion.)
----------------------------------------------
Source	Base		Patched
----------------------------------------------
NUMAB0	5,222,366	5,341,502
NUMAB2	5,256,310	5,325,845
pgtscan	NA		5,317,709
hwhints	NA		5,287,091
----------------------------------------------

Promotion candidate pages (pgpromote_candidate)
----------------------------------------------
Source	Base		Patched
----------------------------------------------
NUMAB0	0		0
NUMAB2	82,848		0
pgtscan	NA		0
hwhints	NA		0
----------------------------------------------

Non-rate limited Promotion candidate pages (pgpromote_candidate_nrl)
----------------------------------------------
Source	Base		Patched
----------------------------------------------
NUMAB0	0		0
NUMAB2	0		0
pgtscan	NA		6,561,147
hwhints	NA		3,292
----------------------------------------------