Date: Mon, 24 Mar 2025 15:03:00 -0700
In-Reply-To: <20250324220301.1273038-1-kinseyho@google.com>
References: <20250324220301.1273038-1-kinseyho@google.com>
X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog
Message-ID: <20250324220301.1273038-2-kinseyho@google.com>
Subject: [RFC PATCH v1 1/2] mm: mglru: generalize page table walk
From: Kinsey Ho <kinseyho@google.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuanchu@google.com, AneeshKumar.KizhakeVeetil@arm.com,
    Hasan.Maruf@amd.com, Jonathan.Cameron@huawei.com, Michael.Day@amd.com,
    akpm@linux-foundation.org, dave.hansen@intel.com, david@redhat.com,
    feng.tang@intel.com, gourry@gourry.net, hannes@cmpxchg.org,
    honggyu.kim@sk.com, hughd@google.com, jhubbard@nvidia.com,
    k.shutemov@gmail.com, kbusch@meta.com, kmanaouil.dev@gmail.com,
    leesuyeon0506@gmail.com, leillc@google.com, liam.howlett@oracle.com,
    mgorman@techsingularity.net, mingo@redhat.com, nadav.amit@gmail.com,
    nphamcs@gmail.com, peterz@infradead.org, raghavendra.kt@amd.com,
    riel@surriel.com, rientjes@google.com, rppt@kernel.org,
    shivankg@amd.com, shy828301@gmail.com, sj@kernel.org, vbabka@suse.cz,
    weixugc@google.com, willy@infradead.org, ying.huang@linux.alibaba.com,
    ziy@nvidia.com, dave@stgolabs.net, hyeonggon.yoo@sk.com,
    bharata@amd.com, Kinsey Ho <kinseyho@google.com>

Refactor the existing MGLRU page table walking logic to make it
resumable: a walk can now stop partway through an mm and pick up again
at walk->next_addr.

Additionally, introduce two hooks into the MGLRU page table walk: an
accessed callback and a flush callback. The accessed callback is
invoked for each accessed page detected via a set accessed bit during
the scan. The flush callback is invoked when the accessed callback
reports an out-of-space error. This allows pages to be processed in
batches for efficiency.

With the generalized page table walk, introduce a new scan function
that repeatedly scans the same young generation instead of adding a
new young generation on each scan.
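To make the callback contract concrete, here is a minimal sketch of a
hypothetical consumer. It is not part of this patch: the names
(record_hot_pfn, flush_hot_batch, scan_hot_pages) and the batch size
are invented for illustration.

/*
 * Illustrative sketch only -- a hypothetical consumer that batches
 * accessed PFNs in a fixed-size buffer.
 */
#define HOT_BATCH_SIZE	64

static pfn_t hot_batch[HOT_BATCH_SIZE];
static int hot_batch_len;

/* accessed_cb: called under the page table lock, so only record the PFN */
static int record_hot_pfn(pfn_t pfn)
{
	if (hot_batch_len == HOT_BATCH_SIZE)
		return -EAGAIN;	/* out of space: the walk pauses at next_addr */

	hot_batch[hot_batch_len++] = pfn;
	return 0;
}

/* flush_cb: called between walk attempts, safe to do heavier work here */
static void flush_hot_batch(void)
{
	/* e.g. hand hot_batch[0..hot_batch_len) to a promotion policy */
	hot_batch_len = 0;
}

/* scan generation @seq of @lruvec without creating a new generation */
static void scan_hot_pages(struct lruvec *lruvec, unsigned long seq)
{
	lru_gen_scan_lruvec(lruvec, seq, record_hot_pfn, flush_hot_batch);
}

In this sketch, returning -EAGAIN from the accessed callback relies on
the retry loop in lru_gen_scan_lruvec() below: try_walk_mm() propagates
the error, flush_cb() drains the batch, and the walk resumes from
walk->next_addr. Note also that lru_gen_scan_lruvec() dereferences
current->reclaim_state->mm_walk, which is presumably why
set_task_reclaim_state() is made non-static: a caller must install a
reclaim_state before scanning.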
Signed-off-by: Kinsey Ho <kinseyho@google.com>
Signed-off-by: Yuanchu Xie <yuanchu@google.com>
---
 include/linux/mmzone.h |   5 ++
 mm/internal.h          |   4 +
 mm/vmscan.c            | 177 ++++++++++++++++++++++++++++++-----------
 3 files changed, 140 insertions(+), 46 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a5c4e789aa55..bab586961a82 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -511,6 +511,8 @@ struct lru_gen_mm_walk {
 	unsigned long seq;
 	/* the next address within an mm to scan */
 	unsigned long next_addr;
+	/* called for each accessed pte/pmd */
+	int (*accessed_cb)(pfn_t pfn);
 	/* to batch promoted pages */
 	int nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* to batch the mm stats */
@@ -518,6 +520,9 @@
 	/* total batched items */
 	int batched;
 	int swappiness;
+	/* for the pmd under scanning */
+	int nr_young_pte;
+	int nr_total_pte;
 	bool force_scan;
 };
 
diff --git a/mm/internal.h b/mm/internal.h
index 20b3535935a3..3bf528af2deb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -476,6 +476,10 @@ extern unsigned long highest_memmap_pfn;
 bool folio_isolate_lru(struct folio *folio);
 void folio_putback_lru(struct folio *folio);
 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
+void set_task_reclaim_state(struct task_struct *task,
+			    struct reclaim_state *rs);
+void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq,
+			 int (*accessed_cb)(pfn_t), void (*flush_cb)(void));
 
 /*
  * in mm/rmap.c:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c767d71c43d7..fb828a429645 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -57,6 +57,7 @@
 #include <linux/rculist_nulls.h>
 #include <linux/random.h>
 #include <linux/mmu_notifier.h>
+#include <linux/pfn_t.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -271,7 +272,7 @@ static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
 }
 #endif
 
-static void set_task_reclaim_state(struct task_struct *task,
+void set_task_reclaim_state(struct task_struct *task,
 				   struct reclaim_state *rs)
 {
 	/* Check for an overwrite */
@@ -3023,7 +3024,7 @@ static bool iterate_mm_list(struct lru_gen_mm_walk *walk, struct mm_struct **iter)
 
 	VM_WARN_ON_ONCE(mm_state->seq + 1 < walk->seq);
 
-	if (walk->seq <= mm_state->seq)
+	if (!walk->accessed_cb && walk->seq <= mm_state->seq)
 		goto done;
 
 	if (!mm_state->head)
@@ -3452,16 +3453,14 @@ static void walk_update_folio(struct lru_gen_mm_walk *walk, struct folio *folio,
 	}
 }
 
-static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
-			   struct mm_walk *args)
+static int walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
+			  struct mm_walk *args, bool *suitable)
 {
-	int i;
+	int i, err = 0;
 	bool dirty;
 	pte_t *pte;
 	spinlock_t *ptl;
 	unsigned long addr;
-	int total = 0;
-	int young = 0;
 	struct folio *last = NULL;
 	struct lru_gen_mm_walk *walk = args->private;
 	struct mem_cgroup *memcg = lruvec_memcg(walk->lruvec);
@@ -3471,17 +3470,21 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	pmd_t pmdval;
 
 	pte = pte_offset_map_rw_nolock(args->mm, pmd, start & PMD_MASK, &pmdval, &ptl);
-	if (!pte)
-		return false;
+	if (!pte) {
+		*suitable = false;
+		return 0;
+	}
 
 	if (!spin_trylock(ptl)) {
 		pte_unmap(pte);
-		return true;
+		*suitable = true;
+		return 0;
 	}
 
 	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
 		pte_unmap_unlock(pte, ptl);
-		return false;
+		*suitable = false;
+		return 0;
 	}
 
 	arch_enter_lazy_mmu_mode();
@@ -3491,7 +3494,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		struct folio *folio;
 		pte_t ptent = ptep_get(pte + i);
 
-		total++;
+		walk->nr_total_pte++;
 		walk->mm_stats[MM_LEAF_TOTAL]++;
 
 		pfn = get_pte_pfn(ptent, args->vma, addr, pgdat);
@@ -3515,23 +3518,34 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (pte_dirty(ptent))
 			dirty = true;
 
-		young++;
+		walk->nr_young_pte++;
 		walk->mm_stats[MM_LEAF_YOUNG]++;
+
+		if (!walk->accessed_cb)
+			continue;
+
+		err = walk->accessed_cb(pfn_to_pfn_t(pfn));
+		if (err) {
+			walk->next_addr = addr + PAGE_SIZE;
+			break;
+		}
 	}
 
 	walk_update_folio(walk, last, gen, dirty);
 	last = NULL;
 
-	if (i < PTRS_PER_PTE && get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end))
+	if (!err && i < PTRS_PER_PTE &&
+	    get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end))
 		goto restart;
 
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte, ptl);
 
-	return suitable_to_scan(total, young);
+	*suitable = suitable_to_scan(walk->nr_total_pte, walk->nr_young_pte);
+	return err;
 }
 
-static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
+static int walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
 				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
 {
 	int i;
@@ -3544,6 +3558,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
 	DEFINE_MAX_SEQ(walk->lruvec);
 	int gen = lru_gen_from_seq(max_seq);
+	int err = 0;
 
 	VM_WARN_ON_ONCE(pud_leaf(*pud));
 
@@ -3551,13 +3566,13 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	if (*first == -1) {
 		*first = addr;
 		bitmap_zero(bitmap, MIN_LRU_BATCH);
-		return;
+		return 0;
 	}
 
 	i = addr == -1 ? 0 : pmd_index(addr) - pmd_index(*first);
 	if (i && i <= MIN_LRU_BATCH) {
 		__set_bit(i - 1, bitmap);
-		return;
+		return 0;
 	}
 
 	pmd = pmd_offset(pud, *first);
@@ -3607,6 +3622,16 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 			dirty = true;
 
 		walk->mm_stats[MM_LEAF_YOUNG]++;
+		if (!walk->accessed_cb)
+			goto next;
+
+		err = walk->accessed_cb(pfn_to_pfn_t(pfn));
+		if (err) {
+			i = find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1;
+
+			walk->next_addr = (*first & PMD_MASK) + i * PMD_SIZE;
+			break;
+		}
 next:
 		i = i > MIN_LRU_BATCH ? 0 : find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1;
 	} while (i <= MIN_LRU_BATCH);
@@ -3617,9 +3642,10 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	spin_unlock(ptl);
 done:
 	*first = -1;
+	return err;
 }
 
-static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
+static int walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			   struct mm_walk *args)
 {
 	int i;
@@ -3631,6 +3657,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	unsigned long first = -1;
 	struct lru_gen_mm_walk *walk = args->private;
 	struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec);
+	int err = 0;
 
 	VM_WARN_ON_ONCE(pud_leaf(*pud));
 
@@ -3644,6 +3671,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	/* walk_pte_range() may call get_next_vma() */
 	vma = args->vma;
 	for (i = pmd_index(start), addr = start; addr != end; i++, addr = next) {
+		bool suitable;
 		pmd_t val = pmdp_get_lockless(pmd + i);
 
 		next = pmd_addr_end(addr, end);
@@ -3660,7 +3688,10 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			walk->mm_stats[MM_LEAF_TOTAL]++;
 
 			if (pfn != -1)
-				walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+				err = walk_pmd_range_locked(pud, addr, vma, args,
+							    bitmap, &first);
+			if (err)
+				return err;
 			continue;
 		}
 
@@ -3669,33 +3700,50 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			if (!pmd_young(val))
 				continue;
 
-			walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+			err = walk_pmd_range_locked(pud, addr, vma, args,
+						    bitmap, &first);
+			if (err)
+				return err;
 		}
 
 		if (!walk->force_scan && !test_bloom_filter(mm_state, walk->seq, pmd + i))
 			continue;
 
+		err = walk_pte_range(&val, addr, next, args, &suitable);
+		if (err && walk->next_addr < next && first == -1)
+			return err;
+
+		walk->nr_total_pte = 0;
+		walk->nr_young_pte = 0;
+
 		walk->mm_stats[MM_NONLEAF_FOUND]++;
 
-		if (!walk_pte_range(&val, addr, next, args))
-			continue;
+		if (!suitable)
+			goto next;
 
 		walk->mm_stats[MM_NONLEAF_ADDED]++;
 
		/* carry over to the next generation */
 		update_bloom_filter(mm_state, walk->seq + 1, pmd + i);
+next:
+		if (err) {
+			walk->next_addr = first;
+			return err;
+		}
 	}
 
-	walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
+	err = walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
 
-	if (i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
+	if (!err && i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
 		goto restart;
+
+	return err;
 }
 
 static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 			  struct mm_walk *args)
 {
-	int i;
+	int i, err;
 	pud_t *pud;
 	unsigned long addr;
 	unsigned long next;
@@ -3713,7 +3761,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 		if (!pud_present(val) || WARN_ON_ONCE(pud_leaf(val)))
 			continue;
 
-		walk_pmd_range(&val, addr, next, args);
+		err = walk_pmd_range(&val, addr, next, args);
+		if (err)
+			return err;
 
 		if (need_resched() || walk->batched >= MAX_LRU_BATCH) {
 			end = (addr | ~PUD_MASK) + 1;
@@ -3734,40 +3784,48 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 	return -EAGAIN;
 }
 
-static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+static int try_walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 {
+	int err;
 	static const struct mm_walk_ops mm_walk_ops = {
 		.test_walk = should_skip_vma,
 		.p4d_entry = walk_pud_range,
 		.walk_lock = PGWALK_RDLOCK,
 	};
-	int err;
 	struct lruvec *lruvec = walk->lruvec;
 
-	walk->next_addr = FIRST_USER_ADDRESS;
+	DEFINE_MAX_SEQ(lruvec);
 
-	do {
-		DEFINE_MAX_SEQ(lruvec);
+	err = -EBUSY;
 
-		err = -EBUSY;
+	/* another thread might have called inc_max_seq() */
+	if (walk->seq != max_seq)
+		return err;
 
-		/* another thread might have called inc_max_seq() */
-		if (walk->seq != max_seq)
-			break;
+	/* the caller might be holding the lock for write */
+	if (mmap_read_trylock(mm)) {
+		err = walk_page_range(mm, walk->next_addr, ULONG_MAX,
+				      &mm_walk_ops, walk);
 
-		/* the caller might be holding the lock for write */
-		if (mmap_read_trylock(mm)) {
-			err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);
+		mmap_read_unlock(mm);
+	}
 
-			mmap_read_unlock(mm);
-		}
+	if (walk->batched) {
+		spin_lock_irq(&lruvec->lru_lock);
+		reset_batch_size(walk);
+		spin_unlock_irq(&lruvec->lru_lock);
+	}
 
-		if (walk->batched) {
-			spin_lock_irq(&lruvec->lru_lock);
-			reset_batch_size(walk);
-			spin_unlock_irq(&lruvec->lru_lock);
-		}
+	return err;
+}
+
+static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+{
+	int err;
 
+	walk->next_addr = FIRST_USER_ADDRESS;
+	do {
+		err = try_walk_mm(mm, walk);
 		cond_resched();
 	} while (err == -EAGAIN);
 }
@@ -3964,6 +4022,33 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness)
 	return success;
 }
 
+void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq,
+			 int (*accessed_cb)(pfn_t), void (*flush_cb)(void))
+{
+	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;
+	struct mm_struct *mm = NULL;
+
+	walk->lruvec = lruvec;
+	walk->seq = seq;
+	walk->accessed_cb = accessed_cb;
+	walk->swappiness = MAX_SWAPPINESS;
+
+	do {
+		int err = -EBUSY;
+
+		iterate_mm_list(walk, &mm);
+		if (!mm)
+			break;
+
+		walk->next_addr = FIRST_USER_ADDRESS;
+		do {
+			err = try_walk_mm(mm, walk);
+			cond_resched();
+			flush_cb();
+		} while (err == -EAGAIN);
+	} while (mm);
+}
+
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq,
 			       int swappiness, bool force_scan)
 {
-- 
2.49.0.395.g12beb8f557-goog