From: Mathieu Desnoyers
To: Shakeel Butt, Mateusz Guzik, Sweet Tea Dorminy, Vlastimil Babka
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [RFC PATCH v8 2/2] mm: Fix OOM killer inaccuracy on large many-core systems
Date: Fri, 7 Nov 2025 12:22:16 -0500
Message-Id: <20251107172216.515754-3-mathieu.desnoyers@efficios.com>
In-Reply-To: <20251107172216.515754-1-mathieu.desnoyers@efficios.com>
References: <20251107172216.515754-1-mathieu.desnoyers@efficios.com>

Use hierarchical per-cpu counters for rss tracking, fixing per-mm RSS
tracking which has become too inaccurate for OOM killer purposes on
large many-core systems.
The following rss tracking issues were noted by Sweet Tea Dorminy [1],
which led to picking the wrong tasks as OOM kill targets:

  Recently, several internal services had an RSS usage regression as part
  of a kernel upgrade. Previously, they were on a pre-6.2 kernel and were
  able to read RSS statistics in a backup watchdog process to monitor and
  decide if they'd overrun their memory budget. Now, however, a
  representative service with five threads, expected to use about a
  hundred MB of memory, on a 250-cpu machine had memory usage tens of
  megabytes different from the expected amount -- this constituted a
  significant percentage of inaccuracy, causing the watchdog to act.

  This was a result of f1a7941243c1 ("mm: convert mm's rss stats into
  percpu_counter") [1]. Previously, the memory error was bounded by
  64*nr_threads pages, a very livable megabyte. Now, however, as a result
  of scheduler decisions moving the threads around the CPUs, the memory
  error could be as large as a gigabyte.

  This is a really tremendous inaccuracy for any few-threaded program on
  a large machine and impedes monitoring significantly. These stat
  counters are also used to make OOM killing decisions, so this
  additional inaccuracy could make a big difference in OOM situations --
  either resulting in the wrong process being killed, or in less memory
  being returned from an OOM-kill than expected.

Here is a (possibly incomplete) list of the prior approaches that were
used or proposed, along with their downsides:

1) Per-thread rss tracking: large error on many-thread processes.

2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
   increased system time in make test workloads [1]. Moreover, the
   inaccuracy increases as O(n^2) with the number of CPUs.

3) Per-NUMA-node counters: requires atomics on the fast path (overhead),
   and the error is high on systems with many NUMA nodes (32 times the
   number of NUMA nodes).
The approach proposed here is to replace this with hierarchical per-cpu
counters, which bound the inaccuracy based on the system topology at
O(N*log(N)).

Commit 82241a83cd15 from Baolin Wang introduced get_mm_counter_sum()
for precise /proc memory status queries. Implement it with
percpu_counter_tree_precise_sum(), since it is not a fast path and
precision is preferred over speed.

Link: https://lore.kernel.org/lkml/20250331223516.7810-2-sweettea-kernel@dorminy.me/ # [1]
Link: https://lore.kernel.org/lkml/20250704150226.47980-1-mathieu.desnoyers@efficios.com/
Signed-off-by: Mathieu Desnoyers
Cc: Andrew Morton
Cc: "Paul E. McKenney"
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Mathieu Desnoyers
Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Martin Liu
Cc: David Rientjes
Cc: christian.koenig@amd.com
Cc: Shakeel Butt
Cc: SeongJae Park
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Sweet Tea Dorminy
Cc: Lorenzo Stoakes
Cc: "Liam R. Howlett"
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Christian Brauner
Cc: Wei Yang
Cc: David Hildenbrand
Cc: Miaohe Lin
Cc: Al Viro
Cc: linux-mm@kvack.org
Cc: linux-trace-kernel@vger.kernel.org
Cc: Yu Zhao
Cc: Roman Gushchin
Cc: Mateusz Guzik
Cc: Matthew Wilcox
Cc: Baolin Wang
Cc: Aboorva Devarajan
---
Changes since v7:
- Use the precise sum positive API to handle a scenario where an unlucky
  precise sum iteration would observe negative counter values due to
  concurrent updates.

Changes since v6:
- Rebased on v6.18-rc3.
- Implement get_mm_counter_sum as percpu_counter_tree_precise_sum for
  /proc virtual files memory state queries.

Changes since v5:
- Use percpu_counter_tree_approximate_sum_positive.

Changes since v4:
- get_mm_counter needs to return 0 or a positive value.
  get_mm_counter_sum -> precise sum positive
---
 include/linux/mm.h          | 10 +++++-----
 include/linux/mm_types.h    |  4 ++--
 include/trace/events/kmem.h |  2 +-
 kernel/fork.c               | 32 +++++++++++++++++++++-----------
 4 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d16b33bacc32..987069c0dccc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2679,33 +2679,33 @@ static inline bool get_user_page_fast_only(unsigned long addr,
  */
 static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
 {
-	return percpu_counter_read_positive(&mm->rss_stat[member]);
+	return percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member]);
 }
 
 static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
 {
-	return percpu_counter_sum_positive(&mm->rss_stat[member]);
+	return percpu_counter_tree_precise_sum_positive(&mm->rss_stat[member]);
 }
 
 void mm_trace_rss_stat(struct mm_struct *mm, int member);
 
 static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
 {
-	percpu_counter_add(&mm->rss_stat[member], value);
+	percpu_counter_tree_add(&mm->rss_stat[member], value);
 
 	mm_trace_rss_stat(mm, member);
 }
 
 static inline void inc_mm_counter(struct mm_struct *mm, int member)
 {
-	percpu_counter_inc(&mm->rss_stat[member]);
+	percpu_counter_tree_add(&mm->rss_stat[member], 1);
 
 	mm_trace_rss_stat(mm, member);
 }
 
 static inline void dec_mm_counter(struct mm_struct *mm, int member)
 {
-	percpu_counter_dec(&mm->rss_stat[member]);
+	percpu_counter_tree_add(&mm->rss_stat[member], -1);
 
 	mm_trace_rss_stat(mm, member);
 }
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 90e5790c318f..adb2f227bac7 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -18,7 +18,7 @@
 #include
 #include
 #include
-#include <linux/percpu_counter.h>
+#include <linux/percpu_counter_tree.h>
 #include
 #include
 
@@ -1119,7 +1119,7 @@ struct mm_struct {
 		unsigned long saved_e_flags;
 #endif
 
-		struct percpu_counter rss_stat[NR_MM_COUNTERS];
+		struct percpu_counter_tree rss_stat[NR_MM_COUNTERS];
 
 		struct linux_binfmt *binfmt;
 
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 7f93e754da5c..91c81c44f884 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -442,7 +442,7 @@ TRACE_EVENT(rss_stat,
 		__entry->mm_id = mm_ptr_to_hash(mm);
 		__entry->curr = !!(current->mm == mm);
 		__entry->member = member;
-		__entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])
+		__entry->size = (percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member])
 				<< PAGE_SHIFT);
 	),
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 3da0f08615a9..e3dd00809cf3 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -133,6 +133,11 @@
  */
 #define MAX_THREADS FUTEX_TID_MASK
 
+/*
+ * Batch size of rss stat approximation
+ */
+#define RSS_STAT_BATCH_SIZE 32
+
 /*
  * Protected counters by write_lock_irq(&tasklist_lock)
  */
@@ -583,14 +588,12 @@ static void check_mm(struct mm_struct *mm)
 			 "Please make sure 'struct resident_page_types[]' is updated as well");
 
 	for (i = 0; i < NR_MM_COUNTERS; i++) {
-		long x = percpu_counter_sum(&mm->rss_stat[i]);
-
-		if (unlikely(x)) {
-			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld Comm:%s Pid:%d\n",
-				 mm, resident_page_types[i], x,
+		if (unlikely(percpu_counter_tree_precise_compare_value(&mm->rss_stat[i], 0) != 0))
+			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%d Comm:%s Pid:%d\n",
+				 mm, resident_page_types[i],
+				 percpu_counter_tree_precise_sum(&mm->rss_stat[i]),
 				 current->comm, task_pid_nr(current));
-		}
 	}
 
 	if (mm_pgtables_bytes(mm))
@@ -673,6 +676,8 @@ static void cleanup_lazy_tlbs(struct mm_struct *mm)
  */
 void __mmdrop(struct mm_struct *mm)
 {
+	int i;
+
 	BUG_ON(mm == &init_mm);
 	WARN_ON_ONCE(mm == current->mm);
 
@@ -688,8 +693,8 @@ void __mmdrop(struct mm_struct *mm)
 	put_user_ns(mm->user_ns);
 	mm_pasid_drop(mm);
 	mm_destroy_cid(mm);
-	percpu_counter_destroy_many(mm->rss_stat, NR_MM_COUNTERS);
-
+	for (i = 0; i < NR_MM_COUNTERS; i++)
+		percpu_counter_tree_destroy(&mm->rss_stat[i]);
 	free_mm(mm);
 }
 EXPORT_SYMBOL_GPL(__mmdrop);
@@ -1030,6 +1035,8 @@ static void mmap_init_lock(struct mm_struct *mm)
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	struct user_namespace *user_ns)
 {
+	int i;
+
 	mt_init_flags(&mm->mm_mt, MM_MT_FLAGS);
 	mt_set_external_lock(&mm->mm_mt, &mm->mmap_lock);
 	atomic_set(&mm->mm_users, 1);
@@ -1083,15 +1090,18 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	if (mm_alloc_cid(mm, p))
 		goto fail_cid;
 
-	if (percpu_counter_init_many(mm->rss_stat, 0, GFP_KERNEL_ACCOUNT,
-				     NR_MM_COUNTERS))
-		goto fail_pcpu;
+	for (i = 0; i < NR_MM_COUNTERS; i++) {
+		if (percpu_counter_tree_init(&mm->rss_stat[i], RSS_STAT_BATCH_SIZE, GFP_KERNEL_ACCOUNT))
+			goto fail_pcpu;
+	}
 
 	mm->user_ns = get_user_ns(user_ns);
 	lru_gen_init_mm(mm);
 	return mm;
 
 fail_pcpu:
+	for (i--; i >= 0; i--)
+		percpu_counter_tree_destroy(&mm->rss_stat[i]);
 	mm_destroy_cid(mm);
 fail_cid:
 	destroy_context(mm);
-- 
2.39.5