From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/5] mm/vmstat: remove remote node draining
Date: Wed, 01 Feb 2023 16:50:14 -0300
Message-ID: <20230201195104.411744803@redhat.com>
References: <20230201195013.881721887@redhat.com>

Draining of pages from the local pcp for a remote zone was necessary
since:

  "Note that remote node draining is a somewhat esoteric feature that
   is required on large NUMA systems because otherwise significant
   portions of system memory can become trapped in pcp queues. The
   number of pcps is determined by the number of processors and nodes
   in a system. A system with 4 processors and 2 nodes has 8 pcps,
   which is okay. But a system with 1024 processors and 512 nodes has
   512k pcps, with a high potential for large amounts of memory being
   caught in them."
Since commit 443c2accd1b6679a1320167f8f56eed6536b806e ("mm/page_alloc:
remotely drain per-cpu lists"), drain_all_pages() is able to remotely
free those pages when necessary.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/include/linux/mmzone.h
===================================================================
--- linux-vmstat-remote.orig/include/linux/mmzone.h
+++ linux-vmstat-remote/include/linux/mmzone.h
@@ -577,9 +577,6 @@ struct per_cpu_pages {
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
 	short free_factor;	/* batch scaling factor during free */
-#ifdef CONFIG_NUMA
-	short expire;		/* When 0, remote pagesets are drained */
-#endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[NR_PCP_LISTS];

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -803,7 +803,7 @@ static int fold_diff(int *zone_diff, int
  *
  * The function returns the number of global counters updated.
  */
-static int refresh_cpu_vm_stats(bool do_pagesets)
+static int refresh_cpu_vm_stats(void)
 {
 	struct pglist_data *pgdat;
 	struct zone *zone;
@@ -814,9 +814,6 @@ static int refresh_cpu_vm_stats(bool do_
 
 	for_each_populated_zone(zone) {
 		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
-#ifdef CONFIG_NUMA
-		struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
-#endif
 
 		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 			int v;
@@ -826,44 +823,8 @@ static int refresh_cpu_vm_stats(bool do_
 
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
-#ifdef CONFIG_NUMA
-				/* 3 seconds idle till flush */
-				__this_cpu_write(pcp->expire, 3);
-#endif
 			}
 		}
-#ifdef CONFIG_NUMA
-
-		if (do_pagesets) {
-			cond_resched();
-			/*
-			 * Deal with draining the remote pageset of this
-			 * processor
-			 *
-			 * Check if there are pages remaining in this pageset
-			 * if not then there is nothing to expire.
-			 */
-			if (!__this_cpu_read(pcp->expire) ||
-			       !__this_cpu_read(pcp->count))
-				continue;
-
-			/*
-			 * We never drain zones local to this processor.
-			 */
-			if (zone_to_nid(zone) == numa_node_id()) {
-				__this_cpu_write(pcp->expire, 0);
-				continue;
-			}
-
-			if (__this_cpu_dec_return(pcp->expire))
-				continue;
-
-			if (__this_cpu_read(pcp->count)) {
-				drain_zone_pages(zone, this_cpu_ptr(pcp));
-				changes++;
-			}
-		}
-#endif
 	}
 
 	for_each_online_pgdat(pgdat) {
@@ -1864,7 +1825,7 @@ int sysctl_stat_interval __read_mostly =
 #ifdef CONFIG_PROC_FS
 static void refresh_vm_stats(struct work_struct *work)
 {
-	refresh_cpu_vm_stats(true);
+	refresh_cpu_vm_stats();
 }
 
 int vmstat_refresh(struct ctl_table *table, int write,
@@ -1928,7 +1889,7 @@ int vmstat_refresh(struct ctl_table *tab
 
 static void vmstat_update(struct work_struct *w)
 {
-	if (refresh_cpu_vm_stats(true)) {
+	if (refresh_cpu_vm_stats()) {
 		/*
 		 * Counters were updated so we expect more updates
 		 * to occur in the future. Keep on running the
@@ -1991,7 +1952,7 @@ void quiet_vmstat(void)
 	 * it would be too expensive from this path.
 	 * vmstat_shepherd will take care about that for us.
	 */
-	refresh_cpu_vm_stats(false);
+	refresh_cpu_vm_stats();
 }
 
 /*


From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/5] mm/vmstat: switch counter modification to cmpxchg
Date: Wed, 01 Feb 2023 16:50:15 -0300
Message-ID: <20230201195104.436627422@redhat.com>
References: <20230201195013.881721887@redhat.com>

In preparation for switching the vmstat shepherd to flush per-CPU
counters remotely, switch all functions that modify the counters to
use cmpxchg.
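The idiom being adopted is a read/compute/this_cpu_cmpxchg() retry
loop in place of an interrupt-disabled read-modify-write. As a rough
userspace analog of that loop, with C11 atomics and ordinary globals
standing in for the kernel's per-CPU primitives (the names and the
threshold value below are illustrative only, not kernel API):

	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-ins for one CPU's vm_stat_diff slot and the zone total. */
	static _Atomic long vm_stat_diff;
	static _Atomic long zone_vm_stat;
	static long stat_threshold = 32;	/* illustrative drift bound */

	/* Mirrors the shape of mod_zone_state(): retry until our update wins. */
	static void mod_state(long delta, int overstep_mode)
	{
		long o, n, z;

		do {
			z = 0;			/* overflow to the global counter */
			o = atomic_load(&vm_stat_diff);
			n = o + delta;
			if (labs(n) > stat_threshold) {
				long os = overstep_mode * (stat_threshold >> 1);

				z = n + os;	/* push the excess to the zone */
				n = -os;
			}
		/* compare_exchange plays the role of this_cpu_cmpxchg(*p, o, n) */
		} while (!atomic_compare_exchange_weak(&vm_stat_diff, &o, n));

		if (z)
			atomic_fetch_add(&zone_vm_stat, z);
	}

	int main(void)
	{
		for (int i = 0; i < 1000; i++)
			mod_state(1, 1);	/* like inc_zone_page_state() */
		printf("diff=%ld zone=%ld\n", atomic_load(&vm_stat_diff),
		       atomic_load(&zone_vm_stat));
		return 0;
	}

The overstep bias of half the threshold means a counter that just
overflowed restarts its accumulation from -t/2 (or +t/2), roughly
halving how often a monotonically moving counter must touch the
expensive global atomic.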
To measure the performance difference, a page allocator microbenchmark

  https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c

was used with loops=1000000, on an Intel Core i7-11850H @ 2.50GHz.
For the single_page_alloc_free test, which does

	/** Loop to measure **/
	for (i = 0; i < rec->loops; i++) {
		my_page = alloc_page(gfp_mask);
		if (unlikely(my_page == NULL))
			return 0;
		__free_page(my_page);
	}

the results, in cycles, were:

		Vanilla		Patched		Diff
		159		156		-1.9%

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -334,6 +334,188 @@ void set_pgdat_percpu_threshold(pg_data_
 	}
 }
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/*
+ * If we have cmpxchg_local support then we do not need to incur the overhead
+ * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
+ *
+ * mod_state() modifies the zone counter state through atomic per cpu
+ * operations.
+ *
+ * Overstep mode specifies how overstep should be handled:
+ *        0       No overstepping
+ *        1       Overstepping half of threshold
+ *        -1      Overstepping minus half of threshold
+ */
+static inline void mod_zone_state(struct zone *zone, enum zone_stat_item item,
+				  long delta, int overstep_mode)
+{
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
+	s8 __percpu *p = pcp->vm_stat_diff + item;
+	long o, n, t, z;
+
+	do {
+		z = 0;  /* overflow to zone counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyway
+		 * for all cpus in a zone.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to zone counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		zone_page_state_add(z, zone, item);
+}
+
+void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			 long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_zone_page_state);
+
+void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			   long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_zone_page_state);
+
+void inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_zone_page_state);
+
+void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_zone_page_state);
+
+void dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_zone_page_state);
+
+void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_zone_page_state);
+
+static inline void mod_node_state(struct pglist_data *pgdat,
+				  enum node_stat_item item,
+				  int delta, int overstep_mode)
+{
+	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
+	s8 __percpu *p = pcp->vm_node_stat_diff + item;
+	long o, n, t, z;
+
+	if (vmstat_item_in_bytes(item)) {
+		/*
+		 * Only cgroups use subpage accounting right now; at
+		 * the global level, these items still change in
+		 * multiples of whole pages. Store them as pages
+		 * internally to keep the per-cpu counters compact.
+		 */
+		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
+		delta >>= PAGE_SHIFT;
+	}
+
+	do {
+		z = 0;  /* overflow to node counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyway
+		 * for all cpus in a node.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to node counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		node_page_state_add(z, pgdat, item);
+}
+
+void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			 long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_node_page_state);
+
+void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			   long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_node_page_state);
+
+void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+{
+	mod_node_state(pgdat, item, 1, 1);
+}
+
+void inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_node_page_state);
+
+void __inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_node_page_state);
+
+void dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_node_page_state);
+
+void __dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_node_page_state);
+#else
 /*
  * For use when we know that interrupts are disabled,
  * or when we know that preemption is disabled and that
@@ -541,149 +723,6 @@ void __dec_node_page_state(struct page *
 }
 EXPORT_SYMBOL(__dec_node_page_state);
 
-#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
-/*
- * If we have cmpxchg_local support then we do not need to incur the overhead
- * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
- *
- * mod_state() modifies the zone counter state through atomic per cpu
- * operations.
- *
- * Overstep mode specifies how overstep should handled:
- *        0       No overstepping
- *        1       Overstepping half of threshold
- *        -1      Overstepping minus half of threshold
-*/
-static inline void mod_zone_state(struct zone *zone,
-	enum zone_stat_item item, long delta, int overstep_mode)
-{
-	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
-	s8 __percpu *p = pcp->vm_stat_diff + item;
-	long o, n, t, z;
-
-	do {
-		z = 0;  /* overflow to zone counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a zone.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to zone counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		zone_page_state_add(z, zone, item);
-}
-
-void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
-			 long delta)
-{
-	mod_zone_state(zone, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_zone_page_state);
-
-void inc_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_zone_page_state);
-
-void dec_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_zone_page_state);
-
-static inline void mod_node_state(struct pglist_data *pgdat,
-	enum node_stat_item item, int delta, int overstep_mode)
-{
-	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	long o, n, t, z;
-
-	if (vmstat_item_in_bytes(item)) {
-		/*
-		 * Only cgroups use subpage accounting right now; at
-		 * the global level, these items still change in
-		 * multiples of whole pages. Store them as pages
-		 * internally to keep the per-cpu counters compact.
-		 */
-		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
-		delta >>= PAGE_SHIFT;
-	}
-
-	do {
-		z = 0;  /* overflow to node counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a node.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to node counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		node_page_state_add(z, pgdat, item);
-}
-
-void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
-			 long delta)
-{
-	mod_node_state(pgdat, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_node_page_state);
-
-void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
-{
-	mod_node_state(pgdat, item, 1, 1);
-}
-
-void inc_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_node_page_state);
-
-void dec_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_node_page_state);
-#else
 /*
  * Use interrupt disable to serialize counter updates
  */


From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/5] mm/vmstat: use cmpxchg loop in cpu_vm_stats_fold
Date: Wed, 01 Feb 2023 16:50:16 -0300
Message-ID: <20230201195104.460373427@redhat.com>
References: <20230201195013.881721887@redhat.com>

In preparation for switching the vmstat shepherd to flush per-CPU
counters remotely, use a cmpxchg loop in cpu_vm_stats_fold() instead
of a pair of read/write instructions.

Signed-off-by: Marcelo Tosatti
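The pattern below is a snapshot-and-clear: the fold keeps retrying
until it zeroes exactly the value it read, so an update racing in from
the counter's owner CPU is never lost between the read and the write.
A minimal sketch of the same loop, with a C11 atomic standing in for
the kernel's cmpxchg() (names are illustrative, not the kernel
interface):

	#include <stdatomic.h>
	#include <stdio.h>

	static _Atomic long vm_stat_diff = 7;	/* stand-in for pzstats->vm_stat_diff[i] */

	/*
	 * Snapshot-and-clear, as cpu_vm_stats_fold() now does it: retry
	 * until we zero exactly the value we read, so a concurrent
	 * cmpxchg-based update by the owner CPU is never lost.
	 */
	static long fold_one_counter(_Atomic long *p)
	{
		long v;

		do {
			v = atomic_load(p);
		} while (!atomic_compare_exchange_weak(p, &v, 0));

		return v;	/* the caller folds v into the global counter */
	}

	int main(void)
	{
		printf("folded %ld, remaining %ld\n",
		       fold_one_counter(&vm_stat_diff),
		       atomic_load(&vm_stat_diff));
		return 0;
	}

Note the design choice: the folder uses the full cmpxchg(), which is
atomic across CPUs, while the hot account functions from the previous
patch use the cheaper this_cpu_cmpxchg(); only the cold folding path
pays for cross-CPU atomicity.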
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -885,7 +885,7 @@ static int refresh_cpu_vm_stats(void)
 }
 
 /*
- * Fold the data for an offline cpu into the global array.
+ * Fold the data for a cpu into the global array.
  * There cannot be any access by the offline cpu and therefore
  * synchronization is simplified.
  */
@@ -906,8 +906,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (pzstats->vm_stat_diff[i]) {
 			int v;
 
-			v = pzstats->vm_stat_diff[i];
-			pzstats->vm_stat_diff[i] = 0;
+			do {
+				v = pzstats->vm_stat_diff[i];
+			} while (cmpxchg(&pzstats->vm_stat_diff[i], v, 0) != v);
 			atomic_long_add(v, &zone->vm_stat[i]);
 			global_zone_diff[i] += v;
 		}
@@ -917,8 +918,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (pzstats->vm_numa_event[i]) {
 			unsigned long v;
 
-			v = pzstats->vm_numa_event[i];
-			pzstats->vm_numa_event[i] = 0;
+			do {
+				v = pzstats->vm_numa_event[i];
+			} while (cmpxchg(&pzstats->vm_numa_event[i], v, 0) != v);
 			zone_numa_event_add(v, zone, i);
 		}
 	}
@@ -934,8 +936,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (p->vm_node_stat_diff[i]) {
 			int v;
 
-			v = p->vm_node_stat_diff[i];
-			p->vm_node_stat_diff[i] = 0;
+			do {
+				v = p->vm_node_stat_diff[i];
+			} while (cmpxchg(&p->vm_node_stat_diff[i], v, 0) != v);
 			atomic_long_add(v, &pgdat->vm_stat[i]);
 			global_node_diff[i] += v;
 		}


From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 4/5] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely
Date: Wed, 01 Feb 2023 16:50:17 -0300
Message-ID: <20230201195104.484635830@redhat.com>
References: <20230201195013.881721887@redhat.com>

Now that the counters are modified via cmpxchg both CPU-locally (via
the account functions) and remotely (via cpu_vm_stats_fold), it is
possible to switch vmstat_shepherd to perform the per-CPU vmstat
folding remotely.

This fixes the following two problems:

1. A customer provided evidence indicating that the idle tick was
   stopped, yet CPU-specific vmstat counters remained populated; one
   can only assume quiet_vmstat() was not invoked on return to the
   idle loop. This divergence might erroneously prevent a reclaim
   attempt by kswapd: if the number of zone-specific free pages is
   below the per-cpu drift value, zone_page_state_snapshot() is used
   to compute a more accurate view of that statistic, and any task
   blocked on the NUMA node's pfmemalloc_wait queue will be unable to
   make significant progress via direct reclaim unless it is killed
   after being woken up by kswapd (see throttle_direct_reclaim()).

2. With a SCHED_FIFO task that busy loops on a given CPU, and the
   kworker for that CPU at SCHED_OTHER priority, queuing work to sync
   per-CPU vmstats will either cause that work to never execute, or
   stalld (the stall daemon) will boost the kworker's priority, which
   causes a latency violation.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -2007,6 +2007,23 @@ static void vmstat_shepherd(struct work_
 
 static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
+static void vmstat_shepherd(struct work_struct *w)
+{
+	int cpu;
+
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		cpu_vm_stats_fold(cpu);
+		cond_resched();
+	}
+	cpus_read_unlock();
+
+	schedule_delayed_work(&shepherd,
+		round_jiffies_relative(sysctl_stat_interval));
+}
+#else
 static void vmstat_shepherd(struct work_struct *w)
 {
 	int cpu;
@@ -2026,6 +2043,7 @@ static void vmstat_shepherd(struct work_
 	schedule_delayed_work(&shepherd,
 		round_jiffies_relative(sysctl_stat_interval));
 }
+#endif
 
 static void __init start_shepherd_timer(void)
 {
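Why this is now safe: once the local account functions (patch 2) and
the remote fold (patch 3) both update a counter with compare-exchange
on the same word, no increment can be lost under any interleaving. A
self-contained two-thread sketch of that invariant, again with C11
atomics and pthreads as stand-ins for the kernel primitives (all names
here are illustrative):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/* One "per-CPU" diff and the global counter it folds into. */
	static _Atomic long diff;
	static _Atomic long global_count;
	static _Atomic int done;

	/* The "local CPU": lock-free increments, like the account functions. */
	static void *updater(void *arg)
	{
		for (int i = 0; i < 1000000; i++) {
			long o = atomic_load(&diff);
			while (!atomic_compare_exchange_weak(&diff, &o, o + 1))
				;	/* retry with the refreshed value */
		}
		atomic_store(&done, 1);
		return NULL;
	}

	/* The "vmstat shepherd": remote snapshot-and-clear via cmpxchg. */
	static void *shepherd(void *arg)
	{
		while (!atomic_load(&done)) {
			long v = atomic_load(&diff);
			while (!atomic_compare_exchange_weak(&diff, &v, 0))
				;	/* retry until we clear what we read */
			atomic_fetch_add(&global_count, v);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, updater, NULL);
		pthread_create(&t2, NULL, shepherd, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);

		/* No update is lost: folded total plus residue is 1000000. */
		printf("%ld\n", atomic_load(&global_count) + atomic_load(&diff));
		return 0;
	}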
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 5/5] mm/vmstat: refresh stats remotely instead of via work item
Date: Wed, 01 Feb 2023 16:50:18 -0300
Message-ID: <20230201195104.507817318@redhat.com>
References: <20230201195013.881721887@redhat.com>

Refresh per-CPU stats remotely, instead of queueing work items, for
the stat_refresh procfs method.

This fixes a hang in sosreport (which uses vmstat_refresh) when a
spinning SCHED_FIFO process monopolizes the target CPU.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1865,11 +1865,21 @@ static DEFINE_PER_CPU(struct delayed_wor
 int sysctl_stat_interval __read_mostly = HZ;
 
 #ifdef CONFIG_PROC_FS
+
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+static int refresh_all_vm_stats(void);
+#else
 static void refresh_vm_stats(struct work_struct *work)
 {
 	refresh_cpu_vm_stats();
 }
 
+static int refresh_all_vm_stats(void)
+{
+	return schedule_on_each_cpu(refresh_vm_stats);
+}
+#endif
+
 int vmstat_refresh(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -1889,7 +1899,7 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
+	err = refresh_all_vm_stats();
 	if (err)
 		return err;
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
@@ -2009,7 +2019,7 @@ static DECLARE_DEFERRABLE_WORK(shepherd,
 
 #ifdef CONFIG_HAVE_CMPXCHG_LOCAL
 /* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
-static void vmstat_shepherd(struct work_struct *w)
+static int refresh_all_vm_stats(void)
 {
 	int cpu;
 
@@ -2019,7 +2029,12 @@ static void vmstat_shepherd(struct work_
 		cond_resched();
 	}
 	cpus_read_unlock();
+	return 0;
+}
 
+static void vmstat_shepherd(struct work_struct *w)
+{
+	refresh_all_vm_stats();
 	schedule_delayed_work(&shepherd,
 		round_jiffies_relative(sysctl_stat_interval));
 }
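For reference, vmstat_refresh() is the handler behind the
/proc/sys/vm/stat_refresh sysctl, so the remote path above is
exercised by touching that file (sosreport does exactly this). A
minimal userspace trigger, sketched under the assumption that writing
any value forces the flush and that a negative statistic is reported
as an error:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Writing forces a flush of per-CPU deltas into the globals. */
		int fd = open("/proc/sys/vm/stat_refresh", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, "1", 1) < 0)
			perror("write");	/* an error may signal a negative stat */
		close(fd);
		return 0;
	}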