From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.075275493@redhat.com>
Date: Fri, 16 Dec 2022 16:45:41 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 1/6] mm/vmstat: Add CPU-specific variable to track a vmstat discrepancy
References: <20221216194540.202752779@redhat.com>

From: Aaron Tomlin

Introduce a CPU-specific variable, vmstat_dirty, to indicate whether a
vmstat imbalance is present for a given CPU, so that all of the remaining
differentials can be folded at the appropriate time. This patch also
provides trivial helpers for setting, clearing and testing the flag.
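For orientation, the sketch below shows how the helpers introduced here are
meant to be used by the later patches in this series (writers mark the flag,
folding clears it, consumers test it). The callers shown are illustrative
simplifications, not part of this patch:

	/*
	 * Illustrative sketch only -- simplified callers, not part of this patch.
	 *
	 * 1. Counter updates mark the local CPU as having unfolded differentials:
	 *      __mod_zone_page_state()/__mod_node_page_state() -> vmstat_mark_dirty();
	 *
	 * 2. Folding clears the flag before reading the per-CPU diffs:
	 *      refresh_cpu_vm_stats() -> vmstat_clear_dirty();
	 *
	 * 3. Consumers test one per-CPU flag instead of scanning every differential:
	 */
	void quiet_vmstat(void)
	{
		if (!is_vmstat_dirty())		/* nothing to fold on this CPU */
			return;

		refresh_cpu_vm_stats(false);	/* fold the remaining differentials */
	}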
Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti
---
 mm/vmstat.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c
+++ linux-2.6/mm/vmstat.c
@@ -194,6 +194,22 @@ void fold_vm_numa_events(void)
 #endif
 
 #ifdef CONFIG_SMP
+static DEFINE_PER_CPU_ALIGNED(bool, vmstat_dirty);
+
+static inline void vmstat_mark_dirty(void)
+{
+	this_cpu_write(vmstat_dirty, true);
+}
+
+static inline void vmstat_clear_dirty(void)
+{
+	this_cpu_write(vmstat_dirty, false);
+}
+
+static inline bool is_vmstat_dirty(void)
+{
+	return this_cpu_read(vmstat_dirty);
+}
 
 int calculate_pressure_threshold(struct zone *zone)
 {

From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.115519799@redhat.com>
Date: Fri, 16 Dec 2022 16:45:42 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 2/6] mm/vmstat: Use vmstat_dirty to track CPU-specific vmstat discrepancies
References: <20221216194540.202752779@redhat.com>

From: Aaron Tomlin

Use the previously introduced CPU-specific variable vmstat_dirty to
indicate whether a vmstat differential (imbalance) is present for a given
CPU, so that vmstat processing can be initiated at the appropriate time.
The hope is that this approach is "cheaper" than need_update(). The idea
is based on Marcelo's patch [1].

[1]: https://lore.kernel.org/lkml/20220204173554.763888172@fedora.localdomain/

Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti
---
 mm/vmstat.c | 48 ++++++++++++++----------------------------------
 1 file changed, 14 insertions(+), 34 deletions(-)

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c
+++ linux-2.6/mm/vmstat.c
@@ -381,6 +381,7 @@ void __mod_zone_page_state(struct zone *
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	vmstat_mark_dirty();
 
 	preempt_enable_nested();
 }
@@ -417,6 +418,7 @@ void __mod_node_page_state(struct pglist
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	vmstat_mark_dirty();
 
 	preempt_enable_nested();
 }
@@ -606,6 +608,7 @@ static inline void mod_zone_state(struct
 
 	if (z)
 		zone_page_state_add(z, zone, item);
+	vmstat_mark_dirty();
 }
 
 void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
@@ -674,6 +677,7 @@ static inline void mod_node_state(struct
 
 	if (z)
 		node_page_state_add(z, pgdat, item);
+	vmstat_mark_dirty();
 }
 
 void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
@@ -828,6 +832,14 @@ static int refresh_cpu_vm_stats(bool do_
 	int global_node_diff[NR_VM_NODE_STAT_ITEMS] = { 0, };
 	int changes = 0;
 
+	/*
+	 * Clear vmstat_dirty before clearing the percpu vmstats.
+	 * If interrupts are enabled, it is possible that an interrupt
+	 * or another task modifies a percpu vmstat, which will
+	 * set vmstat_dirty to true.
+	 */
+	vmstat_clear_dirty();
+
 	for_each_populated_zone(zone) {
 		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
 #ifdef CONFIG_NUMA
@@ -1957,35 +1969,6 @@ static void vmstat_update(struct work_st
 }
 
 /*
- * Check if the diffs for a certain cpu indicate that
- * an update is needed.
- */
-static bool need_update(int cpu)
-{
-	pg_data_t *last_pgdat = NULL;
-	struct zone *zone;
-
-	for_each_populated_zone(zone) {
-		struct per_cpu_zonestat *pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
-		struct per_cpu_nodestat *n;
-
-		/*
-		 * The fast way of checking if there are any vmstat diffs.
-		 */
-		if (memchr_inv(pzstats->vm_stat_diff, 0, sizeof(pzstats->vm_stat_diff)))
-			return true;
-
-		if (last_pgdat == zone->zone_pgdat)
-			continue;
-		last_pgdat = zone->zone_pgdat;
-		n = per_cpu_ptr(zone->zone_pgdat->per_cpu_nodestats, cpu);
-		if (memchr_inv(n->vm_node_stat_diff, 0, sizeof(n->vm_node_stat_diff)))
-			return true;
-	}
-	return false;
-}
-
-/*
  * Switch off vmstat processing and then fold all the remaining differentials
  * until the diffs stay at zero. The function is used by NOHZ and can only be
  * invoked when tick processing is not active.
@@ -1995,10 +1978,7 @@ void quiet_vmstat(void)
 	if (system_state != SYSTEM_RUNNING)
 		return;
 
-	if (!delayed_work_pending(this_cpu_ptr(&vmstat_work)))
-		return;
-
-	if (!need_update(smp_processor_id()))
+	if (!is_vmstat_dirty())
 		return;
 
 	/*
@@ -2029,7 +2009,7 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 
 		cond_resched();

From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.155675758@redhat.com>
Date: Fri, 16 Dec 2022 16:45:43 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 3/6] mm/vmstat: manage per-CPU stats from CPU context when NOHZ full
References: <20221216194540.202752779@redhat.com>
charset="utf-8" For nohz full CPUs, we'd like the per-CPU vm statistics to be synchronized when userspace is executing. Otherwise,=20 the vmstat_shepherd might queue a work item to synchronize them, which is undesired intereference for isolated CPUs. This means that its necessary to check for, and possibly sync, the statistics when returning to userspace. This means that there are now two execution contexes, on different CPUs, which require awareness about each other: context switch and vmstat shepherd kernel threadr. To avoid the shared variables between these two contexes (which would require atomic accesses), delegate the responsability of statistics synchronization from vmstat_shepherd to local CPU context, for nohz_full CPUs. Do that by queueing a delayed work when marking per-CPU vmstat dirty. When returning to userspace, fold the stats and cancel the delayed work. When entering idle, only fold the stats. Signed-off-by: Marcelo Tosatti --- include/linux/vmstat.h | 4 ++-- kernel/time/tick-sched.c | 2 +- mm/vmstat.c | 41 ++++++++++++++++++++++++++++++++--------- 3 files changed, 35 insertions(+), 12 deletions(-) Index: linux-2.6/mm/vmstat.c =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D --- linux-2.6.orig/mm/vmstat.c +++ linux-2.6/mm/vmstat.c @@ -28,6 +28,7 @@ #include #include #include +#include =20 #include "internal.h" =20 @@ -195,9 +196,26 @@ void fold_vm_numa_events(void) =20 #ifdef CONFIG_SMP static DEFINE_PER_CPU_ALIGNED(bool, vmstat_dirty); +static DEFINE_PER_CPU(struct delayed_work, vmstat_work); +int sysctl_stat_interval __read_mostly =3D HZ; =20 static inline void vmstat_mark_dirty(void) { +#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER + int cpu =3D smp_processor_id(); + + if (tick_nohz_full_cpu(cpu) && !this_cpu_read(vmstat_dirty)) { + struct delayed_work *dw; + + dw =3D this_cpu_ptr(&vmstat_work); + if (!delayed_work_pending(dw)) { + unsigned long delay; + + delay =3D round_jiffies_relative(sysctl_stat_interval); + queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay); + } + } +#endif this_cpu_write(vmstat_dirty, true); } =20 @@ -1886,9 +1904,6 @@ static const struct seq_operations vmsta #endif /* CONFIG_PROC_FS */ =20 #ifdef CONFIG_SMP -static DEFINE_PER_CPU(struct delayed_work, vmstat_work); -int sysctl_stat_interval __read_mostly =3D HZ; - #ifdef CONFIG_PROC_FS static void refresh_vm_stats(struct work_struct *work) { @@ -1973,7 +1988,7 @@ static void vmstat_update(struct work_st * until the diffs stay at zero. The function is used by NOHZ and can only= be * invoked when tick processing is not active. */ -void quiet_vmstat(void) +void quiet_vmstat(bool user) { if (system_state !=3D SYSTEM_RUNNING) return; @@ -1981,13 +1996,19 @@ void quiet_vmstat(void) if (!is_vmstat_dirty()) return; =20 + refresh_cpu_vm_stats(false); + +#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER + if (!user) + return; /* - * Just refresh counters and do not care about the pending delayed - * vmstat_update. It doesn't fire that often to matter and canceling - * it would be too expensive from this path. - * vmstat_shepherd will take care about that for us. + * If the tick is stopped, cancel any delayed work to avoid + * interruptions to this CPU in the future. 
	 */
-	refresh_cpu_vm_stats(false);
+	dw = this_cpu_ptr(&vmstat_work);
+	if (delayed_work_pending(this_cpu_ptr(&vmstat_work)))
+		cancel_delayed_work(this_cpu_ptr(&vmstat_work));
+#endif
 }
 
 /*
@@ -2009,6 +2030,12 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER
+		/* NOHZ full CPUs manage their own vmstat flushing */
+		if (tick_nohz_full_cpu(cpu))
+			continue;
+#endif
+
 		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 
Index: linux-2.6/include/linux/vmstat.h
===================================================================
--- linux-2.6.orig/include/linux/vmstat.h
+++ linux-2.6/include/linux/vmstat.h
@@ -290,7 +290,7 @@ extern void dec_zone_state(struct zone *
 extern void __dec_zone_state(struct zone *, enum zone_stat_item);
 extern void __dec_node_state(struct pglist_data *, enum node_stat_item);
 
-void quiet_vmstat(void);
+void quiet_vmstat(bool user);
 void cpu_vm_stats_fold(int cpu);
 void refresh_zone_stat_thresholds(void);
 
@@ -403,7 +403,7 @@ static inline void __dec_node_page_state
 
 static inline void refresh_zone_stat_thresholds(void) { }
 static inline void cpu_vm_stats_fold(int cpu) { }
-static inline void quiet_vmstat(void) { }
+static inline void quiet_vmstat(bool user) { }
 
 static inline void drain_zonestat(struct zone *zone,
 		struct per_cpu_zonestat *pzstats) { }
Index: linux-2.6/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.orig/kernel/time/tick-sched.c
+++ linux-2.6/kernel/time/tick-sched.c
@@ -911,7 +911,7 @@ static void tick_nohz_stop_tick(struct t
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat();
+		quiet_vmstat(false);
 
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;
Index: linux-2.6/mm/Kconfig
===================================================================
--- linux-2.6.orig/mm/Kconfig
+++ linux-2.6/mm/Kconfig
@@ -1124,6 +1124,19 @@ config PTE_MARKER_UFFD_WP
 	  purposes.  It is required to enable userfaultfd write protection on
 	  file-backed memory types like shmem and hugetlbfs.
 
+config FLUSH_WORK_ON_RESUME_USER
+	bool "Flush per-CPU vmstats on user return (for nohz full CPUs)"
+	depends on NO_HZ_FULL
+	default y
+
+	help
+	  By default, nohz full CPUs flush per-CPU vm statistics on return
+	  to userspace (to avoid additional interferences when executing
+	  userspace code). This has a small but measurable impact on
+	  system call performance. You can disable this to improve system call
+	  performance, at the expense of potential interferences to userspace
+	  execution.
+
 # multi-gen LRU {
 config LRU_GEN
 	bool "Multi-Gen LRU"

From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.194285953@redhat.com>
Date: Fri, 16 Dec 2022 16:45:44 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 4/6] tick/nohz_full: Ensure quiet_vmstat() is called on exit to user-mode when the idle tick is stopped
References: <20221216194540.202752779@redhat.com>

From: Aaron Tomlin

For nohz full CPUs, we'd like the per-CPU vm statistics to be synchronized
when userspace is executing. Otherwise, the vmstat_shepherd might queue a
work item to synchronize them, which is undesired interference for
isolated CPUs.

This patch syncs CPU-specific vmstat differentials, on return to
userspace, if CONFIG_FLUSH_WORK_ON_RESUME_USER is enabled and the tick is
stopped.

A trivial test program was used to determine the impact of the proposed
changes compared to vanilla. The mlock(2) and munlock(2) system calls were
used solely to modify the vmstat item 'NR_MLOCK'.
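The sketch below is a hypothetical reconstruction of such a test program
(the original was not posted with the series, so the structure, iteration
count and use of the TSC are assumptions); it exercises the NR_MLOCK
per-CPU differential from a task pinned to a nohz_full CPU:

	/* Hypothetical reconstruction -- not the original test program. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <x86intrin.h>	/* __rdtsc(), x86 only */

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		void *buf = aligned_alloc(page, page);
		unsigned long long start, end, i, iters = 1000000ULL;

		if (!buf)
			return 1;

		start = __rdtsc();
		for (i = 0; i < iters; i++) {
			mlock(buf, page);	/* modifies the per-CPU NR_MLOCK differential */
			munlock(buf, page);
		}
		end = __rdtsc();

		/* average cost of one mlock()+munlock() pair, in TSC cycles */
		printf("cycles per pair: %llu\n", (end - start) / iters);
		free(buf);
		return 0;
	}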
The following is an average count of CPU cycles across the aforementioned
system calls:

                       Vanilla   Modified
  Cycles per syscall   8461      8690 (+2.6%)

Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti
---
 include/linux/tick.h     |  5 +++--
 kernel/time/tick-sched.c | 15 +++++++++++++++
 2 files changed, 18 insertions(+), 2 deletions(-)

Index: linux-2.6/include/linux/tick.h
===================================================================
--- linux-2.6.orig/include/linux/tick.h
+++ linux-2.6/include/linux/tick.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void __init tick_init(void);
@@ -272,6 +271,7 @@ static inline void tick_dep_clear_signal
 
 extern void tick_nohz_full_kick_cpu(int cpu);
 extern void __tick_nohz_task_switch(void);
+void __tick_nohz_user_enter_prepare(void);
 extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
 #else
 static inline bool tick_nohz_full_enabled(void) { return false; }
@@ -296,6 +296,7 @@ static inline void tick_dep_clear_signal
 
 static inline void tick_nohz_full_kick_cpu(int cpu) { }
 static inline void __tick_nohz_task_switch(void) { }
+static inline void __tick_nohz_user_enter_prepare(void) { }
 static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
 #endif
 
@@ -308,7 +309,7 @@ static inline void tick_nohz_task_switch
 static inline void tick_nohz_user_enter_prepare(void)
 {
 	if (tick_nohz_full_cpu(smp_processor_id()))
-		rcu_nocb_flush_deferred_wakeup();
+		__tick_nohz_user_enter_prepare();
 }
 
 #endif
Index: linux-2.6/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.orig/kernel/time/tick-sched.c
+++ linux-2.6/kernel/time/tick-sched.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -519,6 +520,22 @@ void __tick_nohz_task_switch(void)
 	}
 }
 
+void __tick_nohz_user_enter_prepare(void)
+{
+	if (tick_nohz_full_cpu(smp_processor_id())) {
+#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER
+		struct tick_sched *ts;
+
+		ts = this_cpu_ptr(&tick_cpu_sched);
+
+		if (ts->tick_stopped)
+			quiet_vmstat(true);
+#endif
+		rcu_nocb_flush_deferred_wakeup();
+	}
+}
+EXPORT_SYMBOL_GPL(__tick_nohz_user_enter_prepare);
+
 /* Get the boot-time nohz CPU list from the kernel parameters. */
 void __init tick_nohz_full_setup(cpumask_var_t cpumask)
 {

From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.233941906@redhat.com>
Date: Fri, 16 Dec 2022 16:45:45 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 5/6] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too
References: <20221216194540.202752779@redhat.com>

From: Aaron Tomlin

In the context of the idle task on an adaptive-tick mode (or nohz_full)
CPU, quiet_vmstat() can be called: before stopping the idle tick, when
entering an idle state, and on exit. In particular, in the latter case,
when the idle task is required to reschedule, the idle tick can remain
stopped and the timer expiration time endless, i.e., KTIME_MAX.
Now, indeed before a nohz_full CPU enters an idle state, CPU-specific
vmstat counters should be processed to ensure the respective values have
been reset and folded into the zone-specific 'vm_stat[]'. That being said,
it can only occur when the idle tick was previously stopped and
reprogramming of the timer is not required.

A customer provided some evidence which indicates that the idle tick was
stopped, yet CPU-specific vmstat counters still remained populated. Thus
one can only assume quiet_vmstat() was not invoked on return to the idle
loop. If I understand correctly, I suspect this divergence might
erroneously prevent a reclaim attempt by kswapd. If the number of
zone-specific free pages is below their per-CPU drift value, then
zone_page_state_snapshot() is used to compute a more accurate view of the
aforementioned statistic. Thus any task blocked on the NUMA-node-specific
pfmemalloc_wait queue will be unable to make significant progress via
direct reclaim unless it is killed after being woken up by kswapd (see
throttle_direct_reclaim()).

Consider the following theoretical scenario:

 - Note: CPU X is part of 'tick_nohz_full_mask'

 1. CPU Y migrated running task A to CPU X that was in an idle state,
    i.e. waiting for an IRQ; marked the current task on CPU X as needing
    a reschedule, i.e., set TIF_NEED_RESCHED, and invoked a reschedule
    IPI to CPU X (see sched_move_task()).

 2. CPU X acknowledged the reschedule IPI. Generic idle loop code noticed
    the TIF_NEED_RESCHED flag against the idle task, attempts to exit the
    loop and calls the main scheduler function, i.e. __schedule(). Since
    the idle tick was previously stopped, no scheduling-clock tick would
    occur. So, no deferred timers would be handled.

 3. Post transition to kernel execution, task A running on CPU X
    indirectly released a few pages (e.g. see __free_one_page()); CPU X's
    'vm_stat_diff[NR_FREE_PAGES]' was updated and the zone-specific
    'vm_stat[]' update was deferred as per the CPU-specific stat
    threshold.

 4. Task A does invoke exit(2) and the kernel does remove the task from
    the run-queue; the idle task was selected to execute next since there
    are no other runnable tasks assigned to the given CPU (see
    pick_next_task() and pick_next_task_idle()).

 5. On return to the idle loop, since the idle tick was already stopped
    and can remain so (see [1] below), e.g. no pending soft IRQs, no
    attempt is made to zero and fold CPU X's vmstat counters since
    reprogramming of the scheduling-clock tick is not required/needed
    (see [2]):

  ...
  do_idle
  {
      __current_set_polling()
      tick_nohz_idle_enter()

      while (!need_resched()) {
          local_irq_disable()

          ...

          /* No polling or broadcast event */
          cpuidle_idle_call() {

              if (cpuidle_not_available(drv, dev)) {
                  tick_nohz_idle_stop_tick()
                      __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched))
                      {
                          int cpu = smp_processor_id()

                          if (ts->timer_expires_base)
                              expires = ts->timer_expires
                          else if (can_stop_idle_tick(cpu, ts))
    (1) ------->              expires = tick_nohz_next_event(ts, cpu)
                          else
                              return

                          ts->idle_calls++

                          if (expires > 0LL) {
                              tick_nohz_stop_tick(ts, cpu)
                              {
                                  if (ts->tick_stopped &&
                                      (expires == ts->next_tick)) {
    (2) ------->                      if (tick == KTIME_MAX ||
                                          ts->next_tick ==
                                              hrtimer_get_expires(&ts->sched_timer))
                                          return
                                  }
                                  ...
                              }

So, the idea of this patch is to ensure refresh_cpu_vm_stats(false) is
called, when it is appropriate, on return to the idle loop if the idle
tick was previously stopped too.

A trivial test program was used to determine the impact of the proposed
changes compared to vanilla.
The nanosleep(2) system call was used several times to suspend execution
for a period of time, in order to approximately measure the number of CPU
cycles spent in the idle code path. The following is an average count of
CPU cycles:

                         Vanilla   Modified
  Cycles per idle loop   151858    153258 (+1.0%)

Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti
---
 kernel/time/tick-sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Index: linux-2.6/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.orig/kernel/time/tick-sched.c
+++ linux-2.6/kernel/time/tick-sched.c
@@ -928,13 +928,14 @@ static void tick_nohz_stop_tick(struct t
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat(false);
 
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;
 		trace_tick_stop(1, TICK_DEP_MASK_NONE);
 	}
 
+	/* Attempt to fold when the idle tick is stopped or not */
+	quiet_vmstat(false);
 	ts->next_tick = tick;
 
 	/*

From nobody Wed Sep 17 17:37:29 2025
Message-ID: <20221216194904.272106293@redhat.com>
Date: Fri, 16 Dec 2022 16:45:46 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v10 6/6] mm/vmstat: avoid queueing work item if cpu stats are clean
References: <20221216194540.202752779@redhat.com>

It is not necessary to queue a work item to run refresh_vm_stats on a
remote CPU if that CPU has no dirty stats and no per-CPU allocations for
remote nodes.

This fixes a sosreport hang (sosreport uses vmstat_refresh) in the
presence of a spinning SCHED_FIFO process.

Signed-off-by: Marcelo Tosatti

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c
+++ linux-2.6/mm/vmstat.c
@@ -1905,6 +1905,31 @@ static const struct seq_operations vmsta
 
 #ifdef CONFIG_SMP
 #ifdef CONFIG_PROC_FS
+static bool need_drain_remote_zones(int cpu)
+{
+#ifdef CONFIG_NUMA
+	struct zone *zone;
+
+	for_each_populated_zone(zone) {
+		struct per_cpu_pages *pcp;
+
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+		if (!pcp->count)
+			continue;
+
+		if (!pcp->expire)
+			continue;
+
+		if (zone_to_nid(zone) == cpu_to_node(cpu))
+			continue;
+
+		return true;
+	}
+#endif
+
+	return false;
+}
+
 static void refresh_vm_stats(struct work_struct *work)
 {
 	refresh_cpu_vm_stats(true);
@@ -1914,8 +1939,12 @@ int vmstat_refresh(struct ctl_table *tab
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
-	int err;
-	int i;
+	int i, cpu;
+	struct work_struct __percpu *works;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1929,9 +1958,19 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
-	if (err)
-		return err;
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, refresh_vm_stats);
+		if (per_cpu(vmstat_dirty, cpu) || need_drain_remote_zones(cpu))
+			schedule_work_on(cpu, work);
+	}
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(works, cpu));
+	cpus_read_unlock();
+	free_percpu(works);
+
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 		/*
 		 * Skip checking stats known to go negative occasionally.