From nobody Sat Apr 11 00:41:38 2026
Message-ID: <20220817191524.140710201@redhat.com>
User-Agent: quilt/0.66
Date: Wed, 17 Aug 2022 16:13:47 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
 pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v7 1/3] mm/vmstat: Use per cpu variable to track a vmstat discrepancy
References: <20220817191346.287594886@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Aaron Tomlin

Add a per-CPU variable, vmstat_dirty, to indicate whether a vmstat
imbalance is present for a given CPU, so that all remaining
differentials can be folded at the appropriate time. This speeds up
quiet_vmstat() when no per-CPU differentials exist.
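As a rough userspace sketch of the idea (illustrative only; all names
here are hypothetical stand-ins, not the kernel API, which uses
this_cpu_write() on a real per-CPU variable): a dirty flag turns
"scan every differential array for nonzero entries" into a single
flag test, and the flag is cleared before folding, as the patch does.

```c
#include <stdbool.h>

#define NR_ITEMS 4

static long diff[NR_ITEMS];   /* stands in for per-CPU stat differentials */
static long vm_stat[NR_ITEMS];/* stands in for the global counters */
static bool dirty;            /* stands in for per-CPU vmstat_dirty */

/* Any counter update marks this CPU dirty (mark_vmstat_dirty()). */
static void mod_state(int item, long delta)
{
	diff[item] += delta;
	dirty = true;
}

/* quiet_vmstat() analogue: cheap exit when nothing changed here. */
static void fold_if_dirty(void)
{
	if (!dirty)
		return;        /* replaces the old need_update() memchr_inv() scan */
	dirty = false;         /* clear before folding, so a concurrent update
				* made during the fold is not lost */
	for (int i = 0; i < NR_ITEMS; i++) {
		vm_stat[i] += diff[i];
		diff[i] = 0;
	}
}
```

The flag is advisory: a spurious true only costs one extra fold, while
the clear-before-fold ordering ensures updates arriving mid-fold leave
the flag set for the next pass.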
Based on
https://lore.kernel.org/lkml/20220204173554.763888172@fedora.localdomain/

Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti

---
 mm/vmstat.c |   54 ++++++++++++++++++++----------------------------------
 1 file changed, 20 insertions(+), 34 deletions(-)

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c
+++ linux-2.6/mm/vmstat.c
@@ -195,6 +195,12 @@ void fold_vm_numa_events(void)
 #endif
 
 #ifdef CONFIG_SMP
+static DEFINE_PER_CPU_ALIGNED(bool, vmstat_dirty);
+
+static inline void mark_vmstat_dirty(void)
+{
+	this_cpu_write(vmstat_dirty, true);
+}
 
 int calculate_pressure_threshold(struct zone *zone)
 {
@@ -367,6 +373,7 @@ void __mod_zone_page_state(struct zone *
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	mark_vmstat_dirty();
 
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
@@ -405,6 +412,7 @@ void __mod_node_page_state(struct pglist
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	mark_vmstat_dirty();
 
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
@@ -603,6 +611,7 @@ static inline void mod_zone_state(struct
 
 	if (z)
 		zone_page_state_add(z, zone, item);
+	mark_vmstat_dirty();
 }
 
 void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
@@ -671,6 +680,7 @@ static inline void mod_node_state(struct
 
 	if (z)
 		node_page_state_add(z, pgdat, item);
+	mark_vmstat_dirty();
 }
 
 void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
@@ -825,6 +835,14 @@ static int refresh_cpu_vm_stats(bool do_
 	int global_node_diff[NR_VM_NODE_STAT_ITEMS] = { 0, };
 	int changes = 0;
 
+	/*
+	 * Clear vmstat_dirty before clearing the percpu vmstats.
+	 * If interrupts are enabled, it is possible that an interrupt
+	 * or another task modifies a percpu vmstat, which will
+	 * set vmstat_dirty to true.
+	 */
+	this_cpu_write(vmstat_dirty, false);
+
 	for_each_populated_zone(zone) {
 		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
 #ifdef CONFIG_NUMA
@@ -1949,35 +1967,6 @@ static void vmstat_update(struct work_st
 }
 
 /*
- * Check if the diffs for a certain cpu indicate that
- * an update is needed.
- */
-static bool need_update(int cpu)
-{
-	pg_data_t *last_pgdat = NULL;
-	struct zone *zone;
-
-	for_each_populated_zone(zone) {
-		struct per_cpu_zonestat *pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
-		struct per_cpu_nodestat *n;
-
-		/*
-		 * The fast way of checking if there are any vmstat diffs.
-		 */
-		if (memchr_inv(pzstats->vm_stat_diff, 0, sizeof(pzstats->vm_stat_diff)))
-			return true;
-
-		if (last_pgdat == zone->zone_pgdat)
-			continue;
-		last_pgdat = zone->zone_pgdat;
-		n = per_cpu_ptr(zone->zone_pgdat->per_cpu_nodestats, cpu);
-		if (memchr_inv(n->vm_node_stat_diff, 0, sizeof(n->vm_node_stat_diff)))
-			return true;
-	}
-	return false;
-}
-
-/*
  * Switch off vmstat processing and then fold all the remaining differentials
  * until the diffs stay at zero. The function is used by NOHZ and can only be
  * invoked when tick processing is not active.
@@ -1987,10 +1976,7 @@ void quiet_vmstat(void)
 	if (system_state != SYSTEM_RUNNING)
 		return;
 
-	if (!delayed_work_pending(this_cpu_ptr(&vmstat_work)))
-		return;
-
-	if (!need_update(smp_processor_id()))
+	if (!__this_cpu_read(vmstat_dirty))
 		return;
 
 	/*
@@ -2021,7 +2007,7 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 
 		cond_resched();

From nobody Sat Apr 11 00:41:38 2026
Message-ID: <20220817191524.201253713@redhat.com>
User-Agent: quilt/0.66
Date: Wed, 17 Aug 2022 16:13:48 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
 pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v7 2/3] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too
References: <20220817191346.287594886@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Aaron Tomlin

In the context of the idle task on an adaptive-tick mode (nohz_full)
CPU, quiet_vmstat() can be called before stopping the idle tick, on
entering an idle state, and on exit. In the latter case in particular,
when the idle task is required to reschedule, the idle tick can remain
stopped and the timer expiration time endless, i.e. KTIME_MAX.
Before a nohz_full CPU enters an idle state, its CPU-specific vmstat
counters should be processed, to ensure the respective values have been
reset and folded into the zone-specific vm_stat[]. Currently, however,
this only occurs when the idle tick was previously stopped and
reprogramming of the timer is not required.

A customer provided evidence indicating that the idle tick was stopped
while CPU-specific vmstat counters still remained populated; presumably,
quiet_vmstat() was not invoked on return to the idle loop.

This divergence might erroneously prevent a reclaim attempt by kswapd.
If the number of zone-specific free pages is below the per-CPU drift
value, then zone_page_state_snapshot() is used to compute a more
accurate view of that statistic. Any task blocked on the NUMA-node
specific pfmemalloc_wait queue would then be unable to make significant
progress via direct reclaim unless it is killed after being woken up by
kswapd (see throttle_direct_reclaim()).

Consider the following theoretical scenario:

 1. CPU Y migrated running task A to CPU X, which was in an idle state,
    i.e. waiting for an IRQ, not polling; it marked the current task on
    CPU X as needing a reschedule, i.e. set TIF_NEED_RESCHED, and sent
    a reschedule IPI to CPU X (see sched_move_task()).

 2. CPU X acknowledged the reschedule IPI from CPU Y. The generic idle
    loop code noticed the TIF_NEED_RESCHED flag against the idle task,
    exited the loop and called the main scheduler function,
    __schedule(). Since the idle tick was previously stopped, no
    scheduling-clock tick occurred, so no deferred timers were handled.

 3. After the transition to kernel execution, task A, running on CPU Y,
    indirectly released a few pages (e.g. see __free_one_page());
    CPU Y's vm_stat_diff[NR_FREE_PAGES] was updated, and the
    zone-specific vm_stat[] update was deferred as per the CPU-specific
    stat threshold.

 4. Task A invoked exit(2) and the kernel removed it from the
    run-queue; the idle task was selected to execute next, since there
    were no other runnable tasks assigned to the given CPU (see
    pick_next_task() and pick_next_task_idle()).

 5. On return to the idle loop, since the idle tick was already stopped
    and could remain so (see [1] below), e.g. no pending soft IRQs, no
    attempt was made to zero and fold CPU Y's vmstat counters, since
    reprogramming of the scheduling-clock tick was not required (see
    [2]).

	do_idle
	{
	  __current_set_polling()
	  tick_nohz_idle_enter()

	  while (!need_resched()) {
	    local_irq_disable()
	    ...
	    /* No polling or broadcast event */
	    cpuidle_idle_call() {
	      if (cpuidle_not_available(drv, dev)) {
	        tick_nohz_idle_stop_tick()
	          __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched))
	          {
	            int cpu = smp_processor_id()

	            if (ts->timer_expires_base)
	              expires = ts->timer_expires
	            else if (can_stop_idle_tick(cpu, ts))
	(1) ------->  expires = tick_nohz_next_event(ts, cpu)
	            else
	              return

	            ts->idle_calls++

	            if (expires > 0LL) {
	              tick_nohz_stop_tick(ts, cpu)
	              {
	                if (ts->tick_stopped && (expires == ts->next_tick)) {
	(2) ------->      if (tick == KTIME_MAX || ts->next_tick ==
	                      hrtimer_get_expires(&ts->sched_timer))
	                    return
	                }
	                ...
	              }

The idea with this patch is to ensure refresh_cpu_vm_stats(false) is
called, when appropriate, on return to the idle loop when the idle tick
was previously stopped too. Additionally, in the context of nohz_full,
when the scheduling-clock tick is stopped and before exiting to user
mode, ensure no CPU-specific vmstat differentials remain.
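The failure mode above can be reduced to a toy model in plain C
(hypothetical names; not the kernel code): if the fold only happens on
the not-stopped to stopped *transition*, a differential created while
the tick stays stopped is never folded, which is exactly what this
patch addresses by also attempting the fold when the tick was already
stopped.

```c
#include <stdbool.h>

static bool tick_stopped; /* stands in for ts->tick_stopped */
static bool dirty;        /* stands in for a populated vmstat differential */

static void flush(void)   /* stands in for quiet_vmstat() folding */
{
	dirty = false;
}

/* Old behaviour: fold only on the stop transition. */
static void idle_enter_old(void)
{
	if (!tick_stopped) {
		flush();
		tick_stopped = true;
	}
	/* tick already stopped: nothing folded */
}

/* Patched behaviour: attempt the fold whether or not the tick
 * was already stopped. */
static void idle_enter_new(void)
{
	flush();
	tick_stopped = true;
}
```

With the old ordering, a page freed between two idle entries (step 3 in
the scenario) leaves the differential populated indefinitely; with the
new ordering it is folded on the next idle entry.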
Signed-off-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti

---
 include/linux/tick.h     |    5 +++--
 kernel/time/tick-sched.c |   19 ++++++++++++++++++-
 2 files changed, 21 insertions(+), 3 deletions(-)

Index: linux-2.6/include/linux/tick.h
===================================================================
--- linux-2.6.orig/include/linux/tick.h
+++ linux-2.6/include/linux/tick.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void __init tick_init(void);
@@ -272,6 +271,7 @@ static inline void tick_dep_clear_signal
 
 extern void tick_nohz_full_kick_cpu(int cpu);
 extern void __tick_nohz_task_switch(void);
+void __tick_nohz_user_enter_prepare(void);
 extern void __init tick_nohz_full_setup(cpumask_var_t cpumask);
 #else
 static inline bool tick_nohz_full_enabled(void) { return false; }
@@ -296,6 +296,7 @@ static inline void tick_dep_clear_signal
 
 static inline void tick_nohz_full_kick_cpu(int cpu) { }
 static inline void __tick_nohz_task_switch(void) { }
+static inline void __tick_nohz_user_enter_prepare(void) { }
 static inline void tick_nohz_full_setup(cpumask_var_t cpumask) { }
 #endif
 
@@ -308,7 +309,7 @@ static inline void tick_nohz_task_switch
 static inline void tick_nohz_user_enter_prepare(void)
 {
 	if (tick_nohz_full_cpu(smp_processor_id()))
-		rcu_nocb_flush_deferred_wakeup();
+		__tick_nohz_user_enter_prepare();
 }
 
 #endif

Index: linux-2.6/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.orig/kernel/time/tick-sched.c
+++ linux-2.6/kernel/time/tick-sched.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -519,6 +520,20 @@ void __tick_nohz_task_switch(void)
 	}
 }
 
+void __tick_nohz_user_enter_prepare(void)
+{
+	struct tick_sched *ts;
+
+	if (tick_nohz_full_cpu(smp_processor_id())) {
+		ts = this_cpu_ptr(&tick_cpu_sched);
+
+		if (ts->tick_stopped)
+			quiet_vmstat();
+		rcu_nocb_flush_deferred_wakeup();
+	}
+}
+EXPORT_SYMBOL_GPL(__tick_nohz_user_enter_prepare);
+
 /* Get the boot-time nohz CPU list from the kernel parameters. */
 void __init tick_nohz_full_setup(cpumask_var_t cpumask)
 {
@@ -890,6 +905,9 @@ static void tick_nohz_stop_tick(struct t
 		ts->do_timer_last = 0;
 	}
 
+	/* Attempt to fold when the idle tick is stopped or not */
+	quiet_vmstat();
+
 	/* Skip reprogram of event if its not changed */
 	if (ts->tick_stopped && (expires == ts->next_tick)) {
 		/* Sanity check: make sure clockevent is actually programmed */
@@ -911,7 +929,6 @@ static void tick_nohz_stop_tick(struct t
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat();
 
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;

From nobody Sat Apr 11 00:41:38 2026
Message-ID: <20220817191524.263315216@redhat.com>
User-Agent: quilt/0.66
Date: Wed, 17 Aug 2022 16:13:49 -0300
From: Marcelo Tosatti
To: atomlin@redhat.com, frederic@kernel.org
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
 pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH v7 3/3] mm/vmstat: do not queue vmstat_update if tick is stopped
References: <20220817191346.287594886@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From the
vmstat shepherd, for CPUs that have the tick stopped, do not queue
local work to flush the per-CPU vmstats, since in that case the flush
is performed on return to userspace or when entering idle. Per-CPU
pages can be freed remotely from housekeeping CPUs.

Move the quiet_vmstat() call to after the ts->tick_stopped = 1
assignment.

Signed-off-by: Marcelo Tosatti

---
 kernel/time/tick-sched.c |    6 +++---
 mm/vmstat.c              |   22 +++++++++++++++++-----
 2 files changed, 20 insertions(+), 8 deletions(-)

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c
+++ linux-2.6/mm/vmstat.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
 
@@ -1973,19 +1974,27 @@ static void vmstat_update(struct work_st
  */
 void quiet_vmstat(void)
 {
+	struct delayed_work *dw;
+
 	if (system_state != SYSTEM_RUNNING)
 		return;
 
 	if (!__this_cpu_read(vmstat_dirty))
 		return;
 
+	refresh_cpu_vm_stats(false);
+
 	/*
-	 * Just refresh counters and do not care about the pending delayed
-	 * vmstat_update. It doesn't fire that often to matter and canceling
-	 * it would be too expensive from this path.
-	 * vmstat_shepherd will take care about that for us.
+	 * If the tick is stopped, cancel any delayed work to avoid
+	 * interruptions to this CPU in the future.
+	 *
+	 * Otherwise just refresh counters and do not care about the pending
+	 * delayed vmstat_update. It doesn't fire that often to matter
+	 * and canceling it would be too expensive from this path.
	 */
-	refresh_cpu_vm_stats(false);
+	dw = &per_cpu(vmstat_work, smp_processor_id());
+	if (delayed_work_pending(dw) && tick_nohz_tick_stopped())
+		cancel_delayed_work(dw);
 }
 
 /*
@@ -2007,6 +2016,9 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		if (tick_nohz_tick_stopped_cpu(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 
Index: linux-2.6/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.orig/kernel/time/tick-sched.c
+++ linux-2.6/kernel/time/tick-sched.c
@@ -905,9 +905,6 @@ static void tick_nohz_stop_tick(struct t
 		ts->do_timer_last = 0;
 	}
 
-	/* Attempt to fold when the idle tick is stopped or not */
-	quiet_vmstat();
-
 	/* Skip reprogram of event if its not changed */
 	if (ts->tick_stopped && (expires == ts->next_tick)) {
 		/* Sanity check: make sure clockevent is actually programmed */
@@ -935,6 +932,9 @@ static void tick_nohz_stop_tick(struct t
 		trace_tick_stop(1, TICK_DEP_MASK_NONE);
 	}
 
+	/* Attempt to fold when the idle tick is stopped or not */
+	quiet_vmstat();
+
 	ts->next_tick = tick;
 
 	/*
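The shepherd-side change can likewise be modeled as a toy userspace
sketch (hypothetical names, not the kernel API): CPUs with the tick
stopped are skipped entirely, since their differentials are folded at
idle or user-mode entry instead, and work is queued only for dirty
CPUs that still take ticks.

```c
#include <stdbool.h>

#define NR_CPUS 4

static bool cpu_tick_stopped[NR_CPUS]; /* stands in for tick_nohz_tick_stopped_cpu() */
static bool cpu_dirty[NR_CPUS];        /* stands in for per_cpu(vmstat_dirty, cpu) */
static bool work_queued[NR_CPUS];      /* stands in for delayed_work_pending(dw) */

/* vmstat_shepherd() analogue: leave nohz CPUs undisturbed. */
static void shepherd(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu_tick_stopped[cpu])
			continue; /* flushed on idle/user-mode entry instead */

		if (!work_queued[cpu] && cpu_dirty[cpu])
			work_queued[cpu] = true; /* queue_delayed_work_on() */
	}
}
```

The point of the skip is isolation: queuing work on a nohz_full CPU
would interrupt it, defeating the purpose of stopping the tick.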