From nobody Sun May 5 22:16:24 2024
Message-ID: <20230602190115.497160508@redhat.com>
Date: Fri, 02 Jun 2023 15:57:58 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
Subject: [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
References: <20230602185757.110910188@redhat.com>

A customer provided evidence indicating that a process was stalled in
direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   wait_event_killable() was called to wait for the condition
   allow_direct_reclaim(pgdat) to become true for the current node.
   allow_direct_reclaim(pgdat) examined the number of free pages on the
   node via zone_page_state(), which simply returns the value of
   zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the per-CPU
   vmstat counters on nohz_full CPUs.
   Every vmstat increment/decrement is performed on a per-CPU vmstat
   counter first; the pooled diffs are then periodically folded into
   the zone's vmstat counter. However, on nohz_full CPUs (48 of the 52
   CPUs on this customer's system) these pooled diffs were no longer
   folded once a CPU had no events and went to sleep indefinitely.
   I checked the per-CPU vmstat counters and found a total of 69 counts
   that had not yet been folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number of
   free pages on the node via zone_page_state_snapshot(), which does
   account for the pending per-CPU counts.
   Therefore kswapd correctly saw the 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because 69
   was above the high watermark of 32.

Change allow_direct_reclaim to use zone_page_state_snapshot, which
allows a more precise version of the vmstat counters to be used.

allow_direct_reclaim will only be called from try_to_free_pages, which
is not a hot path.

Testing: due to difficulties accessing the system, it has not been
possible to test the patch with the reproducer (however, it is clear
from the available data and analysis that it should fix the problem).

Reviewed-by: Michal Hocko
Reviewed-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti

---

Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */
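[Editor's note: for reference, a simplified sketch of the difference
between the two accessors, paraphrased from include/linux/vmstat.h.
The exact per-CPU field names (per_cpu_zonestats, vm_stat_diff) vary
between kernel versions, so treat this as illustrative rather than the
verbatim implementation.]

/* Reads only the zone-wide counter; per-CPU deltas that have not been
 * folded yet (for example on a sleeping nohz_full CPU) are missed. */
static inline unsigned long zone_page_state(struct zone *zone,
					    enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}

/* Also folds in the pending per-CPU deltas, so the result is accurate
 * even when a nohz_full CPU has stopped running vmstat_update. */
static inline unsigned long zone_page_state_snapshot(struct zone *zone,
						     enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	int cpu;

	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}

With the snapshot variant, allow_direct_reclaim() sees the same
free-page count that kswapd's zone_watermark_ok_safe() check sees, so
the throttled task no longer waits on a stale zero.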
From nobody Sun May 5 22:16:24 2024
Message-ID: <20230602190115.521067386@redhat.com>
Date: Fri, 02 Jun 2023 15:57:59 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
Subject: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
References: <20230602185757.110910188@redhat.com>

The interruption caused by vmstat_update is undesirable for certain
applications:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the
oslat -> kworker -> oslat switches. In the case of a virtualized CPU,
with the vmstat_update interruption occurring on the host (on the CPU
running a qemu-kvm vCPU), the latency penalty observed in the guest is
higher than 50us, which violates the acceptable latency threshold.

Skip periodic updates for nohz_full CPUs. Any callers who need precise
values should use a snapshot of the per-CPU counters, or use the global
counters with measures to handle errors up to thresholds
(see calculate_normal_threshold).

Suggested by Michal Hocko.

Signed-off-by: Marcelo Tosatti
Acked-by: Michal Hocko

---

v2: use cpu_is_isolated (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		/*
+		 * Skip periodic updates for isolated CPUs.
+		 * Any callers who need precise values should use
+		 * a snapshot of the per-CPU counters, or use the global
+		 * counters with measures to handle errors up to
+		 * thresholds (see calculate_normal_threshold).
+		 */
+		if (cpu_is_isolated(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
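[Editor's note: for context on the v2 note above, cpu_is_isolated()
comes from include/linux/sched/isolation.h. The sketch below is an
approximation of its shape around this kernel version; the exact
housekeeping checks are an assumption (the helper gained further checks
in later kernels), and it is renamed here to make clear it is only an
illustration.]

/*
 * Approximation of cpu_is_isolated(): a CPU is treated as isolated if
 * it is excluded from scheduler-domain housekeeping (isolcpus=) or
 * from tick housekeeping (nohz_full=).
 */
static inline bool cpu_is_isolated_sketch(int cpu)
{
	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}

Using this helper means vmstat_shepherd skips both isolcpus= and
nohz_full= CPUs, not only the nohz_full ones named in the subject.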
From nobody Sun May 5 22:16:24 2024
Message-ID: <20230602190115.545766386@redhat.com>
Date: Fri, 02 Jun 2023 15:58:00 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
Subject: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
References: <20230602185757.110910188@redhat.com>

The interruption caused by queueing work on nohz_full CPUs is
undesirable for certain applications.
Fix this by not refreshing the per-CPU stats of nohz_full CPUs.

Signed-off-by: Marcelo Tosatti

---

v2: opencode schedule_on_each_cpu (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
-	int err;
 	int i;
+	int cpu;
+	struct work_struct __percpu *works;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
-	if (err)
-		return err;
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		struct work_struct *work;
+
+		if (cpu_is_isolated(cpu))
+			continue;
+		work = per_cpu_ptr(works, cpu);
+		INIT_WORK(work, refresh_vm_stats);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu) {
+		if (cpu_is_isolated(cpu))
+			continue;
+		flush_work(per_cpu_ptr(works, cpu));
+	}
+	cpus_read_unlock();
+	free_percpu(works);
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 		/*
 		 * Skip checking stats known to go negative occasionally.
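[Editor's note: for reference on the "opencode schedule_on_each_cpu"
changelog note, schedule_on_each_cpu() queues and flushes the work item
on every online CPU with no way to skip isolated ones. A simplified
paraphrase of its core logic (based on kernel/workqueue.c; details are
approximate) shows why the patch open-codes it and adds the
cpu_is_isolated() checks.]

int schedule_on_each_cpu(work_func_t func)
{
	int cpu;
	struct work_struct __percpu *works;

	works = alloc_percpu(struct work_struct);
	if (!works)
		return -ENOMEM;

	cpus_read_lock();

	/* Queue the work on every online CPU -- including isolated ones. */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}

	/* Wait for completion, again on every online CPU. */
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(works, cpu));

	cpus_read_unlock();
	free_percpu(works);
	return 0;
}

The open-coded version in vmstat_refresh() keeps this structure but
skips cpu_is_isolated() CPUs in both loops, so a sysctl-triggered
refresh no longer queues work on nohz_full CPUs.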