Message-ID: <20230605190132.032121742@redhat.com>
User-Agent: quilt/0.67
Date: Mon, 05 Jun 2023 15:56:28 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
 Michal Hocko, Marcelo Tosatti
Subject: [PATCH v3 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
References: <20230605185627.923698377@redhat.com>

A customer provided evidence indicating that a process was stalled
in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   wait_event_killable() was called to wait for the condition
   allow_direct_reclaim(pgdat) to become true for the current node.
   allow_direct_reclaim(pgdat) examined the number of free pages on
   the node via zone_page_state(), which simply returns the value in
   zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the per-CPU
   vmstat counters on nohz_full CPUs. Every increment/decrement of a
   vmstat counter is first performed on the per-CPU counter, and the
   pooled diffs are then folded into the zone's vmstat counter in a
   timely manner. However, on nohz_full CPUs (48 of 52 CPUs on this
   customer's system) these pooled diffs were never folded once the
   CPU had no more events and started sleeping indefinitely.
   I checked the per-CPU vmstat and found a total of 69 counts not
   yet folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   includes the pending counts in the per-CPU vmstat counters.
   Therefore kswapd could correctly see the 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because
   69 was greater than the high watermark of 32.

Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
provides a more precise view of the vmstat counters.
allow_direct_reclaim() is only called from try_to_free_pages(), which
is not a hot path.

Testing: due to difficulties accessing the system, it has not been
possible for the reporter to test the patch (however, it is clear from
the available data and analysis that it should fix the problem).

Reviewed-by: Michal Hocko
Reviewed-by: Aaron Tomlin
Signed-off-by: Marcelo Tosatti

---

Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */
Message-ID: <20230605190132.059270652@redhat.com>
User-Agent: quilt/0.67
Date: Mon, 05 Jun 2023 15:56:29 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
 Michal Hocko, Marcelo Tosatti
Subject: [PATCH v3 2/3] vmstat: skip periodic vmstat update for isolated CPUs
References: <20230605185627.923698377@redhat.com>

Problem: the interruption caused by vmstat_update is undesirable for
certain applications, such as workloads running on CPUs isolated with
nohz_full mode in order to shield them from any kernel interruption.
One example is a VM running a time-sensitive application with a 50us
maximum acceptable interruption (use case: soft PLC).

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The trace above shows an additional 7us for the
oslat -> kworker -> oslat switches. In the case of a virtualized CPU,
with the vmstat_update interruption taking place in the host (on a
qemu-kvm vCPU thread), the latency penalty observed in the guest is
higher than 50us, violating the acceptable latency threshold.

The isolated vCPU can still perform operations that modify per-CPU
page counters, for example when completing I/O operations:

      CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
      CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
 => 0xffffffffc042b083
 => mod_zone_page_state
 => __folio_end_writeback
 => folio_end_writeback
 => iomap_finish_ioend
 => blk_mq_end_request_batch
 => nvme_irq
 => __handle_irq_event_percpu
 => handle_irq_event
 => handle_edge_irq
 => __common_interrupt
 => common_interrupt
 => asm_common_interrupt
 => vmx_do_interrupt_nmi_irqoff
 => vmx_handle_exit_irqoff
 => vcpu_enter_guest
 => vcpu_run
 => kvm_arch_vcpu_ioctl_run
 => kvm_vcpu_ioctl
 => __x64_sys_ioctl
 => do_syscall_64
 => entry_SYSCALL_64_after_hwframe

In-kernel users of vmstat counters either require the precise value,
in which case they use the zone_page_state_snapshot interface, or they
can live with an imprecision, as the regular flushing can happen at an
arbitrary time and the cumulative error can grow (see
calculate_normal_threshold).

From that POV the regular flushing can be postponed for CPUs that have
been isolated from kernel interference, without critical infrastructure
ever noticing. Skip the regular flushing from vmstat_shepherd for all
isolated CPUs to avoid interference with the isolated workload.

Suggested-by: Michal Hocko
Acked-by: Michal Hocko
Signed-off-by: Marcelo Tosatti

---
v3: improve changelog (Michal Hocko)
v2: use cpu_is_isolated (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@
 #include <linux/mm_inline.h>
 #include <linux/page_ext.h>
 #include <linux/page_owner.h>
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -2022,6 +2023,20 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		/*
+		 * In kernel users of vmstat counters either require the precise
+		 * value and they are using zone_page_state_snapshot interface or
+		 * they can live with an imprecision as the regular flushing can
+		 * happen at arbitrary time and cumulative error can grow (see
+		 * calculate_normal_threshold).
+		 *
+		 * From that POV the regular flushing can be postponed for CPUs
+		 * that have been isolated from the kernel interference without
+		 * critical infrastructure ever noticing. Skip regular flushing
+		 * from vmstat_shepherd for all isolated CPUs to avoid
+		 * interference with the isolated workload.
+		 */
+		if (cpu_is_isolated(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
Message-ID: <20230605190132.087124739@redhat.com>
User-Agent: quilt/0.67
Date: Mon, 05 Jun 2023 15:56:30 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
 Michal Hocko, Marcelo Tosatti
Subject: [PATCH v3 3/3] mm/vmstat: do not refresh stats for isolated CPUs
References: <20230605185627.923698377@redhat.com>

The schedule_work_on() API uses the workqueue mechanism to queue a
work item on a per-CPU queue. A kernel thread, which runs on the
target CPU, executes those work items. Therefore, when
schedule_work_on() is used, the kworker kernel thread must be
scheduled in on that CPU for the work function to execute.

Time-sensitive applications such as SoftPLCs
(https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf)
have their response times affected by such interruptions.

The /proc/sys/vm/stat_refresh file was originally introduced with the
goal to:

"Provide /proc/sys/vm/stat_refresh to force an immediate update of
per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
before checking counts when testing.
Originally added to work around a bug which left counts stranded
indefinitely on a cpu going idle (an inaccuracy magnified when small
below-batch numbers represent "huge" amounts of memory), but I believe
that bug is now fixed: nonetheless, this is still a useful knob."

Other than the potential interruption to a time-sensitive application,
if SCHED_FIFO or SCHED_RR priority is used on the isolated CPU, system
hangs can occur:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688

To avoid the problems above, do not schedule the work to synchronize
per-CPU mm counters on isolated CPUs. Given the possibility of
breaking existing userspace applications, avoid returning errors
from access to /proc/sys/vm/stat_refresh.

Signed-off-by: Marcelo Tosatti

---
v3: improve changelog (Michal Hocko)
v2: opencode schedule_on_each_cpu (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
-	int err;
 	int i;
+	int cpu;
+	struct work_struct __percpu *works;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
-	if (err)
-		return err;
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		struct work_struct *work;
+
+		if (cpu_is_isolated(cpu))
+			continue;
+		work = per_cpu_ptr(works, cpu);
+		INIT_WORK(work, refresh_vm_stats);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu) {
+		if (cpu_is_isolated(cpu))
+			continue;
+		flush_work(per_cpu_ptr(works, cpu));
+	}
+	cpus_read_unlock();
+	free_percpu(works);
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 		/*
 		 * Skip checking stats known to go negative occasionally.