From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 1/6] sched/numa: Document vma_numab_state fields
Date: Tue, 10 Oct 2023 09:31:38 +0100
Message-Id: <20231010083143.19593-2-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

Document the intended usage of the fields.

Signed-off-by: Mel Gorman
---
 include/linux/mm_types.h | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 36c5b43999e6..0fe054afc4d6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -551,9 +551,33 @@ struct vma_lock {
 };
 
 struct vma_numab_state {
-        unsigned long next_scan;
-        unsigned long next_pid_reset;
-        unsigned long access_pids[2];
+        unsigned long next_scan;        /* Initialised as time in
+                                         * jiffies after which VMA
+                                         * should be scanned. Delays
+                                         * first scan of new VMA by at
+                                         * least
+                                         * sysctl_numa_balancing_scan_delay
+                                         */
+        unsigned long next_pid_reset;   /* Time in jiffies when
+                                         * access_pids is reset to
+                                         * detect phase change
+                                         * behaviour.
+                                         */
+        unsigned long access_pids[2];   /* Approximate tracking of PIDs
+                                         * that trapped a NUMA hinting
+                                         * fault. May produce false
+                                         * positives due to hash
+                                         * collisions.
+                                         *
+                                         *   [0] Previous PID tracking
+                                         *   [1] Current PID tracking
+                                         *
+                                         * Window moves after
+                                         * next_pid_reset has expired
+                                         * approximately every
+                                         * VMA_PID_RESET_PERIOD
+                                         * jiffies.
+                                         */
 };
 
 /*
-- 
2.35.3
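[Editorial aside, not part of the series: the relationship between the
documented fields can be sketched numerically. In the standalone C sketch
below, milliseconds stand in for jiffies, the 1000ms value for
sysctl_numa_balancing_scan_delay is only an assumed default, and the 4x
multiplier mirrors the VMA_PID_RESET_PERIOD definition used later in the
series.]

/* Standalone sketch of the vma_numab_state timing documented above.
 * Milliseconds stand in for jiffies; scan_delay_ms is an assumed value.
 */
#include <stdio.h>
#include <stdbool.h>

static bool time_after(unsigned long a, unsigned long b)
{
        return (long)(b - a) < 0;  /* same wrap-safe comparison idea as the kernel macro */
}

int main(void)
{
        unsigned long vma_created_ms = 0;
        unsigned long scan_delay_ms = 1000;  /* assumed sysctl_numa_balancing_scan_delay */
        unsigned long next_scan = vma_created_ms + scan_delay_ms;
        unsigned long pids_reset = next_scan + 4 * scan_delay_ms;  /* VMA_PID_RESET_PERIOD */

        for (unsigned long now = 0; now <= 6000; now += 2000) {
                printf("t=%lums: scan allowed=%d, PID window rotates=%d\n",
                       now, time_after(now, next_scan), time_after(now, pids_reset));
        }
        return 0;
}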

From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 2/6] sched/numa: Rename vma_numab_state.access_pids
Date: Tue, 10 Oct 2023 09:31:39 +0100
Message-Id: <20231010083143.19593-3-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

The access_pids field name is somewhat ambiguous as no PIDs are accessed.
Similarly, it's not clear that next_pid_reset is related to access_pids.
Rename the fields to more accurately reflect their purpose.

Signed-off-by: Mel Gorman
---
 include/linux/mm.h       |  4 ++--
 include/linux/mm_types.h |  4 ++--
 kernel/sched/fair.c      | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf5d0b1b16f4..19fc73b02c9f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1726,8 +1726,8 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
         unsigned int pid_bit;
 
         pid_bit = hash_32(current->pid, ilog2(BITS_PER_LONG));
-        if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) {
-                __set_bit(pid_bit, &vma->numab_state->access_pids[1]);
+        if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->pids_active[1])) {
+                __set_bit(pid_bit, &vma->numab_state->pids_active[1]);
         }
 }
 #else /* !CONFIG_NUMA_BALANCING */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0fe054afc4d6..8cb1dec3e358 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -558,12 +558,12 @@ struct vma_numab_state {
                                          * least
                                          * sysctl_numa_balancing_scan_delay
                                          */
-        unsigned long next_pid_reset;   /* Time in jiffies when
+        unsigned long pids_active_reset; /* Time in jiffies when
                                          * access_pids is reset to
                                          * detect phase change
                                          * behaviour.
                                          */
-        unsigned long access_pids[2];   /* Approximate tracking of PIDs
+        unsigned long pids_active[2];   /* Approximate tracking of PIDs
                                          * that trapped a NUMA hinting
                                          * fault. May produce false
                                          * positives due to hash
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cb225921bbca..81405627b9ed 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3142,7 +3142,7 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
         if (READ_ONCE(current->mm->numa_scan_seq) < 2)
                 return true;
 
-        pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
+        pids = vma->numab_state->pids_active[0] | vma->numab_state->pids_active[1];
         return test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids);
 }
 
@@ -3258,7 +3258,7 @@ static void task_numa_work(struct callback_head *work)
                         msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
 
                         /* Reset happens after 4 times scan delay of scan start */
-                        vma->numab_state->next_pid_reset = vma->numab_state->next_scan +
+                        vma->numab_state->pids_active_reset = vma->numab_state->next_scan +
                                 msecs_to_jiffies(VMA_PID_RESET_PERIOD);
                 }
 
@@ -3279,11 +3279,11 @@ static void task_numa_work(struct callback_head *work)
                  * vma for recent access to avoid clearing PID info before access..
                  */
                 if (mm->numa_scan_seq &&
-                    time_after(jiffies, vma->numab_state->next_pid_reset)) {
-                        vma->numab_state->next_pid_reset = vma->numab_state->next_pid_reset +
+                    time_after(jiffies, vma->numab_state->pids_active_reset)) {
+                        vma->numab_state->pids_active_reset = vma->numab_state->pids_active_reset +
                                 msecs_to_jiffies(VMA_PID_RESET_PERIOD);
-                        vma->numab_state->access_pids[0] = READ_ONCE(vma->numab_state->access_pids[1]);
-                        vma->numab_state->access_pids[1] = 0;
+                        vma->numab_state->pids_active[0] = READ_ONCE(vma->numab_state->pids_active[1]);
+                        vma->numab_state->pids_active[1] = 0;
                 }
 
                 do {
-- 
2.35.3
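[Editorial aside, not part of the series: the pids_active[] scheme above is
a small Bloom-filter-like structure, one bit per hashed PID in an unsigned
long, so distinct PIDs can collide. The user-space model below is only a
sketch; the multiplicative hash stands in for the kernel's hash_32() and
the PID values are made up, so collision behaviour is merely approximated.]

/* User-space model of the pids_active[] windows: bit = hash(pid), 0..63.
 * hash32() below is a stand-in for the kernel's hash_32(), chosen only to
 * make the example self-contained; collisions are inherent either way.
 */
#include <stdio.h>
#include <stdint.h>

static unsigned int hash32(uint32_t pid)
{
        return (pid * 0x61C88647u) >> 26;       /* top 6 bits -> 0..63 */
}

int main(void)
{
        uint64_t pids_active[2] = { 0, 0 };     /* [0] previous, [1] current */
        uint32_t faulting_pids[2] = { 1234, 1298 };     /* arbitrary example PIDs */

        /* Record NUMA-hinting faults in the current window. */
        for (unsigned int i = 0; i < 2; i++)
                pids_active[1] |= 1ull << hash32(faulting_pids[i]);

        /* vma_is_accessed()-style check: either window counts. */
        uint64_t pids = pids_active[0] | pids_active[1];
        uint32_t me = 4321;                     /* a task that never faulted here */
        int hit = !!(pids & (1ull << hash32(me)));
        printf("pid %u considered active: %d (may be a false positive)\n",
               (unsigned int)me, hit);

        /* Window rotation, as done when pids_active_reset expires. */
        pids_active[0] = pids_active[1];
        pids_active[1] = 0;
        return 0;
}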

From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 3/6] sched/numa: Trace decisions related to skipping VMAs
Date: Tue, 10 Oct 2023 09:31:40 +0100
Message-Id: <20231010083143.19593-4-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

NUMA balancing skips or scans VMAs for a variety of reasons. In preparation
for completing scans of VMAs regardless of PID access, trace the reasons
why a VMA was skipped. In a later patch, the tracing will be used to track
if a VMA was forcibly scanned.

Signed-off-by: Mel Gorman
---
 include/linux/sched/numa_balancing.h |  8 +++++
 include/trace/events/sched.h         | 50 ++++++++++++++++++++++++++++
 kernel/sched/fair.c                  | 17 +++++++---
 3 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/numa_balancing.h b/include/linux/sched/numa_balancing.h
index 3988762efe15..c127a1509e2f 100644
--- a/include/linux/sched/numa_balancing.h
+++ b/include/linux/sched/numa_balancing.h
@@ -15,6 +15,14 @@
 #define TNF_FAULT_LOCAL 0x08
 #define TNF_MIGRATE_FAIL 0x10
 
+enum numa_vmaskip_reason {
+        NUMAB_SKIP_UNSUITABLE,
+        NUMAB_SKIP_SHARED_RO,
+        NUMAB_SKIP_INACCESSIBLE,
+        NUMAB_SKIP_SCAN_DELAY,
+        NUMAB_SKIP_PID_INACTIVE,
+};
+
 #ifdef CONFIG_NUMA_BALANCING
 extern void task_numa_fault(int last_node, int node, int pages, int flags);
 extern pid_t task_numa_group_id(struct task_struct *p);
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index fbb99a61f714..b0d0dbf491ea 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -664,6 +664,56 @@ DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa,
              TP_ARGS(src_tsk, src_cpu, dst_tsk, dst_cpu)
 );
 
+#ifdef CONFIG_NUMA_BALANCING
+#define NUMAB_SKIP_REASON                                       \
+        EM( NUMAB_SKIP_UNSUITABLE,      "unsuitable" )          \
+        EM( NUMAB_SKIP_SHARED_RO,       "shared_ro" )           \
+        EM( NUMAB_SKIP_INACCESSIBLE,    "inaccessible" )        \
+        EM( NUMAB_SKIP_SCAN_DELAY,      "scan_delay" )          \
+        EMe(NUMAB_SKIP_PID_INACTIVE,    "pid_inactive" )
+
+/* Redefine for export. */
+#undef EM
+#undef EMe
+#define EM(a, b)        TRACE_DEFINE_ENUM(a);
+#define EMe(a, b)       TRACE_DEFINE_ENUM(a);
+
+NUMAB_SKIP_REASON
+
+/* Redefine for symbolic printing. */
+#undef EM
+#undef EMe
+#define EM(a, b)        { a, b },
+#define EMe(a, b)       { a, b }
+
+TRACE_EVENT(sched_skip_vma_numa,
+
+        TP_PROTO(struct mm_struct *mm, struct vm_area_struct *vma,
+                 enum numa_vmaskip_reason reason),
+
+        TP_ARGS(mm, vma, reason),
+
+        TP_STRUCT__entry(
+                __field(unsigned long, numa_scan_offset)
+                __field(unsigned long, vm_start)
+                __field(unsigned long, vm_end)
+                __field(enum numa_vmaskip_reason, reason)
+        ),
+
+        TP_fast_assign(
+                __entry->numa_scan_offset = mm->numa_scan_offset;
+                __entry->vm_start = vma->vm_start;
+                __entry->vm_end = vma->vm_end;
+                __entry->reason = reason;
+        ),
+
+        TP_printk("numa_scan_offset=%lX vm_start=%lX vm_end=%lX reason=%s",
+                  __entry->numa_scan_offset,
+                  __entry->vm_start,
+                  __entry->vm_end,
+                  __print_symbolic(__entry->reason, NUMAB_SKIP_REASON))
+);
+#endif /* CONFIG_NUMA_BALANCING */
 
 /*
  * Tracepoint for waking a polling cpu without an IPI.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81405627b9ed..0535c57f6a77 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3227,6 +3227,7 @@ static void task_numa_work(struct callback_head *work)
         do {
                 if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
                         is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_UNSUITABLE);
                         continue;
                 }
 
@@ -3237,15 +3238,19 @@ static void task_numa_work(struct callback_head *work)
                  * as migrating the pages will be of marginal benefit.
                  */
                 if (!vma->vm_mm ||
-                    (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ)))
+                    (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ))) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_SHARED_RO);
                         continue;
+                }
 
                 /*
                  * Skip inaccessible VMAs to avoid any confusion between
                  * PROT_NONE and NUMA hinting ptes
                  */
-                if (!vma_is_accessible(vma))
+                if (!vma_is_accessible(vma)) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_INACCESSIBLE);
                         continue;
+                }
 
                 /* Initialise new per-VMA NUMAB state. */
                 if (!vma->numab_state) {
@@ -3267,12 +3272,16 @@ static void task_numa_work(struct callback_head *work)
                  * delay the scan for new VMAs.
                  */
                 if (mm->numa_scan_seq && time_before(jiffies,
-                                vma->numab_state->next_scan))
+                                vma->numab_state->next_scan)) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_SCAN_DELAY);
                         continue;
+                }
 
                 /* Do not scan the VMA if task has not accessed */
-                if (!vma_is_accessed(vma))
+                if (!vma_is_accessed(vma)) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_PID_INACTIVE);
                         continue;
+                }
 
                 /*
                  * RESET access PIDs regularly for old VMAs. Resetting after checking
-- 
2.35.3
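[Editorial aside, not part of the series: the EM()/EMe() pair in this patch
is the usual trace-event X-macro idiom, one list of (enum, string) pairs
expanded more than once. A stripped-down user-space rendering of the same
trick is sketched below; printf stands in for __print_symbolic(), and the
kernel additionally expands the list through TRACE_DEFINE_ENUM() so tooling
can decode the values.]

/* User-space sketch of the EM()/EMe() expansion used by NUMAB_SKIP_REASON. */
#include <stdio.h>

enum numa_vmaskip_reason {
        NUMAB_SKIP_UNSUITABLE,
        NUMAB_SKIP_SHARED_RO,
        NUMAB_SKIP_INACCESSIBLE,
        NUMAB_SKIP_SCAN_DELAY,
        NUMAB_SKIP_PID_INACTIVE,
};

#define NUMAB_SKIP_REASON                               \
        EM( NUMAB_SKIP_UNSUITABLE,   "unsuitable" )     \
        EM( NUMAB_SKIP_SHARED_RO,    "shared_ro" )      \
        EM( NUMAB_SKIP_INACCESSIBLE, "inaccessible" )   \
        EM( NUMAB_SKIP_SCAN_DELAY,   "scan_delay" )     \
        EMe(NUMAB_SKIP_PID_INACTIVE, "pid_inactive" )

/* Expand the list into a value -> string table (the kernel also expands it
 * a first time through TRACE_DEFINE_ENUM() before this second expansion).
 */
#define EM(a, b)        { a, b },
#define EMe(a, b)       { a, b }
static const struct { int val; const char *name; } skip_names[] = {
        NUMAB_SKIP_REASON
};

int main(void)
{
        enum numa_vmaskip_reason reason = NUMAB_SKIP_SCAN_DELAY;

        for (unsigned int i = 0; i < sizeof(skip_names) / sizeof(skip_names[0]); i++)
                if (skip_names[i].val == reason)
                        printf("reason=%s\n", skip_names[i].name);
        return 0;
}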

From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 4/6] sched/numa: Move up the access pid reset logic
Date: Tue, 10 Oct 2023 09:31:41 +0100
Message-Id: <20231010083143.19593-5-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

From: Raghavendra K T

Recent NUMA hinting faulting activity is reset approximately every
VMA_PID_RESET_PERIOD milliseconds. However, if the current task has not
accessed a VMA then the reset check is missed and the reset is potentially
deferred forever.

Check if the PID activity information should be reset before checking if
the current task recently trapped a NUMA hinting fault.

[mgorman@techsingularity.net: Rewrite changelog]
Suggested-by: Mel Gorman
Signed-off-by: Raghavendra K T
Signed-off-by: Mel Gorman
---
 kernel/sched/fair.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0535c57f6a77..05e89a7950d0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3277,16 +3277,7 @@ static void task_numa_work(struct callback_head *work)
                         continue;
                 }
 
-                /* Do not scan the VMA if task has not accessed */
-                if (!vma_is_accessed(vma)) {
-                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_PID_INACTIVE);
-                        continue;
-                }
-
-                /*
-                 * RESET access PIDs regularly for old VMAs. Resetting after checking
-                 * vma for recent access to avoid clearing PID info before access..
-                 */
+                /* RESET access PIDs regularly for old VMAs. */
                 if (mm->numa_scan_seq &&
                     time_after(jiffies, vma->numab_state->pids_active_reset)) {
                         vma->numab_state->pids_active_reset = vma->numab_state->pids_active_reset +
@@ -3295,6 +3286,12 @@ static void task_numa_work(struct callback_head *work)
                         vma->numab_state->pids_active[1] = 0;
                 }
 
+                /* Do not scan the VMA if task has not accessed */
+                if (!vma_is_accessed(vma)) {
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_PID_INACTIVE);
+                        continue;
+                }
+
                 do {
                         start = max(start, vma->vm_start);
                         end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
-- 
2.35.3
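[Editorial aside, not part of the series: the ordering matters because a
"continue" taken on the access check skips everything after it in the loop
body. The toy model below reduces the scan and reset bodies to counters to
show the effect; the iteration count and the always-inactive VMA are
arbitrary assumptions for illustration.]

/* Toy model of the task_numa_work() loop ordering fixed above: if the
 * "has this task accessed the VMA?" check comes first and always fails,
 * the reset branch after it is never reached.
 */
#include <stdio.h>
#include <stdbool.h>

static bool vma_accessed = false;       /* task never faults in this VMA */

static int scan_loop(bool reset_before_access_check, int iterations)
{
        int resets = 0;

        for (int i = 0; i < iterations; i++) {
                if (!reset_before_access_check && !vma_accessed)
                        continue;       /* old order: reset skipped */
                resets++;               /* stands in for the pids_active[] rotation */
                if (reset_before_access_check && !vma_accessed)
                        continue;       /* new order: still no scan, but reset ran */
        }
        return resets;
}

int main(void)
{
        printf("old order: %d resets, new order: %d resets\n",
               scan_loop(false, 10), scan_loop(true, 10));
        return 0;
}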

From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 5/6] sched/numa: Complete scanning of partial VMAs regardless of PID activity
Date: Tue, 10 Oct 2023 09:31:42 +0100
Message-Id: <20231010083143.19593-6-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

NUMA Balancing skips VMAs when the current task has not trapped a NUMA
fault within the VMA. If the VMA is skipped then mm->numa_scan_offset
advances and a task that is trapping faults within the VMA may never
fully update PTEs within the VMA. Force tasks to update PTEs of partially
scanned VMAs. The VMA will be tagged for NUMA hints by some task but this
removes some of the benefit of tracking PID activity within a VMA. A
follow-on patch will mitigate this problem.

The test cases and machines evaluated did not trigger the corner case so
the performance results are neutral with only small changes within the
noise from normal test-to-test variance. However, the next patch makes
the corner case easier to trigger.

Signed-off-by: Mel Gorman
---
 include/linux/sched/numa_balancing.h |  1 +
 include/trace/events/sched.h         |  3 ++-
 kernel/sched/fair.c                  | 18 +++++++++++++++---
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/numa_balancing.h b/include/linux/sched/numa_balancing.h
index c127a1509e2f..7dcc0bdfddbb 100644
--- a/include/linux/sched/numa_balancing.h
+++ b/include/linux/sched/numa_balancing.h
@@ -21,6 +21,7 @@ enum numa_vmaskip_reason {
         NUMAB_SKIP_INACCESSIBLE,
         NUMAB_SKIP_SCAN_DELAY,
         NUMAB_SKIP_PID_INACTIVE,
+        NUMAB_SKIP_IGNORE_PID,
 };
 
 #ifdef CONFIG_NUMA_BALANCING
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index b0d0dbf491ea..27b51c81b106 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -670,7 +670,8 @@ DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa,
         EM( NUMAB_SKIP_SHARED_RO,       "shared_ro" )           \
         EM( NUMAB_SKIP_INACCESSIBLE,    "inaccessible" )        \
         EM( NUMAB_SKIP_SCAN_DELAY,      "scan_delay" )          \
-        EMe(NUMAB_SKIP_PID_INACTIVE,    "pid_inactive" )
+        EM( NUMAB_SKIP_PID_INACTIVE,    "pid_inactive" )        \
+        EMe(NUMAB_SKIP_IGNORE_PID,      "ignore_pid_inactive" )
 
 /* Redefine for export. */
 #undef EM
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 05e89a7950d0..150f01948ec6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3130,7 +3130,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
         p->mm->numa_scan_offset = 0;
 }
 
-static bool vma_is_accessed(struct vm_area_struct *vma)
+static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
 {
         unsigned long pids;
         /*
@@ -3143,7 +3143,19 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
                 return true;
 
         pids = vma->numab_state->pids_active[0] | vma->numab_state->pids_active[1];
-        return test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids);
+        if (test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids))
+                return true;
+
+        /*
+         * Complete a scan that has already started regardless of PID access or
+         * some VMAs may never be scanned in multi-threaded applications
+         */
+        if (mm->numa_scan_offset > vma->vm_start) {
+                trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_IGNORE_PID);
+                return true;
+        }
+
+        return false;
 }
 
 #define VMA_PID_RESET_PERIOD (4 * sysctl_numa_balancing_scan_delay)
@@ -3287,7 +3299,7 @@ static void task_numa_work(struct callback_head *work)
                 }
 
                 /* Do not scan the VMA if task has not accessed */
-                if (!vma_is_accessed(vma)) {
+                if (!vma_is_accessed(mm, vma)) {
                         trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_PID_INACTIVE);
                         continue;
                 }
-- 
2.35.3

From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Raghavendra K T, K Prateek Nayak, Bharata B Rao, Ingo Molnar, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 6/6] sched/numa: Complete scanning of inactive VMAs when there is no alternative
Date: Tue, 10 Oct 2023 09:31:43 +0100
Message-Id: <20231010083143.19593-7-mgorman@techsingularity.net>
In-Reply-To: <20231010083143.19593-1-mgorman@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>

VMAs are skipped if there is no recent fault activity but this represents
a chicken-and-egg problem as there may be no fault activity if the PTEs
are never updated to trap NUMA hints. There is an indirect reliance on
scanning to be forced early in the lifetime of a task but this may fail
to detect changes in phase behaviour. Force inactive VMAs to be scanned
when all other eligible VMAs have been updated within the same scan
sequence.

Test results in general look good with some changes in performance, both
negative and positive, depending on whether the additional scanning and
faulting was beneficial or not to the workload. The autonuma benchmark
workload NUMA01_THREADLOCAL was picked for closer examination. The
workload creates two processes with numerous threads and thread-local
storage that is zero-filled in a loop. It exercises the corner case where
unrelated threads may skip VMAs that are thread-local to another thread
and still has some VMAs that are inactive while the workload executes.

The VMA skipping activity frequency with and without the patch is as
follows;

6.6.0-rc2-sched-numabtrace-v1
     649 reason=scan_delay
    9094 reason=unsuitable
   48915 reason=shared_ro
  143919 reason=inaccessible
  193050 reason=pid_inactive

6.6.0-rc2-sched-numabselective-v1
     146 reason=seq_completed
     622 reason=ignore_pid_inactive
     624 reason=scan_delay
    6570 reason=unsuitable
   16101 reason=shared_ro
   27608 reason=inaccessible
   41939 reason=pid_inactive

Note that with the patch applied, the PID activity is ignored
(ignore_pid_inactive) to ensure a VMA with some activity is completely
scanned. In addition, a small number of VMAs are scanned when no other
eligible VMA is available during a single scan window (seq_completed).
The number of times a VMA is skipped due to no PID activity from the
scanning task (pid_inactive) drops dramatically. It is expected that this
will increase the number of PTEs updated for NUMA hinting, as well as the
number of hinting faults, but these represent PTEs that would otherwise
have been missed. The tradeoff is scan+fault overhead versus improving
locality due to migration.

On a 2-socket Cascade Lake test machine, the time to complete the
workload is as follows;

                                          6.6.0-rc2               6.6.0-rc2
                                sched-numabtrace-v1 sched-numabselective-v1
Min       elsp-NUMA01_THREADLOCAL  174.22 (   0.00%)      117.64 (  32.48%)
Amean     elsp-NUMA01_THREADLOCAL  175.68 (   0.00%)      123.34 *  29.79%*
Stddev    elsp-NUMA01_THREADLOCAL    1.20 (   0.00%)        4.06 (-238.20%)
CoeffVar  elsp-NUMA01_THREADLOCAL    0.68 (   0.00%)        3.29 (-381.70%)
Max       elsp-NUMA01_THREADLOCAL  177.18 (   0.00%)      128.03 (  27.74%)

The time to complete the workload is reduced by almost 30%:

                              6.6.0-rc2               6.6.0-rc2
                    sched-numabtrace-v1 sched-numabselective-v1
Duration User              91201.80                63506.64
Duration System             2015.53                 1819.78
Duration Elapsed            1234.77                  868.37

In this specific case, system CPU time was not increased but it's not
universally true. From vmstat, the NUMA scanning and fault activity is as
follows;

                                      6.6.0-rc2               6.6.0-rc2
                            sched-numabtrace-v1 sched-numabselective-v1
Ops NUMA base-page range updates   64272.00             26374386.00
Ops NUMA PTE updates               36624.00                55538.00
Ops NUMA PMD updates                  54.00                51404.00
Ops NUMA hint faults               15504.00                75786.00
Ops NUMA hint local faults %       14860.00                56763.00
Ops NUMA hint local percent           95.85                   74.90
Ops NUMA pages migrated             1629.00              6469222.00

Both the number of PTE updates and hint faults are dramatically increased.
While this is superficially unfortunate, it represents ranges that were
simply skipped without the patch. As a result of the scanning and hinting
faults, many more pages were also migrated but as the time to completion
is reduced, the overhead is offset by the gain.

Signed-off-by: Mel Gorman
---
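[Editorial aside, not part of the patch: the retry flow added below boils
down to remembering whether any VMA was skipped only because of PID
inactivity, and if so redoing the pass once with that filter disabled,
stopping after a single forced VMA. A toy user-space model of that control
flow is sketched here; the VMA count and activity flags are arbitrary.]

/* Toy model of the retry_pids flow added in this patch: one normal pass
 * honouring the PID filter, then at most one forced pass that scans a
 * single previously skipped VMA so the scan sequence makes progress.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_VMAS 4

int main(void)
{
        bool pid_active[NR_VMAS] = { false, false, false, false };
        bool scanned[NR_VMAS] = { false };
        bool vma_pids_forced = false;
        bool vma_pids_skipped;

retry_pids:
        vma_pids_skipped = false;

        for (int i = 0; i < NR_VMAS; i++) {
                if (scanned[i])
                        continue;               /* prev_scan_seq == numa_scan_seq */
                if (!vma_pids_forced && !pid_active[i]) {
                        vma_pids_skipped = true;        /* NUMAB_SKIP_PID_INACTIVE */
                        continue;
                }
                scanned[i] = true;
                printf("scanned vma %d%s\n", i, vma_pids_forced ? " (forced)" : "");
                if (vma_pids_forced)
                        break;                  /* only force one VMA per pass */
        }

        if (!vma_pids_forced && vma_pids_skipped) {
                vma_pids_forced = true;
                goto retry_pids;
        }
        return 0;
}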
 include/linux/mm_types.h             |  6 +++
 include/linux/sched/numa_balancing.h |  1 +
 include/trace/events/sched.h         |  3 +-
 kernel/sched/fair.c                  | 55 ++++++++++++++++++++++++++--
 4 files changed, 61 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8cb1dec3e358..a123c1a58617 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -578,6 +578,12 @@ struct vma_numab_state {
                                          * VMA_PID_RESET_PERIOD
                                          * jiffies.
                                          */
+        int prev_scan_seq;              /* MM scan sequence ID when
+                                         * the VMA was last completely
+                                         * scanned. A VMA is not
+                                         * eligible for scanning if
+                                         * prev_scan_seq == numa_scan_seq
+                                         */
 };
 
 /*
diff --git a/include/linux/sched/numa_balancing.h b/include/linux/sched/numa_balancing.h
index 7dcc0bdfddbb..b69afb8630db 100644
--- a/include/linux/sched/numa_balancing.h
+++ b/include/linux/sched/numa_balancing.h
@@ -22,6 +22,7 @@ enum numa_vmaskip_reason {
         NUMAB_SKIP_SCAN_DELAY,
         NUMAB_SKIP_PID_INACTIVE,
         NUMAB_SKIP_IGNORE_PID,
+        NUMAB_SKIP_SEQ_COMPLETED,
 };
 
 #ifdef CONFIG_NUMA_BALANCING
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 27b51c81b106..010ba1b7cb0e 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -671,7 +671,8 @@ DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa,
         EM( NUMAB_SKIP_INACCESSIBLE,    "inaccessible" )        \
         EM( NUMAB_SKIP_SCAN_DELAY,      "scan_delay" )          \
         EM( NUMAB_SKIP_PID_INACTIVE,    "pid_inactive" )        \
-        EMe(NUMAB_SKIP_IGNORE_PID,      "ignore_pid_inactive" )
+        EM( NUMAB_SKIP_IGNORE_PID,      "ignore_pid_inactive" ) \
+        EMe(NUMAB_SKIP_SEQ_COMPLETED,   "seq_completed" )
 
 /* Redefine for export. */
 #undef EM
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 150f01948ec6..72ef60f394ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3175,6 +3175,8 @@ static void task_numa_work(struct callback_head *work)
         unsigned long nr_pte_updates = 0;
         long pages, virtpages;
         struct vma_iterator vmi;
+        bool vma_pids_skipped;
+        bool vma_pids_forced = false;
 
         SCHED_WARN_ON(p != container_of(work, struct task_struct, numa_work));
 
@@ -3217,7 +3219,6 @@ static void task_numa_work(struct callback_head *work)
          */
         p->node_stamp += 2 * TICK_NSEC;
 
-        start = mm->numa_scan_offset;
         pages = sysctl_numa_balancing_scan_size;
         pages <<= 20 - PAGE_SHIFT; /* MB in pages */
         virtpages = pages * 8;     /* Scan up to this much virtual space */
@@ -3227,6 +3228,16 @@ static void task_numa_work(struct callback_head *work)
 
         if (!mmap_read_trylock(mm))
                 return;
+
+        /*
+         * VMAs are skipped if the current PID has not trapped a fault within
+         * the VMA recently. Allow scanning to be forced if there is no
+         * suitable VMA remaining.
+         */
+        vma_pids_skipped = false;
+
+retry_pids:
+        start = mm->numa_scan_offset;
         vma_iter_init(&vmi, mm, start);
         vma = vma_next(&vmi);
         if (!vma) {
@@ -3277,6 +3288,13 @@ static void task_numa_work(struct callback_head *work)
                         /* Reset happens after 4 times scan delay of scan start */
                         vma->numab_state->pids_active_reset = vma->numab_state->next_scan +
                                 msecs_to_jiffies(VMA_PID_RESET_PERIOD);
+
+                        /*
+                         * Ensure prev_scan_seq does not match numa_scan_seq
+                         * to prevent VMAs being skipped prematurely on the
+                         * first scan.
+                         */
+                        vma->numab_state->prev_scan_seq = mm->numa_scan_seq - 1;
                 }
 
                 /*
@@ -3298,8 +3316,19 @@ static void task_numa_work(struct callback_head *work)
                         vma->numab_state->pids_active[1] = 0;
                 }
 
-                /* Do not scan the VMA if task has not accessed */
-                if (!vma_is_accessed(mm, vma)) {
+                /* Do not rescan VMAs twice within the same sequence. */
+                if (vma->numab_state->prev_scan_seq == mm->numa_scan_seq) {
+                        mm->numa_scan_offset = vma->vm_end;
+                        trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_SEQ_COMPLETED);
+                        continue;
+                }
+
+                /*
+                 * Do not scan the VMA if task has not accessed unless no other
+                 * VMA candidate exists.
+                 */
+                if (!vma_pids_forced && !vma_is_accessed(mm, vma)) {
+                        vma_pids_skipped = true;
                         trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_PID_INACTIVE);
                         continue;
                 }
@@ -3328,8 +3357,28 @@ static void task_numa_work(struct callback_head *work)
 
                         cond_resched();
                 } while (end != vma->vm_end);
+
+                /* VMA scan is complete, do not scan until next sequence. */
+                vma->numab_state->prev_scan_seq = mm->numa_scan_seq;
+
+                /*
+                 * Only force scan within one VMA at a time to limit the
+                 * cost of scanning a potentially uninteresting VMA.
+                 */
+                if (vma_pids_forced)
+                        break;
         } for_each_vma(vmi, vma);
 
+        /*
+         * If no VMAs are remaining and VMAs were skipped due to the PID
+         * not accessing the VMA previously then force a scan to ensure
+         * forward progress.
+         */
+        if (!vma && !vma_pids_forced && vma_pids_skipped) {
+                vma_pids_forced = true;
+                goto retry_pids;
+        }
+
 out:
         /*
          * It is possible to reach the end of the VMA list but the last few
-- 
2.35.3