From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
 Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
 Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 01/10] mm, pcp: avoid to drain PCP when process exit
Date: Tue, 26 Sep 2023 14:09:02 +0800
Message-Id: <20230926060911.266511-2-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
when the PCP is mostly used for freeing high-order pages, to improve
the reuse of cache-hot pages between the page-allocating and
page-freeing CPUs.  But this draining mechanism may be triggered
unexpectedly when a process exits.
With some customized trace points, it was found that PCP draining
(free_high == true) was triggered by order-1 page freeing with the
following call stack:

 => free_unref_page_commit
 => free_unref_page
 => __mmdrop
 => exit_mm
 => do_exit
 => do_group_exit
 => __x64_sys_exit_group
 => do_syscall_64

Checking the source code, this is the page table PGD freeing
(mm_free_pgd()).  It's an order-1 page freeing if
CONFIG_PAGE_TABLE_ISOLATION=y, which is a common configuration for
security.

Just before that, page freeing with the following call stack was
found:

 => free_unref_page_commit
 => free_unref_page_list
 => release_pages
 => tlb_batch_pages_flush
 => tlb_finish_mmu
 => exit_mmap
 => __mmput
 => exit_mm
 => do_exit
 => do_group_exit
 => __x64_sys_exit_group
 => do_syscall_64

So, when a process exits,

- a large number of user pages of the process will be freed without
  page allocation, so it's highly possible that pcp->free_factor
  becomes > 0;

- after freeing all user pages, the PGD will be freed, which is an
  order-1 page freeing, so the PCP will be drained.

All in all, when a process exits, it's highly possible that the PCP
will be drained.  This is unexpected behavior.

To avoid this, in this patch, PCP draining is only triggered for two
consecutive high-order page freeings.

On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups.  This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the cycles% of the spinlock contention (mostly for
zone lock) decreases from 13.5% to 10.6% (with PCP size == 361).  The
number of PCP drains for high-order page freeing (free_high)
decreases by 80.8%.

This helps network workloads too, due to the reduced zone lock
contention.  On a 2-socket Intel server with 128 logical CPUs, with
the patch, the network bandwidth of the UNIX (AF_UNIX) test case of
the lmbench test suite with 16-pair processes increases by 17.1%.
The cycles% of the spinlock contention (mostly for zone lock)
decreases from 50.0% to 45.8%.  The number of PCP drains for
high-order page freeing (free_high) decreases by 27.4%.  The cache
miss rate stays at 0.3%.
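To see the heuristic in isolation, here is a minimal userspace model
of the PCPF_PREV_FREE_HIGH_ORDER logic that the diff below adds.  It
is a sketch, not kernel code: struct pcp_model stands in for struct
per_cpu_pages, and locking and page counts are omitted.

    /* Simplified model of the "two consecutive high-order frees"
     * heuristic; hypothetical stand-in types, not the kernel's. */
    #include <stdbool.h>
    #include <stdio.h>

    #define PCPF_PREV_FREE_HIGH_ORDER 0x01
    #define PAGE_ALLOC_COSTLY_ORDER   3

    struct pcp_model {
            unsigned char flags;
            int free_factor;
    };

    static bool want_free_high(struct pcp_model *pcp, int order)
    {
            bool free_high = false;

            if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
                    /* Drain only if the previous free was high-order too. */
                    free_high = pcp->free_factor &&
                                (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER);
                    pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
            } else {
                    /* An order-0 free breaks the streak. */
                    pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
            }
            return free_high;
    }

    int main(void)
    {
            struct pcp_model pcp = { .flags = 0, .free_factor = 1 };

            /* exit_mmap() frees many order-0 user pages... */
            want_free_high(&pcp, 0);
            /* ...then mm_free_pgd() frees one order-1 page: no drain. */
            printf("drain? %d\n", want_free_high(&pcp, 1));
            return 0;
    }

With this model, the lone order-1 PGD free at process exit is
preceded by order-0 frees from exit_mmap(), so the streak flag is
clear and no drain is requested, which is exactly the fix described
above.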
Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Mel Gorman Cc: Vlastimil Babka Cc: David Hildenbrand Cc: Johannes Weiner Cc: Dave Hansen Cc: Michal Hocko Cc: Pavel Tatashin Cc: Matthew Wilcox Cc: Christoph Lameter --- include/linux/mmzone.h | 5 ++++- mm/page_alloc.c | 11 ++++++++--- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 4106fbc5b4b3..64d5ed2bb724 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -676,12 +676,15 @@ enum zone_watermarks { #define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boos= t) #define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost) =20 +#define PCPF_PREV_FREE_HIGH_ORDER 0x01 + struct per_cpu_pages { spinlock_t lock; /* Protects lists field */ int count; /* number of pages in the list */ int high; /* high watermark, emptying needed */ int batch; /* chunk size for buddy add/remove */ - short free_factor; /* batch scaling factor during free */ + u8 flags; /* protected by pcp->lock */ + u8 free_factor; /* batch scaling factor during free */ #ifdef CONFIG_NUMA short expire; /* When 0, remote pagesets are drained */ #endif diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 95546f376302..295e61f0c49d 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2370,7 +2370,7 @@ static void free_unref_page_commit(struct zone *zone,= struct per_cpu_pages *pcp, { int high; int pindex; - bool free_high; + bool free_high =3D false; =20 __count_vm_events(PGFREE, 1 << order); pindex =3D order_to_pindex(migratetype, order); @@ -2383,8 +2383,13 @@ static void free_unref_page_commit(struct zone *zone= , struct per_cpu_pages *pcp, * freeing without allocation. The remainder after bulk freeing * stops will be drained from vmstat refresh context. 
         */
-       free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);
-
+       if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
+               free_high = (pcp->free_factor &&
+                            (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER));
+               pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
+       } else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
+               pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
+       }
        high = nr_pcp_high(pcp, zone, free_high);
        if (pcp->count >= high) {
                free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
-- 
2.39.2

From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Sudeep Holla, Mel Gorman, Vlastimil Babka,
 David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko,
 Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 02/10] cacheinfo: calculate per-CPU data cache size
Date: Tue, 26 Sep 2023 14:09:03 +0800
Message-Id: <20230926060911.266511-3-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

Per-CPU data cache size is useful information.  For example, it can
be used to decide how many pages to keep cached in the per-CPU
pageset (PCP) before draining, as done later in this series.
So, in this patch, the data cache size for each CPU is calculated as
the sum, over all data and unified cache leaves, of
leaf size / shared_cpu_weight (the number of CPUs sharing that leaf).
A brute-force iteration over all online CPUs is used, to avoid
allocating an extra cpumask, especially in the CPU offline callback.

Signed-off-by: "Huang, Ying"
Cc: Sudeep Holla
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 drivers/base/cacheinfo.c  | 42 ++++++++++++++++++++++++++++++++++++++-
 include/linux/cacheinfo.h |  1 +
 2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index cbae8be1fe52..3e8951a3fbab 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
        return rc;
 }
 
+static void update_data_cache_size_cpu(unsigned int cpu)
+{
+       struct cpu_cacheinfo *ci;
+       struct cacheinfo *leaf;
+       unsigned int i, nr_shared;
+       unsigned int size_data = 0;
+
+       if (!per_cpu_cacheinfo(cpu))
+               return;
+
+       ci = ci_cacheinfo(cpu);
+       for (i = 0; i < cache_leaves(cpu); i++) {
+               leaf = per_cpu_cacheinfo_idx(cpu, i);
+               if (leaf->type != CACHE_TYPE_DATA &&
+                   leaf->type != CACHE_TYPE_UNIFIED)
+                       continue;
+               nr_shared = cpumask_weight(&leaf->shared_cpu_map);
+               if (!nr_shared)
+                       continue;
+               size_data += leaf->size / nr_shared;
+       }
+       ci->size_data = size_data;
+}
+
+static void update_data_cache_size(bool cpu_online, unsigned int cpu)
+{
+       unsigned int icpu;
+
+       for_each_online_cpu(icpu) {
+               if (!cpu_online && icpu == cpu)
+                       continue;
+               update_data_cache_size_cpu(icpu);
+       }
+}
+
 static int cacheinfo_cpu_online(unsigned int cpu)
 {
        int rc = detect_cache_attributes(cpu);
@@ -906,7 +941,11 @@ static int cacheinfo_cpu_online(unsigned int cpu)
                return rc;
        rc = cache_add_dev(cpu);
        if (rc)
-               free_cache_attributes(cpu);
+               goto err;
+       update_data_cache_size(true, cpu);
+       return 0;
+err:
+       free_cache_attributes(cpu);
        return rc;
 }
 
@@ -916,6 +955,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
        cpu_cache_sysfs_exit(cpu);
 
        free_cache_attributes(cpu);
+       update_data_cache_size(false, cpu);
        return 0;
 }
 
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index a5cfd44fab45..4e7ccfa0c36d 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -73,6 +73,7 @@ struct cacheinfo {
 
 struct cpu_cacheinfo {
        struct cacheinfo *info_list;
+       unsigned int size_data;
        unsigned int num_levels;
        unsigned int num_leaves;
        bool cpu_map_populated;
-- 
2.39.2
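As a worked example of the calculation in update_data_cache_size_cpu()
above: the per-CPU share of each data/unified leaf is its size divided
by the number of CPUs sharing it.  The topology numbers below are
hypothetical, not from any particular machine.

    /* Worked example of "sum of leaf size / shared_cpu_weight";
     * the {size, sharers} pairs are made up for illustration. */
    #include <stdio.h>

    int main(void)
    {
            /* {size_in_kb, cpus_sharing}: L1d, L2 (unified), L3 (unified) */
            int leaves[3][2] = { {48, 1}, {2048, 1}, {61440, 32} };
            int i, size_data = 0;

            for (i = 0; i < 3; i++)
                    size_data += leaves[i][0] / leaves[i][1];

            /* 48 + 2048 + 61440/32 = 48 + 2048 + 1920 = 4016 KB per CPU */
            printf("size_data = %d KB\n", size_data);
            return 0;
    }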
From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
 Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
 Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 03/10] mm, pcp: reduce lock contention for draining
 high-order pages
Date: Tue, 26 Sep 2023 14:09:04 +0800
Message-Id: <20230926060911.266511-4-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
when the PCP is mostly used for freeing high-order pages, to improve
the reuse of cache-hot pages between the page-allocating and
page-freeing CPUs.

On systems with a small per-CPU data cache, pages shouldn't be cached
before draining, to guarantee that they stay cache-hot.  But on
systems with a large per-CPU data cache, more pages can be cached
before draining, to reduce zone lock contention.

So, in this patch, instead of draining without any caching, "batch"
pages will be cached in the PCP before draining if the per-CPU data
cache size is more than "4 * batch".

On a 2-socket Intel server with 128 logical CPUs, with the patch, the
network bandwidth of the UNIX (AF_UNIX) test case of the lmbench test
suite with 16-pair processes increases by 72.2%.  The cycles% of the
spinlock contention (mostly for zone lock) decreases from 45.8% to
21.2%.  The number of PCP drains for high-order page freeing
(free_high) decreases by 89.8%.  The cache miss rate stays at 0.3%.
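A back-of-the-envelope check of the "4 * batch" threshold, assuming
4 KB pages and the default batch of 63 mentioned in the next patch;
the per-CPU cache size below is hypothetical.  The kernel comparison
itself is `(cci->size_data >> PAGE_SHIFT) > 4 * pcp->batch` in the
diff below.

    /* Sketch: is the per-CPU data cache big enough to cache "batch"
     * high-order pages before draining?  Numbers are assumptions. */
    #include <stdio.h>

    int main(void)
    {
            unsigned int page_kb = 4, batch = 63;
            unsigned int size_data_kb = 4016;       /* per-CPU data cache */
            unsigned int threshold = 4 * batch;     /* in pages: 252 */

            /* 4016 KB / 4 KB = 1004 pages > 252, so PCPF_FREE_HIGH_BATCH
             * would be set and up to "batch" pages stay in the PCP. */
            printf("%s\n", (size_data_kb / page_kb) > threshold ?
                   "cache pages before draining" : "drain without caching");
            return 0;
    }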
Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Mel Gorman Cc: Vlastimil Babka Cc: David Hildenbrand Cc: Johannes Weiner Cc: Dave Hansen Cc: Michal Hocko Cc: Pavel Tatashin Cc: Matthew Wilcox Cc: Christoph Lameter --- drivers/base/cacheinfo.c | 2 ++ include/linux/gfp.h | 1 + include/linux/mmzone.h | 1 + mm/page_alloc.c | 37 ++++++++++++++++++++++++++++++++++++- 4 files changed, 40 insertions(+), 1 deletion(-) diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c index 3e8951a3fbab..a55b2f83958b 100644 --- a/drivers/base/cacheinfo.c +++ b/drivers/base/cacheinfo.c @@ -943,6 +943,7 @@ static int cacheinfo_cpu_online(unsigned int cpu) if (rc) goto err; update_data_cache_size(true, cpu); + setup_pcp_cacheinfo(); return 0; err: free_cache_attributes(cpu); @@ -956,6 +957,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu) =20 free_cache_attributes(cpu); update_data_cache_size(false, cpu); + setup_pcp_cacheinfo(); return 0; } =20 diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 665f06675c83..665edc11fb9f 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -325,6 +325,7 @@ void drain_all_pages(struct zone *zone); void drain_local_pages(struct zone *zone); =20 void page_alloc_init_late(void); +void setup_pcp_cacheinfo(void); =20 /* * gfp_allowed_mask is set to GFP_BOOT_MASK during early boot to restrict = what diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 64d5ed2bb724..4132e7490b49 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -677,6 +677,7 @@ enum zone_watermarks { #define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost) =20 #define PCPF_PREV_FREE_HIGH_ORDER 0x01 +#define PCPF_FREE_HIGH_BATCH 0x02 =20 struct per_cpu_pages { spinlock_t lock; /* Protects lists field */ diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 295e61f0c49d..e97814985710 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -52,6 +52,7 @@ #include #include #include +#include #include #include "internal.h" #include "shuffle.h" @@ -2385,7 +2386,9 @@ static void free_unref_page_commit(struct zone *zone,= struct per_cpu_pages *pcp, */ if (order && order <=3D PAGE_ALLOC_COSTLY_ORDER) { free_high =3D (pcp->free_factor && - (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER)); + (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) && + (!(pcp->flags & PCPF_FREE_HIGH_BATCH) || + pcp->count >=3D READ_ONCE(pcp->batch))); pcp->flags |=3D PCPF_PREV_FREE_HIGH_ORDER; } else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) { pcp->flags &=3D ~PCPF_PREV_FREE_HIGH_ORDER; @@ -5418,6 +5421,38 @@ static void zone_pcp_update(struct zone *zone, int c= pu_online) mutex_unlock(&pcp_batch_high_lock); } =20 +static void zone_pcp_update_cacheinfo(struct zone *zone) +{ + int cpu; + struct per_cpu_pages *pcp; + struct cpu_cacheinfo *cci; + + for_each_online_cpu(cpu) { + pcp =3D per_cpu_ptr(zone->per_cpu_pageset, cpu); + cci =3D get_cpu_cacheinfo(cpu); + /* + * If per-CPU data cache is large enough, up to + * "batch" high-order pages can be cached in PCP for + * consecutive freeing. This can reduce zone lock + * contention without hurting cache-hot pages sharing. + */ + spin_lock(&pcp->lock); + if ((cci->size_data >> PAGE_SHIFT) > 4 * pcp->batch) + pcp->flags |=3D PCPF_FREE_HIGH_BATCH; + else + pcp->flags &=3D ~PCPF_FREE_HIGH_BATCH; + spin_unlock(&pcp->lock); + } +} + +void setup_pcp_cacheinfo(void) +{ + struct zone *zone; + + for_each_populated_zone(zone) + zone_pcp_update_cacheinfo(zone); +} + /* * Allocate per cpu pagesets and initialize them. 
  * Before this call only boot pagesets were available.
-- 
2.39.2

From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
 Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
 Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 04/10] mm: restrict the pcp batch scale factor to
 avoid too long latency
Date: Tue, 26 Sep 2023 14:09:05 +0800
Message-Id: <20230926060911.266511-5-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

In the page allocator, the PCP (Per-CPU Pageset) is refilled and
drained in batches to increase page allocation throughput, reduce
page allocation/freeing latency per page, and reduce zone lock
contention.  But a too-large batch size will cause overly long
maximal allocation/freeing latency, which may punish arbitrary users.
So the default batch size is chosen carefully (in zone_batchsize();
the value is 63 for zones > 1GB) to avoid that.
In commit 3b12e7e97938 ("mm/page_alloc: scale the number of pages
that are batch freed"), the batch size is scaled up for large numbers
of page freeings, to improve page freeing performance and reduce zone
lock contention.  A similar optimization can be used for large
numbers of page allocations too.

To find a suitable max batch scale factor (that is, max effective
batch size), some tests and measurements were done on several
machines, as follows.

A set of debug patches were implemented to:

- Set PCP high to 2 * batch, to reduce the effect of PCP high.
- Disable free batch size scaling, to get the raw performance.
- Extract the code run with the zone lock held from rmqueue_bulk()
  and free_pcppages_bulk() into 2 separate functions, to make it easy
  to measure the function run time with the ftrace function_graph
  tracer.
- Hard-code the batch size to 63 (default), 127, 255, 511, 1023,
  2047, 4095.

Then, will-it-scale/page_fault1 is used to generate the page
allocation/freeing workload.  The page allocation/freeing throughput
(page/s) is measured via will-it-scale.  The page allocation/freeing
average latency (alloc/free latency avg, in us) and the
allocation/freeing latency at the 99th percentile (alloc/free latency
99%, in us) are measured with the ftrace function_graph tracer.

The test results are as follows,

Sapphire Rapids Server
======================
Batch   throughput   free latency  free latency  alloc latency  alloc latency
        page/s       avg / us      99% / us      avg / us       99% / us
-----   ----------   ------------  ------------  -------------  -------------
63      513633.4     2.33          3.57          2.67           6.83
127     517616.7     4.35          6.65          4.22           13.03
255     520822.8     8.29          13.32         7.52           25.24
511     524122.0     15.79         23.42         14.02          49.35
1023    525980.5     30.25         44.19         25.36          94.88
2047    526793.6     59.39         84.50         45.22          140.81

Ice Lake Server
===============
Batch   throughput   free latency  free latency  alloc latency  alloc latency
        page/s       avg / us      99% / us      avg / us       99% / us
-----   ----------   ------------  ------------  -------------  -------------
63      620210.3     2.21          3.68          2.02           4.35
127     627003.0     4.09          6.86          3.51           8.28
255     630777.5     7.70          13.50         6.17           15.97
511     633651.5     14.85         22.62         11.66          31.08
1023    637071.1     28.55         42.02         20.81          54.36
2047    638089.7     56.54         84.06         39.28          91.68

Cascade Lake Server
===================
Batch   throughput   free latency  free latency  alloc latency  alloc latency
        page/s       avg / us      99% / us      avg / us       99% / us
-----   ----------   ------------  ------------  -------------  -------------
63      404706.7     3.29          5.03          3.53           4.75
127     422475.2     6.12          9.09          6.36           8.76
255     411522.2     11.68         16.97         10.90          16.39
511     428124.1     22.54         31.28         19.86          32.25
1023    414718.4     43.39         62.52         40.00          66.33
2047    429848.7     86.64         120.34        71.14          106.08

Comet Lake Desktop
==================
Batch   throughput   free latency  free latency  alloc latency  alloc latency
        page/s       avg / us      99% / us      avg / us       99% / us
-----   ----------   ------------  ------------  -------------  -------------
63      795183.13    2.18          3.55          2.03           3.05
127     803067.85    3.91          6.56          3.85           5.52
255     812771.10    7.35          10.80         7.14           10.20
511     817723.48    14.17         27.54         13.43          30.31
1023    818870.19    27.72         40.10         27.89          46.28

Coffee Lake Desktop
===================
Batch   throughput   free latency  free latency  alloc latency  alloc latency
        page/s       avg / us      99% / us      avg / us       99% / us
-----   ----------   ------------  ------------  -------------  -------------
63      510542.8     3.13          4.40          2.48           3.43
127     514288.6     5.97          7.89          4.65           6.04
255     516889.7     11.86         15.58         8.96           12.55
511     519802.4     23.10         28.81         16.95          26.19
1023    520802.7     45.30         52.51         33.19          45.95
2047    519997.1     90.63         104.00        65.26          81.74

From the above data, to restrict the allocation/freeing latency to
less than 100 us most of the time, the max batch scale factor needs
to be less than or equal to 5.  So, in this patch, the batch scale
factor is restricted to be less than or equal to 5.

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 mm/page_alloc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e97814985710..4b601f505401 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -86,6 +86,9 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL            ((__force fpi_t)BIT(1))
 
+/* Maximum PCP batch scale factor to restrict max allocation/freeing latency */
+#define PCP_BATCH_SCALE_MAX    5
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
@@ -2340,7 +2343,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
         * freeing of pages without any allocation.
         */
        batch <<= pcp->free_factor;
-       if (batch < max_nr_free)
+       if (batch < max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
                pcp->free_factor++;
        batch = clamp(batch, min_nr_free, max_nr_free);
 
-- 
2.39.2
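The cap can be sanity-checked with a short standalone sketch: with
the default batch of 63, the effective batch is batch << free_factor,
so a scale factor of 5 tops out at 63 << 5 = 2016 pages, roughly the
2047 rows measured in the tables above (worst-case latencies on the
order of 100 us).  This is illustration only, not kernel code.

    /* Effective batch size for each allowed scale factor, using the
     * default batch of 63 from the commit message above. */
    #include <stdio.h>

    #define PCP_BATCH_SCALE_MAX 5

    int main(void)
    {
            int batch = 63, factor;

            for (factor = 0; factor <= PCP_BATCH_SCALE_MAX; factor++)
                    printf("free_factor=%d -> effective batch %d\n",
                           factor, batch << factor);
            return 0;
    }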
From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
 Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
 Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 05/10] mm, page_alloc: scale the number of pages that
 are batch allocated
Date: Tue, 26 Sep 2023 14:09:06 +0800
Message-Id: <20230926060911.266511-6-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

When a task is allocating a large number of order-0 pages, it may
acquire the zone->lock multiple times, allocating pages in batches.
This may unnecessarily contend on the zone lock when allocating a
very large number of pages.  This patch adapts the batch size based
on the recent allocation pattern, scaling it up for subsequent
allocations.

On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups.  This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the cycles% of the spinlock contention (mostly for
zone lock) decreases from 11.7% to 10.0% (with PCP size == 361).

Signed-off-by: "Huang, Ying"
Suggested-by: Mel Gorman
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  3 ++-
 mm/page_alloc.c        | 52 ++++++++++++++++++++++++++++++++++--------
 2 files changed, 44 insertions(+), 11 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4132e7490b49..4f7420e35fbb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -685,9 +685,10 @@ struct per_cpu_pages {
        int high;               /* high watermark, emptying needed */
        int batch;              /* chunk size for buddy add/remove */
        u8 flags;               /* protected by pcp->lock */
+       u8 alloc_factor;        /* batch scaling factor during allocate */
        u8 free_factor;         /* batch scaling factor during free */
 #ifdef CONFIG_NUMA
-       short expire;           /* When 0, remote pagesets are drained */
+       u8 expire;              /* When 0, remote pagesets are drained */
 #endif
 
        /* Lists of pages, one per migrate type stored on the pcp-lists */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4b601f505401..b9226845abf7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2376,6 +2376,12 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
        int pindex;
        bool free_high = false;
 
+       /*
+        * On freeing, reduce the number of pages that are batch allocated.
+        * See nr_pcp_alloc() where alloc_factor is increased for subsequent
+        * allocations.
+        */
+       pcp->alloc_factor >>= 1;
        __count_vm_events(PGFREE, 1 << order);
        pindex = order_to_pindex(migratetype, order);
        list_add(&page->pcp_list, &pcp->lists[pindex]);
        pcp->count += 1 << order;
@@ -2682,6 +2688,41 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
        return page;
 }
 
+static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
+{
+       int high, batch, max_nr_alloc;
+
+       high = READ_ONCE(pcp->high);
+       batch = READ_ONCE(pcp->batch);
+
+       /* Check for PCP disabled or boot pageset */
+       if (unlikely(high < batch))
+               return 1;
+
+       /*
+        * Double the number of pages allocated each time there is subsequent
+        * refiling of order-0 pages without drain.
+        */
+       if (!order) {
+               max_nr_alloc = max(high - pcp->count - batch, batch);
+               batch <<= pcp->alloc_factor;
+               if (batch <= max_nr_alloc && pcp->alloc_factor < PCP_BATCH_SCALE_MAX)
+                       pcp->alloc_factor++;
+               batch = min(batch, max_nr_alloc);
+       }
+
+       /*
+        * Scale batch relative to order if batch implies free pages
+        * can be stored on the PCP. Batch can be 1 for small zones or
+        * for boot pagesets which should never store free pages as
+        * the pages may belong to arbitrary zones.
+        */
+       if (batch > 1)
+               batch = max(batch >> order, 2);
+
+       return batch;
+}
+
 /* Remove page from the per-cpu list, caller must protect the list */
 static inline
 struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -2694,18 +2735,9 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 
        do {
                if (list_empty(list)) {
-                       int batch = READ_ONCE(pcp->batch);
+                       int batch = nr_pcp_alloc(pcp, order);
                        int alloced;
 
-                       /*
-                        * Scale batch relative to order if batch implies
-                        * free pages can be stored on the PCP. Batch can
-                        * be 1 for small zones or for boot pagesets which
-                        * should never store free pages as the pages may
-                        * belong to arbitrary zones.
-                        */
-                       if (batch > 1)
-                               batch = max(batch >> order, 2);
                        alloced = rmqueue_bulk(zone, order,
                                        batch, list,
                                        migratetype, alloc_flags);
-- 
2.39.2

From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
 Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
 Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 06/10] mm: add framework for PCP high auto-tuning
Date: Tue, 26 Sep 2023 14:09:07 +0800
Message-Id: <20230926060911.266511-7-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

The page allocation performance requirements of different workloads
are usually different.  So we need to tune the PCP (per-CPU pageset)
high value to optimize a workload's page allocation performance.  Now
we have a system-wide sysctl knob (percpu_pagelist_high_fraction) to
tune PCP high by hand.  But it's hard to find the best value by hand,
and one global configuration may not work best for all the different
workloads that run on the same system.  One solution to these issues
is to tune the PCP high of each CPU automatically.
This patch adds the framework for PCP high auto-tuning.  With it,
pcp->high of each CPU will be changed automatically by the tuning
algorithm at runtime.  The minimal high (pcp->high_min) is the
original PCP high value, calculated based on the low watermark pages.
The maximal high (pcp->high_max) is the PCP high value that results
when the percpu_pagelist_high_fraction sysctl knob is set to
MIN_PERCPU_PAGELIST_HIGH_FRACTION, that is, the maximal pcp->high
that can be set via the sysctl knob by hand.

It's possible that PCP high auto-tuning doesn't work well for some
workloads.  So, when PCP high is tuned by hand via the sysctl knob,
the auto-tuning will be disabled, and the PCP high set by hand will
be used instead.

This patch only adds the framework, so pcp->high will always be set
to pcp->high_min (the original default).  The actual auto-tuning
algorithm will be added in the following patches in this series.

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 Documentation/admin-guide/sysctl/vm.rst | 12 +++--
 include/linux/mmzone.h                  |  5 +-
 mm/page_alloc.c                         | 71 ++++++++++++++++---------
 3 files changed, 58 insertions(+), 30 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 45ba1f4dc004..7386366fe114 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -843,10 +843,14 @@ each zone between per-cpu lists.  The batch value of each per-cpu page list
 remains the same regardless of the value of the high fraction so allocation
 latencies are unaffected.
 
-The initial value is zero. Kernel uses this value to set the high pcp->high
-mark based on the low watermark for the zone and the number of local
-online CPUs.  If the user writes '0' to this sysctl, it will revert to
-this default behavior.
+The initial value is zero. With this value, kernel will tune pcp->high
+automatically according to the requirements of workloads.  The lower
+limit of tuning is based on the low watermark for the zone and the
+number of local online CPUs.  The upper limit is the page number when
+the sysctl is set to the minimal value (8).  If the user writes '0' to
+this sysctl, it will revert to this default behavior. In another
+words, if the user write other value, the auto-tuning will be disabled
+and the user specified pcp->high will be used.
 
 
 stat_interval
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4f7420e35fbb..d6cfb5023f3e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -683,6 +683,8 @@ struct per_cpu_pages {
        spinlock_t lock;        /* Protects lists field */
        int count;              /* number of pages in the list */
        int high;               /* high watermark, emptying needed */
+       int high_min;           /* min high watermark */
+       int high_max;           /* max high watermark */
        int batch;              /* chunk size for buddy add/remove */
        u8 flags;               /* protected by pcp->lock */
        u8 alloc_factor;        /* batch scaling factor during allocate */
@@ -842,7 +844,8 @@ struct zone {
         * the high and batch values are copied to individual pagesets for
         * faster access
         */
-       int pageset_high;
+       int pageset_high_min;
+       int pageset_high_max;
        int pageset_batch;
 
 #ifndef CONFIG_SPARSEMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b9226845abf7..df07580dbd53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2353,7 +2353,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
 static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
                       bool free_high)
 {
-       int high = READ_ONCE(pcp->high);
+       int high = READ_ONCE(pcp->high_min);
 
        if (unlikely(!high || free_high))
                return 0;
@@ -2692,7 +2692,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
 {
        int high, batch, max_nr_alloc;
 
-       high = READ_ONCE(pcp->high);
+       high = READ_ONCE(pcp->high_min);
        batch = READ_ONCE(pcp->batch);
 
        /* Check for PCP disabled or boot pageset */
@@ -5298,14 +5298,15 @@ static int zone_batchsize(struct zone *zone)
 }
 
 static int percpu_pagelist_high_fraction;
-static int zone_highsize(struct zone *zone, int batch, int cpu_online)
+static int zone_highsize(struct zone *zone, int batch, int cpu_online,
+                        int high_fraction)
 {
 #ifdef CONFIG_MMU
        int high;
        int nr_split_cpus;
        unsigned long total_pages;
 
-       if (!percpu_pagelist_high_fraction) {
+       if (!high_fraction) {
                /*
                 * By default, the high value of the pcp is based on the zone
                 * low watermark so that if they are full then background
@@ -5318,15 +5319,15 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online)
                 * value is based on a fraction of the managed pages in the
                 * zone.
                 */
-               total_pages = zone_managed_pages(zone) / percpu_pagelist_high_fraction;
+               total_pages = zone_managed_pages(zone) / high_fraction;
        }
 
        /*
         * Split the high value across all online CPUs local to the zone. Note
         * that early in boot that CPUs may not be online yet and that during
         * CPU hotplug that the cpumask is not yet updated when a CPU is being
-        * onlined. For memory nodes that have no CPUs, split pcp->high across
-        * all online CPUs to mitigate the risk that reclaim is triggered
+        * onlined. For memory nodes that have no CPUs, split the high value
+        * across all online CPUs to mitigate the risk that reclaim is triggered
         * prematurely due to pages stored on pcp lists.
         */
        nr_split_cpus = cpumask_weight(cpumask_of_node(zone_to_nid(zone))) + cpu_online;
@@ -5354,19 +5355,21 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online)
 * However, guaranteeing these relations at all times would require e.g. write
 * barriers here but also careful usage of read barriers at the read side, and
 * thus be prone to error and bad for performance. Thus the update only prevents
- * store tearing. Any new users of pcp->batch and pcp->high should ensure they
- * can cope with those fields changing asynchronously, and fully trust only the
- * pcp->count field on the local CPU with interrupts disabled.
+ * store tearing. Any new users of pcp->batch, pcp->high_min and pcp->high_max
+ * should ensure they can cope with those fields changing asynchronously, and
+ * fully trust only the pcp->count field on the local CPU with interrupts
+ * disabled.
 *
 * mutex_is_locked(&pcp_batch_high_lock) required when calling this function
 * outside of boot time (or some other assurance that no concurrent updaters
 * exist).
 */
-static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
-               unsigned long batch)
+static void pageset_update(struct per_cpu_pages *pcp, unsigned long high_min,
+               unsigned long high_max, unsigned long batch)
 {
        WRITE_ONCE(pcp->batch, batch);
-       WRITE_ONCE(pcp->high, high);
+       WRITE_ONCE(pcp->high_min, high_min);
+       WRITE_ONCE(pcp->high_max, high_max);
 }
 
 static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
@@ -5386,20 +5389,21 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat
         * need to be as careful as pageset_update() as nobody can access the
         * pageset yet.
         */
-       pcp->high = BOOT_PAGESET_HIGH;
+       pcp->high_min = BOOT_PAGESET_HIGH;
+       pcp->high_max = BOOT_PAGESET_HIGH;
        pcp->batch = BOOT_PAGESET_BATCH;
        pcp->free_factor = 0;
 }
 
-static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
-               unsigned long batch)
+static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high_min,
+               unsigned long high_max, unsigned long batch)
 {
        struct per_cpu_pages *pcp;
        int cpu;
 
        for_each_possible_cpu(cpu) {
                pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-               pageset_update(pcp, high, batch);
+               pageset_update(pcp, high_min, high_max, batch);
        }
 }
 
@@ -5409,19 +5413,34 @@ static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long h
 */
 static void zone_set_pageset_high_and_batch(struct zone *zone, int cpu_online)
 {
-       int new_high, new_batch;
+       int new_high_min, new_high_max, new_batch;
 
        new_batch = max(1, zone_batchsize(zone));
-       new_high = zone_highsize(zone, new_batch, cpu_online);
+       if (percpu_pagelist_high_fraction) {
+               new_high_min = zone_highsize(zone, new_batch, cpu_online,
+                                            percpu_pagelist_high_fraction);
+               /*
+                * PCP high is tuned manually, disable auto-tuning via
+                * setting high_min and high_max to the manual value.
+                */
+               new_high_max = new_high_min;
+       } else {
+               new_high_min = zone_highsize(zone, new_batch, cpu_online, 0);
+               new_high_max = zone_highsize(zone, new_batch, cpu_online,
+                                            MIN_PERCPU_PAGELIST_HIGH_FRACTION);
+       }
 
-       if (zone->pageset_high == new_high &&
+       if (zone->pageset_high_min == new_high_min &&
+           zone->pageset_high_max == new_high_max &&
            zone->pageset_batch == new_batch)
                return;
 
-       zone->pageset_high = new_high;
+       zone->pageset_high_min = new_high_min;
+       zone->pageset_high_max = new_high_max;
        zone->pageset_batch = new_batch;
 
-       __zone_set_pageset_high_and_batch(zone, new_high, new_batch);
+       __zone_set_pageset_high_and_batch(zone, new_high_min, new_high_max,
+                                         new_batch);
 }
 
 void __meminit setup_zone_pageset(struct zone *zone)
@@ -5529,7 +5548,8 @@ __meminit void zone_pcp_init(struct zone *zone)
         */
        zone->per_cpu_pageset = &boot_pageset;
        zone->per_cpu_zonestats = &boot_zonestats;
-       zone->pageset_high = BOOT_PAGESET_HIGH;
+       zone->pageset_high_min = BOOT_PAGESET_HIGH;
+       zone->pageset_high_max = BOOT_PAGESET_HIGH;
        zone->pageset_batch = BOOT_PAGESET_BATCH;
 
        if (populated_zone(zone))
@@ -6431,13 +6451,14 @@ EXPORT_SYMBOL(free_contig_range);
 void zone_pcp_disable(struct zone *zone)
 {
        mutex_lock(&pcp_batch_high_lock);
-       __zone_set_pageset_high_and_batch(zone, 0, 1);
+       __zone_set_pageset_high_and_batch(zone, 0, 0, 1);
        __drain_all_pages(zone, true);
 }
 
 void zone_pcp_enable(struct zone *zone)
 {
-       __zone_set_pageset_high_and_batch(zone, zone->pageset_high, zone->pageset_batch);
+       __zone_set_pageset_high_and_batch(zone, zone->pageset_high_min,
+               zone->pageset_high_max, zone->pageset_batch);
        mutex_unlock(&pcp_batch_high_lock);
 }
 
-- 
2.39.2
a="892075960" X-IronPort-AV: E=Sophos;i="6.03,177,1694761200"; d="scan'208";a="892075960" Received: from aozhu-mobl.ccr.corp.intel.com (HELO yhuang6-mobl2.ccr.corp.intel.com) ([10.255.31.94]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2023 23:08:48 -0700 From: Huang Ying To: Andrew Morton Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven , Huang Ying , Mel Gorman , Michal Hocko , Vlastimil Babka , David Hildenbrand , Johannes Weiner , Dave Hansen , Pavel Tatashin , Matthew Wilcox , Christoph Lameter Subject: [PATCH -V2 07/10] mm: tune PCP high automatically Date: Tue, 26 Sep 2023 14:09:08 +0800 Message-Id: <20230926060911.266511-8-ying.huang@intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com> References: <20230926060911.266511-1-ying.huang@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" The target to tune PCP high automatically is as follows, - Minimize allocation/freeing from/to shared zone - Minimize idle pages in PCP - Minimize pages in PCP if the system free pages is too few To reach these target, a tuning algorithm as follows is designed, - When we refill PCP via allocating from the zone, increase PCP high. Because if we had larger PCP, we could avoid to allocate from the zone. - In periodic vmstat updating kworker (via refresh_cpu_vm_stats()), decrease PCP high to try to free possible idle PCP pages. - When page reclaiming is active for the zone, stop increasing PCP high in allocating path, decrease PCP high and free some pages in freeing path. So, the PCP high can be tuned to the page allocating/freeing depth of workloads eventually. One issue of the algorithm is that if the number of pages allocated is much more than that of pages freed on a CPU, the PCP high may become the maximal value even if the allocating/freeing depth is small. But this isn't a severe issue, because there are no idle pages in this case. One alternative choice is to increase PCP high when we drain PCP via trying to free pages to the zone, but don't increase PCP high during PCP refilling. This can avoid the issue above. But if the number of pages allocated is much less than that of pages freed on a CPU, there will be many idle pages in PCP and it may be hard to free these idle pages. On a 2-socket Intel server with 224 logical CPU, we run 8 kbuild instances in parallel (each with `make -j 28`) in 8 cgroup. This simulates the kbuild server that is used by 0-Day kbuild service. With the patch, the build time decreases 3.6%. The cycles% of the spinlock contention (mostly for zone lock) decreases from 10.0% to 0.7% (with PCP size =3D=3D 361). The number of PCP draining for high order pages freeing (free_high) decreases 63.4%. The number of pages allocated from zone (instead of from PCP) decreases 80.4%. 
Signed-off-by: "Huang, Ying" Suggested-by: Mel Gorman Suggested-by: Michal Hocko Cc: Andrew Morton Cc: Vlastimil Babka Cc: David Hildenbrand Cc: Johannes Weiner Cc: Dave Hansen Cc: Pavel Tatashin Cc: Matthew Wilcox Cc: Christoph Lameter --- include/linux/gfp.h | 1 + mm/page_alloc.c | 118 ++++++++++++++++++++++++++++++++++---------- mm/vmstat.c | 8 +-- 3 files changed, 98 insertions(+), 29 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 665edc11fb9f..5b917e5b9350 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -320,6 +320,7 @@ extern void page_frag_free(void *addr); #define free_page(addr) free_pages((addr), 0) =20 void page_alloc_init_cpuhp(void); +int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp); void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp); void drain_all_pages(struct zone *zone); void drain_local_pages(struct zone *zone); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index df07580dbd53..0d482a55235b 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2160,6 +2160,40 @@ static int rmqueue_bulk(struct zone *zone, unsigned = int order, return i; } =20 +/* + * Called from the vmstat counter updater to decay the PCP high. + * Return whether there are addition works to do. + */ +int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp) +{ + int high_min, to_drain, batch; + int todo =3D 0; + + high_min =3D READ_ONCE(pcp->high_min); + batch =3D READ_ONCE(pcp->batch); + /* + * Decrease pcp->high periodically to try to free possible + * idle PCP pages. And, avoid to free too many pages to + * control latency. + */ + if (pcp->high > high_min) { + pcp->high =3D max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX), + pcp->high * 4 / 5, high_min); + if (pcp->high > high_min) + todo++; + } + + to_drain =3D pcp->count - pcp->high; + if (to_drain > 0) { + spin_lock(&pcp->lock); + free_pcppages_bulk(zone, to_drain, pcp, 0); + spin_unlock(&pcp->lock); + todo++; + } + + return todo; +} + #ifdef CONFIG_NUMA /* * Called from the vmstat counter updater to drain pagesets of this @@ -2321,14 +2355,13 @@ static bool free_unref_page_prepare(struct page *pa= ge, unsigned long pfn, return true; } =20 -static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high) +static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, boo= l free_high) { int min_nr_free, max_nr_free; - int batch =3D READ_ONCE(pcp->batch); =20 - /* Free everything if batch freeing high-order pages. */ + /* Free as much as possible if batch freeing high-order pages. */ if (unlikely(free_high)) - return pcp->count; + return min(pcp->count, batch << PCP_BATCH_SCALE_MAX); =20 /* Check for PCP disabled or boot pageset */ if (unlikely(high < batch)) @@ -2343,7 +2376,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int= high, bool free_high) * freeing of pages without any allocation. 
*/ batch <<=3D pcp->free_factor; - if (batch < max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX) + if (batch <=3D max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX) pcp->free_factor++; batch =3D clamp(batch, min_nr_free, max_nr_free); =20 @@ -2351,28 +2384,47 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, i= nt high, bool free_high) } =20 static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone, - bool free_high) + int batch, bool free_high) { - int high =3D READ_ONCE(pcp->high_min); + int high, high_min, high_max; =20 - if (unlikely(!high || free_high)) + high_min =3D READ_ONCE(pcp->high_min); + high_max =3D READ_ONCE(pcp->high_max); + high =3D pcp->high =3D clamp(pcp->high, high_min, high_max); + + if (unlikely(!high)) return 0; =20 - if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) - return high; + if (unlikely(free_high)) { + pcp->high =3D max(high - (batch << PCP_BATCH_SCALE_MAX), high_min); + return 0; + } =20 /* * If reclaim is active, limit the number of pages that can be * stored on pcp lists */ - return min(READ_ONCE(pcp->batch) << 2, high); + if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) { + pcp->high =3D max(high - (batch << pcp->free_factor), high_min); + return min(batch << 2, pcp->high); + } + + if (pcp->count >=3D high && high_min !=3D high_max) { + int need_high =3D (batch << pcp->free_factor) + batch; + + /* pcp->high should be large enough to hold batch freed pages */ + if (pcp->high < need_high) + pcp->high =3D clamp(need_high, high_min, high_max); + } + + return high; } =20 static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages= *pcp, struct page *page, int migratetype, unsigned int order) { - int high; + int high, batch; int pindex; bool free_high =3D false; =20 @@ -2387,6 +2439,7 @@ static void free_unref_page_commit(struct zone *zone,= struct per_cpu_pages *pcp, list_add(&page->pcp_list, &pcp->lists[pindex]); pcp->count +=3D 1 << order; =20 + batch =3D READ_ONCE(pcp->batch); /* * As high-order pages other than THP's stored on PCP can contribute * to fragmentation, limit the number stored when PCP is heavily @@ -2397,14 +2450,15 @@ static void free_unref_page_commit(struct zone *zon= e, struct per_cpu_pages *pcp, free_high =3D (pcp->free_factor && (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) && (!(pcp->flags & PCPF_FREE_HIGH_BATCH) || - pcp->count >=3D READ_ONCE(pcp->batch))); + pcp->count >=3D READ_ONCE(batch))); pcp->flags |=3D PCPF_PREV_FREE_HIGH_ORDER; } else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) { pcp->flags &=3D ~PCPF_PREV_FREE_HIGH_ORDER; } - high =3D nr_pcp_high(pcp, zone, free_high); + high =3D nr_pcp_high(pcp, zone, batch, free_high); if (pcp->count >=3D high) { - free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex); + free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high), + pcp, pindex); } } =20 @@ -2688,24 +2742,38 @@ struct page *rmqueue_buddy(struct zone *preferred_z= one, struct zone *zone, return page; } =20 -static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order) +static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int = order) { - int high, batch, max_nr_alloc; + int high, base_batch, batch, max_nr_alloc; + int high_max, high_min; =20 - high =3D READ_ONCE(pcp->high_min); - batch =3D READ_ONCE(pcp->batch); + base_batch =3D READ_ONCE(pcp->batch); + high_min =3D READ_ONCE(pcp->high_min); + high_max =3D READ_ONCE(pcp->high_max); + high =3D pcp->high =3D clamp(pcp->high, high_min, high_max); =20 /* Check for PCP disabled or boot pageset */ - if 
(unlikely(high < batch)) + if (unlikely(high < base_batch)) return 1; =20 + if (order) + batch =3D base_batch; + else + batch =3D (base_batch << pcp->alloc_factor); + /* - * Double the number of pages allocated each time there is subsequent - * refiling of order-0 pages without drain. + * If we had larger pcp->high, we could avoid to allocate from + * zone. */ + if (high_min !=3D high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags= )) + high =3D pcp->high =3D min(high + batch, high_max); + if (!order) { - max_nr_alloc =3D max(high - pcp->count - batch, batch); - batch <<=3D pcp->alloc_factor; + max_nr_alloc =3D max(high - pcp->count - base_batch, base_batch); + /* + * Double the number of pages allocated each time there is + * subsequent refiling of order-0 pages without drain. + */ if (batch <=3D max_nr_alloc && pcp->alloc_factor < PCP_BATCH_SCALE_MAX) pcp->alloc_factor++; batch =3D min(batch, max_nr_alloc); @@ -2735,7 +2803,7 @@ struct page *__rmqueue_pcplist(struct zone *zone, uns= igned int order, =20 do { if (list_empty(list)) { - int batch =3D nr_pcp_alloc(pcp, order); + int batch =3D nr_pcp_alloc(pcp, zone, order); int alloced; =20 alloced =3D rmqueue_bulk(zone, order, diff --git a/mm/vmstat.c b/mm/vmstat.c index 00e81e99c6ee..2f716ad14168 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -814,9 +814,7 @@ static int refresh_cpu_vm_stats(bool do_pagesets) =20 for_each_populated_zone(zone) { struct per_cpu_zonestat __percpu *pzstats =3D zone->per_cpu_zonestats; -#ifdef CONFIG_NUMA struct per_cpu_pages __percpu *pcp =3D zone->per_cpu_pageset; -#endif =20 for (i =3D 0; i < NR_VM_ZONE_STAT_ITEMS; i++) { int v; @@ -832,10 +830,12 @@ static int refresh_cpu_vm_stats(bool do_pagesets) #endif } } -#ifdef CONFIG_NUMA =20 if (do_pagesets) { cond_resched(); + + changes +=3D decay_pcp_high(zone, this_cpu_ptr(pcp)); +#ifdef CONFIG_NUMA /* * Deal with draining the remote pageset of this * processor @@ -862,8 +862,8 @@ static int refresh_cpu_vm_stats(bool do_pagesets) drain_zone_pages(zone, this_cpu_ptr(pcp)); changes++; } - } #endif + } } =20 for_each_online_pgdat(pgdat) { --=20 2.39.2 From nobody Fri Feb 13 10:59:30 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D826AE7D0C5 for ; Tue, 26 Sep 2023 06:10:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233841AbjIZGKq (ORCPT ); Tue, 26 Sep 2023 02:10:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233730AbjIZGKY (ORCPT ); Tue, 26 Sep 2023 02:10:24 -0400 Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC023E5F for ; Mon, 25 Sep 2023 23:10:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695708604; x=1727244604; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Y9KtfnCLPgA/SXtfssQyiyzVZDDHiki6dZr93OTwpTQ=; b=LXGSa84Je5uepGb7AEOrLOXz5KWch/lhkQe7PtQuDbxpxUGLCkUPBMV6 u0foCItjsnM30MkWTkam7bT80hhV9Ah7/k2Ba4e3cCdtzg7LANlWUh5z+ CvCwsEqFhBhvg3CQdfO999ZTanPXu8z3eo6CK+lxP2rAi7sBQc8FAiakw raqPUHiZ3Uzv6/FCBd38mbKFds6SjEWM3wXWEceorYUxHSwlBoGx2coEM khIrPSFbQ4URkwwI1uphqpRHi9K7hGTiGp871hElCRdAi/Qd3QHqWM9JH 
From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven, Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 08/10] mm, pcp: decrease PCP high if free pages < high watermark
Date: Tue, 26 Sep 2023 14:09:09 +0800
Message-Id: <20230926060911.266511-9-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

One target of the PCP is to minimize pages in the PCP if the number of
free pages in the system is too low. To reach that target, when page
reclaim is active for the zone (ZONE_RECLAIM_ACTIVE), we stop
increasing PCP high in the allocating path, and decrease PCP high and
free some pages in the freeing path. But this may be too late, because
background page reclaim may introduce latency for some workloads.

So, in this patch, we detect during page allocation whether the number
of free pages in the zone is below the high watermark. If so, we stop
increasing PCP high in the allocating path, and decrease PCP high and
free some pages in the freeing path. With this, we can reduce the
possibility of premature background page reclaim caused by a too-large
PCP.

The high watermark check is done in the allocating path to reduce the
overhead in the hotter freeing path. A sketch of the check follows.
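Below is a minimal user-space model of this gate. zone_model and its
fields are illustrative stand-ins for the zone flags and the kernel
watermark helpers, not the kernel implementation.

#include <stdbool.h>

struct zone_model {
	long free_pages;
	long high_wmark;
	bool below_high;	/* models the ZONE_BELOW_HIGH zone flag */
};

/* Allocation path: mark the zone when free pages drop below the high watermark. */
static void check_below_high(struct zone_model *z)
{
	if (z->free_pages < z->high_wmark)
		z->below_high = true;
}

/* Freeing path: once enough pages came back, clear the flag again. */
static void maybe_clear_below_high(struct zone_model *z)
{
	if (z->below_high && z->free_pages >= z->high_wmark)
		z->below_high = false;
}

/* PCP growth is gated on the flag, mirroring the nr_pcp_alloc() change. */
static int next_pcp_high(const struct zone_model *z, int high, int batch,
			 int high_max)
{
	if (z->below_high)
		return high;		/* stop growing while memory is tight */
	return high + batch > high_max ? high_max : high + batch;
}

int main(void)
{
	struct zone_model z = { .free_pages = 1000, .high_wmark = 2048 };

	check_below_high(&z);		/* free < high watermark: flag set */
	maybe_clear_below_high(&z);	/* still below, so the flag stays */
	return next_pcp_high(&z, 128, 63, 640) == 128 ? 0 : 1;
}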
Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d6cfb5023f3e..8a19e2af89df 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1006,6 +1006,7 @@ enum zone_flags {
 					 * Cleared when kswapd is woken.
 					 */
 	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
+	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
 };
 
 static inline unsigned long zone_managed_pages(struct zone *zone)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d482a55235b..08b74c65b88a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2409,7 +2409,13 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		return min(batch << 2, pcp->high);
 	}
 
-	if (pcp->count >= high && high_min != high_max) {
+	if (high_min == high_max)
+		return high;
+
+	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
+		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		high = max(pcp->count, high_min);
+	} else if (pcp->count >= high) {
 		int need_high = (batch << pcp->free_factor) + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
@@ -2459,6 +2465,10 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
 				   pcp, pindex);
+		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
+		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
+				      ZONE_MOVABLE, 0))
+			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
 	}
 }
 
@@ -2765,7 +2775,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
 	 * If we had larger pcp->high, we could avoid to allocate from
 	 * zone.
 	 */
-	if (high_min != high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+	if (high_min != high_max && !test_bit(ZONE_BELOW_HIGH, &zone->flags))
 		high = pcp->high = min(high + batch, high_max);
 
 	if (!order) {
@@ -3226,6 +3236,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			}
 		}
 
+		mark = high_wmark_pages(zone);
+		if (zone_watermark_fast(zone, order, mark,
+					ac->highest_zoneidx, alloc_flags,
+					gfp_mask))
+			goto try_this_zone;
+		else if (!test_bit(ZONE_BELOW_HIGH, &zone->flags))
+			set_bit(ZONE_BELOW_HIGH, &zone->flags);
+
 		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
 		if (!zone_watermark_fast(zone, order, mark,
 					 ac->highest_zoneidx, alloc_flags,
-- 
2.39.2
From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven, Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 09/10] mm, pcp: avoid to reduce PCP high unnecessarily
Date: Tue, 26 Sep 2023 14:09:10 +0800
Message-Id: <20230926060911.266511-10-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

In the PCP high auto-tuning algorithm, to minimize idle pages in the
PCP, the periodic vmstat updating kworker (via
refresh_cpu_vm_stats()) decreases PCP high to try to free possible
idle PCP pages. One issue is that even if the page allocating/freeing
depth is larger than the maximal PCP high, we may reduce PCP high
unnecessarily.

To avoid the above issue, in this patch, we track the minimal PCP page
count, and the periodic PCP high decrement will be no more than the
recent minimal PCP page count. So, only pages detected as idle will be
freed (a sketch of the idea follows the performance numbers below).

On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild server used by the 0-Day kbuild service. With
the patch, the number of pages allocated from the zone (instead of
from the PCP) decreases by 21.4%.
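The idea can be modeled in user space as follows. The struct and
helpers are simplified stand-ins for per_cpu_pages and
decay_pcp_high(), not the kernel code.

struct pcp_model {
	int count;	/* pages currently in the PCP */
	int count_min;	/* lowest count seen since the last decay pass */
	int high;
	int high_min;
};

/* Allocation side: track the low-water mark of the PCP page count. */
static void on_alloc(struct pcp_model *pcp, int nr)
{
	pcp->count -= nr;
	if (pcp->count < pcp->count_min)
		pcp->count_min = pcp->count;
}

/*
 * Periodic decay: only pages that stayed in the PCP for the whole
 * period (count_min) are provably idle, so never decrease high by
 * more than that.
 */
static void decay_high(struct pcp_model *pcp)
{
	int decrease = pcp->count_min < pcp->high / 5 ?
		       pcp->count_min : pcp->high / 5;

	if (pcp->high - decrease > pcp->high_min)
		pcp->high -= decrease;
	else
		pcp->high = pcp->high_min;
	pcp->count_min = pcp->count;	/* restart tracking for the next period */
}

int main(void)
{
	struct pcp_model pcp = { .count = 200, .count_min = 200,
				 .high = 300, .high_min = 64 };

	on_alloc(&pcp, 200);	/* the whole PCP was drained... */
	pcp.count = 200;	/* ...and refilled within the period */
	decay_high(&pcp);	/* so count_min == 0 and high stays put */
	return pcp.high == 300 ? 0 : 1;
}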
Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8a19e2af89df..35b78c7522a7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -682,6 +682,7 @@ enum zone_watermarks {
 struct per_cpu_pages {
 	spinlock_t lock;	/* Protects lists field */
 	int count;		/* number of pages in the list */
+	int count_min;		/* minimal number of pages in the list recently */
 	int high;		/* high watermark, emptying needed */
 	int high_min;		/* min high watermark */
 	int high_max;		/* max high watermark */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 08b74c65b88a..d7b602822ab3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2166,19 +2166,20 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 */
 int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
-	int high_min, to_drain, batch;
+	int high_min, decrease, to_drain, batch;
 	int todo = 0;
 
 	high_min = READ_ONCE(pcp->high_min);
 	batch = READ_ONCE(pcp->batch);
 	/*
-	 * Decrease pcp->high periodically to try to free possible
-	 * idle PCP pages. And, avoid to free too many pages to
-	 * control latency.
+	 * Decrease pcp->high periodically to free idle PCP pages counted
+	 * via pcp->count_min. And, avoid to free too many pages to
+	 * control latency. This caps pcp->high decrement too.
 	 */
 	if (pcp->high > high_min) {
+		decrease = min(pcp->count_min, pcp->high / 5);
 		pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
-				 pcp->high * 4 / 5, high_min);
+				 pcp->high - decrease, high_min);
 		if (pcp->high > high_min)
 			todo++;
 	}
@@ -2191,6 +2192,8 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 		todo++;
 	}
 
+	pcp->count_min = pcp->count;
+
 	return todo;
 }
 
@@ -2828,6 +2831,8 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 		page = list_first_entry(list, struct page, pcp_list);
 		list_del(&page->pcp_list);
 		pcp->count -= 1 << order;
+		if (pcp->count < pcp->count_min)
+			pcp->count_min = pcp->count;
 	} while (check_new_pages(page, order));
 
 	return page;
-- 
2.39.2
From nobody Fri Feb 13 10:59:30 2026
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven, Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 10/10] mm, pcp: reduce detecting time of consecutive high order page freeing
Date: Tue, 26 Sep 2023 14:09:11 +0800
Message-Id: <20230926060911.266511-11-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>

In the current PCP auto-tuning design, if many more pages are
allocated than freed on a CPU, PCP high may reach the maximal value
even if the allocating/freeing depth is small, for example, on the
sender side of network workloads. If a CPU that was originally used
as a sender is used as a receiver after a context switch, the whole
PCP must fill up to the maximal high before PCP draining is triggered
for consecutive high-order freeing. This hurts the performance of
some network workloads.

To solve the issue, in this patch, we track consecutive page freeing
with a counter instead of relying on PCP draining, so consecutive page
freeing can be detected much earlier. A sketch of the counter follows.

On a 2-socket Intel server with 128 logical CPUs, we tested the
SCTP_STREAM_MANY test case of the netperf test suite with 64 pairs of
processes. With the patch, network bandwidth improves by 3.1%. This
restores the performance drop caused by PCP auto-tuning.
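Below is a minimal user-space model of the counter. The types and
thresholds are simplified stand-ins for the kernel's free_count
handling, not the kernel implementation.

#define PCP_BATCH_SCALE_MAX	5

struct pcp_model {
	int free_count;	/* pages freed without an intervening allocation */
	int batch;
};

/* Freeing path: the counter ramps up linearly with consecutive frees. */
static void on_free(struct pcp_model *pcp, int nr)
{
	if (pcp->free_count < (pcp->batch << PCP_BATCH_SCALE_MAX))
		pcp->free_count += nr;
}

/* Allocation path: halve the counter, so mixed workloads decay it quickly. */
static void on_alloc(struct pcp_model *pcp)
{
	pcp->free_count >>= 1;
}

/* free_high draining triggers once a full batch was freed consecutively. */
static int free_high_ready(const struct pcp_model *pcp)
{
	return pcp->free_count >= pcp->batch;
}

int main(void)
{
	struct pcp_model pcp = { .free_count = 0, .batch = 63 };
	int i;

	on_alloc(&pcp);		/* an allocation halves the counter (0 stays 0) */
	for (i = 0; i < 32; i++)
		on_free(&pcp, 2);	/* 64 pages freed back-to-back */
	return free_high_ready(&pcp) ? 0 : 1;
}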
Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  2 +-
 mm/page_alloc.c        | 23 +++++++++++------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 35b78c7522a7..44f6dc3cdeeb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -689,10 +689,10 @@ struct per_cpu_pages {
 	int batch;		/* chunk size for buddy add/remove */
 	u8 flags;		/* protected by pcp->lock */
 	u8 alloc_factor;	/* batch scaling factor during allocate */
-	u8 free_factor;		/* batch scaling factor during free */
 #ifdef CONFIG_NUMA
 	u8 expire;		/* When 0, remote pagesets are drained */
 #endif
+	short free_count;	/* consecutive free count */
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[NR_PCP_LISTS];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d7b602822ab3..206ab768ec23 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2375,13 +2375,10 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free_high)
 	max_nr_free = high - batch;
 
 	/*
-	 * Double the number of pages freed each time there is subsequent
-	 * freeing of pages without any allocation.
+	 * Increase the batch number to the number of the consecutive
+	 * freed pages to reduce zone lock contention.
 	 */
-	batch <<= pcp->free_factor;
-	if (batch <= max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
-		pcp->free_factor++;
-	batch = clamp(batch, min_nr_free, max_nr_free);
+	batch = clamp_t(int, pcp->free_count, min_nr_free, max_nr_free);
 
 	return batch;
 }
@@ -2408,7 +2405,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 	 * stored on pcp lists
 	 */
 	if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) {
-		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		pcp->high = max(high - pcp->free_count, high_min);
 		return min(batch << 2, pcp->high);
 	}
 
@@ -2416,10 +2413,10 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		return high;
 
 	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
-		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		pcp->high = max(high - pcp->free_count, high_min);
 		high = max(pcp->count, high_min);
 	} else if (pcp->count >= high) {
-		int need_high = (batch << pcp->free_factor) + batch;
+		int need_high = pcp->free_count + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
 		if (pcp->high < need_high)
@@ -2456,7 +2453,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	 * stops will be drained from vmstat refresh context.
 	 */
 	if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
-		free_high = (pcp->free_factor &&
+		free_high = (pcp->free_count >= batch &&
 			     (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
 			     (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
 			      pcp->count >= READ_ONCE(batch)));
@@ -2464,6 +2461,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
 		pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
 	}
+	if (pcp->free_count < (batch << PCP_BATCH_SCALE_MAX))
+		pcp->free_count += (1 << order);
 	high = nr_pcp_high(pcp, zone, batch, free_high);
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
@@ -2861,7 +2860,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	 * See nr_pcp_free() where free_factor is increased for subsequent
 	 * frees.
 	 */
-	pcp->free_factor >>= 1;
+	pcp->free_count >>= 1;
 	list = &pcp->lists[order_to_pindex(migratetype, order)];
 	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
 	pcp_spin_unlock(pcp);
@@ -5483,7 +5482,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat
 	pcp->high_min = BOOT_PAGESET_HIGH;
 	pcp->high_max = BOOT_PAGESET_HIGH;
 	pcp->batch = BOOT_PAGESET_BATCH;
-	pcp->free_factor = 0;
+	pcp->free_count = 0;
 }
 
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high_min,
-- 
2.39.2