From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Matthew Wilcox, Suren Baghdasaryan, Minchan Kim,
    Zhaoyang Huang
Subject: [PATCHv5] mm: skip CMA pages when they are not available
Date: Wed, 31 May 2023 10:51:01 +0800
Message-ID: <1685501461-19290-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaiming of CMA pages by skipping them when
they are not available for the current context. The issue arose from the OOM
report below, which was caused by a large proportion of MIGRATE_CMA pages
among the free pages.

[ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
[ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
[ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
...
[ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
[ 36.234455] [03-19 10:05:52.234]   cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
[ 36.234459] [03-19 10:05:52.234]   node 0: slabs: 53, objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Acked-by: David Hildenbrand
---
v2: update commit message and fix build error when CONFIG_CMA is not set
v3,v4,v5: update code and comments
---
 mm/vmscan.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..972a54d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2193,6 +2193,25 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 
 }
 
+#ifdef CONFIG_CMA
+/*
+ * It is a waste of effort to scan and reclaim CMA pages if they are not
+ * available for the current allocation context. Kswapd cannot be covered
+ * here, as its sc->gfp_mask = GFP_KERNEL cannot distinguish this scenario.
+ */
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	return !current_is_kswapd() &&
+		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+		get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
+}
+#else
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2239,7 +2258,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		if (folio_zonenum(folio) > sc->reclaim_idx ||
+		    skip_cma(folio, sc)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
-- 
1.9.1
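
For anyone who wants to sanity-check the condition, below is a minimal
userspace sketch of the decision table that skip_cma() implements. It is not
kernel code: the reclaim_ctx structure and its is_kswapd/alloc_type fields are
stand-ins invented for this illustration, replacing current_is_kswapd(),
gfp_migratetype(sc->gfp_mask) and get_pageblock_migratetype(&folio->page).

/*
 * Minimal userspace model of the skip_cma() decision added by this patch.
 * Illustration only; the types and helpers below are made-up stand-ins
 * for the kernel interfaces named above.
 */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_CMA };

struct reclaim_ctx {
	bool is_kswapd;              /* stand-in for current_is_kswapd() */
	enum migratetype alloc_type; /* stand-in for gfp_migratetype(sc->gfp_mask) */
};

/* Mirrors the CONFIG_CMA version of skip_cma() in the patch. */
static bool skip_cma(enum migratetype pageblock_type, const struct reclaim_ctx *ctx)
{
	return !ctx->is_kswapd &&
		ctx->alloc_type != MIGRATE_MOVABLE &&
		pageblock_type == MIGRATE_CMA;
}

int main(void)
{
	struct reclaim_ctx kswapd   = { .is_kswapd = true,  .alloc_type = MIGRATE_UNMOVABLE };
	struct reclaim_ctx gfp_noio = { .is_kswapd = false, .alloc_type = MIGRATE_UNMOVABLE };
	struct reclaim_ctx movable  = { .is_kswapd = false, .alloc_type = MIGRATE_MOVABLE };

	/* kswapd keeps scanning CMA folios: its gfp_mask tells us nothing. */
	printf("kswapd,          CMA folio: skip=%d\n", skip_cma(MIGRATE_CMA, &kswapd));
	/* A non-movable direct reclaimer cannot use CMA, so do not reclaim it. */
	printf("unmovable alloc, CMA folio: skip=%d\n", skip_cma(MIGRATE_CMA, &gfp_noio));
	/* A movable allocation can fall back to CMA pageblocks. */
	printf("movable alloc,   CMA folio: skip=%d\n", skip_cma(MIGRATE_CMA, &movable));
	return 0;
}

Built with any C99 compiler, it prints skip=0 for kswapd, skip=1 for a
non-movable (e.g. GFP_NOIO) direct reclaimer and skip=0 for a movable
allocation, which is the behaviour the patch adds to isolate_lru_folios().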