From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Matthew Wilcox, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCHv3] mm: skip CMA pages when they are not available
Date: Mon, 22 May 2023 11:08:02 +0800
Message-ID: <1684724882-22266-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaiming of CMA pages by skipping them when
they are not available to the current allocation context. The change arises
from the OOM issue below, which is caused by a large proportion of
MIGRATE_CMA pages among the free pages.

[   36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
[   36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
[   36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
...
[   36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
[   36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
[   36.234459] [03-19 10:05:52.234] node 0: slabs: 53, objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: update commit message and fix build error when CONFIG_CMA is not set
v3: update code and comments
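Note for reviewers (not part of the commit): the decision made by the new
skip_cma() helper in the diff below can be modeled as a small standalone C
program. This is only an illustrative sketch, not kernel code; the enum
values and the helper name should_skip() are stand-ins for
current_is_kswapd(), gfp_migratetype(sc->gfp_mask) and
get_pageblock_migratetype(page).

/* Standalone model of the skip_cma() decision; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, MT_CMA };

/*
 * A free CMA page can only satisfy a MIGRATE_MOVABLE allocation, so
 * reclaiming CMA pages on behalf of any other direct-reclaim context
 * is wasted effort.  kswapd reclaims for all contexts and never skips.
 */
static bool should_skip(bool is_kswapd, enum migratetype alloc_mt,
			enum migratetype pageblock_mt)
{
	return !is_kswapd && alloc_mt != MT_MOVABLE && pageblock_mt == MT_CMA;
}

int main(void)
{
	/* GFP_NOIO direct reclaim (as in the log above) skips CMA pageblocks. */
	printf("%d\n", should_skip(false, MT_UNMOVABLE, MT_CMA)); /* prints 1 */
	/* kswapd still scans and reclaims them. */
	printf("%d\n", should_skip(true, MT_UNMOVABLE, MT_CMA));  /* prints 0 */
	/* A movable allocation can be served from CMA, so no skip. */
	printf("%d\n", should_skip(false, MT_MOVABLE, MT_CMA));   /* prints 0 */
	return 0;
}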
---
 mm/vmscan.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..17cd246 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2192,7 +2192,24 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 	}
 
 }
-
+#ifdef CONFIG_CMA
+/*
+ * It is a waste of effort to scan and reclaim CMA pages if they are not
+ * available to the current allocation context.
+ */
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	if (!current_is_kswapd() && gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE
+			&& get_pageblock_migratetype(page) == MIGRATE_CMA)
+		return true;
+	return false;
+}
+#else
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	return false;
+}
+#endif
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2225,10 +2242,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
 	unsigned long skipped = 0;
 	unsigned long scan, total_scan, nr_pages;
+	struct page *page;
 	LIST_HEAD(folios_skipped);
 
 	total_scan = 0;
 	scan = 0;
+
 	while (scan < nr_to_scan && !list_empty(src)) {
 		struct list_head *move_to = src;
 		struct folio *folio;
@@ -2239,12 +2258,14 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		page = &folio->page;
+
+		if (folio_zonenum(folio) > sc->reclaim_idx
+				|| skip_cma(page, sc)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
 		}
-
 		/*
 		 * Do not count skipped folios because that makes the function
 		 * return with no isolated folios if the LRU mostly contains
-- 
1.9.1