From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim,
	Zhaoyang Huang, linux-kernel@vger.kernel.org
Subject: [PATCHv3] mm: optimization on page allocation when CMA enabled
Date: Sat, 6 May 2023 14:45:47 +0800
Message-ID: <1683355547-10524-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Consider the series of scenarios below with WMARK_LOW=25MB and
WMARK_MIN=5MB (1.9GB of managed pages). The current 'fixed 1/2 ratio'
policy only starts to use CMA at scenario C, by which point the free
UNMOVABLE & RECLAIMABLE (U&R) pages have already dropped below
WMARK_LOW. This goes against the current memory policy: UNMOVABLE &
RECLAIMABLE pages should either stay around WMARK_LOW when there is no
allocation pressure, or trigger reclaim by entering the slowpath.

    -- Free_pages
    |
    |
    -- WMARK_LOW
    |
    -- Free_CMA
    |
    |
    --

Free_CMA/Free_pages(MB)    A(12/30)    B(12/25)    C(12/20)
fixed 1/2 ratio                N           N           Y
this commit                    Y           Y           Y
(N/Y: whether CMA is used first in each scenario)

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: do proportion check when zone_watermark_ok, update commit message
v3: update coding style and simplify the logic when zone_watermark_ok
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..7aca49d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,41 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 
 }
 
+#ifdef CONFIG_CMA
+/*
+ * GFP_MOVABLE allocations can drain UNMOVABLE & RECLAIMABLE page blocks via
+ * CMA's help, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok again without ALLOC_CMA to decide whether to use CMA first.
+ */
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok via CMA's help */
+	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
+		/*
+		 * The watermark check failed: UNMOVABLE & RECLAIMABLE pages
+		 * are short now, so use CMA first to keep them around the
+		 * corresponding watermark.
+		 */
+		cma_first = true;
+	else
+		/*
+		 * Keep the previous fixed 1/2 logic when the watermark is ok,
+		 * as we now have the protection above.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+			zone_page_state(zone, NR_FREE_PAGES) / 2);
+	return cma_first;
+}
+#else
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3084,13 +3119,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on a second zone_watermark_ok check:
+		 * use CMA first when the previous check only passed via CMA's help.
 		 */
-		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-			page = __rmqueue_cma_fallback(zone, order);
+		if (migratetype == MIGRATE_MOVABLE) {
+			page = __if_use_cma_first(zone, order, alloc_flags) ?
+				__rmqueue_cma_fallback(zone, order) : NULL;
 			if (page)
 				return page;
 		}
-- 
1.9.1
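
P.S. A quick way to sanity-check the new policy against the A/B/C table
in the commit message is a standalone userspace model. This is only an
illustrative sketch, not kernel code: cma_first(), free_pages, free_cma
and wmark are made-up stand-ins for __if_use_cma_first(),
zone_page_state() and wmark_pages(), and the watermark test is reduced
to a plain comparison in MB.

#include <stdbool.h>
#include <stdio.h>

/*
 * Model of the decision in __if_use_cma_first(): the free "U&R" share
 * is approximated as free_pages - free_cma, which mirrors the effect
 * of dropping ALLOC_CMA from the zone_watermark_ok() check.
 */
static bool cma_first(long free_pages, long free_cma, long wmark)
{
	/* watermark check with the CMA share excluded */
	if (free_pages - free_cma <= wmark)
		return true;	/* U&R is short: spend CMA first */
	/* watermark ok: fall back to the previous fixed 1/2 ratio */
	return free_cma > free_pages / 2;
}

int main(void)
{
	/* the A/B/C scenarios from the commit message, WMARK_LOW = 25MB */
	const long free[] = { 30, 25, 20 };
	const char *name[] = { "A", "B", "C" };

	for (int i = 0; i < 3; i++)
		printf("%s(12/%ld): cma_first=%c\n", name[i], free[i],
		       cma_first(free[i], 12, 25) ? 'Y' : 'N');
	return 0;
}

Running it prints Y for all three scenarios, matching the 'this commit'
row of the table; keeping only the final 1/2-ratio comparison reproduces
the 'fixed 1/2 ratio' row (N, N, Y).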