From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim, Zhaoyang Huang, linux-kernel@vger.kernel.org
Subject: [PATCHv5] mm: optimization on page allocation when CMA enabled
Date: Thu, 11 May 2023 13:22:30 +0800
Message-ID: <1683782550-25799-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Consider the timeline of scenarios below, with WMARK_LOW=25MB and
WMARK_MIN=5MB (1.9GB of managed pages). Under the current heuristic of
'use CMA when free CMA exceeds half of the free pages', CMA does not
begin to be used until scenario 'C'. This leaves scenarios 'A' and 'B'
in a faulty state: the free UNMOVABLE & RECLAIMABLE pages have dropped
below their corresponding watermark without any reclaim being
triggered, which should be deemed a violation of the current memory
policy. This commit solves that by re-checking zone_watermark_ok with
CMA pages excluded, which yields a more appropriate point at which to
start using CMA.

Each scenario is shown as Free_CMA/Free_pages: Free_pages falls from
30MB past WMARK_LOW (25MB) to 20MB, while Free_CMA stays at 12MB.
(Y/N: whether movable allocations are served from CMA first.)

Free_CMA/Free_pages(MB)    A(12/30) --> B(12/25) --> C(12/20)
fixed 1/2 ratio                N            N            Y
this commit                    Y            Y            Y

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: do proportion check when zone_watermark_ok, update commit message
v3: update coding style and simplify the logic when zone_watermark_ok
v4: code update according to Roman's suggestions
v5: update commit message
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)
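Illustration (not part of the patch): a minimal, self-contained
userspace sketch that models the two heuristics on scenarios A/B/C
from the table above. All names in it (struct scenario,
cma_first_fixed_ratio, cma_first_watermark, WMARK_LOW_MB) are made up
for this illustration; only the arithmetic mirrors the kernel code
below. It prints the same N/N/Y vs. Y/Y/Y rows as the table.

/* Standalone model of the CMA-first decision; compile with any C99 compiler. */
#include <stdbool.h>
#include <stdio.h>

#define WMARK_LOW_MB 25	/* WMARK_LOW from the commit message, in MB */

struct scenario {
	const char *name;
	int free_cma_mb;	/* free pages inside the CMA area */
	int free_pages_mb;	/* total free pages of the zone   */
};

/* Old behaviour: use CMA once it holds over half of all free pages. */
static bool cma_first_fixed_ratio(const struct scenario *s)
{
	return s->free_cma_mb > s->free_pages_mb / 2;
}

/*
 * Patched behaviour: also use CMA when the watermark check would fail
 * with CMA pages excluded, i.e. when non-CMA free memory alone can no
 * longer keep UNMOVABLE & RECLAIMABLE allocations above WMARK_LOW.
 */
static bool cma_first_watermark(const struct scenario *s)
{
	int free_non_cma_mb = s->free_pages_mb - s->free_cma_mb;

	if (free_non_cma_mb < WMARK_LOW_MB)
		return true;			/* watermark re-check fails  */
	return cma_first_fixed_ratio(s);	/* fall back to the 1/2 rule */
}

int main(void)
{
	const struct scenario scenarios[] = {
		{ "A", 12, 30 },
		{ "B", 12, 25 },
		{ "C", 12, 20 },
	};

	for (size_t i = 0; i < sizeof(scenarios) / sizeof(scenarios[0]); i++) {
		const struct scenario *s = &scenarios[i];

		printf("%s(%d/%d): fixed 1/2 ratio=%c, this commit=%c\n",
		       s->name, s->free_cma_mb, s->free_pages_mb,
		       cma_first_fixed_ratio(s) ? 'Y' : 'N',
		       cma_first_watermark(s) ? 'Y' : 'N');
	}
	return 0;
}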
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..4719800 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,43 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 
 }
 
+#ifdef CONFIG_CMA
+/*
+ * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok again without ALLOC_CMA to decide whether to use CMA
+ * first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok via the help of CMA */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * A failed watermark check means UNMOVABLE & RECLAIMABLE pages
+		 * are not enough now; use CMA first to keep them around their
+		 * corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3084,12 +3121,11 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on re-checking zone_watermark_ok
+		 * to see if the latest check only passed with the help of CMA.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;
-- 
1.9.1