From: Charan Teja Kalla
Subject: [PATCH V3 1/2] mm: page_alloc: correct high atomic reserve calculations
Date: Fri, 24 Nov 2023 16:35:52 +0530
Message-ID: <1660034138397b82a0a8b6ae51cbe96bd583d89e.1700821416.git.quic_charante@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

reserve_highatomic_pageblock() aims to reserve 1% of a zone's managed
pages for high-order atomic allocations. It uses the calculation below
to size the reserve:

static void reserve_highatomic_pageblock(struct page *page, ....) {
   .......
   max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
   if (zone->nr_reserved_highatomic >= max_managed)
       goto out;
   zone->nr_reserved_highatomic += pageblock_nr_pages;
   set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
   move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
out:
   ....
}

Since pageblock_nr_pages is always added on top of 1% of the zone's
managed pages, and nr_reserved_highatomic is incremented/decremented in
pageblock-sized steps, the reserve ends up being at least 2 pageblocks.

We encountered a system (actually a VM running on the Linux kernel)
with the following zone configuration:
Normal free:7728kB boost:0kB min:804kB low:1004kB high:1204kB
reserved_highatomic:8192KB managed:49224kB

With a pageblock size of 4MB, the existing calculation reserves 8MB,
i.e. ~16% of the zone's managed memory. Reserving such a large amount
of memory can easily exert memory pressure on the system and may lead
to unnecessary reclaim until the high atomic reserves are unreserved.

Since high atomic reserves are managed at pageblock granularity
(MIGRATE_HIGHATOMIC is set per pageblock), fix the calculation so that
the minimum is one pageblock and the maximum is approximately 1% of the
zone's managed pages.

Acked-by: Mel Gorman
Signed-off-by: Charan Teja Kalla
Acked-by: David Rientjes
---
 mm/page_alloc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 733732e..a789dfd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1884,10 +1884,11 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	unsigned long max_managed, flags;
 
 	/*
-	 * Limit the number reserved to 1 pageblock or roughly 1% of a zone.
+	 * The number reserved as: minimum is 1 pageblock, maximum is
+	 * roughly 1% of a zone.
 	 * Check is race-prone but harmless.
 	 */
-	max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
+	max_managed = ALIGN((zone_managed_pages(zone) / 100), pageblock_nr_pages);
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-- 
2.7.4
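
[Editor's illustration, not part of the patch: a minimal user-space sketch
of the arithmetic for the zone dump quoted in the changelog, assuming a
4KiB base page size and a 4MiB pageblock, i.e. pageblock_nr_pages == 1024;
the ALIGN() macro below mirrors the kernel's power-of-two rounding.]

#include <stdio.h>

/* Mirrors the kernel's ALIGN() for a power-of-two alignment. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long managed_pages = 49224 / 4;	/* managed:49224kB -> 12306 pages (4KiB pages assumed) */
	unsigned long pageblock_nr_pages = 1024;	/* 4MiB pageblock / 4KiB page size (assumed) */

	/* Old cap: 1% of the zone plus a whole pageblock (123 + 1024 = 1147). */
	unsigned long old_max = managed_pages / 100 + pageblock_nr_pages;

	/* New cap: 1% of the zone rounded up to pageblock granularity (ALIGN(123, 1024) = 1024). */
	unsigned long new_max = ALIGN(managed_pages / 100, pageblock_nr_pages);

	/*
	 * nr_reserved_highatomic grows in pageblock_nr_pages steps, so the
	 * old cap of 1147 pages still admits a second pageblock: 2048
	 * pages == 8192kB, ~16% of the zone.  The new cap of 1024 pages
	 * stops at a single pageblock (4096kB).
	 */
	printf("old max_managed: %lu pages, new max_managed: %lu pages\n",
	       old_max, new_max);
	return 0;
}

Because ALIGN() rounds any non-zero 1% figure up to a pageblock multiple,
the reserve keeps a minimum of one pageblock while no longer silently
granting a second one on small zones.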